Text

PHP-CS-Fixer is yet another extremely handy utility to emerge from SensioLabs to help manage code style compliance.

I use it as part of my git workflow (see my magento githooks repo) and I find it really does help keep code consistent.

When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).

Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run the source version. I quite like the convenience of the phar version, but it's unclear how to build one directly from the sources.

After a bit of digging around in the code and issues list, I found they are using the php-box project to build release phars. It's actually very simple, but to save others having to figure it all out, just follow these steps.

$ git clone https://github.com/fabpot/PHP-CS-Fixer
  $ cd PHP-CS-Fixer
  $ composer.phar require --dev 'kherge/box=~2.4'
  $ vendor/kherge/box/bin/box build
  > Building...
  

Do a quick _ls_ and you'll notice you have a minty fresh php-cs-fixer.phar file.
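
A quick sanity check that the phar built properly (--version is the cheapest command to run):

$ php php-cs-fixer.phar --version
  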

Done.

Text

If you're trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go

$ usermod -a -G thegroup theuser
  

And the job is done. But OSX uses Open Directory, rather than traditional flat files like /etc/passwd and /etc/group, to store information about users and domains. So the typical Unix commands we're used to don't work.

The dscl (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. Handy if, for example, you want to add your user to the wheel group to make use of password-free sudo.

$ dscl localhost --append /Local/Default/Groups/<groupnamehere> GroupMembership <usernamehere>
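  

You can verify the change took by reading the group record back:

$ dscl localhost -read /Local/Default/Groups/<groupnamehere> GroupMembership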
  
Text

One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote one. I forget to include this argument most of the time.

So, later on, you'll probably want to pull changes down from the remote and you'll end up seeing something similar to this

➜  store git:(zendesk) git pull --rebase
  There is no tracking information for the current branch.
  Please specify which branch you want to rebase against.
  See git-pull(1) for details
  
  git pull <remote> <branch>
  
  If you wish to set tracking information for this branch you can do so with:
  
  git branch --set-upstream-to=origin/<branch> zendesk
  

Now it's not that hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u. So I created a git alias to automate things and save some keystrokes.

In your ~/.gitconfig under the alias section, add this

    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
  

You can use the alias by issuing the following command in your terminal

$ git sup
  

This will look at the current branch and set its upstream to origin/branchname.

If you tend to use a remote name other than origin, change the alias accordingly.
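
If you want the remote to be selectable too, a shell-function alias that defaults to origin does the trick (a sketch; the remote is passed as an optional argument):

    sup = "!f() { git branch --set-upstream-to=${1:-origin}/$(git symbolic-ref --short HEAD); }; f"
  

With that version, git sup behaves as before, while git sup upstream points the branch at the upstream remote instead.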

I have a few other useful aliases which you can checkout (hah, sorry :)) in my full gitconfig.

Tags: git
Text

Today I learned that each command in a bash pipeline executes in a separate subshell… this means variable assignments cannot be passed along the pipeline, as each subprocess gets a brand new environment.
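
A minimal demonstration (plain non-interactive bash; the while loop runs in a subshell, so its assignment never escapes):

count=0
  printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
  echo "count is $count" # prints "count is 0"
  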

For some workarounds check out http://mywiki.wooledge.org/BashFAQ/024

Tags: bash shell unix
Text

No one likes merge commits; they add noise to git history logs without really helping to convey what exact changes have occurred.

Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.
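
For example, with a feature branch cut from master (branch names here are illustrative):

$ git checkout feature
  $ git rebase master   # replay feature's commits on top of master
  $ git checkout master
  $ git merge feature   # direct common history, so this fast-forwards
  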

Because the commit pointed to by the branch you merged in was directly upstream of the commit you're on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit's history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.
  

http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging

To ensure you've kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.

$ git config --global merge.ff only
  

This way, you'll get a gentle reminder to rebase. If that's not feasible then you can force through the merge with

$ git merge --no-ff
  
Tags: git
Text

I've had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I'd chuck up a quick review for it.

The Grumpy Programmer (actual name Chris Hartjes) amusingly blogs and tweets all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out.

Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I've been using PHPUnit for a long time now, and I find that when I am really familiar with a tool, I tend to re(over)use patterns that have served me well in the past. So while much of the material in the book was familiar, there were more than a few tidbits I picked up. I feel even the most grizzled PHPUnit veteran's testing regime will benefit from a read-through.

The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O'Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O'Reilly style. As tutorials go though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. In my experience that leads, at best, to confusion. And at worst, to poor tests that are difficult to maintain.

As a quick aside, the book is published by LeanPub, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so.

So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.
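
To illustrate the first point (Cart here is a hypothetical class under test, but the message parameter is standard across PHPUnit's assertions):

class CartTest extends PHPUnit_Framework_TestCase
  {
      public function testTotalMatchesItemPrice()
      {
          $cart = new Cart(); // hypothetical class under test
          $cart->addItem('widget', 10.00);
  
          // The final argument is printed when the assertion fails,
          // pointing you straight at the problem.
          $this->assertSame(10.00, $cart->getTotal(), 'Cart total should equal the single item price');
      }
  }
  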

The book is quite short, coming in (at least in my PDF version) at 85 pages. I feel there is sufficient scope for more content here, especially for a 'cookbook'. I would have loved to see more on using data builders, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. Factories and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD.

I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: statist TDD and mockist/London-school TDD. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).

Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature testing tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same.

You can buy it now at grumpy-phpunit.com

Text

A quick note to help me remember how to do this.

The problem: you want to select the smallest value from a set of values.

Let's say you have a table of products that are in a logical group and you want to select the lowest-priced product from that group; however, some products have a 0.00 price (for whatever reason). You don't want to show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.

MySQL has a neat way to do this. Simply go:

SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
  

The magic is in the NULLIF function, which returns null if tableref.column is equal to 0. Returning null removes that value from consideration by MIN (aggregate functions ignore nulls), which effectively restricts the minimum to values greater than zero.
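
To make that concrete, with a hypothetical group whose prices are 0.00, 5.00 and 3.00:

SELECT MIN(tableref.column) FROM tableref;             -- returns 0.00
  SELECT MIN(NULLIF(tableref.column, 0)) FROM tableref; -- returns 3.00
  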

Tags: sql mysql
Text

By now, I'm pretty much used to, and accept, OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo Linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of the control key), and probably hardest of all was leaving behind Unix's idea of workspaces and virtual desktops. What I gave up in configurability, though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo desktop.

So I'm happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It's no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop.

But somewhere OSX falls severely behind is its BSD-inspired Unix implementation. If you were born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX's realisation of Unix leaves a lot to be desired.

With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.

I'll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.

Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-based package manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you're a competent C programmer (and even then), you're going to have a bad time.

One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. To HFS, PHP and php are the same thing.
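
You can see this for yourself in an empty directory:

$ touch PHP
  $ touch php # no error, but no second file either
  $ ls
  PHP
  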

So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of Magento and the Magento Test Automation Framework (TAF).

MySQL

MySQL is probably the easiest part of the whole thing to set up, so let's start there. For reference, the database files are stored under /opt/local/var/db/mysql55.

In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That's annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.

$ sudo port install mysql55-server mysql55 mysql_select
  $ sudo port select mysql mysql55
  $ sudo port load mysql55-server
  

We will want a database for our Magento application.

$ mysqladmin -uroot -p create magento 
  

PHP / PHP-FPM

Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require.

$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR
  
  $ cd /opt/local/etc/php54
  $ cp php-fpm.conf.default php-fpm.conf
  $ cp php.ini-development php.ini
  
  $ sudo vim php.ini
  # set date.timezone and cgi.fix_pathinfo = 0
  
  $ sudo vim php-fpm.conf
  # make any changes for min / max num servers, error logging etc
  

The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock

echo 'pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock' | sudo tee --append /opt/local/var/db/php54/mysql.ini
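  

You can check PHP picked the setting up (using the versioned binary, since php_select hasn't run yet):

$ php54 -i | grep pdo_mysql.default_socket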
  

Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.

$ sudo port load php54-fpm
  

PHP-Select

As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than '/opt/local/bin/php' you have '/opt/local/bin/php54'. php_select, which we installed earlier, fixes this by effectively 'activating' one version and creating the usual executable names we're accustomed to.

$ sudo port select php php54 
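  

A quick check that the expected binary is now first in line (assuming /opt/local/bin sits early in your PATH, the Macports default):

$ which php
  /opt/local/bin/php
  $ php -v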
  

PEAR

PEAR is the single biggest pain in the whole process. And with some research, it turns out it's because Macports' PEAR isn't even meant to be used by end users (WAT?!).

There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method. http://trac.macports.org/ticket/37683

So this goes a long way to explaining why Macports doesn't set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.

$ sudo pear config-set php_bin /opt/local/bin/php
  $ sudo pear config-set php_dir /opt/local/lib/php/pear
  $ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
  $ sudo pear config-set bin_dir /opt/local/bin
  $ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
  $ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
  $ sudo pear config-set www_dir /opt/local/lib/php/pear/www
  $ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
  $ sudo pear config-set data_dir /opt/local/lib/php/pear/data
  $ echo 'PATH=$PATH:/opt/local/lib/php/pear/bin' >> ~/.bashrc # or zshrc if you use zsh
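  

You can double-check any of these settings later with config-get:

$ pear config-get php_dir
  /opt/local/lib/php/pear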
  

Another issue you'll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does check for an environment variable, so we can set up an alias that passes this variable to pear on invocation.

Add an alias to your bashrc/zshrc in the form:

alias pear='PHP_PEAR_PHP_BIN=php pear'
  

Reload your bashrc/zshrc.

$ source ~/.bashrc (or source ~/.zshrc)
  

Now the alias is active, we can check that it's working.

$ /opt/local/lib/php/pear/bin/pear version
  PEAR Version: 1.9.4
  PHP Version: 5.3.15
  Zend Engine Version: 2.3.0
  Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
  
  $ pear version
  PEAR Version: 1.9.4
  PHP Version: 5.4.12
  Zend Engine Version: 2.4.0
  Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
  

Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don't have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)

$ sudo pear config-set auto_discover 1
  

Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.

$ sudo pear channel-discover pear.phing.info
  $ sudo pear channel-discover pear.phpunit.de
  $ sudo pear channel-discover pear.symfony-project.com
  $ sudo pear install --alldeps phing/phing 
  $ sudo pear install --alldeps phpunit/phpunit
  $ sudo pear install phpunit/PHP_Invoker
  $ sudo pear install phpunit/PHPUnit_Selenium
  $ sudo pear install -f symfony/YAML
  

PECL/Extensions

Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:

$ echo 'extension=yaml.so' | sudo tee /opt/local/var/db/php54/yaml.ini
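  

Then confirm the extension loads:

$ php -m | grep yaml
  yaml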
  

Nginx

Apache/Nginx. It doesn't really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:

$ sudo port variants nginx 
  

To install:

$ sudo port install nginx +ssl
  $ cd /opt/local/etc/nginx
  $ sudo cp fastcgi.conf.default fastcgi.conf
  $ sudo cp fastcgi_params.default fastcgi_params
  $ sudo cp mime.types.default mime.types
  $ sudo cp nginx.conf.default nginx.conf
  $ sudo mkdir conf.d sites-available sites-enabled ssl
  

Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento.

$ sudo vim nginx.conf  
  # Insert the following towards the bottom of the file (but inside the http block) 
  map $scheme $fastcgi_https {
     default off;
     https on;
  }
  
  ##
  # Virtual Host Configs
  ##
  include conf.d/*.conf;
  include sites-enabled/*;
  

For each app just add a server block to sites-available, then symlink it to sites-enabled.

$ sudo vim sites-available/magento.dev.conf
  # ...     
  $ cd sites-enabled
  $ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
  

This is the server block definition I use for Magento development; feel free to modify it for your needs.

server {
      listen 80;
      listen 443 ssl;
  
      ssl_certificate     ssl/magento.dev.crt;
      ssl_certificate_key ssl/magento.dev.key;
  
      server_name magento.dev;
      root /Users/aaron/Sites/magento;
  
      location / {
          index index.html index.php; ## Allow a static html file to be shown first
          try_files $uri $uri/ @handler; ## If missing pass the URI to Magento's front handler
          expires 30d; ## Assume all files are cachable
      }
  
      ## These locations would be hidden by .htaccess normally
      location /app/                { deny all; }
      location /includes/           { deny all; }
      location /lib/                { deny all; }
      location /media/downloadable/ { deny all; }
      location /pkginfo/            { deny all; }
      location /report/config.xml   { deny all; }
      location /var/                { deny all; }
      location /shell/              { deny all; }
  
      ## Disable .htaccess and other hidden files
      location ~ /\. {
          deny all;
          access_log off;
          log_not_found off;
      }
  
      location ~ \.php$ { ## Execute PHP scripts
          if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
  
          expires        off; ## Do not cache dynamic content
          fastcgi_intercept_errors on;
          fastcgi_pass   127.0.0.1:9000;
          fastcgi_param  HTTPS $fastcgi_https;
          fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
          fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration > Configuration > Manage Stores
          fastcgi_param  MAGE_RUN_TYPE store;
          proxy_read_timeout 120;
          proxy_connect_timeout 120;
          include        fastcgi_params; ## See /etc/nginx/fastcgi_params
      }
  
      location @handler { ## Magento uses a common front handler
          rewrite / /index.php;
      }
  }
  

We've said our application lives on a server called 'magento.dev'. So let's tell our hosts file about that.

$ sudo vim /etc/hosts
  # Insert or append to an existing line
  # 127.0.0.1 localhost magento.dev
  

The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing them under /opt/local/etc/nginx/ssl

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
  $ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
  $ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
  

Once that's done, it's worth making sure the config parses cleanly before starting nginx.
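
$ sudo nginx -t
  

If that comes back ok, start it up.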

$ sudo port load nginx
  

Web App Directory Config

I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:

$ chmod a+x /Users/aaron && chmod a+x /Users/aaron/Sites
  

Install Magento and TAF

N98 Magerun is the coolest thing to happen to Magento development since well, I can't remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.

$ cd /Users/aaron/Sites
  $ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
  $ chmod a+x magerun.phar
  $ ./magerun.phar install
  # Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name 'magento'.
  

After all that work, hitting http://magento.dev should now bring up the Magento demo store!
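
Or, as a quick smoke test from the terminal (expect a 200, or a 3xx if Magento decides to redirect):

$ curl -sI http://magento.dev | head -n 1
  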

I've been playing with Magento's Test Automation Framework and it was the motivation for finally getting everything working properly natively.

TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.

Unfortunately the documentation for TAF is a bit of a mess (I'll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.

Installation is actually pretty easy. I am assuming you don't already have git installed (remember you can use port variants to see what variants are available):

$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
  $ sudo port install git-extras
  $ cd /Users/aaron/Sites
  $ git clone https://github.com/magento/taf taf
  $ cd taf # /Users/aaron/Sites/taf
  $ cp phpunit.xml.dist phpunit.xml
  $ cp config/config.yml.dist config/config.yml
  $ cd .. # /Users/aaron/Sites
  $ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
  

The test suite needs Selenium running, so open up a new terminal and start the server

$ cd /Users/aaron/Sites
  $ java -jar selenium-server.jar
  

Now the test suite is good to go

$ cd /Users/aaron/Sites/taf
  $ ./runtests.sh
  

The test suite takes a loooooong time, so go for a run or something.

Hopefully these steps help out other PHP developers suffering from OSX.

Text

The PHP MySQL port is a little bit of a pain. By default, Macports doesn't set a default MySQL socket, which leads to an error something like this:

SQLSTATE[HY000] [2002] No such file or directory.
  

You fix it by setting the socket file for the MySQL version you're using in the PHP mysql.ini file. I use mysql55, so to fix PHP I do this

echo 'pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock' | sudo tee /opt/local/var/db/php54/mysql.ini
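  

A one-liner to confirm PHP now sees the value:

$ php -r 'echo ini_get("pdo_mysql.default_socket"), PHP_EOL;'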
  
Text

In the past I used to mess around with NFS over SSH, but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case, do this:

$ sshfs -o ssh_command="ssh -i ~/ssh_keys/user@remotehost.pem" user@remotehost:/var/www/ ~/Sites/awshost

Actually, right after I posted this, I realised there's a better way to do it. That is, use the 'IdentityFile' option instead (as per the format of .ssh/config).

$ sshfs -o "IdentityFile=~/ssh_keys/[email protected]" [email protected]:/var/www/ ~/Sites/awshost
  

If you have any problems then add '-o debug' to the above command to help track it down.

Tags: ssh cli unix
Text

There are only two hard things in Computer Science: cache invalidation and naming things.

-- Phil Karlton

I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the last instance of the needle.

strstr — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.

strrchr — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.

Let's look at how these work with an example

$url = 'http://www.google.com/a/b/c/d.img';
  echo strrchr($url, '/'); // prints /d.img
  echo strstr($url, '/');  // prints //www.google.com/a/b/c/d.img
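  

Continuing the example, strrchr keeps the needle itself, so one extra step gives you just the file name (or you can lean on basename, which happily works on URL paths):

echo ltrim(strrchr($url, '/'), '/'); // prints d.img
  echo basename($url);                 // prints d.img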
  

Now, I've been programming in PHP for pushing on 12 years and this one still did my head in. The names of these two very similarly behaving functions bear little resemblance to each other.

At this point the arguments and criticisms over the core API have been exhausted and there's little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I'm not sure how feasible auto-boxing is in PHP, or indeed whether it's even a good idea, but some object wrappers would definitely help ease this API pain.

Tags: php wtf
Text

A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary who oversaw innovations such as Personal Computing, GUIs and (inter)networking.

Text

The best way to predict the future is to invent it. -- Alan Kay

Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called Dealers of Lightning by Michael Hiltzik. It covers the rise and fall of Xerox's Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.

I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in The Early History of Smalltalk (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists who are almost single-handedly responsible for the shape of modern computing.

I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away, and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise.

The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn't transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read.

One of the most striking individuals of the story is the impresario Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (née ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and, at times, support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing and those, whether in his or the other labs, or in management, that, well, didn't.

The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: how will this get us closer to our goal? They had committed themselves to developing Xerox's Office of the Future, and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.

It is Taylor's utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. Inside it, he was the oil that kept the cogs turning, and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.

The book contains a number of particularly powerful scenes. Two in particular stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish, but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer, with the help of Taylor's CSL, while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay's assertive (and largely proven) views on computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.

The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple's infamous raid on PARC. If you've ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik's account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.

By the end of the book Taylor's time at PARC draws to a close, and with his departure so too ends the most storied era of PARC. Scarcely six months after Taylor's forced resignation, the majority of his lab also resigned and either followed him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or joined one of the many startups blooming in Silicon Valley following the success of IBM's and Apple's Personal Computer products.

Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation's scientific talent, put them in one place, and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it or, worse, fear it.

It is the book's focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing's birth to life.

Text

I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically, if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:

Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: '' is not a valid value for 'decimal'.

I have developed custom modules which add free or bonus items to a customer's cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].

Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.

$unitPrice = $item->getBaseCalculationPrice();
  if (Mage::helper('weee')->includeInSubtotal()) {
      $unitPrice += $item->getBaseWeeeTaxAppliedAmount();
  }
  // ... later, in the heredoc that builds the item XML:
  <unit-price currency="{$this->getCurrency()}">{$unitPrice}</unit-price>
  

Now, if the product's base price is 0, then for some unfathomable reason it's set to '', not 0. As the unit-price element expects a decimal value, an empty string fails validation.

The fix is pretty trivial

$unitPrice = $item->getBaseCalculationPrice();
  if (Mage::helper('weee')->includeInSubtotal()) {
      $unitPrice += $item->getBaseWeeeTaxAppliedAmount();
  }
  
  // Coerce the empty-string (or otherwise non-positive) price to a proper decimal
  $unitPrice = ((float) $unitPrice > 0) ? $unitPrice : 0.00;
  

The store I needed to fix only used US dollars so I haven't tested how the use of other currencies or locales might affect this fix.

To apply the fix, don't modify the core codepool, but instead take advantage of the local and community codepool's higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.

[1]: 'Don't Live With Broken Windows' is a tip I first read about in The Pragmatic Programmer. It is used to help fight Software Entropy (software's tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained.

When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb: 'Don't Live With Broken Windows'.

[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes have the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.

Tags: magento bugs
Text

When a barman calls 'time' at the pub, they let you finish your drink. Unfortunately, the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.

Thankfully there's a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.

_-s_ lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.

# terminate the nginx master process immediately
  $ sudo nginx -s stop 
  # terminate the nginx master process once all outstanding connections have been completed
  $ sudo nginx -s quit 
  
  abonner@avalanche:~$ ps aux | grep nginx
  root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
  www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
  www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
  www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
  

Let your visitors finish their drinks: don't terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.
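
If you prefer, you can send the signal to the master process yourself (the pid file path assumes the stock Ubuntu layout):

$ sudo kill -QUIT $(cat /var/run/nginx.pid)
  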

Read more about Nginx's command line options

Tags: nginx devops