Text

By now I'm pretty much used to OSX as a desktop operating system, and I accept it. I remember it being quite a change when I first moved over (from Gentoo Linux and GNOME 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the Cmd key instead of Ctrl), and, probably hardest of all, I had to leave behind the Unix idea of workspaces and virtual desktops. What I gave up in configurability, though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo desktop.

So I'm happy with a less flexible but attractive, functional and predictable desktop, and I think many others feel the same way. It's no real surprise to me, then, that OSX has mostly killed off the idea of Linux on the Desktop.

But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you were born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX's realisation of Unix leaves a lot to be desired.

With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.

I'll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.
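
The key part of that is simply path ordering. A minimal sketch of the relevant lines in .bashrc/.zshrc looks something like this (the Macports installer adds much the same thing to your shell profile):

export PATH=/opt/local/bin:/opt/local/sbin:$PATH
  export MANPATH=/opt/local/share/man:$MANPATH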

Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-based 'package' manager: when you install something, it is compiled right there and then on your system. When things go wrong, unless you're a competent C programmer (and even then), you're going to have a bad time.

One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. That means PHP and php appear as the same thing to HFS.
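
A quick illustration of what that means in practice:

$ cd /tmp
  $ echo hello > php
  $ cat PHP
  hello
  $ rm php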

So the point of this post is to go over getting PHP running natively with Macports, and how we can run an instance of Magento and the Magento Test Automation Framework (TAF).

MySQL

MySQL is probably the easiest part of the whole thing to set up, so let's start there. For reference, the database files are stored under /opt/local/var/db/mysql55.

In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That's annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.

$ sudo port install mysql55-server mysql55 mysql_select
  $ sudo port select mysql mysql55
  $ sudo port load mysql55-server
  

We will want a database for our magento application.

$ mysqladmin -uroot -p create magento 
  

PHP / PHP-FPM

Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require.

$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR
  
  $ cd /opt/local/etc/php54
  $ cp php-fpm.conf.default php-fpm.conf
  $ cp php.ini-development php.ini
  
  $ sudo vim php.ini
  # set date.timezone and cgi.fix_pathinfo = 0
  
  $ sudo vim php-fpm.conf
  # make any changes for min / max num servers, error logging etc
  

The MySQL extension needs a little bit of prodding to look in the correct location for mysqld.sock.

$ echo 'pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock' | sudo tee -a /opt/local/var/db/php54/mysql.ini
  

Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.

$ sudo port load php54-fpm
  

PHP-Select

As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than '/opt/local/bin/php' you have '/opt/local/bin/php54'. PHP Select, which we installed earlier, fixes this by effectively 'activating' one version and creating the usual executable names we're accustomed to.

$ sudo port select php php54 
  

PEAR

PEAR is the single biggest pain in the whole process. With some research, it turns out that's because Macports' PEAR isn't even meant to be used by end users (WAT?!).

There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method. http://trac.macports.org/ticket/37683

So this goes a long way to explaining why Macports doesn't set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.

$ sudo pear config-set php_bin /opt/local/bin/php
  $ sudo pear config-set php_dir /opt/local/lib/php/pear
  $ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
  $ sudo pear config-set bin_dir /opt/local/bin
  $ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
  $ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
  $ sudo pear config-set www_dir /opt/local/lib/php/pear/www
  $ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
  $ sudo pear config-set data_dir /opt/local/lib/php/pear/data
  $ echo 'PATH=$PATH:/opt/local/lib/php/pear/bin' >> ~/.bashrc # or zshrc if you use zsh
  

Another issue you'll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.

Add an alias to your bashrc/zshrc in the form:

alias pear='PHP_PEAR_PHP_BIN=php pear'
  

Reload your bashrc/zshrc.

$ source ~/.bashrc # or source ~/.zshrc
  

Now that the alias is active, we can check that it's working:

$ /opt/local/lib/php/pear/bin/pear version
  PEAR Version: 1.9.4
  PHP Version: 5.3.15
  Zend Engine Version: 2.3.0
  Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
  
  $ pear version
  PEAR Version: 1.9.4
  PHP Version: 5.4.12
  Zend Engine Version: 2.4.0
  Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
  

Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don't have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…).

$ sudo pear config-set auto_discover 1
  

Now add the phing and phpunit channels, install both packages with all their optional dependencies, and add some extra packages for the Magento TAF.

$ sudo pear channel-discover pear.phing.info
  $ sudo pear channel-discover pear.phpunit.de
  $ sudo pear channel-discover pear.symfony-project.com
  $ sudo pear install --alldeps phing/phing 
  $ sudo pear install --alldeps phpunit/phpunit
  $ sudo pear install phpunit/PHP_Invoker
  $ sudo pear install phpunit/PHPUnit_Selenium
  $ sudo pear install -f symfony/YAML
  

PECL/Extensions

Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:

$ echo 'extension=yaml.so' | sudo tee /opt/local/var/db/php54/yaml.ini
  

Nginx

Apache or Nginx? It doesn't really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available variants, use:

$ sudo port variants nginx 
  

To install:

$ sudo port install nginx +ssl
  $ cd /opt/local/etc/nginx
  $ sudo cp fastcgi.conf.default fastcgi.conf
  $ sudo cp fastcgi_params.default fastcgi_params
  $ sudo cp mime.types.default mime.types
  $ sudo cp nginx.conf.default nginx.conf
  $ sudo mkdir conf.d sites-available sites-enabled ssl
  

Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento.

$ sudo vim nginx.conf  
  # Insert the following towards the bottom of the file (but inside the http block) 
  map $scheme $fastcgi_https {
     default off;
     https on;
  }
  
  ##
  # Virtual Host Configs
  ##
  include conf.d/*.conf;
  include sites-enabled/*;
  

For each app just add a server block to sites-available, then symlink it to sites-enabled.

$ sudo vim sites-available/magento.dev.conf
  # ...     
  $ cd sites-enabled
  $ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
  

This is the server block definition I use for Magento development; feel free to modify it for your needs.

server {
      listen 80;
      listen 443 ssl;
  
      ssl_certificate     ssl/magento.dev.crt;
      ssl_certificate_key ssl/magento.dev.key;
  
      server_name magento.dev;
      root /Users/aaron/Sites/magento;
  
      location / {
          index index.html index.php; ## Allow a static html file to be shown first
          try_files $uri $uri/ @handler; ## If missing pass the URI to Magento's front handler
          expires 30d; ## Assume all files are cachable
      }
  
      ## These locations would be hidden by .htaccess normally
      location /app/                { deny all; }
      location /includes/           { deny all; }
      location /lib/                { deny all; }
      location /media/downloadable/ { deny all; }
      location /pkginfo/            { deny all; }
      location /report/config.xml   { deny all; }
      location /var/                { deny all; }
      location /shell/              { deny all; }
  
      ## Disable .htaccess and other hidden files
      location ~ /\. {
          deny all;
          access_log off;
          log_not_found off;
      }
  
      location ~ \.php$ { ## Execute PHP scripts
          if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
  
          expires        off; ## Do not cache dynamic content
          fastcgi_intercept_errors on;
          fastcgi_pass   127.0.0.1:9000;
          fastcgi_param  HTTPS $fastcgi_https;
          fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
          fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration > Configuration > Manage Stores
          fastcgi_param  MAGE_RUN_TYPE store;
          fastcgi_read_timeout 120;
          fastcgi_connect_timeout 120;
          include        fastcgi_params; ## See /etc/nginx/fastcgi_params
      }
  
      location @handler { ## Magento uses a common front handler
          rewrite / /index.php;
      }
  }
  

We've said our application lives on a server called 'magento.dev'. So let's tell our hosts file about that.

$ sudo vim /etc/hosts
  # Insert or append to an existing line
  # 127.0.0.1 localhost magento.dev
  

The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing it under /opt/local/etc/nginx/ssl.

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
  $ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
  $ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
  

Once that's done, we can start nginx.

$ sudo port load nginx
  

Web App Directory Config

I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:

$ chmod a+x /Users/aaron && chmod a+x /Users/aaron/Sites
  

Install Magento and TAF

N98 Magerun is the coolest thing to happen to Magento development since, well, I can't remember. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.

$ cd /Users/aaron/Sites
  $ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
  $ chmod a+x magerun.phar
  $ ./magerun.phar install
  # Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name 'magento'.
  

After all that work, hitting http://magento.dev should now bring up the magento demo store!

I've been playing with Magento's Test Automation Framework and it was the motivation for finally getting everything working properly natively.

TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed its run.

Unfortunately the documentation for TAF is a bit of a mess (I'll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.

Installation is actually pretty easy. I am assuming you don't have git already installed (remember you can use port variants to see what variant options are available):

$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
  $ sudo port install git-extras
  $ cd /Users/aaron/Sites
  $ git clone https://github.com/magento/taf taf
  $ cd taf # /Users/aaron/Sites/taf
  $ cp phpunit.xml.dist phpunit.xml
  $ cp config/config.yml.dist config/config.yml
  $ cd .. # /Users/aaron/Sites
  $ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
  

To run the test suite, open up a new terminal and start the Selenium server:

$ cd /Users/aaron/Sites
  $ java -jar selenium-server.jar
  

Now the test suite is good to go

$ cd /Users/aaron/Sites/taf
  $ ./runtests.sh
  

The test suite takes a loooooong time, so go for a run or something.

Hopefully these steps help out other PHP developers suffering from OSX.

Text

I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically, if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:

Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: '' is not a valid value for 'decimal'.

I have developed custom modules which add free or bonus items to a customer's cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].

Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.

$unitPrice = $item->getBaseCalculationPrice();
  if (Mage::helper('weee')->includeInSubtotal()) {
      $unitPrice += $item->getBaseWeeeTaxAppliedAmount();
  }
  // ...
  <unit-price currency="{$this->getCurrency()}">{$unitPrice}</unit-price>
  

Now, if the product's base price is 0, then for some unfathomable reason it comes back as '', not 0. As the unit-price element expects a decimal value, an empty string fails validation.

The fix is pretty trivial:

$unitPrice = $item->getBaseCalculationPrice();
  if (Mage::helper('weee')->includeInSubtotal()) {
      $unitPrice += $item->getBaseWeeeTaxAppliedAmount();
  }
  
  $unitPrice = ((float) $unitPrice > 0) ? $unitPrice : 0.00;
  

The store I needed to fix only used US dollars so I haven't tested how the use of other currencies or locales might affect this fix.

To apply the fix, don't modify the core code pool; instead, take advantage of the local and community code pools' higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.

[1]: 'Don't Live With Broken Windows' is a tip I first read about in The Pragmatic Programmer. It is used to help fight Software Entropy (software's tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained.

When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, 'Don't Live With Broken Windows'.

[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes have the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.

Tags: magento bugs
Text

Just a quick note: as you may notice from the comments, Magerun now pretty-prints the XML output by default. It appears DOMDocument requires preserveWhiteSpace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!

I'll be writing about how awesome Magerun is shortly, but just one of its cool features is the ability to dump out a merged version of Magento's config.

This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.

The resulting xml though is pretty raw and unformatted, but xmllint can fix that.

xmllint expects a file to work with, so we can use bash's process substitution feature to feed it the output without creating temporary files.

$ xmllint --format <(magerun config:dump)
  

So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install's configuration together.

Text

If you've ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.

In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.

The other option is to write your own table-cleaning script (or copy one from somewhere), and this will work too. But it's annoying: if you don't want this log data, why write it in the first place?

So my solution is to disable it by removing the observer events that perform the logging.

I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer's type to the string 'disabled', the existing observer event is removed and replaced with something that will never be fired.

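The snippet below is a sketch of the idea rather than an exhaustive list; the observer name ('log') and the events to target come from Mage/Log/etc/config.xml, so check your Magento version for the full set:

<config>
      <frontend>
          <events>
              <controller_action_predispatch>
                  <observers><log><type>disabled</type></log></observers>
              </controller_action_predispatch>
              <controller_action_postdispatch>
                  <observers><log><type>disabled</type></log></observers>
              </controller_action_postdispatch>
              <customer_login>
                  <observers><log><type>disabled</type></log></observers>
              </customer_login>
              <customer_logout>
                  <observers><log><type>disabled</type></log></observers>
              </customer_logout>
          </events>
      </frontend>
  </config>
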
Now, you don't need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...

Text

Ahh a little WTF to start the morning.

I'm going through some PCI scan results this morning, and in the main it's going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I've audited these pages, and they definitely get routed through Magento's escaping code.

On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.

I'm running Mage 1.6.x, so this code may look a little different if you're running 1.7.

Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the _prepareLayout() method.

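Reconstructed approximately from memory (the exact code differs a little between releases), the relevant part looks something like this:

protected function _prepareLayout()
  {
      // add Home breadcrumb
      $breadcrumbs = $this->getLayout()->getBlock('breadcrumbs');
      if ($breadcrumbs) {
          $title = $this->__("Search results for: '%s'", $this->helper('catalogsearch')->getQueryText());
          $breadcrumbs->addCrumb('home', array(
              'label' => $this->__('Home'),
              'title' => $this->__('Go to Home Page'),
              'link'  => Mage::getBaseUrl()
          ))->addCrumb('search', array(
              'label' => $title,
              'title' => $title
          ));
      }
      // modify the page title (note this one IS escaped)
      $title = $this->__("Search results for: '%s'", $this->helper('catalogsearch')->getEscapedQueryText());
      $this->getLayout()->getBlock('head')->setTitle($title);
      return parent::_prepareLayout();
  }
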
Now look at the breadcrumb handling: if breadcrumbs are enabled, unescaped input is happily added, ready for output.

$title = $this->__("Search results for: '%s'", $this->helper('catalogsearch')->getQueryText());
  

The fix is easy: replace that line with:

$title = $this->__("Search results for: '%s'", $this->helper('catalogsearch')->getEscapedQueryText());
  

This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger-happy with the copy & paste keys. Later, you could imagine another engineer coming in to XSS-proof the code, fixing one spot but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.

Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.

Text

If, for whatever reason, you need to remove an entry from the Magento admin menu, you have two simple options: remove it using CSS, or drop the following into a custom module's adminhtml.xml.

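The menu path below is just an example (hypothetically, Catalog > Google Sitemap); mirror whichever entry you want to hide from the core adminhtml.xml:

<config>
      <menu>
          <catalog>
              <children>
                  <sitemap>
                      <!-- depend on a module that doesn't exist, so the item never shows -->
                      <depends>
                          <module>Nonexistent_Module</module>
                      </depends>
                  </sitemap>
              </children>
          </catalog>
      </menu>
  </config>
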
This overrides the core code pool's adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.

As always with any Magento configuration or module changes, you may need to clear caches for this to take effect.

Tags: magento
Text

Magento makes use of design patterns, or at least an interpretation of design patterns. One particularly pernicious one is Mage::getSingleton().

A Singleton, if you've not heard the term before, was popularised in the Design Patterns book by the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.

It's used heavily in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter's Once is not enough gives a pretty good (and fun to read) overview of them, or you can read Kenton Varda, who looks at the topic in depth. I generally think, though, that in Object Oriented software you're seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.

In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento promotions extension I had written, trying to figure out why it was suddenly obliterating PHP's memory_limit.

This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.

In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.

It's fairly basic (it doesn't look at multi-product combinations), but it works well enough for simple cases.

/**
   * Apply pricing rules to a synthetic quote to calculate discounted price
   * 
   * @param string $couponCode
   * @param Mage_Catalog_Model_Product $product
   * @return  float
   */
  public function applyToProduct($couponCode, $product)
  {
      $quote = Mage::getModel('sales/quote');
      $item = Mage::getModel('sales/quote_item')
          ->setQuote($quote)
          ->setProduct($product)
          ->setQty(1)
          ->setBaseDiscountCalculationPrice($product->getPrice())
          ->setDiscountCalculationPrice($product->getPrice());
  
      $validator = Mage::getSingleton('salesrule/validator')
          ->init(1, 1, $couponCode);
  
      $validator->process($item);
  
      return $product->getPrice() - $item->getDiscountAmount();
  }
  

Now, when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn't, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.

Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate.

That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.

Changing getSingleton() to getModel() took RAM usage down from 1100MB to about 80MB.
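
The change itself is just the one line in applyToProduct() above:

// was: Mage::getSingleton('salesrule/validator'), which keeps the validator
  // (and everything it references) alive for the entire index run
  $validator = Mage::getModel('salesrule/validator')
      ->init(1, 1, $couponCode);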

My suspicion is that PHP's garbage collection wasn't cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. That means it also has to be constructed on each iteration, but it allows PHP to free the memory it was using.

The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.

Text

I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling Mage::getSingleton('cataloginventory/stock_status')->rebuild() would set all grouped products to be out of stock. This didn't happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.

Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store's products will be unavailable.
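
For reference, forcing that reindex from the Magento root looks like this (shell/indexer.php info lists the available indexer codes if yours differ):

$ php -f shell/indexer.php -- info
  $ php -f shell/indexer.php -- --reindex cataloginventory_stock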

Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically the getProductStatus() method:

/**
   * Retrieve Product(s) status for store
   * Return array where key is a product_id, value - status
   *
   * @param array|int $productIds
   * @param int $storeId
   * @return array
   */
  public function getProductStatus($productIds, $storeId = null)
  {
     $statuses = array();
  
     $attribute      = $this->_getProductAttribute('status');
     $attributeTable = $attribute->getBackend()->getTable();
     $adapter        = $this->_getReadAdapter();
  
     if (!is_array($productIds)) {
         $productIds = array($productIds);
     }
  
     if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
         $select = $adapter->select()
             ->from($attributeTable, array('entity_id', 'value'))
             ->where('entity_id IN (?)', $productIds)
             ->where('attribute_id = ?', $attribute->getAttributeId())
             ->where('store_id = ?', Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);
  
         $rows = $adapter->fetchPairs($select);
     } else {
         $valueCheckSql = $adapter->getCheckSql('t2.value_id > 0', 't2.value', 't1.value');
  
         $select = $adapter->select()
             ->from(
                 array('t1' => $attributeTable),
                 array('value' => $valueCheckSql))
             ->joinLeft(
                 array('t2' => $attributeTable),
                 't1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = ' . (int)$storeId,
                 array('t1.entity_id')
             )
             ->where('t1.store_id = ?', Mage_Core_Model_App::ADMIN_STORE_ID)
             ->where('t1.attribute_id = ?', $attribute->getAttributeId())
             ->where('t1.entity_id IN(?)', $productIds);
         $rows = $adapter->fetchPairs($select);
     }
  
     foreach ($productIds as $productId) {
         if (isset($rows[$productId])) {
             $statuses[$productId] = $rows[$productId];
         } else {
             $statuses[$productId] = -1;
         }
     }
  
     return $statuses;
  }
  

This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their children's stock items are out of stock.

In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.

In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select->assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where the columns are the other way around). This presents a problem when we reach the fetchPairs() call.

Zend_Db's fetchPairs() returns an associative array where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two entries (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it gets used as the key.
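
To illustrate with made-up ids, here is roughly what fetchPairs() gives back for each column order:

// Illustrative only (made-up ids): fetchPairs() keys on the FIRST column.
  // Columns (value, entity_id): the status becomes the key, so the result
  // collapses to one entry per distinct status value:
  $broken = array(1 => 1042, 2 => 977);
  // Columns (entity_id, value): one entry per product, keyed by product id,
  // which is what the foreach ($productIds ...) lookup expects:
  $fixed = array(1039 => 1, 1040 => 1, 1042 => 2);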

The fix is straightforward enough: replace

$select = $adapter->select()
              ->from(
                  array('t1' => $attributeTable),
                  array('value' => $valueCheckSql))
  

with

$select = $adapter->select()
              ->from(
                  array('t1' => $attributeTable),
                  array('entity_id', 'value' => $valueCheckSql))
  

This way the product id is always used as the key in fetchPairs and you get a status result for each product.

Tags: magento bugs
Text

Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block from your template.

Then, to include it in a template (say homepage.phtml):

<?php echo $this->getLayout()->createBlock('cms/block')->setBlockId('identifier')->toHtml() ?>
  
Text

I'm seldom surprised by some of the horrors under the Magento hood, but today's little gem takes some beating.

On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click 'edit address', the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.

You might see an otherwise meaningless error like this:

Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
  

The cause of this is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.php.

/**
   * Edit order address form
   */
  public function addressAction()
  {
      $addressId = $this->getRequest()->getParam('address_id');
      $address = Mage::getModel('sales/order_address')
          ->getCollection()
          ->getItemById($addressId);
      if ($address) {
          Mage::register('order_address', $address);
          $this->loadLayout();
          $this->renderLayout();
      } else {
          $this->_redirect('*/*/');
      }
  }
  

The problem is the getCollection()->getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), takes the collection, iterates over it assigning each address to an array keyed by its entity id, and then returns whichever value matches $addressId.

Now, there's another, really simple way to do the same thing. It doesn't involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It's very familiar.

$address = Mage::getModel('sales/order_address')->load($addressId);
  

This one line does the same thing far more efficiently. Now, the thing that worries me is that I can't see any reason why they aren't doing this already. The changed code works, nothing appears to break and the speed boost is (obviously) immense.

So why is it not done this way?

Tags: magento wtf
Text

Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening.

Turns out there's already something there to do it

Mage_Payment_Model_Method_Abstract::_debug($data);
  

If you want to call it from outside the payment_method inheritance tree use

Mage_Payment_Model_Method_Abstract::debugData($data);
  

In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:

echo Mage::getStoreConfig('payment/usaepay/debug'); 
  >> 1
  
Tags: magento php
Text

Magento rewrites behave differently when overriding a helper class compared to overriding a block class.

In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.

Tags: magento
Text

If, like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.

As one of my clients' sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.

If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.

$product->setNumSales(1234);
  $product->getResource()->saveAttribute($product, 'num_sales');

The saveAttribute() method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find the attribute code, look it up either in the database (eav_attribute) or in the admin backend under Catalog > Attributes.

Using the getResource()->saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.

Update 4 Mar 2014 - Please take a look at DannyD's comment below for a more robust approach to mass attribute updates.

Tags: magento