Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?
This year I didn’t bother with any. Back in November, though, I resolved to read more technical books, and in particular to focus on the ‘classics’. My motivation was stirred by Panagiotis Louridas’s essay ‘Rereading the Classics’ from the book Beautiful Architecture. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.
Louridas suggests that Smalltalk is a classic, and by way of justification he quotes Italo Calvino’s Why Read the Classics? (1986):
The classics […] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.
A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.
The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.
A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.
Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our ‘modern’ OO languages.
It is striking that the language itself (and indeed much of the material written about it) is so old. The most recent version of Smalltalk is Smalltalk-80, and although there are modern implementations, the overwhelming bulk of the syntax and environment still coheres to the 1980 standard. Yet download Pharo, read Alan Kay’s The Early History of Smalltalk (1993) or Dan Ingalls’s Design Principles behind Smalltalk (1981), and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing: all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly abstract our programs and programming environments from bare metal.
It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat, but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. They formalised much of the vocabulary of OO software development while building Smalltalk, yet it’s hard not to feel that some of the energy and innovation of Kay’s thinking didn’t survive the succession to C++ and Java.
Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of ‘Agile’, or at least ‘iterative’, development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.
Maybe one reason we tend to forget what others have learned is that the average developer only reads one technical book per year. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks’s Mythical Man Month in 2004 rather than 2012.
My excuse has always been a lack of quality time. Last year, in July, I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more; in practice it has been, but not as easy as I thought or hoped.
I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn’t leave a lot of spare time and what spare time there is, is usually late at night when it’s hard to concentrate.
Since making a determined effort to read more, I’ve read four books cover to cover and cherry-picked bits out of another four. It feels good, and the trick I’ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.
It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it’s pointless trying to delve into SICP. It helps to read certain books during the work day; for example, I’ve been re-reading GOOS and going through Kent Beck’s TDD by Example over lunchtimes. I actively practise TDD, so reading a chapter from one of these books midday helps me relate it directly to what I’m working on when I go back into the office.
I have a huge reading list set up in Google Reader, and I’m starting to think it’s distracting. I’m slightly OCD in that I need to keep the unread count at zero, so at the end of a thirty-minute work sprint I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of a book over a blog is the tendency for a book’s ideas to be more fully formed and logically structured. I realise the irony of saying this while writing a blog myself; blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.
As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay’s ACM paper The Early History of Smalltalk. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:
Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.
In Software Engineering, we are so busy looking forward that we don’t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.
It is 2013: we (still) don’t have flying cars or hoverboards, and, as developers, we still use terminals to interact with our operating systems. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits that improve my command-line efficiency.
Here’s a small selection I have picked up recently that I didn’t know.
Run the previous command as sudo. This is great when you realise you needed to run something as root.
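The command itself is the commandlinefu classic (‘!!’ is bash/zsh history expansion for the previous command; the apt-get line below is purely illustrative):

```shell
$ apt-get install htop      # oops, this needed root
$ sudo !!                   # runs: sudo apt-get install htop
```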
Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.
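In zsh this isn’t bound by default; a few lines in .zshrc wire it up (a sketch; for the record, bash ships the same Ctrl-X Ctrl-E binding out of the box in emacs mode):

```shell
# edit the current command line in $EDITOR with Ctrl-X Ctrl-E
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^x^e' edit-command-line
```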
cat /etc/passwd | column -s':' -t
column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.
These next few are specific to zsh. While I do love bash, since switching to zsh I haven’t really looked back. When you work with a terminal every single day, it’s things like this that you can’t give up.
aaron@tempest ~ $ d
0  ~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0  /etc
1  ~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $
The ‘d’ command lists the directory stack, and entering an integer then switches you directly to the directory at that index in the stack. It is a killer app.
Moving between directories is also very pleasant in zsh. Use ‘..’ to move up a directory, and simply type a directory’s name to move into it.
aaron@tempest ~ $ ..
aaron@tempest /Users $ aaron
aaron@tempest ~ $
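If your zsh doesn’t behave this way out of the box, these options and aliases provide it (oh-my-zsh sets equivalents up for you; listing them here is an assumption about the setup in use):

```shell
setopt auto_cd      # a bare directory name (or ..) acts as cd
setopt auto_pushd   # every cd pushes the old directory onto the stack
alias d='dirs -v'   # list the directory stack with numeric indices
# frameworks like oh-my-zsh also alias 1-9 to 'cd -<n>', so a bare index jumps
```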
This last one is a trick I’ve known for a few years. I don’t know exactly how much time it has saved me, but I use it every single day.
In vim, if you’re editing a file that requires root (or any other user) permissions, you can write the file by doing
:w !sudo tee %
I use it so much that I’ve set up a leader key binding in my .vimrc
nnoremap <leader>sr :w !sudo tee %<CR>
There’s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied…
I make all my configs available online at GitHub, if you’re interested in seeing how I set up my environment.
MacPorts puts its libraries in non-standard locations, so to build the mysql2 gem on an OS X machine you will need to do a little bit of extra work to ensure that gem calls make with the appropriate options.
To cut a short story very short, you do this (at least if you have MacPorts in /opt/local (the default) and are using the mysql55 port):
$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
There’s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the command line (i.e. supplying -d arguments).
This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.
The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.
$ PHP_IDE_CONFIG='serverName=mydevmachine.local' php -dxdebug.remote_host=192.168.0.1 myphpscript.php
Now this works fine. However, if you want to debug during a PHPUnit test, normally you would do this:
$ PHP_IDE_CONFIG='serverName=mydevmachine.local' phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
Unfortunately this doesn’t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I’ve told PHPUnit. I’ll look into this a bit more later, but I suspect PHPUnit isn’t passing on the php.ini settings in a timely fashion for xdebug to hook into.
The solution to this problem is to make use of SSH port forwarding. This works exactly the same for a virtual machine as it does for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!) much more straightforward (and less insecure).
$ ssh -R 9000:localhost:9000 myvm.local
This sets up myvm.local to forward any connection made to its own localhost on port 9000 back to the SSH client’s port 9000. When xdebug goes to connect to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.
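With the tunnel up, the xdebug settings on the VM or remote host can simply target their own localhost. A sketch, using xdebug 2.x setting names:

```ini
; php.ini on the VM / remote host
xdebug.remote_enable = 1
xdebug.remote_host = localhost   ; the reverse tunnel carries this home
xdebug.remote_port = 9000
```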
It’s a bit of a hack, but a time-saving one. The other alternative is vim and its xdebug plugin, which isn’t bad, but once you’ve experienced the power of PHPStorm’s debugging implementation it’s hard to go back.
It has been an exciting twelve months to be a PHP developer: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more ‘in vogue’ languages.
In technology we often see change happen in sudden, explosive steps, often coinciding with developments in a technology’s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of Object Oriented Programming in the early 00s, which led to PHP 5’s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005, when Ruby on Rails exploded into everyone’s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of ‘full-fat’ Model View Controller (MVC) frameworks, the most successful being Zend’s and Symfony.
The arrival of PHP 5.3, with features like namespaces, PHAR and closures, together with the ubiquity of GitHub, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second-generation frameworks from Zend and Symfony leveraging these technologies.
One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all have robust dependency management tools available; in the PHP camp, PEAR wasn’t really cutting it.
The success of Symfony2 in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.
Luckily for us, the people behind Composer (again taking considerable cues from the Ruby community) have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers’ libraries.
Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki’s garage, PEAR was conceived as PHP’s answer to Perl’s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end users. PEAR’s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: package x requires stable package y, while package z requires beta package y. You can’t install both. Dependency and package management has moved on a long way since 1999.
Over time PEAR’s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking any sort of dynamism. If you make something easy, people will use it. PEAR is difficult for developers and users alike.
Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to navigate the PEAR jungle. Here the rise and rise of GitHub has been key. Composer can sit over the top of the code distribution services provided by GitHub, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.
There’s no compelling need now to constantly rewrite basic library components (I think we’ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high-quality PHP frameworks, libraries and utilities.
PHPSpec, Behat, Twig, Mockery and Doctrine are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.
Anyway, so (after a fashion) I come to the tool that motivated me to write this post, n98-magerun.
The name is horrible, but the tool itself is brilliant. In short, it’s Drush for Magento and it’s wonderful. It is one of those tools that makes you wonder what on earth you did before it.
I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months, Christian Münch and friends have seen a small tool quickly develop into the kind of utility we’ve all wanted but never had the time or patience to build ourselves.
Magerun is elegantly simple for the user, and cleanly extensible for developers. It is a perfect illustration of why it’s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.
It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.
Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!
I’ll be writing about how awesome Magerun is shortly, but just one of its cool features is the ability to dump out a merged version of Magento’s config.
This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.
The resulting XML, though, is pretty raw and unformatted; xmllint can fix that.
xmllint expects a file to work with, so we use bash’s process substitution feature to avoid having to create temporary files.
$ xmllint --format <(magerun config:dump)
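The same trick works with anything that emits XML. A quick, runnable example (the sample document here is obviously made up):

```shell
# pretty-print XML coming from another command, no temporary file needed
xmllint --format <(echo '<config><node>value</node></config>')
```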
So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install’s configuration together.
A number of git commands take the --name-only argument, which can help give you an overview of what is going on between two branches, or in a specific commit.
$ git show --name-only <commit>
This will give you a list of the files affected by that commit.
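A quick way to see it in action in a throwaway repository (paths and commit messages here are illustrative):

```shell
# build a two-commit scratch repo, then list the files in the latest commit
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > a.txt; git add a.txt; git commit -qm 'add a'
echo two > b.txt; git add b.txt; git commit -qm 'add b'
git show --name-only --format= HEAD   # lists b.txt only
```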
Alternatively, if you don’t care about the specific content differences between two branches and only want to see which files differ, you can do:
$ git diff master..origin/master --name-only
This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what’s different before merging or rebasing.
If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.
First, shut down the running instance and then restart it directly:
$ sudo -u <mysql_user> mysqld_safe --skip-grant-tables --skip-networking
The --skip-networking option is important, as with the grant tables skipped, any user can connect to the running mysqld service with full permissions.
Once you’ve started the server up, log in without a password and issue an update query against the mysql.user table:
$ mysql -uroot mysql
mysql> UPDATE user SET password=password('newpassword') WHERE User = 'root';
Shut down mysqld and restart it normally. You’re good to go.
TL;DR Basically it is all Google’s fault.
We’ve seen some pretty epic PHP rants this year; probably the most famous among them are PHP: a fractal of bad design and Jeff Atwood’s latest (in what seems to be a biennial broadside), The PHP Singularity.
The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer), I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don’t particularly matter.
One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don’t feel this is an inherent trait of PHP itself so much as symptomatic of PHP’s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other thing is that PHP came into existence as a scratch for a C programmer’s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was generally respected. PHP, like the web itself, has moved on dramatically since then.
A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.
This can be partially forgiven because, owing to the age of the language, there’s plenty of out-of-date information out there with high rankings in Google. The ubiquitous w3schools is an unfortunate example of bad practices ranking well ahead of sites with more modern approaches to solving problems in PHP.
So ‘New PHP’ is very different from ‘Old PHP’. But ‘Old PHP’ is what most people find when searching Google, and this confuses people.
We see this manifested in posts like Will Rails become the new PHP. That post has a number of spectacular shortcomings, most egregiously the author’s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.
There’s plenty of support out there for budding PHP programmers, whether on the web, on forums or on IRC. There are countless well-attended, supported and growing conferences, meetups and the like for PHP programmers. One thing a community cannot do is force Google to nuke w3schools’ PageRank, which means that brilliant efforts like PHP The Right Way get swamped by old, incorrect and at times dangerous dreck.
And what the ‘Will Rails become the new PHP’ author perhaps hasn’t realised is that Rails, at least in the terms he’s trying to couch it, has already become the ‘New PHP’. I am a novice Rails developer; I like to hack around in it, as it can be quite fun to spike out solutions. What I’ve been struck by is the sheer amount of bad advice out there, advice novices will come across if they turn to Google for help.
If you’re well versed in a platform, you learn through brutal experience what works and what doesn’t, and your nose becomes finely tuned to bullshit. When I read a PHP article I know instinctively whether what I am reading is reliable. But with Rails, as a novice, I don’t quite have that sense, beyond my background experience with other programming languages.
So let’s look at an example. I’ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I’m using the respond_to method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn’t make a lot of sense for a webservice that otherwise speaks XML and JSON.
Other global exceptions similarly respond with HTML. I don’t want to wrap every action in a begin/rescue block, and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch exceptions globally.
In my journey of (Google) discovery I came across this blog post and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard?
Now, my general-purpose programming brain recognised that this code, while solving my problem, is not ideal. Why not? This one method has a lot of responsibility, and method names with plurals in them are usually a code smell. Over time, as more specific exceptions need handling, I would end up with code as easy to read as Goethe’s Faust, photocopied and in the original Gothic script (not easy). What we have here is a God Method in training.
Now, what if I were a Ruby/Rails/programming novice? I have plenty of other stuff to learn; I’m going to go right ahead and cargo-cult this code into my webapp and move on. Just like all the rookie PHP coders do, right?
Well, I didn’t do that. I saw that this rescue_from method was pretty awesome and so I went to the Rails API docs to look it up.
API docs for any language are pretty terse, but what jumped out at me was this line:
“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
This isn’t great documentation admittedly, but it basically means that if you put rescue_from Exception at the bottom of the list of rescue_from handlers in your application_controller.rb file, then, since everything derives from Exception, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn’t realise this, and so his solution was needlessly complicated.
What can we learn from this? Well, that Rails programmers live in a glass house and shouldn’t throw stones, for one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find the good stuff.
Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn’t solve the problem. Efforts like PHP The Right Way are how PHP is trying to fix it. If Rails really doesn’t want to be the ‘Old PHP’, the community needs to realise it’s less to do with languages and platforms, and more about SEO.
Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.
To see which branches your local repo thinks exist, you do something like this:
$ git branch -rv
> origin/1620-upgrade 2e0cc56 Ignore active local.xml from vc
> origin/HEAD         -> origin/master
> origin/cas-sso      2351be5 Add gateway logiin and logout support
> origin/giveaways    63daf5a Use cms blocks for banner placements
> origin/master       496c975 Merge affiliate module
> origin/newskin      d7220c9 Optimise skin and ui images
> origin/release      496c975 Merge affiliate module
So this is my local Magento git repository. Many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously run $ git push origin :branch from another host).
To refresh things, I need to prune my branch list. The git incantation to do this is:
$ git remote prune origin
> Pruning origin
> URL: dev@vcs:git/store.git
>  * [pruned] origin/1620-upgrade
>  * [pruned] origin/giveaways
>  * [pruned] origin/newskin
Looking at the remote branch list again:
$ git branch -rv
> origin/HEAD    -> origin/master
> origin/cas-sso 2351be5 Add gateway logiin and logout support
> origin/master  496c975 Merge affiliate module
> origin/release 496c975 Merge affiliate module
If you’ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.
In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle it myself directly via crontab tasks.
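For what it’s worth, the crontab route can lean on the log-cleaning script Magento ships in shell/. The install path and schedule below are illustrative:

```shell
# crontab entry: run Magento's bundled log cleaner nightly, keeping 7 days
0 3 * * * php -f /var/www/store/shell/log.php -- clean --days 7
```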
The other option is to write your own table-cleaning script (or copy one from somewhere), and this will work too. But it’s annoying: if you don’t want this log data, why write it in the first place?
So my solution is to disable it by removing the observer events that perform the logging.
I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string ‘disabled’, the existing observer event is removed and replaced with something that will never be fired.
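The nodes in question look something like the following. This is a sketch reconstructed from the usual recipe rather than copied from my actual file; the exact set of Mage_Log events may differ between Magento versions:

```xml
<?xml version="1.0"?>
<config>
    <frontend>
        <events>
            <!-- replace each of Mage_Log's observers with the string
                 'disabled' so the logging never fires -->
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
            <customer_logout>
                <observers><log><type>disabled</type></log></observers>
            </customer_logout>
        </events>
    </frontend>
</config>
```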
Now, you don’t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full…
Ahh a little WTF to start the morning.
I’m going through some PCI scan results this morning and in the main it’s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I’ve audited these pages, and they definitely get routed through Magento’s escaping code.
On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.
I’m running Magento 1.6.x, so this code may look a little different if you’re running 1.7.
Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:
Now, if breadcrumbs are enabled, unescaped input is happily added, ready for output:
$title = $this->__("Search results for: '%s'", $this->helper('catalogsearch')->getQueryText());
The fix is easy: replace that line with:
$title = $this->__("Search results for: '%s'", $this->helper('catalogsearch')->getEscapedQueryText());
This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger-happy with copy and paste. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one spot but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.
Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.
I get really frustrated with Ruby packages: they promise so much, and when, on that special day, the moon is aligned with Mars, it all just works and life is great.
Unfortunately this doesn’t happen very often and when using a stack of Rubygems, you almost always get bitten by something.
My cause for complaint today is Vagrant and Chef, or specifically Chef Solo. Vagrant is fine: it does what you tell it to do, and for most use cases Chef Solo is the right tool for provisioning your virtual server. The Vagrant docs on Chef Solo unfortunately fib: they say you can use data bags with Chef Solo, but by default you cannot.
This is a big deal, as many useful Chef recipes make heavy use of data bags, which let you provide environment-specific configuration for your provisioning. Data bag support in Chef Solo is not in the stock Chef gem (currently version 10.12.0); you need version 10.14.0 or above, which means building the gem from source.
I use Veewee to build my Vagrant base boxes (you should too, it’s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source rather than installing it directly via Rubygems.
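The relevant part of postinstall.sh ends up looking something like this (a sketch: the repository URL, rake task and gem filename reflect my recollection of the Chef source tree at the time and may have changed):

```shell
# build and install the chef gem from source instead of `gem install chef`
gem install rake --no-ri --no-rdoc
git clone git://github.com/opscode/chef.git /tmp/chef
cd /tmp/chef
rake gem                          # builds pkg/chef-<version>.gem
gem install pkg/chef-*.gem --no-ri --no-rdoc
```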
You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.
If, for whatever reason, you need to remove an entry from the Magento admin menu, you have two simple options: hide it with CSS, or drop the following into a custom module’s adminhtml.xml.
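The snippet might look something like this. It’s a sketch: the menu path (here Catalog > Search Terms) and the module named in the dependency are illustrative, not from my actual module:

```xml
<?xml version="1.0"?>
<config>
    <menu>
        <catalog>
            <children>
                <search>
                    <!-- depend on a module that doesn't exist, so the
                         item never renders -->
                    <depends>
                        <module>Mage_DoesNotExist</module>
                    </depends>
                </search>
            </children>
        </catalog>
    </menu>
</config>
```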
This overrides the core code pool’s adminhtml definition and adds a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.
As always with any Magento configuration or module change, you may need to clear caches for this to take effect.