Text

If you're trying to add a user to a group under OS X you might get stumped. This is straightforward enough on Linux, right? You go

$ usermod -a -G thegroup theuser
  

And job done. But OS X uses Open Directory rather than traditional flat files like /etc/passwd and /etc/group to store information about users and groups, so the typical Unix commands we're used to don't work.

The dscl (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. That's handy if, for example, you want to add your user to the wheel group to make use of password-free sudo.

$ sudo dscl localhost -append /Local/Default/Groups/<groupnamehere> GroupMembership <usernamehere>
  
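
To take the wheel example from above, adding a user called aaron (an illustrative name) and then reading the group back to check the change took would look something like this:

$ sudo dscl localhost -append /Local/Default/Groups/wheel GroupMembership aaron
  $ dscl localhost -read /Local/Default/Groups/wheel GroupMembership
  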
Text

If for some reason you have forgotten the root password for an existing MySQL installation you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.

First thing, shut down the running instance and then restart it directly

$ sudo -u <mysql_user> mysqld_safe --skip-grant-tables --skip-networking
  

The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions, so you don't want it listening on the network.

Once you've started the server up, log in without a password and issue an UPDATE query against the mysql.user table.

$ mysql -uroot mysql
  mysql> UPDATE user SET Password = PASSWORD('newpassword') WHERE User = 'root';
  

Close down mysqld and restart. You're good to go.
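
Once it's back up with the grant tables in place, it's worth checking the new password actually works:

$ mysql -uroot -p
  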

Text

GNU Find never ceases to amaze me with its utility.

Yesterday I had to do an emergency restart of MySQL in production and the resulting Magento report/xxxx files swamped everything else I might have wanted to look at.

So specifically, I wanted to delete all the files that were created between a start and end date.

GNU find makes this easy

$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
  

This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).

If you do not have two files to act as date range boundaries, you can use touch to create them.

$ touch -t yyyymmddHHMM start_date_file
  $ touch -t yyyymmddHHMM end_date_file
  

Then supply these file names to -newer and ! -newer.
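
For example, to list everything modified during a particular 24-hour window (the timestamps here are just illustrative):

$ touch -t 201210010000 start_date_file
  $ touch -t 201210020000 end_date_file
  $ find . -type f -newer start_date_file ! -newer end_date_file -ls
  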

To delete the files we can use -exec.

$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
  

Here it's the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes '{}' with each found filename) and \; terminates the command sequence (much as ; does in regular bash; it's escaped so the shell passes it through to find).

Text

For about eight years I ran Gentoo Linux before I eventually gave it up and moved on to Ubuntu. It was remarkable in that it provided a BSD-like ports system and let you compile your system from the ground up. It also tended to break, a lot.

Even today, almost all x86 Linux distributions can run (in theory) on a 32-bit 386 processor. Let's be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.

In the early days you had to start from what was called a 'stage 1' install (I don't think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You'd run the command 'emerge system' and it would go off and build gcc, glibc, coreutils and so on. Once these were built you'd then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.

Anyway, this took a long time, particularly on sub-gigahertz Pentium IIs and IIIs, and Gentoo systems tended to break a lot; by break, in the absolute best case, I mean merely became unbootable.

The process to recover the system was pretty much the same as installing it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try to repair whatever damage you had caused.

These days if you do something silly, like I don't know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can't boot the machine).

This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.

Here's how you do it.

Boot up from a livecd (or USB key), get the network card modules loaded and get a DHCP address. With the Precise live DVD you can do all of this pretty easily by selecting the 'try without installing' option from the bootloader.

Once you're online, you need to prepare the mount. The first step is to mount the root partition somewhere; I'll use /mnt/ubuntu here (it can be whatever you like). If you're not sure of your partition numbers, your livecd will almost certainly come with fdisk: run it against the disk and press 'p' to print the partition table.
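
If you'd rather not drive fdisk interactively, it will also just print the partition table for a given disk (assuming the disk is /dev/sda):

$ fdisk -l /dev/sda
  

With the right partition identified, mount it: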

$ mount -t ext4 /dev/sda5 /mnt/ubuntu
  

If you have a separate boot partition mount that too.

$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
  

Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.

$ mount -t proc none /mnt/ubuntu/proc
  $ mount -o bind /dev /mnt/ubuntu/dev
  $ mount -o bind /sys /mnt/ubuntu/sys
  

In the case of the sys and dev dirs, we need to reference the exact same mount points as the host, so we use the -o bind option.

Last thing, we want functional network name resolution, so we copy over the host's /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.
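
That's just a straight copy:

$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
  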

Now the chroot is ready

$ chroot /mnt/ubuntu /bin/bash
  $ source /etc/profile
  

The chroot will behave pretty much as it would if you'd booted into it normally, with a few exceptions: the kernel and kernel modules will be those of the host, so if you need access to some specific hardware you'll need to set that up on the host side.

My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.
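
That boiled down to something like the following from inside the chroot (the disk device here is illustrative; point grub-install at whatever disk you boot from):

$ apt-get update && apt-get upgrade
  $ grub-install /dev/sda
  $ update-grub
  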

Text

As with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.

$ varnishd -C -f /path/to/mysetup.vcl
  > ...
  

Varnish will compile the file and print out its full configuration as output. If there's an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this

> Message from VCC-compiler:
  > Expected an action, 'if', '{' or '}'
  > ('input' Line 82 Pos 6)
  >     vcl_hash(req.http.Cookie);
  > -----########------------------
  >
  > Running VCC-compiler failed, exit 1
  
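
Since varnishd exits non-zero when the compile fails, the same check drops neatly into a deploy script; redirecting the generated output means only errors show up:

$ varnishd -C -f /path/to/mysetup.vcl > /dev/null && echo "VCL OK"
  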
Text

If you want to add a Unix user to a supplementary group (say, for example, user 'aaron' belongs to group 'aaron' but you want to add him to the 'wheel' group as well) you use the usermod command.

$ usermod -a -G wheel aaron
  

The -a argument is very important: it ensures the groups passed to -G are appended to the user's existing list of supplementary groups. Without it, the existing groups are replaced by whatever you supply.
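
You can check the result with id, which lists the group names a user belongs to (aaron being the example user from above):

$ id -nG aaron
  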

Text

Working with PHP websites you'll regularly need to export and import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, for creating and restoring backups. As database sizes increase this gets riskier, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.

The Basics

The process for creating and restoring a snapshot is trivial

$ mysqldump -uuser -p mydatabase > mydump.sql    # export
  $ mysql -uuser -p mydatabase < mydump.sql        # import
  

For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for a single-byte character set and up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it's also considerably slower than gzip.

It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.

$ mysqldump -uuser -p mydatabase | bzip2 -c > mydump.sql.bz2
  

While it seems a nice efficient way to do your backup, this should be avoided: the compression step slows the dump down, and (by default with MyISAM) you're locking tables for the duration, denying other clients write access to them. InnoDB implements row-level locking, which is slightly less offensive, but holding locks any longer than necessary should still be avoided.
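
If you want a compressed dump, it's kinder to compress after the dump itself has finished and the locks have been released (same illustrative credentials as above):

$ mysqldump -uuser -p mydatabase > mydump.sql
  $ gzip mydump.sql
  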

When importing a large database, the choice of compression format is important. You have to trade off decompression speed against file size: the extra CPU time spent decompressing a bzip2 dump may not be worth the few megabytes saved over the faster gzip. Whatever your choice, importing a compressed dump is very easy.

$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump
  $ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump
  

Locking and Transactional Integrity

As explained briefly above, MySQL's storage engines come with some limitations. In the worst case, with MyISAM, entire tables are locked while mysqldump runs, which means other clients are not permitted to write to a table while it is being dumped. If you have large MyISAM tables this poses clear problems when backing up a running application.

InnoDB is slightly better because it uses row-level locking: it locks only the rows affected by a query, not the whole table, which makes a conflict far less likely while performing a backup on a running application. InnoDB, being a transactional storage engine, does however allow for the possibility that active transactions are underway while you're attempting your backup.

Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.

$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase > mydump.sql
  

The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the data without blocking other applications. Writes can occur while the backup is taking place and they will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the consistency of those tables can be lost as writes occur to them during the backup process; that risk has to be weighed against the risk of blocking access to the tables during a lengthy backup.

Improving Import Performance

Choice of compression format will have a large bearing on import performance: gzip is appreciably faster to decompress than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.

Disabling keys significantly improves import performance, as MySQL only rebuilds the indexes once the table data has been loaded. With keys enabled, the indexes are updated after each row is inserted, which is suboptimal for a batch import.

By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once each table has been fully imported; --no-autocommit wraps each table's inserts in a single transaction.
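
Putting those together, a dump tuned for a faster re-import might look something like this (same illustrative credentials and database name as above):

$ mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -uuser -p mydatabase > mydump.sql
  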

Further Reading

This is only a brief look at using mysqldump for backups. It's a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There's plenty of documentation out there on using the tool, but the best place to start is with the official docs.