jeremykendall.net

Will Senior Engineer for Food

UPDATE: Sorry, companies that didn’t snatch me up while I was available, but I’m no longer on the market! As of August 2016 I’m working as a Senior Software Engineer at Alegion. I’m stoked to be a part of an amazing company and a member of one of the best teams I’ve ever had the pleasure to work with.


After two amazing, challenging, crazy, exhausting years with Graph Story, I’ve decided that it’s time to find a new challenge and a new workplace to call home.

I’ve grown more as a developer in the past two years than I have in my entire career. I’ve also worked harder than I’ve ever worked before. At the end of these past two years I find myself utterly exhausted and in need of a change of pace.

Available for Hire

If you’re in search of a Senior Engineer to join your dev team, I’m looking and immediately available! I bring ~15 years of web development experience to the table, a passion for testable, maintainable code, an insatiable appetite for learning new things, and some leadership experience.

If you’d like to know more, here’s my resume (a printable version is available here), you can take a look at my open source work on GitHub, or even spend a few minutes browsing through some of my photos.

What I’m Looking For

I want to get out of my comfort zone and learn and do some new things. Don’t need a PHP dev but you’re looking for a great Senior Engineer? If you’re willing to teach, I’m willing to learn (with the caveat that I’d much prefer an open source shop).

I need to work remotely. My wife and I are about to have kiddo #2 and both of our families are here in Memphis. Relocating right now would be extremely difficult, although I would consider relocating at some point in the near future for the right opportunity.

Additionally, here are the top three things I’m looking for in a new employer:

  1. Work-life balance: I’m a crazy hard worker, and I can and will work some really long hours, but I’m looking for 40-50 hours/week as a regular thing.
  2. A strong engineering culture working on big things: I’m less concerned about what you do as a company than I am with your engineering culture and practices. If you have a medium-large group of brilliant engineers that walk the best-practices walk, I’m in.
  3. Tech community support and involvement: I want to work for a company that values sending engineers to conferences, supports and uses open source software, gives back to the community by means of supporting user groups, open sourcing tools where possible, and contributing to open source projects as a matter of course.

Contact Info

If you’d like to talk turkey, hit me up at jeremy -at- jeremykendall -dot- net. I look forward to hearing from you!

Detecting and Converting File Encoding

I had a couple of files show up in a project that weren’t utf-8 encoded and needed to be converted. In the past, I found detecting encoding and converting from one encoding to another to be an arcane and challenging task. This morning it only took a few tries on Google and, a few superuser.com answers later, I was good to go.

Encoding Detection

I was quickly able to determine that the CSV file in question was encoded with utf-16le by using the following command (note that -I is the BSD/macOS spelling; GNU file on Linux uses the lowercase -i flag for the same thing):

$ file -I unknown-encoding.csv
unknown-encoding.csv: text/plain; charset=utf-16le

Converting to UTF-8

Converting the file to a new encoding was just as easy:

iconv -f utf-16le -t utf-8 unknown-encoding.csv > new-encoding.csv
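If you want to sanity-check the conversion end to end, you can manufacture a UTF-16LE file yourself and round-trip it (file names here are arbitrary):

```shell
# Create a UTF-8 sample, convert it to UTF-16LE, convert it back,
# and confirm the round trip was lossless.
printf 'name,city\nJosé,Memphis\n' > original.csv
iconv -f utf-8 -t utf-16le original.csv > sample-16le.csv
iconv -f utf-16le -t utf-8 sample-16le.csv > roundtrip.csv
cmp original.csv roundtrip.csv && echo "round trip OK"
```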

References

The commands above were sourced from the following superuser questions and accepted answers:

Forcing an NTP Update

UPDATE: Dan Horrigan pointed out a much better solution to this problem on Twitter, one that has nothing to do with NTP at all. He adds a VirtualBox config option in his Vagrantfiles to update the virtual machine’s time from the host every 10 seconds. Nice!

UPDATE 2: Oops. I didn’t explain that correctly. Here’s Dan to set the matter straight:

ORIGINAL POST: I frequently find myself needing to update the time on my Vagrant boxes. This is especially true when I’m testing Query Auth between two different Vagrant boxes (mimicking a client/server relationship) as Query Auth will fail a request with a timestamp that varies too greatly from the server’s timestamp (±15 seconds by default). Even though I run NTP on all my virtual servers, time drift is a frequent problem. Furthermore, when the drift is too great (can’t find a reference to how large the drift needs to be), NTP won’t correct it all at once.

A quick Google search later and, once again, it was Ask Ubuntu to the rescue, courtesy of Martin Schröder’s answer to the question “How to force a clock update using ntp?”

One quick bash script later and excessive clock drift is now a trivial issue.

#!/bin/bash
# Forces an ntp update
#
# Based on SO user Martin Schröder's answer to "How to force a clock update
# using ntp?": http://askubuntu.com/a/256004/41943

# Fail fast (set -e will bail at first error)
set -e

if [ "$EUID" -ne 0 ]; then
    echo "ERROR: '$0' must be run as root."
    exit 1
fi

service ntp stop

echo "Running 'ntpd -gq'"
ntpd -gq

service ntp start

Postscript

If you’re unfamiliar with the particulars of creating bash scripts, here are the steps you can take to create your own version of the above script. You may need to preface the commands using sudo.

  • Copy and paste the above script into a text file (I’ve named mine force-ntp-update).
  • On the command line, call chmod a+x /path/to/force-ntp-update to allow all users to execute the command.
    • If you’re the only user who should be able to execute the command, change the above to chmod u+x /path/to/force-ntp-update.
  • Move the script somewhere in your path: mv /path/to/force-ntp-update /usr/local/bin.
  • Done!
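If you prefer, the steps above collapse into a single sequence. This sketch writes a trimmed version of the script via a heredoc (comments and the askubuntu attribution omitted for brevity); the final move onto the PATH is left commented out since it needs sudo:

```shell
# Write the script, make it executable, then (with sudo) move it onto the PATH.
cat > force-ntp-update <<'EOF'
#!/bin/bash
set -e
if [ "$EUID" -ne 0 ]; then
    echo "ERROR: '$0' must be run as root."
    exit 1
fi
service ntp stop
ntpd -gq
service ntp start
EOF
chmod a+x force-ntp-update
# sudo mv force-ntp-update /usr/local/bin/
```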

Now anytime you need to force an update, simply call sudo force-ntp-update.

Ubuntu 14.04 Gearman Config Bug

While configuring Gearman (for async jobs here at Graph Story), I ran across a bug that causes Gearman to ignore its config file. The bug shows up on Ubuntu 14.04 Trusty Tahr in Gearman version 1.0.6-3 installed from the Ubuntu repositories.

Full disclosure: I can only confirm the existence of this bug in the above mentioned environment. I have not tested for the bug in any other configuration.

Bug Report - TL;DR

There is a bug report filed by Artyom Nosov which details the issue and includes a patch. The patch diff is linked from the bug report and is included below:

diff -urN gearmand-1.0.6-old/debian/gearman-job-server.upstart gearmand-1.0.6/debian/gearman-job-server.upstart
--- gearmand-1.0.6-old/debian/gearman-job-server.upstart	2013-11-11 01:58:42.000000000 +0400
+++ gearmand-1.0.6/debian/gearman-job-server.upstart	2013-12-13 22:30:32.392281779 +0400
@@ -9,4 +9,7 @@
 
 respawn
 
-exec start-stop-daemon --start --chuid gearman --exec /usr/sbin/gearmand -- --log-file=/var/log/gearman-job-server/gearman.log
+script
+    . /etc/default/gearman-job-server
+    exec start-stop-daemon --start --chuid gearman --exec /usr/sbin/gearmand -- $PARAMS --log-file=/var/log/gearman-job-server/gearman.log
+end script

The Fix

Gearman runtime configuration is handled by /etc/default/gearman-job-server in Ubuntu. The default upstart job does not use the Gearman config file, causing Gearman to run with its defaults, which may or may not be acceptable to you. To resolve the issue, update /etc/init/gearman-job-server.conf using the diff included above. For reference, here is my entire gearman-job-server.conf file:

# -*- upstart -*-

# Upstart configuration script for "gearman-job-server".

description "gearman job control server"

start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]

respawn

# exec start-stop-daemon --start --chuid gearman --exec /usr/sbin/gearmand -- --log-file=/var/log/gearman-job-server/gearman.log

# PATCH: https://bugs.launchpad.net/ubuntu/+source/gearmand/+bug/1260830
script
    . /etc/default/gearman-job-server
    exec start-stop-daemon --start --chuid gearman --exec /usr/sbin/gearmand -- $PARAMS --log-file=/var/log/gearman-job-server/gearman.log
end script

The Benefit

Gearman will now respect the settings found in /etc/default/gearman-job-server, allowing you to configure Gearman with command line options that you’d otherwise have to add to the init script. In the case of the Graph Story implementation, I’m binding Gearman to localhost and using MySQL as a persistent queue. Here’s what that config file looks like:

# This is a configuration file for /etc/init.d/gearman-job-server; it allows
# you to perform common modifications to the behavior of the gearman-job-server
# daemon startup without editing the init script (and thus getting prompted by
# dpkg on upgrades).  We all love dpkg prompts.

# Examples ( from http://gearman.org/index.php?id=manual:job_server )
#
# Use drizzle as persistent queue store
# PARAMS="-q libdrizzle --libdrizzle-db=some_db --libdrizzle-table=gearman_queue"
#
# Use mysql as persistent queue store
# PARAMS="-q libdrizzle --libdrizzle-host=10.0.0.1 --libdrizzle-user=gearman \
#                       --libdrizzle-password=secret --libdrizzle-db=some_db \
#                       --libdrizzle-table=gearman_queue --libdrizzle-mysql"
#
# Missing examples for memcache persitent queue store...

# Parameters to pass to gearmand.
PARAMS="--listen=localhost \
        -q mysql \
        --mysql-host=localhost \
        --mysql-port=3306 \
        --mysql-user=redacted \
        --mysql-password=redacted \
        --mysql-db=gearman \
        --mysql-table=gearman_queue"

Side Note and Pro Tip: MySQL Persistent Queue

Creating the MySQL database and providing privileges to your database user allows Gearman to create its own gearman_queue database table. Don’t risk creating an incompatible database table; let Gearman do that work for you.
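As a sketch of that setup, here’s the SQL involved, written to a file so you can review it before feeding it to mysql as root. The database, user, and password are placeholders; they should match whatever you put in /etc/default/gearman-job-server:

```shell
# Write the one-time setup SQL. Gearman creates the gearman_queue table
# itself on first run, so we only create the database and grant privileges.
cat > gearman-setup.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS gearman;
GRANT ALL PRIVILEGES ON gearman.* TO 'gearman'@'localhost' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
SQL
# mysql -u root -p < gearman-setup.sql
```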

From Zero to Slim Framework: Getting Your First Project Off the Ground

When I was a brand new web developer, I was overwhelmed by the amount of general knowledge required to get a project off the ground: Web server (as in configuring a server OS), web server (as in nginx or Apache), PHP installation, PHP configuration, application configuration, and so forth. I was willing and able to learn, but even the best blog posts and documentation frequently assumed a certain level of existing knowledge, much of which I didn’t have. My goal with this post is to help you get your first Slim Framework project started without assuming any knowledge on your part. We’ll literally go from absolutely nothing to a functioning Slim-Skeleton application.

Requirements

While I’m not going to make any general knowledge assumptions (call me out in the comments if I miss the mark), I am going to set a few requirements. In short, we’ll be using virtualization technology (Vagrant and VirtualBox), the Ubuntu 14.04 LTS operating system, and Composer, a dependency management tool for PHP.

Why Do This All “By Hand”?

While there’s an “easier, softer way” (think Phansible or PuPHPet), I think it’s important to know what’s going on behind the scenes. What if you need to fix something on your server once it’s in production? What if you have to tweak a setting here or there, or create a new vhost or nginx site? It’s a good idea to have done it at least once by hand before moving on to automated solutions. You’ll have a better feel for what’s happening, why it’s happening, and how to fix any problems that might arise in the future.

Preparing the Host Environment

By “host”, I mean your computer. These are the first steps we’ll take to prepare your computer to host the virtual machine we’ll use to run the tutorial code.

  • Install Vagrant: Grab an installer for your OS here.
  • Install VirtualBox: Grab an installer for your OS here.

Create Your VM

  • Create a directory for your project, and then change into your project directory
  • Make a sub-directory for your Slim project within your new project directory: mkdir zero-to-slim.dev
  • Now run vagrant init ubuntu/trusty64 from your project directory
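In terminal form, those steps look like this (“my-slim-project” is an arbitrary example name; the vagrant command is shown commented so nothing here requires Vagrant to be installed yet):

```shell
# Project directory plus the sub-directory that will become the synced web root.
mkdir -p my-slim-project/zero-to-slim.dev
# Then, from inside my-slim-project:
#   vagrant init ubuntu/trusty64
```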

vagrant init will create a Vagrantfile, the file that tells Vagrant how to build your VM. We’ll need to edit that file and add a few settings.

Edit /path/to/project/Vagrantfile and add the following lines after config.vm.box:

  config.vm.hostname = "zero-to-slim.dev"
  config.vm.network :private_network, ip: "192.168.56.103"
  config.vm.synced_folder "./zero-to-slim.dev", "/var/www/zero-to-slim.dev", id: "web-root",
      owner: "vagrant",
      group: "www-data",
      mount_options: ["dmode=775,fmode=664"]

Save and close your Vagrantfile, and then run vagrant up. You’ll be treated to some output as your VM is built. Once that’s done, the VM is ready to configure.

Connect and Configure VM

Run vagrant ssh from your project directory. The following steps will be completed on the VM, not your host machine.

If you’re on Windows, the ssh command won’t be available to you. You’ll need to use a program like PuTTY instead. Here are some step-by-step instructions for configuring your Windows box to connect to your new VM.

  • sudo apt-get update
  • sudo apt-get install curl vim wget python-software-properties -y
  • sudo add-apt-repository ppa:ondrej/php5 -y
  • sudo apt-get update

Choose a Web Server and a PHP Version

You must choose one or the other, not both. nginx is extremely popular, but if you’re more comfortable with Apache I’ve included instructions below.

nginx and PHP-FPM

  • sudo apt-get install php5-fpm php5-cli php5-xdebug nginx -y

Apache2 and PHP

  • sudo apt-get install php5 php5-cli php5-xdebug apache2 -y

Install Composer Globally

See Composer’s Getting Started section for the most up-to-date installation instructions (There’s an installer available for those of you on Windows). Below is how I install Composer on both Mac and Linux.

  • curl -sS https://getcomposer.org/installer | php
  • sudo mv composer.phar /usr/local/bin/composer

Use Composer to Install Slim Skeleton

Heads up, this is going to take a while. Composer is an amazing technology, but in this case it’s pretty slow (this has more to do with the VM than with Composer). Run the command, grab a cup of coffee, and come on back to finish up.

  • composer create-project slim/slim-skeleton /var/www/zero-to-slim.dev

If you suspect there’s a problem with the composer create-project command due to how long it takes to get feedback, you can add the verbose option (-vvv) to the command, like so: composer create-project slim/slim-skeleton /var/www/zero-to-slim.dev -vvv

Configure Your Web Server

Since the following files aren’t in a synced folder, which would allow you to edit them from your host machine, and since you need to be root to edit them, the best way to do so is directly on the server using a text editor. I’ve chosen Vim in this case. Once again, here’s an opportunity to learn about a tool that you may find yourself needing at some point in the future, even if you only ever use it to edit a file or two on your server(s).

Editing a Config File with Vim

  • Type sudo vim <server_config_file> (either /etc/nginx/sites-available/default or /etc/apache2/sites-available/000-default.conf, depending)
  • Type gg on the keyboard to ensure you’re at the top of the file
  • Delete everything in the file with dG
  • Type i to enter insert mode
  • Copy the appropriate config from below and paste it into the config document
  • Hit ESC to exit insert mode
  • Type :wq to write your changes and quit the file.

Congrats! You’ve just edited your server config using Vim, an accomplishment in itself.
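If you’d rather skip Vim entirely, the same edit can be done non-interactively with a heredoc. This sketch writes to a scratch file with placeholder contents; on the VM you’d pipe the heredoc through sudo tee aimed at the real config path instead:

```shell
# Overwrite a config file non-interactively. Scratch path and contents are
# placeholders; on the VM, use: sudo tee /etc/nginx/sites-available/default
cat > default-scratch.conf <<'EOF'
server {
    listen 80;
}
EOF
```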

nginx and PHP-FPM

Replace the contents of /etc/nginx/sites-available/default with the following:

server {
    listen      80;
    server_name zero-to-slim.dev;
    root        /var/www/zero-to-slim.dev/public;

    try_files $uri /index.php;

    # this will only pass index.php to the fastcgi process which is generally safer but
    # assumes the whole site is run via Slim.
    location /index.php {
        fastcgi_connect_timeout 3s;     # default of 60s is just too long
        fastcgi_read_timeout 10s;       # default of 60s is just too long
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        include fastcgi_params;
    }
}

Save and close the default config, and then restart nginx: sudo service nginx restart

Apache2

Replace the contents of /etc/apache2/sites-available/000-default.conf with the following:

<VirtualHost *:80>
    DocumentRoot "/var/www/zero-to-slim.dev/public"
    ServerName zero-to-slim.dev

    <Directory "/var/www/zero-to-slim.dev/public">
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

  • Save and close the default config
  • Enable mod_rewrite for URL rewriting: sudo a2enmod rewrite
  • Restart Apache: sudo service apache2 restart

Configure PHP for Dev Environment

Since editing php.ini with Vim would require quite a few detailed instructions, I’ve opted to show you how to override php.ini settings by adding an additional config file.

On your host machine, create a file in your project directory named 00-php.ini. Paste the following contents into the file:

error_reporting = -1
display_errors = On
display_startup_errors = On
html_errors = On

These settings ensure that PHP will report any and all errors it encounters and display them on screen. The html_errors setting provides a nice-looking error page formatted by Xdebug.

Since your project directory is synced to /vagrant on your VM, the file you just created will be available on your VM. Copy the new config file into the directory scanned by PHP for additional config files and restart your web server. The following commands should be executed on your VM, not your host machine.

nginx and PHP-FPM

  • sudo cp /vagrant/00-php.ini /etc/php5/fpm/conf.d/
  • sudo service php5-fpm restart

Apache2 and PHP

  • sudo cp /vagrant/00-php.ini /etc/php5/apache2/conf.d/
  • sudo service apache2 restart

Final Configuration

Point your browser at http://192.168.56.103, or add the line “192.168.56.103 zero-to-slim.dev” to your hosts file and visit http://zero-to-slim.dev. You should see the Slim welcome page. If you don’t, there should be an error displayed telling you exactly what went wrong. Double check the steps above to make sure you didn’t miss anything. If you still have problems, drop me a line in the comments and I’ll help you get them sorted out.

Wrapping Up

So there you have it. You started with nothing at all and now have a working Slim Framework application. Congrats!

FYI, I highly recommend basing your Slim apps on the Slim Skeleton. Install it, modify it for your specific application needs, and you’ll have a much easier time getting up and running, I promise. That’s what I did when I first started with Slim.

Next time we’ll talk about how to automate this process. Now that you know what’s involved, there’s no point in wasting precious dev time manually configuring new machines for each new project.

Composer Platform Packages

Here’s something about Composer that I can never remember, always have to look up, and always have a hard time finding in the documentation. Ladies and gentlemen, I give you platform packages:

Platform packages

Composer has platform packages, which are virtual packages for things that are installed on the system but are not actually installable by Composer. This includes PHP itself, PHP extensions and some system libraries.

  • php represents the PHP version of the user, allowing you to apply constraints, e.g. >=5.4.0. To require a 64bit version of php, you can require the php-64bit package.

  • hhvm represents the version of the HHVM runtime (aka HipHop Virtual Machine) and allows you to apply a constraint, e.g., ‘>=2.3.3’.

  • ext-<name> allows you to require PHP extensions (includes core extensions). Versioning can be quite inconsistent here, so it’s often a good idea to just set the constraint to *. An example of an extension package name is ext-gd.

  • lib-<name> allows constraints to be made on versions of libraries used by PHP. The following are available: curl, iconv, icu, libxml, openssl, pcre, uuid, xsl.

You can use composer show --platform to get a list of your locally available platform packages.

So that’s the relevant portion of the documentation, and it’s composer show --platform that lists local platform packages. Maybe now I’ll remember.
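For the record, here’s what requiring platform packages looks like in a composer.json (the constraints are arbitrary examples, per the docs quoted above):

```json
{
    "require": {
        "php": ">=5.4.0",
        "ext-gd": "*",
        "lib-curl": "*"
    }
}
```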

PHP Password Hashing: A Dead Simple Implementation

[UPDATE: Added a new section at the end of the post]

[UPDATE 2: Added a section RE: StorageDecorator]

tl;dr: Install Password Validator and all of your password troubles will be solved. All of them. It’ll even upgrade your old hashes transparently. Sup?

Hashing Done Wrong

We all know to encrypt passwords for highest level of security. Unfortunately, too many do it like this:

class SecurityFail
{
    // Encrypt Passwords for Highest Level of Security.
    static public function encrypt($pword)
    {
        return md5($pword);
    }
}

While there was never any excuse for getting it that wrong, there’s now no excuse for getting it wrong at all. Developers, meet the new(-ish) PHP password hashing functions (and the userland implementation password-compat).

Hashing Done Right

First, alter the password column in your user database to VARCHAR(255). Current BCRYPT passwords are 60 characters in length, but when PHP upgrades the default hash (which will happen at some point), you want to be ready. Really, just do it.

When it’s time to create a new user password, throw the plain text password into password_hash():

$hash = password_hash($plainTextPassword, PASSWORD_DEFAULT);

The next time a user logs in, use a little password_verify() action:

$isValid = password_verify($plainTextPassword, $hashedPassword);

If the password is valid, check to see if it needs to be rehashed with password_needs_rehash():

$needsRehash = password_needs_rehash($hashedPassword, PASSWORD_DEFAULT);

If the password needs to be rehashed, run it through password_hash() again and persist the result.

Trivial, right? Right!

Even Trivial-er

Since implementing the code above might take as many as two or three hours out of your day, I went ahead and implemented it for you. Behold, Password Validator!

Password Validator

The Password Validator library validates password_hash generated passwords, rehashes passwords as necessary, and can upgrade legacy passwords (if configured to do so).

The really big deal here is the ease of upgrading from your current legacy hashes to the new, more secure, PHP generated hashes. More on that later.

Usage

Password Validation

If you’re already using password_hash generated passwords in your application, you need do nothing more than add the validator in your authentication script. The validator uses password_verify to test the validity of the provided password hash.

use JeremyKendall\Password\PasswordValidator;

$validator = new PasswordValidator();
$result = $validator->isValid($_POST['password'], $hashedPassword);

if ($result->isValid()) {
    // password is valid
}

If your application requires options other than the password_hash defaults, you can set both the salt and cost options with PasswordValidator::setOptions().

$options = array(
    'salt' => 'SettingYourOwnSaltIsNotTheBestIdea',
    'cost' => 11,
);
$validator->setOptions($options);

IMPORTANT: PasswordValidator uses a default cost of 10. If your existing hash implementation requires a different cost, make sure to specify it using PasswordValidator::setOptions(). If you do not do so, all of your passwords will be rehashed using a cost of 10.

Rehashing

Each valid password is tested using password_needs_rehash. If a rehash is necessary, the valid password is rehashed using password_hash with the provided options. The result code Result::SUCCESS_PASSWORD_REHASHED will be returned from Result::getCode() and the rehashed password is available via Result::getPassword().

if ($result->isValid() && $result->getCode() == Result::SUCCESS_PASSWORD_REHASHED) {
    $rehashedPassword = $result->getPassword();
    // Persist rehashed password
}

IMPORTANT: If the password has been rehashed, it’s critical that you persist the updated password hash. Otherwise, what’s the point, right?

Upgrading Legacy Passwords

You can use the PasswordValidator whether or not you’re currently using password_hash generated passwords. The validator will upgrade your current legacy hashes to the new password_hash generated hashes. All you need to do is provide a validator callback for your password hashing scheme and then decorate the validator with the UpgradeDecorator.

use JeremyKendall\Password\Decorator\UpgradeDecorator;

// Example callback to validate a sha512 hashed password
$callback = function ($password, $passwordHash) {
    if (hash('sha512', $password) === $passwordHash) {
        return true;
    }

    return false;
};

$validator = new UpgradeDecorator(new PasswordValidator(), $callback);

The UpgradeDecorator will validate a user’s current password using the callback. If the user’s password is valid, it will be hashed with password_hash and returned in the Result object, as above.

If the callback determines the password is invalid, the password will be passed along to the PasswordValidator in case it’s already been upgraded.

Persisting Rehashed Passwords

Whenever a validation attempt returns Result::SUCCESS_PASSWORD_REHASHED, it’s important to persist the updated password hash.

if ($result->getCode() === Result::SUCCESS_PASSWORD_REHASHED) {
    $rehashedPassword = $result->getPassword();
    // Persist rehashed password
}

While you can always perform the test and then update your user database manually, if you choose to use the Storage Decorator all rehashed passwords will be automatically persisted.

The Storage Decorator takes two constructor arguments: An instance of PasswordValidatorInterface and an instance of the JeremyKendall\Password\Storage\StorageInterface.

StorageInterface

The StorageInterface includes a single method, updatePassword(). A class honoring the interface might look like this:

<?php

namespace Example;

use JeremyKendall\Password\Storage\StorageInterface;

class UserDao implements StorageInterface
{
    public function __construct(\PDO $db)
    {
        $this->db = $db;
    }

    public function updatePassword($identity, $password)
    {
        $sql = 'UPDATE users SET password = :password WHERE username = :identity';
        $stmt = $this->db->prepare($sql);
        $stmt->execute(array('password' => $password, 'identity' => $identity));
    }
}

Storage Decorator

With your UserDao in hand, you’re ready to decorate a PasswordValidatorInterface.

use Example\UserDao;
use JeremyKendall\Password\Decorator\StorageDecorator;

$storage = new UserDao($db);
$validator = new StorageDecorator(new PasswordValidator(), $storage);

// If validation results in a rehash, the new password hash will be persisted
$result = $validator->isValid('password', 'passwordHash', 'username');

IMPORTANT: You must pass the optional third argument ($identity) to isValid() when calling StorageDecorator::isValid(). If you do not do so, the StorageDecorator will throw an IdentityMissingException.

Validation Results

Each validation attempt returns a Result object. The object provides some introspection into the status of the validation process.

  • Result::isValid() will return true if the attempt was successful
  • Result::getCode() will return one of three possible int codes:
    • Result::SUCCESS if the validation attempt was successful
    • Result::SUCCESS_PASSWORD_REHASHED if the attempt was successful and the password was rehashed
    • Result::FAILURE_PASSWORD_INVALID if the attempt was unsuccessful
  • Result::getPassword() will return the rehashed password, but only if the password was rehashed

Database Schema Changes

As mentioned above, because this library uses the PASSWORD_DEFAULT algorithm, it’s important your password field be VARCHAR(255) to account for future updates to the default password hashing algorithm.

Helper Scripts

There are two helper scripts available, both related to the password hash functions (these functions are only available after running composer install).

version-check

If you’re not already running PHP 5.5+, you should run version-check to ensure your version of PHP is capable of using password-compat, the userland implementation of the PHP password hash functions. Run ./vendor/bin/version-check from the root of your project. The result of the script is pass/fail.

cost-check

The default cost used by password_hash is 10. This may or may not be appropriate for your production hardware, and it’s entirely likely you can use a higher cost than the default. cost-check is based on the “finding a good cost” example in the PHP documentation. Simply run ./vendor/bin/cost-check from the command line and an appropriate cost will be returned.

NOTE: The default time target is 0.2 seconds. You may choose a higher or lower target by passing a float argument to cost-check, like so:

$ ./vendor/bin/cost-check 0.4
Appropriate 'PASSWORD_DEFAULT' Cost Found:  13

Wrapping Up

The addition of native password hashing functions is the most important security update to PHP since, well, I don’t know when. There’s no excuse for not implementing them in your applications, and the Password Validator library makes it trivial. That’s especially true when it comes to updating your legacy password hashes, which many of us need to do. Even if you only use the Password Validator as a roadmap for your own implementation, I strongly recommend upgrading ASAP.

Kudos

I was remiss in not adding this bit of kudos when I originally published this post. Better late than never.

Credit for the new password hashing functions goes to Anthony Ferrara. He submitted the original RFC and created the password-compat library. The PHP community owes Anthony a debt of gratitude for making password hash security so ridiculously simple. Seriously, if I can grok it, you know it’s idiot proof :-)

Without Anthony’s hard work (and PHP core’s unanimous ‘Yes’ votes, and the password-compat contributors), my small contribution wouldn’t have been possible. Kudos to you all.

Installing Phpass From Openwall via Composer

[UPDATE: Added a PHP version clarification at the end of the post.]

Managing dependencies via Composer is one of the most revolutionary advancements in the history of PHP. Composer packages are frequently hosted on Github, listed on Packagist, and required in your project via the require field in composer.json.

So Where is phpass?

What happens when that’s not the case? One library of note, phpass, is not available on Github (or any other supported VCS)1 and therefore can’t simply be added to the require field for easy installation. All is not lost, however, thanks to Composer’s package repository feature2.

Behold, Composer’s ‘Package’ Repository!

After reviewing the package repository docs, I found it ridiculously easy to require phpass in my project. Here’s what you have to do.

{
    "repositories": [
    {
        "type": "package",
        "package": {
            "name": "openwall/phpass",
            "version": "0.3",
            "dist": {
                "url": "http://www.openwall.com/phpass/phpass-0.3.tar.gz",
                "type": "tar"
            },
            "autoload": {
                "classmap": ["PasswordHash.php"]
            }
        }
    }
    ],
    "require": {
        "openwall/phpass": "0.3"
    }
}

Now you can run composer install (or composer update, as appropriate) and Composer will install phpass as a project dependency. Sweet!
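Once installed, the classmap autoloading defined above makes phpass's (non-namespaced) `PasswordHash` class available. A quick usage sketch, assuming `composer install` has been run:

```php
<?php

require 'vendor/autoload.php';

// PasswordHash arguments: base-2 log of the iteration count, and whether
// to force portable (weaker, MD5-based) hashes instead of bcrypt.
$hasher = new PasswordHash(8, false);

$hash = $hasher->HashPassword('correct horse battery staple');

var_dump($hasher->CheckPassword('correct horse battery staple', $hash)); // bool(true)
var_dump($hasher->CheckPassword('wrong guess', $hash));                  // bool(false)
```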

UPDATE - CLARIFICATION: Using phpass is only advisable for PHP versions that won’t support the new password hashing functions. That’s any version of PHP less than 5.3.7:

If you’re at PHP >= 5.3.7, enjoy this article as a Composer tip you might not have known about until now and use password_compat. If you’re at PHP < 5.3.7, this is both a Composer tip and an admonition to upgrade your password security. Do it!

Many thanks to Meroje and @craig_bass for pointing out that password_compat is superior, making it clear that I needed to post a clarification.


  1. Yes, there are phpass repos on Github, but Anthony Ferrara recommends against them. When Anthony talks security, I listen.

  2. Be aware, there are significant drawbacks to this method (noted at the bottom of the Package documentation), but sometimes it’s the only way.

Creating Case-insensitive Routes With the Slim Framework

This Stack Overflow question about case-insensitive routing with the Slim Framework caught my eye recently. The question asks, in part:

How can I avoid setting up two separate routes [with different cases] that trigger the same callback function?

I found the question intriguing because 1) I love Slim and 2) I’ve never really thought about whether or not URLs are case-sensitive to begin with. My immediate thought was, “They’re already case-insensitive! Has dude even tested this?”

Are URLs Case-sensitive?

Well, kinda. It frequently, but not always, boils down to whether or not the web server’s filesystem is case-sensitive. The HTTP server (Apache, nginx, etc.) can get involved, as it can be configured to serve URLs with or without regard to case. The web application and/or framework is involved too: if all other factors are case-insensitive but your application’s router is case-sensitive, then your application’s URLs will be too. This is the case with the Slim Framework.

Should URLs Be Case-sensitive?

Again, kinda. In “HTML and URLs”, the w3c has this to say:

URLs in general are case-sensitive (with the exception of machine names). There may be URLs, or parts of URLs, where case doesn’t matter, but identifying these may not be easy. Users should always consider that URLs are case-sensitive.

In general and should are the operative words there. It’s OK either way1, but assuming case-sensitivity is prudent.

So What About Slim?

URLs in the Slim Framework are case-sensitive2. That’s correct based on what we learned about URL case-sensitivity from the w3c, but what if you have a use case that requires case-insensitive URLs? What to do then?

The Magic of Slim Hooks

What are Slim Hooks? From the documentation:

A “hook” is a moment in the Slim application lifecycle at which a priority list of callables assigned to the hook will be invoked. A hook is identified by a string name.

Slim provides six default hooks, three invoked before the current route is dispatched and three invoked after. By using one of the hooks that’s invoked before the route is matched, we can alter the incoming URL’s path to match the case of the routes we’ve defined in our Slim application.

The Case-insensitive Route Hook

First, if any of your routes use mixed case, change them all to lower case. That’s the de facto standard for creating routes anyhow, and any requests that come in matching the old, mixed-case routes will be handled by the hook below; it’s the same problem, just flipped on its head. Seriously, don’t get weird about changing those routes.

Next, register this callback on the slim.before.router hook:

$app->hook('slim.before.router', function () use ($app) {
    $app->environment['PATH_INFO'] = strtolower($app->environment['PATH_INFO']);
});

This works because Slim matches the routes you’ve defined against the Slim Environment’s PATH_INFO (originally found in $_SERVER['PATH_INFO']). Since your routes are lower case and the incoming request paths are lower case, you’ve accomplished case insensitive routing in 3 lines of code. BOOM.
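Here’s the whole thing wired together in a minimal Slim 2 application (the route and greeting are hypothetical, just for illustration):

```php
<?php

require 'vendor/autoload.php';

$app = new \Slim\Slim();

// Lowercase the incoming path before the router runs.
$app->hook('slim.before.router', function () use ($app) {
    $app->environment['PATH_INFO'] = strtolower($app->environment['PATH_INFO']);
});

// Routes are defined in lower case only.
$app->get('/hello/:name', function ($name) {
    echo "Hello, {$name}!";
});

$app->run();
```

With the hook in place, requests for `/Hello/World`, `/HELLO/world`, and `/hello/world` all match the single lower-case route.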


  1. It’s interesting to note that domain names are case-insensitive, regardless of whether or not that site’s URLs are case-insensitive.

  2. A pull request against Slim 2.4 is in the works that will allow case-insensitive routes via a config setting.

Using Callbacks to Bypass Guzzle's Cache Plugin

I’m a big fan of the Guzzle PHP HTTP client. I use it whenever I need to make requests of 3rd party APIs from my applications. If you’re still writing cURL requests by hand or have rolled your own HTTP client, I highly recommend checking out Guzzle.

I’m currently making heaviest use of Guzzle in my photo-a-day project, Flaming Archer, in order to get photo data from Flickr. To keep from hammering the Flickr API, I’m caching all of those requests. Guzzle makes caching ridiculously easy by way of their plugin system and their HTTP Cache plugin.

The problem with the caching plugin, at least at first blush, is how to bypass the cache in certain specific instances where caching might not be appropriate. The docs are a little light in this area, so it took me a few minutes to get it sorted out. Let’s start at the top.

The Guzzle Client

“Clients create requests, send requests, and set responses on a request object. When instantiating a client object, you can pass an optional ‘base URL’ and optional array of configuration options.”

Here’s an example of creating a Guzzle Client, based on my use case of making requests against the Flickr API.

use Guzzle\Http\Client;

$client = new Client('http://api.flickr.com');
$client->setDefaultOption('query', array(
    'api_key' => 'EXAMPLE_API_KEY',
    'format' => 'json',
    'nojsoncallback' => 1,
));

I use the client for the GET requests I need to make against the Flickr API. Each request will include the above default options in the query string. Nice!
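A single call with this client looks something like the following sketch. The endpoint path and API method are from the Flickr REST API; the photo ID is made up for illustration.

```php
<?php

// The default query options set above (api_key, format, nojsoncallback)
// are merged into this request automatically.
$request = $client->get('/services/rest/');
$request->getQuery()->set('method', 'flickr.photos.getInfo');
$request->getQuery()->set('photo_id', '1234567890'); // hypothetical ID

$response = $request->send();
$data = $response->json(); // decoded JSON as an associative array
```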

Adding Caching

Since I don’t want to hammer the crap out of the Flickr API and start hitting the rate limit1, I wanted to cache each request. Thankfully, Guzzle has an awesome plugin system that includes an HTTP Cache plugin.

“Guzzle can leverage HTTP’s caching specifications using the Guzzle\Plugin\Cache\CachePlugin. The CachePlugin provides a private transparent proxy cache that caches HTTP responses.”

Rather than rolling my own caching strategy (My first solution was to write a decorator for caching), I decided to use Guzzle’s native plugin and leave all the caching work to them.

use Guzzle\Cache\Zf2CacheAdapter;
use Guzzle\Plugin\Cache\CachePlugin;
use Guzzle\Plugin\Cache\DefaultCacheStorage;
use Zend\Cache\Backend\TestBackend;

$backend = new TestBackend();
$adapter = new Zf2CacheAdapter($backend);
$storage = new DefaultCacheStorage($adapter);
$cachePlugin = new CachePlugin($storage);

$client->addSubscriber($cachePlugin);

The cache plugin will now intercept and cache GET and HEAD requests made by the client.

Custom Caching Decisions

So what if, now that you’re caching each GET request, there’s a request or requests you don’t want cached? Guzzle makes solving that problem trivial by allowing for “custom caching decisions”, but the documentation on how to make those custom decisions is decidedly light.

“… you can set a custom can_cache object on the constructor of the CachePlugin and provide a Guzzle\Plugin\Cache\CanCacheInterface object. You can use the Guzzle\Plugin\Cache\CallbackCanCacheStrategy to easily make a caching decision based on an HTTP request and response.”

Wat?

That was clear as mud to me, so I spent a few minutes digging through the source. This is what I came up with:

  • The CallbackCanCacheStrategy provides a method of providing callbacks to the cache plugin that, based on a boolean response, determine whether or not a particular request or response should be cached.
  • The CallbackCanCacheStrategy accepts two optional arguments to its constructor: a callable that will be invoked for requests and a callable that will be invoked for responses. The request callback gets an instance of Guzzle\Http\Message\RequestInterface, and the response callback gets an instance of Guzzle\Http\Message\Response.

Bypassing Cache

In my case, I want to cache everything except for calls to the flickr.photos.search API method. Since all of the GET requests I’m making include a method query string param, it was trivial to write the callback that got me where I needed to go.

use Guzzle\Plugin\Cache\CallbackCanCacheStrategy;

$canCache = new CallbackCanCacheStrategy(
    function ($request) {
        if ($request->getQuery()->get('method') === 'flickr.photos.search') {
            return false;
        }

        return true;
    }
);

Putting It All Together

Now that I’ve built my $client, $storage, and $canCache strategy, here’s how I put it all together.

$cachePlugin = new CachePlugin(array(
    'can_cache' => $canCache,
    'storage' => $storage,
));

$client->addSubscriber($cachePlugin);

Now all of my GET requests are cached except for those using the flickr.photos.search method. BOOM.


  1. I can’t find the documentation on API rate limiting right now, but I know it’s limited and I don’t want to hit that limit.