jeremykendall.net

PHP and Capistrano 3: Notes to Self

I spent quite a bit of my day yesterday trying to work out a painless, scripted, idiot-proof deployment process with Capistrano for my photo-a-day website. I’ve been doing a lot of work on the site lately, which means a lot of deployments, and I’ve been very unhappy with myself for implementing what amounts to “deployment worst practices” when it comes to my personal projects.

The last time I worked with Capistrano was about two years ago, and a lot has changed since then. Capistrano v3 was released in June of 2013 and brought with it a lot of great changes, but for a guy who doesn’t know Ruby and relies on tutorials and Stack Overflow questions for help, the version bump brought a lot of pain as well.

Challenges

Every Tutorial is Wrong

Just know going into this that (almost) every tutorial you find is going to be a Capistrano v2 tutorial. Enough has changed between v2 and v3 to make those tutorials just misleading enough to cause a good amount of pain.

Stack Overflow is a Capistrano v3 Desert

As of this writing, there are ten questions tagged capistrano3 on Stack Overflow. Seriously. Ten. And only four of those include accepted answers.

Capistrano v3 Documentation is Lacking

The documentation available for v3 is seriously lacking, although the problem is more one of quantity than quality. What’s available is good, there’s just nowhere near enough of it.

Caveat Emptor

These are indeed “Notes to Self”. I hope they help you out, but if they don’t, I’m giving you fair warning. Please feel free to add what’s missing to the comments.

Reading the Capistrano docs is highly recommended. These notes are supplemental.

PHP + Capistrano v3

NOTE: You can find the application source, which now includes Capistrano v3, on GitHub.

Ruby

First, I had to reinstall rvm, a Ruby version manager. What I don’t know about installing a Ruby dev environment could fill books, so I let rvm take care of that for me.

\curl -L https://get.rvm.io | bash -s stable --ruby

(Capistrano requires Ruby 1.9 or newer, so the current stable Ruby from rvm will work fine.)

Installation

Install Capistrano. There are a few options, but I used gem install.

gem install capistrano

NOTE: All of the PHP tutorials I found instruct you to install the railsless-deploy gem. As Capistrano v3 “doesn’t ship with any Railsisms”, this is no longer necessary and the railsless-deploy project is obsolete.

I’ll probably go back and add a Gemfile since I’ll be adding the composer gem and want everything I need in one place.
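
That Gemfile would presumably be something small, along these lines (the gem names and version constraint here are my guess, not a tested setup):

source 'https://rubygems.org'
gem 'capistrano', '~> 3.0'
# The Capistrano composer plugin mentioned later in these notes
gem 'capistrano-composer'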

Preparing Your App

The capify command is no more; it’s now cap install.

cd /path/to/project
cap install

The “Preparing Your Application” portion is one of the places where the documentation shines, IMHO.

Capistrano Files

  • Capfile: Kind of like a bootstrap. Takes care of the required configs and globs for custom tasks.
  • config/deploy.rb: Settings common to all deployments go here (see the sketch below).
  • config/deploy/{staging, production}.rb: Environment-specific deployment settings.
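
Here’s a rough sketch of the kind of thing that ends up in deploy.rb for a PHP app. The application name, repository URL, and deploy path are placeholders, not values from my project:

set :application, 'my_app'
set :repo_url, 'git@example.com:me/my_app.git'
set :deploy_to, '/var/www/my_app'
set :keep_releases, 5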

Roles

I’m still not 100% clear on this, but roles don’t seem to be roles in the ACL sense, but rather roles in the “division of server responsibility” sense, hence the roles :web, :app, and :db.

The docs say that you can dump the :app and :db roles if you like, but if you’re going to use the :linked_files and :linked_dirs features (which are pretty cool) you’ll need to leave the :app role in place. I’m obviously doing something wrong or missing something here, but that’s what I had to do.
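
For illustration, a stage file like config/deploy/production.rb ends up with server definitions along these lines (the hostname and user are placeholders). Note the :app role left in place alongside :web:

server 'example.com', user: 'deploy', roles: %w{web app}

# Or, using the role-centric syntax:
role :web, %w{deploy@example.com}
role :app, %w{deploy@example.com}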

I found it extremely helpful to refer to the deploy.rb.erb template. I removed most of the example text before realizing I needed to use some of it, and referencing the template was nice.

SSH Forwarding

SSH Agent Forwarding is the way to go here, IMHO. I already had a key agent running so the forwarding was dead simple, almost like magic. If you don’t already have an agent running, here’s some good info from GitHub Help.
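
If you go this route, the only Capistrano-side setting you should need is agent forwarding in your SSH options, something like this in deploy.rb:

set :ssh_options, {
    forward_agent: true
}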

Server Config

The section on how to set up the proper Capistrano directories on your server is way down deep in the Authorisation portion of the docs. Short version: you need two dirs in the root of your project (on the server): releases and shared, and they must be readable and writeable by both the webserver and deploy user.

Getting permissions right is easy for some, but I always return to the Setting Up Permissions section of the Symfony2 docs to get them set up properly.
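
In other words, something along these lines on the server, with the path, user, and group swapped for your own setup:

mkdir -p /var/www/my_app/releases /var/www/my_app/shared
chown -R deploy:www-data /var/www/my_app
chmod -R g+w /var/www/my_app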

Making Composer Work

Once I got my deploy.rb and deploy/production.rb right and tested (by deploying, natch), I needed to create a task to run composer install. Getting that right turned out to be pretty difficult because of a design decision in SSHKit.

Short story: no spaces allowed in the command you hand to execute; you have to pass the command and each of its arguments as separate parameters (as in the task below).

I finally got my composer command running by doing this:

namespace :deploy do
    desc 'composer install'
    task :composer_install do
        on roles(:web) do
            within release_path do
                execute 'composer', 'install', '--no-dev', '--optimize-autoloader'
            end
        end
    end
end

after 'deploy:updated', 'deploy:composer_install'

  • The within release_path block tells the task to cd into the release directory before running composer.
  • There are before and after hooks you can apply to the deploy flow. The final line above is a hook that runs after deploy:updated.

Composer Gem for Capistrano v3

Of course, as soon as I got it working, Peter Mitchell sent me this tweet:

I haven’t yet replaced my hacked version with the gem, but I’ll use it as soon as I do any refactoring.

Deployment Annoyances

cap production deploy --dry-run never worked for me, although cap production deploy worked fine. No idea why, who cares at this point. Maybe I’ll dig in later.

Also, cap production deploy really wants to run deploy:restart, even though it doesn’t show up anywhere in the deploy flow. I replaced the :restart task that’s in the default deploy.rb template, made sure it was empty, and deploy finally worked.
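
The override itself is nothing fancy; an empty :restart task in the deploy namespace does the trick, something like:

namespace :deploy do
    desc 'No-op restart'
    task :restart do
        # Nothing to restart; PHP picks up the new release on the next request
    end
end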

Linked Files and Directories

The :linked_files and :linked_dirs feature is really nice. I’ve used it for logs, a local.php config, Twig caching, my SQLite database, and my generated RSS file. The linked items are for files and dirs that should be shared between deploys.

Also, those files and dirs need to be present in the shared dir before deploying. Deploy will puke if they’re not present.
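
The settings themselves live in deploy.rb and are just arrays of paths relative to the release. The paths below are illustrative rather than an exact copy of my config:

set :linked_files, %w{config/local.php}
set :linked_dirs, %w{logs cache/twig data}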

Capistrano Variables

I couldn’t find a listing of these anywhere, but release_path is one of them, and it points to the latest release path.

That’s All For Now

I hope the notes help when you get ready to write your own Capistrano v3 scripts for use with PHP. I’ll update this as I learn more, and I’ll make sure to point out any excellent points made in the comments.

Restarting VirtualBox on OSX

My Vagrant + VirtualBox VM workflow was disrupted late this afternoon by an error during vagrant up:

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'precise64'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

    Command: ["hostonlyif", "create"]

    Stderr: 0%...
    Progress state: NS_ERROR_FAILURE
    VBoxManage: error: Failed to create the host-only adapter
    VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory

    VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterface, interface IHostNetworkInterface
    VBoxManage: error: Context: "int handleCreate(HandlerArg*, int, int*)" at line 68 of file VBoxManageHostonly.cpp

Googling for “Failed to create the host-only adapter” didn’t seem to return anything useful, so I tried “failed to open /dev/vboxnetctl” and immediately found an answer.

The trick, according to GitHub user lslucas, is to restart VirtualBox from the command line like so:

 $ sudo /Library/StartupItems/VirtualBox/VirtualBox restart

That worked like a champ for me, and I was immediately able to get back to work.

For what it’s worth, here’s my current software config:

  • VirtualBox 4.2.16
  • Vagrant 1.2.2
  • Mac OS X 10.8.4

Nashville, I Hardly Knew Ye

tl;dr:

  • My wife Megan and I are moving back to Memphis, TN
  • All of our family is there
  • The pregnancy has been tough, we’d like to raise our son around family
  • I’m leaving Nashville, but I’ll still be working for OpenSky
  • I’ll miss you all terribly, but I’ll be back regularly

So Long, And Thanks For All The Awesome

I might as well get right to it, especially since I ruined the surprise above: Megan and I have decided to move back to Memphis. The decision to leave Nashville, especially after spending such a short time here, was a really difficult one, but I’m convinced this is the best decision for me and my family.

A Little Background

Megan and I are both Memphis natives. We’ve got deep roots in the “Bluff City”, and both of our families (excepting one of my brothers and his family) are still there. We moved to Nashville late last year after the incomparable Scott Gordon hooked me up with an awesome gig here in town.

In April of this year we found out we’d be having a baby! That was amazing, wonderful news, but the pregnancy has been difficult from day one. Recently, things got really, really scary. After a lot of discussion and soul searching, Megan and I decided the best decision for us and our new family would be moving home to be around the support, care, and help of our families.

I’m Leaving, But Only Mostly

So come mid- to late-October, when our current lease is up, we’re heading back to M-town. There is a huge Nashville-related silver lining, however. OpenSky is letting me move to Memphis while staying on staff with them! That means regular trips back to Nashville to work in the local office, get some face time with the team, and hang out with all my Nashville friends.

So Let’s Do Lunch

Nashville friends, we need to do lunch, pronto. Two months seems like a long time, but I’ll be driving that moving truck west on I-40 before we know it. I want to get together while it’s still relatively easy to do so. I’ve spent too little time with y'all as it is.

MEMPHIS, I AM ALMOST BACK IN YOU

Leaving Nashville is sad, coming back to Memphis is going to be awesome. I’m looking forward to hanging out with my Memphis people, getting involved in the Memphis PHP and the #memtech communities again, reconnecting with Memphis Roller Derby, and being able to see the fam whenever.

Exciting Times Ahead

Things have been wild the past few years, and they’re about to get a lot more exciting. Change is tough, and I’m feeling the pressure, but I feel great about where I’m at, where Megan and I are at, and our future. I can’t wait to see what comes next.

Trending on GitHub

Thanks to yesterday’s link love from PHPDeveloper, I’ve made both the Trending PHP Developer list and the Trending PHP Repo list on GitHub.

With internet fame being so fleeting, I took some screenshots for posterity. Now please excuse me while I get some ice for my arm. I seem to have injured myself while patting myself on the back.

API Query Authentication With Query Auth

Most APIs require some sort of query authentication: a method of signing API requests with an API key and signature. The signature is usually generated using a shared secret. When you’re consuming an API, there are (hopefully) easy to follow steps to create signatures. When you’re writing your own API, you have to whip up both server-side signature validation and a client-side signature creation strategy. Query Auth endeavors to handle both of those tasks: signature creation and signature validation.

Philosophy

Query Auth is intended to be – and is written as – a bare bones library. Many of the niceties and abstractions you’d find in a fully featured API library or SDK are absent. The point of the library is to provide you with the ability to focus on writing the meat of your API while offloading the authentication bits.

What’s Included?

There are three components to Query Auth: request signing for API consumers and creators, request signature validation for API creators, and API key and API secret generation.

Request Signing

$collection = new QueryAuth\NormalizedParameterCollection();
$signer = new QueryAuth\Signer($collection);
$client = new QueryAuth\Client($signer);

$key = 'API_KEY';
$secret = 'API_SECRET';
$method = 'GET';
$host = 'api.example.com';
$path = '/resources';
$params = array('type' => 'vehicles');

$signedParameters = $client->getSignedRequestParams($key, $secret, $method, $host, $path, $params);

Client::getSignedRequestParams() returns an array of parameters to send via the querystring (for GET requests) or the request body. The parameters are those provided to the method (if any), plus timestamp, key, and signature.
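
For a GET request, turning those parameters into a URL is then up to you (or your HTTP client). A minimal sketch, reusing the variables from the example above:

$url = sprintf(
    'http://%s%s?%s',
    $host,
    $path,
    http_build_query($signedParameters)
);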

Signature Validation

$collection = new QueryAuth\NormalizedParameterCollection();
$signer = new QueryAuth\Signer($collection);
$server = new QueryAuth\Server($signer);

$secret = 'API_SECRET_FROM_PERSISTENCE_LAYER';
$method = 'GET';
$host = 'api.example.com';
$path = '/resources';
// querystring params or request body as an array,
// which includes timestamp, key, and signature params from the client's
// getSignedRequestParams method
$params = 'PARAMS_FROM_REQUEST';

$isValid = $server->validateSignature($secret, $method, $host, $path, $params);

Server::validateSignature() will return either true or false. It might also throw one of three exceptions:

  • MaximumDriftExceededException: If timestamp is too far in the future
  • MinimumDriftExceededException: If timestamp is too far in the past
  • SignatureMissingException: If signature is missing from request params

Drift defaults to 15 seconds, meaning there is a 30 second window during which the request is valid. The default value can be modified using Server::setDrift().
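
For example, widening the window to two minutes total would look something like this (drift is expressed in seconds):

// Accept timestamps up to 60 seconds in the past or future
$server->setDrift(60);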

Key Generation

You can generate API keys and secrets in the following manner.

$randomFactory = new \RandomLib\Factory();
$keyGenerator = new QueryAuth\KeyGenerator($randomFactory);

// 40 character random alphanumeric string
$key = $keyGenerator->generateKey();

// 60 character random string containing the characters
// 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ./
$secret = $keyGenerator->generateSecret();

Both key and secret are generated using Anthony Ferrara’s RandomLib random string generator.

That’s Kinda Ugly, Dude

As I pointed out, the Query Auth library is pretty bare bones. There are a lot of opportunities for abstraction that would make the library much easier to use and much nicer to look at. If I added them to Query Auth, however, that would lock library users into whichever HTTP client I chose to use. The same concern would go for whatever other abstractions I decided on. The point here is to offload query authentication, and only query authentication, to the Query Auth library.

Sample Implementation

In order to demonstrate how one might implement the Query Auth library, I’ve whipped up a sample implementation for you.

The sample uses Vagrant and VirtualBox to allow you to see the whole thing in action. Slim Framework runs the API, Guzzle is used to make requests to the API, and both a GET and POST request are implemented. JSend, Jamie Schembri’s PHP implementation of the OmniTI JSend specification, is used to send messages back from the API, and Parsedown PHP, Emanuil Rusev’s Markdown parser for PHP, is used to render the sample implementation’s documentation.

Request Signing

In the sample implementation, request signing has been abstracted in the Example\ApiRequestSigner class. Signing requests is now as simple as passing the request object and credentials object to the signRequest method:

/**
 * Signs API request
 *
 * @param RequestInterface $request     HTTP Request
 * @param ApiCredentials   $credentials API Credentials
 */
public function signRequest(RequestInterface $request, ApiCredentials $credentials)
{
    $signedParams = $this->client->getSignedRequestParams(
            $credentials->getKey(),
            $credentials->getSecret(),
            $request->getMethod(),
            $request->getHost(),
            $request->getPath(),
            $this->getParams($request)
            );

    $this->replaceParams($request, $signedParams);
}

Signature Validation

In the sample implementation, signature validation has been abstracted in the Example\ApiRequestValidator class. Validating request signatures is now as simple as passing the request object and credentials object to the isValid method:

/**
 * Validates an API request
 *
 * @param  Request        $request     HTTP Request
 * @param  ApiCredentials $credentials API Credentials
 * @return bool           True if valid, false if invalid
 */
public function isValid(Request $request, ApiCredentials $credentials)
{
    return $this->server->validateSignature(
        $credentials->getSecret(),
        $request->getMethod(),
        $request->getHost(),
        $request->getPath(),
        $this->getParams($request)
    );
}

Signing a GET Request

Signing a request is now extremely clean and simple. Here’s the GET example from the sample implementation.

/**
 * Sends a signed GET request which returns a famous mangled phrase
 */
$app->get('/get-example', function() use ($app, $credentials, $requestSigner) {

    // Create request
    $guzzle = new GuzzleClient('http://query-auth.dev');
    $request = $guzzle->get('/api/get-example');

    // Sign request
    $requestSigner->signRequest($request, $credentials);

    $response = $request->send();

    $app->render('get.html', array('request' => (string) $request, 'response' => (string) $response));
});

Validating a GET Request

Validating a GET request is equally clean and simple. Note the try/catch that handles possible exceptions from the validation class.

/**
 * Validates a signed GET request and, if the request is valid, returns a
 * famous mangled phrase
 */
$app->get('/api/get-example', function () use ($app, $credentials, $requestValidator) {

    try {
        // Validate the request signature
        $isValid = $requestValidator->isValid($app->request(), $credentials);

        if ($isValid) {
            $mistakes = array('necktie', 'neckturn', 'nickle', 'noodle');
            $format = 'Klaatu... barada... n... %s!';
            $data = array('message' => sprintf($format, $mistakes[array_rand($mistakes)]));
            $jsend = new JSendResponse('success', $data);
        } else {
            $jsend = new JSendResponse('fail', array('message' => 'Invalid signature'));
        }
    } catch (\Exception $e) {
        $jsend = new JSendResponse('error', array(), $e->getMessage());
    }

    $response = $app->response();
    $response['Content-Type'] = 'application/json';
    echo $jsend->encode();
});

Sample Request and Response

The code above produces the below request and response:

Request

GET /api/get-example?key=ah5yEgQzjuFsC9nWsRI4Nar3ikOqWVPcD3OntHpg&timestamp=1376416267&signature=3DqimkvigYBorGi8wHfil9lB8oCWhB%2BHYt6rVfE4zx4%3D HTTP/1.1
Host: query-auth.dev
User-Agent: Guzzle/3.7.2 curl/7.22.0 PHP/5.5.1-2+debphp.org~precise+2

Response

HTTP/1.1 200 OK
Date: Tue, 13 Aug 2013 17:51:07 GMT
Server: Apache/2.4.6 (Ubuntu)
X-Powered-By: PHP/5.5.1-2+debphp.org~precise+2
Content-Length: 75
Content-Type: application/json

{"status":"success","data":{"message":"Klaatu... barada... n... necktie!"}}

Wrapping Up

So there you have it: QueryAuth to sign and validate API requests (and generate keys and secrets!) and a sample implementation to get you going. If you find this helpful, or have any questions or comments, please let me know. If you find any horrible mistakes, please feel free to submit an issue or a pull request, or you can always submit the offending code to CSI: PHP :-)

Vagrant Synced Folders Permissions

UPDATE: Since writing this post Vagrant has changed the synced folder settings and I’ve come across a new (and better?) way of handling this problem. Scroll down for the updates.

Having trouble getting your Synced Folders permissions just right in your Vagrant + VirtualBox VM? They’ve been giving me some grief lately. Here are the (undocumented) Vagrantfile options that finally got it sorted out.

Permissions Challenges

The issue that got me digging into this was trying to get permissions just so to allow my web apps to write logs. As you probably know, apache (or your web server of choice) sometimes needs write access to certain web application directories. I’ve always taken care of that by adding the apache group to the directories in question and then giving that group write access. Doing that in Ubuntu looks something like:

chown -R jeremykendall.www-data /path/to/logs
chmod 775 /path/to/logs

If you’ve tried doing something similar in your Vagrant shared folders, you’ve likely failed. This, as it turns out, doesn’t work with VirtualBox shared folders – you have to make the changes in your Vagrantfile.

Setting Permissions via the Vagrantfile

UPDATE: Thanks to Joe Ferguson for pointing out in the comments that Vagrant has been upgraded and my example was no longer current. Below are both examples marked by Vagrant version.

Here’s my new synced_folder setting in my Vagrantfile:

Vagrant v1.1+:

  # Vagrant v1.1+
  config.vm.synced_folder "./", "/var/sites/dev.query-auth", id: "vagrant-root",
    owner: "vagrant",
    group: "www-data",
    mount_options: ["dmode=775,fmode=664"]

Vagrant 1.0.x:

  # Vagrant v1.0.x
  config.vm.synced_folder "./", "/var/sites/dev.query-auth", id: "vagrant-root",
    :owner => "vagrant",
    :group => "www-data",
    :extra => "dmode=775,fmode=664"

I’m sure you can immediately see what resolved the issue. Setting the owner and group, along with the dmode and fmode mount options for directory and file permissions, did the trick. That simple fix was frustratingly difficult because I couldn’t find it documented anywhere. After much searching and opening far too many browser tabs, I cobbled together the info above. A quick vagrant reload later and I was off to the races.

UPDATE: Alternate Method

An alternate method that doesn’t include modifying your synced folder permissions is changing the web user to the vagrant user. Bad idea? Security problem? Not on your dev VM it ain’t, and that’s good enough for me. Big thanks to Chris Tankersley for all the help getting this one figured out.

Chris and I both put together gists, and this is how I’m currently doing it in Flaming Archer, but probably the best method for changing the apache user to the vagrant user comes from the Intracto Puppet apache manifest.

# Source https://raw.github.com/Intracto/Puppet/master/apache2/manifests/init.pp

# Change user
exec { "ApacheUserChange" :
    command => "sed -i 's/APACHE_RUN_USER=www-data/APACHE_RUN_USER=vagrant/' /etc/apache2/envvars",
    onlyif  => "grep -c 'APACHE_RUN_USER=www-data' /etc/apache2/envvars",
    require => Package["apache2"],
    notify  => Service["apache2"],
}

# Change group
exec { "ApacheGroupChange" :
    command => "sed -i 's/APACHE_RUN_GROUP=www-data/APACHE_RUN_GROUP=vagrant/' /etc/apache2/envvars",
    onlyif  => "grep -c 'APACHE_RUN_GROUP=www-data' /etc/apache2/envvars",
    require => Package["apache2"],
    notify  => Service["apache2"],
}

Additionally, if you’re copying and pasting from anywhere, don’t forget to change the apache lockfile permissions:

# Source https://github.com/Intracto/Puppet/blob/master/apache2/manifests/init.pp

exec { "apache_lockfile_permissions" :
    command => "chown -R vagrant:www-data /var/lock/apache2",
    require => Package["apache2"],
    notify  => Service["apache2"],
}

ACL on Shared Folders That Are Not NFS

One of the reasons the above methods are necessary is that you can’t use ACLs on shared directories. If none of the above options appeal to you, it’s possible to use ACLs on your VM as long as the directories aren’t shared. For more information, see Frank Stelzer’s comment regarding setfacl on a Vagrant box.

I Love You Guys

Tuesday was a frustrating day at work, one I spent chasing a bug that I was never able to find. My friends on Twitter made it all better.

Number of the Contributor

My contributions at work, Aug 05 2012 - Aug 05 2013.

Woe to you, o earth and sea, for the devil sends the beast with wrath, because he knows the time is short … Let him who hath understanding reckon the number of the beast: for it is a human number; its number is six hundred and sixty six.

The Composer Kerfuffle

Wednesday night I discovered, much to my shock and dismay, that Composer’s install command now defaults to installing development dependencies along with a project’s required dependencies. That discovery prompted me to start a “small twitter shitstorm” with this tweet:

Before I delve into why I’m shocked and dismayed, I want to say a few things about Composer and the Composer team. In all of my years of programming in PHP, I’m not sure there’s been a more important, more game changing, or more exciting project than Composer. Being able to easily manage project dependencies has revolutionized the way I develop. Composer, and the related packagist.org, have been a larger quality-of-life improvement for me than any other tool I’ve added to my toolkit over the years. I’d like to extend my sincerest thanks to Nils Adermann, Jordi Boggiano and the many community contributors who have worked so hard and so diligently to make Composer a reality.

My Beef with the Change

1. There was never any public discussion about the change

Beyond a few brief asides in a couple of GitHub issues and pull requests, I can’t find anywhere that this change was discussed publicly. For a project of this size and this importance, removing the community from the decision making process was a terrible mistake. I’d much prefer to have argued all of this before the change than after.

Yesterday, Jordi posted “Composer: Installing require-dev by default” to explain his rationale for the change. One of his points is that he made a note in the 1.0.0-alpha changelog regarding the upcoming change. While this is true, I find it insufficient. I rarely read changelogs (my bad), but I certainly don’t read changelogs to discover what’s coming in the future. That’s what road maps, blog posts, and PRs are for. Putting that note in the changelog was “too little, too early”.

2. Composer philosophy and workflow have always been “install for production, update for development”

The longstanding Composer rule of thumb has always been “install for production, update for development”. Adam Brett’s post on Composer workflow and the difference between update and install is a good example of this rule of thumb. Humorously enough, Jordi reinforces that rule of thumb (while defending the change that composer update installs dev requirements by default) in his blog post immediately prior to the post defending the composer install change:

“The install command on the other hand remains the same. It does not install dev dependencies by default, and it will actually remove them if they were previously installed and you run it without --dev. Again this makes sense since in production you should only run install to get the last verified state (stored in composer.lock) of your dependencies installed.”

That rule of thumb is now turned on its head, and the default “composer install in production” advice now needs updating, careful warnings, and caveats.
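
Concretely, production deploy scripts that used to get away with a bare composer install now need to remember the flag:

composer install --no-dev --optimize-autoloader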

3. Tools should never default to dev (unless they’re meant for dev, of course)

This is a philosophical point on my part, but it’s one I don’t think is unique to me and it’s one that I think can be well defended. My point here is that one should always write code and tools in such a way that deployments to production will only ever result in production code being deployed. Clear as mud? Let me try with an example.

I frequently use environment variables to allow my applications to detect which environment they’re running in. If those environment variables don’t exist, then the application should default to production. Why? Because dev environments are the special case, not production, and it’s far too easy to forget to add those environment variables when deploying. I make my life easier by making the production environment as idiot-proof as possible, and not the other way around.
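
As a sketch of what I mean (the environment variable name is arbitrary), the application-side check defaults to production when the variable is missing:

// Default to production unless the environment explicitly says otherwise
$environment = getenv('APPLICATION_ENV') ?: 'production';

if ($environment !== 'production') {
    error_reporting(E_ALL);
    ini_set('display_errors', '1');
}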

This philosophy was in place in the prior behavior of composer install (and composer update, for that matter). Now that it’s changed, the production environment is far more likely to suffer than the development environment. Forgetting to add the --dev flag in development is a lot less (potentially) costly than forgetting to add the --no-dev flag in production.

In a seeming contradiction, I’ve said that I have no problem with composer update defaulting to installing dev dependencies. I’ve gone back and forth on that a bit when considering my “tools should never default to dev” position, but I don’t think I’m being inconsistent here. Since the rule of thumb encourages using update in development and never in production, then update becomes a dev tool which can safely default to installing development dependencies. Having said that, if consistency between commands is important, then composer update should no longer default to dev and the changes to both install and update should be reverted.

In Closing

Composer has become an integral part of my workflow, and a critical piece of the PHP development process in general. I loved Composer before this change and I’ll love Composer after. That said, changing the Composer command that is intended primarily for production use is extremely disruptive and a very bad call, especially considering how the change came about.

Piedmont Natural Gas: Customer Service WIN

(This is the follow-up to yesterday’s post, “Dear Piedmont Natural Gas”. Start there for the full story.)

As I write this follow-up post, the gas has been turned back on, my hot water heater is working away, and my wife and I find ourselves on the far side of a bad situation turned good.

We last left off with a phone call from Piedmont corporate and a promise to have our gas turned back on by 5 pm CST. Not only did Piedmont come through, they came through big. The technician they sent was early to the appointment and went above-and-beyond to make sure we were taken care of. While I wish this entire situation never happened, the outcome is the very best of a bad situation, and I want to close this out with a big thanks to Piedmont Natural Gas.

Once Piedmont corporate learned about our problem, they went the extra mile to resolve it to our satisfaction as quickly as possible. I don’t know everything they did, but they went as far as reviewing the recordings of both our Friday and Saturday customer service phone calls, calling both my wife and myself to schedule reconnection, and taking the time to explain exactly what happened, why it happened, and then take responsibility for it. Kudos to Piedmont.

Thanks also to all of you for being so supportive. In the grand scheme of things, this was small potatoes, but that didn’t make it any less upsetting. The retweets, supportive comments, and personal commiserations went a long way towards making this a lot easier.