Matthew Daly's Blog

I'm a web developer in Norfolk. This is my blog...

10th August 2016 8:45 pm

An Introduction to Managing Your Servers With Ansible

If, like me, you’re a web developer who sometimes also has to wear a sysadmin’s hat, then you’ll probably be coming across the same set of tasks each time you set up a new server. These may include:

  • Provisioning new servers on cloud hosting providers such as Digital Ocean
  • Setting up Cloudflare
  • Installing a web server, database and other required packages
  • Installing an existing web application, such as Wordpress
  • Configuring the firewall and Fail2ban
  • Keeping existing servers up to date

These can get tedious and repetitive fairly quickly - who genuinely wants to SSH into each server individually and run the updates regularly? Also, if done manually, there’s a danger of the setup for each server becoming inconsistent. Shell scripts can do the job, but they aren’t easy to read, and aren’t necessarily easy to adapt to different operating systems. What you need is a way to manage multiple servers easily, maintain a series of reusable “recipes”, and do it all in a way that’s straightforward to read - in other words, a configuration management system.

There are several others around, such as Chef, Puppet, and Salt, but my own choice is Ansible. Here’s why I went for it:

  • Playbooks and roles are defined as YAML, making them fairly straightforward to read and understand
  • It’s written in Python, making it easy to create your own modules that leverage existing Python modules to get things done
  • It’s distributed via pip, making it easy to install
  • It doesn’t require you to install anything new on the servers, so you can get started straight away as soon as you can access a new server
  • It has modules for interfacing with cloud services such as Digital Ocean and Amazon Web Services

Ansible is very easy to use, but you do still need to know what is actually going on to get the best out of it. It’s intended as a convenient abstraction on top of the underlying commands, not a replacement, and you should know how to do what you want to do manually before you write an Ansible playbook to do it.

Setting up

You need to have Python 2 available. Ansible doesn’t yet support Python 3 (Grr…), so if you’re using an operating system that has switched to Python 3, such as Arch Linux, you’ll need to have Python 2 installed as well. Assuming you have pip installed, run this command to install Ansible:

$ sudo pip install ansible

Or for users on systems with Python 3 as the main Python:

$ sudo pip2 install ansible

For Windows users, you’ll want to drop sudo. On Unix-like OSes that don’t have sudo installed, drop it and run the command as root.

Our first Ansible command

We’ll demonstrate Ansible in action with a Vagrant VM. Drop the following Vagrantfile into your working directory:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "debian/jessie64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
end

Then fire up the VM:

$ vagrant up

This VM will be our test bed for running Ansible. If you prefer, you can use a remote server instead.

Next, we’ll configure Ansible. Save this as ansible.cfg:

[defaults]
hostfile = inventory
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key

In this case the remote user is vagrant because we’re using Vagrant, but to manage remote machines you would need to change this to the name of the account you use on the server. The value of private_key_file will also normally be the path to your private key, such as /home/matthew/.ssh/id_rsa (not the .pub public key), but here we’re using the Vagrant-specific key.

Note the hostfile entry - this points to the list of hosts you want to manage with Ansible. Let’s create this next. Save the following as inventory:

testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222

Note that we explicitly need to set the port here because we’re using Vagrant. Normally it will default to port 22. A typical entry for a remote server might look like this:

example.com ansible_ssh_host=192.168.56.101

Note also that we can refer to hosts by the names we give them, which can be as meaningful (or not) as you want.

Let’s run our first command:

$ ansible all -m ping
testserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

We called Ansible with the hosts set to all, therefore every host in the inventory was contacted. We used the -m flag to say we were calling a module, and then specified the ping module. Ansible therefore pinged each server in turn.

We can call ad-hoc commands using the -a flag, as in this example:

$ ansible all -a "uptime"
testserver | SUCCESS | rc=0 >>
17:26:57 up 19 min, 1 user, load average: 0.00, 0.04, 0.13

This command gets the uptime for the server. If you only want to run the command on a single server, you can specify it by name:

$ ansible testserver -a "uptime"
testserver | SUCCESS | rc=0 >>
17:28:21 up 20 min, 1 user, load average: 0.02, 0.04, 0.13

Here we specified the server as testserver. What about if you want to specify more than one server, but not all of them? You can create groups of servers in inventory, as in this example:

[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
example.com ansible_ssh_host=192.168.56.101

You could then call the following to run the uptime command on all the servers in the webservers group:

$ ansible webservers -a 'uptime'

If you want to run the command as a different user, you can do so:

$ ansible webservers -a 'uptime' -u bob

Note that for running uptime we haven’t specified the -m flag. This is because the command module is the default, but it’s very basic and doesn’t support shell variables. For more complex interactions you might need to use the shell module, as in this example:

$ ansible testserver -m shell -a 'echo $PATH'
testserver | SUCCESS | rc=0 >>
/usr/local/bin:/usr/bin:/bin:/usr/games

For installing a package on Debian or Ubuntu, you might use the apt module:

$ ansible testserver -m apt -a "name=git state=present" --become
testserver | SUCCESS => {
"cache_update_time": 0,
"cache_updated": false,
"changed": true,
"stderr": "",
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n git-man liberror-perl\nSuggested packages:\n git-daemon-run git-daemon-sysvinit git-doc git-el git-email git-gui gitk\n gitweb git-arch git-cvs git-mediawiki git-svn\nThe following NEW packages will be installed:\n git git-man liberror-perl\n0 upgraded, 3 newly installed, 0 to remove and 83 not upgraded.\nNeed to get 4552 kB of archives.\nAfter this operation, 23.5 MB of additional disk space will be used.\nGet:1 http://httpredir.debian.org/debian/ jessie/main liberror-perl all 0.17-1.1 [22.4 kB]\nGet:2 http://httpredir.debian.org/debian/ jessie/main git-man all 1:2.1.4-2.1+deb8u2 [1267 kB]\nGet:3 http://httpredir.debian.org/debian/ jessie/main git amd64 1:2.1.4-2.1+deb8u2 [3262 kB]\nFetched 4552 kB in 1s (3004 kB/s)\nSelecting previously unselected package liberror-perl.\r\n(Reading database ... \r(Reading database ... 5%\r(Reading database ... 10%\r(Reading database ... 15%\r(Reading database ... 20%\r(Reading database ... 25%\r(Reading database ... 30%\r(Reading database ... 35%\r(Reading database ... 40%\r(Reading database ... 45%\r(Reading database ... 50%\r(Reading database ... 55%\r(Reading database ... 60%\r(Reading database ... 65%\r(Reading database ... 70%\r(Reading database ... 75%\r(Reading database ... 80%\r(Reading database ... 85%\r(Reading database ... 90%\r(Reading database ... 95%\r(Reading database ... 100%\r(Reading database ... 32784 files and directories currently installed.)\r\nPreparing to unpack .../liberror-perl_0.17-1.1_all.deb ...\r\nUnpacking liberror-perl (0.17-1.1) ...\r\nSelecting previously unselected package git-man.\r\nPreparing to unpack .../git-man_1%3a2.1.4-2.1+deb8u2_all.deb ...\r\nUnpacking git-man (1:2.1.4-2.1+deb8u2) ...\r\nSelecting previously unselected package git.\r\nPreparing to unpack .../git_1%3a2.1.4-2.1+deb8u2_amd64.deb ...\r\nUnpacking git (1:2.1.4-2.1+deb8u2) ...\r\nProcessing triggers for man-db (2.7.0.2-5) ...\r\nSetting up liberror-perl (0.17-1.1) ...\r\nSetting up git-man (1:2.1.4-2.1+deb8u2) ...\r\nSetting up git (1:2.1.4-2.1+deb8u2) ...\r\n",
"stdout_lines": [
"Reading package lists...",
"Building dependency tree...",
"Reading state information...",
"The following extra packages will be installed:",
" git-man liberror-perl",
"Suggested packages:",
" git-daemon-run git-daemon-sysvinit git-doc git-el git-email git-gui gitk",
" gitweb git-arch git-cvs git-mediawiki git-svn",
"The following NEW packages will be installed:",
" git git-man liberror-perl",
"0 upgraded, 3 newly installed, 0 to remove and 83 not upgraded.",
"Need to get 4552 kB of archives.",
"After this operation, 23.5 MB of additional disk space will be used.",
"Get:1 http://httpredir.debian.org/debian/ jessie/main liberror-perl all 0.17-1.1 [22.4 kB]",
"Get:2 http://httpredir.debian.org/debian/ jessie/main git-man all 1:2.1.4-2.1+deb8u2 [1267 kB]",
"Get:3 http://httpredir.debian.org/debian/ jessie/main git amd64 1:2.1.4-2.1+deb8u2 [3262 kB]",
"Fetched 4552 kB in 1s (3004 kB/s)",
"Selecting previously unselected package liberror-perl.",
"(Reading database ... ",
"(Reading database ... 5%",
"(Reading database ... 10%",
"(Reading database ... 15%",
"(Reading database ... 20%",
"(Reading database ... 25%",
"(Reading database ... 30%",
"(Reading database ... 35%",
"(Reading database ... 40%",
"(Reading database ... 45%",
"(Reading database ... 50%",
"(Reading database ... 55%",
"(Reading database ... 60%",
"(Reading database ... 65%",
"(Reading database ... 70%",
"(Reading database ... 75%",
"(Reading database ... 80%",
"(Reading database ... 85%",
"(Reading database ... 90%",
"(Reading database ... 95%",
"(Reading database ... 100%",
"(Reading database ... 32784 files and directories currently installed.)",
"Preparing to unpack .../liberror-perl_0.17-1.1_all.deb ...",
"Unpacking liberror-perl (0.17-1.1) ...",
"Selecting previously unselected package git-man.",
"Preparing to unpack .../git-man_1%3a2.1.4-2.1+deb8u2_all.deb ...",
"Unpacking git-man (1:2.1.4-2.1+deb8u2) ...",
"Selecting previously unselected package git.",
"Preparing to unpack .../git_1%3a2.1.4-2.1+deb8u2_amd64.deb ...",
"Unpacking git (1:2.1.4-2.1+deb8u2) ...",
"Processing triggers for man-db (2.7.0.2-5) ...",
"Setting up liberror-perl (0.17-1.1) ...",
"Setting up git-man (1:2.1.4-2.1+deb8u2) ...",
"Setting up git (1:2.1.4-2.1+deb8u2) ..."
]
}

Here we specify that a particular package should be state=present or state=absent. Also, note the --become flag, which allows us to become root. If you’re using an RPM-based Linux distro, you can use the yum module in the same way.
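
For instance, installing Git on a CentOS or Fedora host might look like this (reusing the testserver inventory name purely for illustration - the VM in this tutorial is Debian, so apt is the right module for it):

$ ansible testserver -m yum -a "name=git state=present" --become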

Finally, let’s use the git module to check out a project on the server:

$ ansible testserver -m git -a "repo=https://github.com/matthewbdaly/django_tutorial_blog_ng.git dest=/home/vagrant/example version=HEAD"
testserver | SUCCESS => {
    "after": "3542098e3b01103db4d9cfc724ba3c71c45cb314",
    "before": null,
    "changed": true,
    "warnings": []
}

Here we check out a Git repository. We specify the repo, destination and version.

You can call any installed Ansible module in an ad-hoc fashion in the same way. Refer to the documentation for a list of modules.
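
You can also browse the module documentation from the terminal using the ansible-doc tool that ships with Ansible:

$ ansible-doc -l
$ ansible-doc git

The first command lists the available modules, and the second displays the documentation and parameters for the git module we used above.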

Playbooks

Ad-hoc commands are useful, but they don’t offer much extra over using SSH. Playbooks allow you to define a repeatable set of commands for a particular use case. In this example, I’ll show you how to write a playbook that does the following:

  • Installs and configures Nginx
  • Clones the repository for my site into the web root

This is sufficiently complex to demonstrate some more of the functionality of Ansible, while also demonstrating playbooks in action.

Create a new folder called playbooks, and inside it save the following as sitecopy.yml:

---
- name: Copy personal website
  hosts: testserver
  become: True
  tasks:
    - name: Install Nginx
      apt: name=nginx update_cache=yes
    - name: Copy config
      copy: >
        src=files/nginx.conf
        dest=/etc/nginx/sites-available/default
    - name: Activate config
      file: >
        dest=/etc/nginx/sites-enabled/default
        src=/etc/nginx/sites-available/default
        state=link
    - name: Delete /var/www directory
      file: >
        path=/var/www
        state=absent
    - name: Clone repository
      git: >
        repo=https://github.com/matthewbdaly/matthewbdaly.github.io.git
        dest=/var/www
        version=HEAD
    - name: Restart Nginx
      service: name=nginx state=restarted

Note the name fields - these are human-readable labels that show up in the output as each step runs. First we use the apt module to install Nginx, then we copy over the config file and activate it, then we empty the existing /var/www and clone the repository, and finally we restart Nginx.

Also, note the following fields:

  • hosts defines the hosts affected
  • become specifies that the commands are run using sudo

We also need to create the config for Nginx. Create the files directory under playbooks and save this file as playbooks/files/nginx.conf:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www;
    index index.html index.htm;

    server_name localhost;

    location / {
        try_files $uri $uri/ =404;
    }
}

Obviously if your Nginx config will be different, feel free to amend it as necessary. Finally, we run the playbook using the ansible-playbook command:

$ ansible-playbook playbooks/sitecopy.yml
PLAY [Copy personal website] ***************************************************
TASK [setup] *******************************************************************
ok: [testserver]
TASK [Install Nginx] ***********************************************************
changed: [testserver]
TASK [Copy config] *************************************************************
changed: [testserver]
TASK [Activate config] *********************************************************
changed: [testserver]
TASK [Delete /var/www directory] ***********************************************
changed: [testserver]
TASK [Clone repository] ********************************************************
changed: [testserver]
TASK [Restart Nginx] ***********************************************************
changed: [testserver]
PLAY RECAP *********************************************************************
testserver : ok=7 changed=6 unreachable=0 failed=0

If we had a playbook that we wanted to run on only a subset of the hosts it applied to, we could use the -l flag, as in this example:

$ ansible-playbook playbooks/sitecopy.yml -l testserver

Using these same basic concepts, you can invoke many different Ansible modules to achieve many different tasks. You can spin up new servers on supported cloud hosting companies, you can set up a known good fail2ban config, you can configure your firewall, and many more tasks. As your playbooks get bigger, it’s worth moving sections into separate roles that get invoked within multiple playbooks, in order to reduce repetition.
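
As a rough sketch of what that might look like - assuming you extracted the Nginx tasks above into a role at playbooks/roles/nginx/tasks/main.yml - the playbook itself would shrink to something like this:

---
- name: Copy personal website
  hosts: testserver
  become: True
  roles:
    - nginx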

Finally, I mentioned earlier that you can use Ansible to update all of your servers regularly. Here’s the playbook I use for that:

---
- name: Update system
  hosts: all
  become: True
  tasks:
    - name: update system
      apt: upgrade=full update_cache=yes

This connects to all hosts using the all shortcut we saw earlier, and upgrades all existing packages. Using this method is a lot easier than connecting to each one in turn via SSH and updating it manually.
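
Assuming you save this as playbooks/update.yml (the name is arbitrary), updating every server then becomes a single command:

$ ansible-playbook playbooks/update.yml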

Summary

Ansible is an extremely useful tool for managing servers, but to get the most out of it you have to put in a fair bit of work reading the documentation and writing your own playbooks for your own use cases. It’s simple to get started with, and if you’re willing to put in the time writing your own playbooks then in the long run you’ll save yourself a lot of time and grief by making it easy to set up new servers and administer existing ones. Hopefully this has given you a taster of what you can do with Ansible - from here on the documentation is worth a look as it lists all of the modules that ship with Ansible. If there’s a particular task you dread, such as setting up a mail server, then Ansible is a very good way to automate that away so it’s easier next time.

My experience is that it’s best to make an effort to try to standardise on two or three different stacks for different purposes, and create Ansible playbooks for those stacks. For instance, I’ve tended to use PHP 5, Apache, MySQL, Memcached and Varnish for Wordpress sites, and PHP 7, Nginx, Redis and PostgreSQL for Laravel sites. That way I know that any sites I build with Laravel will be using that stack. Knowing my servers are more consistent makes it easier to work with them and identify problems.

8th August 2016 5:05 pm

Testing Your API Documentation With Dredd

Documenting your API is something most developers agree is generally a Good Thing, but it’s a pain in the backside, and somewhat boring to do. What you really need is a tool that allows you to specify the details of your API before you start work, generate documentation from that specification, and test your implementation against that specification.

Fortunately, such a tool exists. The API Blueprint specification allows you to document your API using a Markdown-like syntax. You can then generate HTML documentation using a tool like Aglio or Apiary, and test your implementation against the specification using Dredd.

In this tutorial we’ll implement a very basic REST API using the Lumen framework. We’ll first specify our API, then we’ll implement routes to match the implementation. In the process, we’ll demonstrate the Blueprint specification in action.

Getting started

Assuming you already have PHP 5.6 or better and Composer installed, run the following command to create our Lumen app skeleton:

$ composer create-project --prefer-dist laravel/lumen demoapi

Once it has finished installing, we’ll also need to add the Dredd hooks:

$ cd demoapi
$ composer require ddelnano/dredd-hooks-php

Next, we need to install Dredd itself. It’s a Node.js tool, so you’ll need to have Node.js installed. We’ll also install Aglio to generate HTML versions of our documentation:

$ npm install -g aglio dredd

We also need to create a configuration file for Dredd, which you can do by running dredd init. Or you can just copy the one below:

dry-run: null
hookfiles: null
language: php
sandbox: false
server: 'php -S localhost:3000 -t public/'
server-wait: 3
init: false
custom:
  apiaryApiKey: ''
names: false
only: []
reporter: apiary
output: []
header: []
sorted: false
user: null
inline-errors: false
details: false
method: []
color: true
level: info
timestamp: false
silent: false
path: []
hooks-worker-timeout: 5000
hooks-worker-connect-timeout: 1500
hooks-worker-connect-retry: 500
hooks-worker-after-connect-wait: 100
hooks-worker-term-timeout: 5000
hooks-worker-term-retry: 500
hooks-worker-handler-host: localhost
hooks-worker-handler-port: 61321
config: ./dredd.yml
blueprint: apiary.apib
endpoint: 'http://localhost:3000'

If you choose to run dredd init, you’ll see prompts for a number of things, including:

  • The server command
  • The blueprint file name
  • The endpoint
  • Any Apiary API key
  • The language you want to use

There are Dredd hooks for many languages, so if you’re planning on building a REST API in a language other than PHP, don’t worry - you can still test it with Dredd, you’ll just get prompted to install different hooks.

Note the hookfiles setting, which specifies a hookfile to run during the test in order to set up the API - we’ll write one shortly, at which point this setting needs to point to it. Also, note the server setting - this specifies the command we should call to run the server. In this case we’re using the PHP development server.

If you’re using Apiary with your API (which I highly recommend), you can also set the following parameter to ensure that every time you run Dredd, it submits the results to Apiary:

custom:
  apiaryApiKey: <API KEY HERE>
  apiaryApiName: <API NAME HERE>

Hookfiles

As mentioned, the hooks allow you to set up your API. In our case, we’ll need to set up some fixtures for our tests. Save this file at tests/dredd/hooks/hookfile.php:

<?php

use Dredd\Hooks;
use Illuminate\Support\Facades\Artisan;

require __DIR__ . '/../../../vendor/autoload.php';

$app = require __DIR__ . '/../../../bootstrap/app.php';
$app->make(\Illuminate\Contracts\Console\Kernel::class)->bootstrap();

Hooks::beforeAll(function (&$transaction) use ($app) {
    putenv('DB_CONNECTION=sqlite');
    putenv('DB_DATABASE=:memory:');
    Artisan::call('migrate:refresh');
    Artisan::call('db:seed');
});

Hooks::beforeEach(function (&$transaction) use ($app) {
    Artisan::call('migrate:refresh');
    Artisan::call('db:seed');
});

Before the tests run, we set the environment up to use an in-memory SQLite database. We also migrate and seed the database, so we’re working with a clean database. As part of this tutorial, we’ll create seed files for the fixtures we need in the database.

This hookfile assumes that the user does not need to be authenticated to communicate with the API. If that’s not the case for your API, you may want to include something like this in your hookfile’s beforeEach callback:

$user = App\User::first();
$token = JWTAuth::fromUser($user);
$transaction->request->headers->Authorization = 'Bearer ' . $token;

Here we’re using the JWT Auth package for Laravel to authenticate users of our API, and we need to set the Authorization header to contain a valid JSON web token for the given user. If you’re using a different method, such as HTTP Basic authentication, you’ll need to amend this code to reflect that.

With that done, we need to create the Blueprint file for our API. Recall the following line in dredd.yml:

blueprint: apiary.apib

This specifies the path to our documentation. Let’s create that file:

$ touch apiary.apib

Once this is done, you should be able to run Dredd:

$ dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Using apiary reporter.
info: Starting server with command: php -S localhost:3000 -t public/
info: Waiting 3 seconds for server command to start...
warn: Parser warning in file 'apiary.apib': (warning code undefined) Could not recognize API description format. Falling back to API Blueprint by default.
info: Beginning Dredd testing...
complete: Tests took 619ms
complete: See results in Apiary at: https://app.apiary.io/public/tests/run/4aab4155-cfc4-4fda-983a-fea280933ad4
info: Sending SIGTERM to the backend server
info: Backend server was killed

With that done, we’re ready to start work on our API.

Our first route

Dredd is not a testing tool in the usual sense. Under no circumstances should you use it as a substitute for something like PHPUnit - that’s not what it’s for. It’s for ensuring that your documentation and your implementation remain in sync. However, it’s not entirely impractical to use it as a Behaviour-driven development tool in the same vein as Cucumber or Behat - you can use it to plan out the endpoints your API will have, the requests they accept, and the responses they return, and then verify your implementation against the documentation.

We will only work with a single resource, in order to keep this tutorial as simple and concise as possible. Our endpoints will expose products for a shop, and will allow users to fetch, create, edit and delete products. Note that we won’t be implementing any kind of authentication, which in production is almost certainly not what you want - we’re just going for the simplest possible implementation.

First, we’ll implement getting a list of products:

FORMAT: 1A

# Demo API

# Products [/api/products]

Product object representation

## Get products [GET /api/products]

Get a list of products

+ Request (application/json)

+ Response 200 (application/json)
    + Body

            {
                "id": 1,
                "name": "Purple widget",
                "description": "A purple widget",
                "price": 5.99,
                "attributes": {
                    "colour": "Purple",
                    "size": "Small"
                }
            }

A little explanation is called for. First, the FORMAT section denotes the version of the API Blueprint format in use. Then, the # Demo API section denotes the name of the API.

Next, we define the Products endpoint, followed by our first method. Then we define what should be contained in the request, and what the response should look like. Blueprint is a little more complex than that, but that’s sufficient to get us started.

Then we run dredd again:

$ dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Using apiary reporter.
info: Starting server with command: php -S localhost:3000 -t public/
info: Waiting 3 seconds for server command to start...
info: Beginning Dredd testing...
fail: GET /api/products duration: 61ms
info: Displaying failed tests...
fail: GET /api/products duration: 61ms
fail: headers: Header 'content-type' has value 'text/html; charset=UTF-8' instead of 'application/json'
body: Can't validate real media type 'text/plain' against expected media type 'application/json'.
statusCode: Status code is not '200'
request:
method: GET
uri: /api/products
headers:
Content-Type: application/json
User-Agent: Dredd/1.5.0 (Linux 4.4.0-31-generic; x64)
body:
expected:
headers:
Content-Type: application/json
body:
{
"id": 1,
"name": "Purple widget",
"description": "A purple widget",
"price": 5.99,
"attributes": {
"colour": "Purple",
"size": "Small"
}
}
statusCode: 200
actual:
statusCode: 404
headers:
host: localhost:3000
connection: close
x-powered-by: PHP/7.0.8-0ubuntu0.16.04.2
cache-control: no-cache
date: Mon, 08 Aug 2016 10:30:33 GMT
content-type: text/html; charset=UTF-8
body:
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex,nofollow" />
<style>
/* Copyright (c) 2010, Yahoo! Inc. All rights reserved. Code licensed under the BSD License: http://developer.yahoo.com/yui/license.html */
html{color:#000;background:#FFF;}body,div,dl,dt,dd,ul,ol,li,h1,h2,h3,h4,h5,h6,pre,code,form,fieldset,legend,input,textarea,p,blockquote,th,td{margin:0;padding:0;}table{border-collapse:collapse;border-spacing:0;}fieldset,img{border:0;}address,caption,cite,code,dfn,em,strong,th,var{font-style:normal;font-weight:normal;}li{list-style:none;}caption,th{text-align:left;}h1,h2,h3,h4,h5,h6{font-size:100%;font-weight:normal;}q:before,q:after{content:'';}abbr,acronym{border:0;font-variant:normal;}sup{vertical-align:text-top;}sub{vertical-align:text-bottom;}input,textarea,select{font-family:inherit;font-size:inherit;font-weight:inherit;}input,textarea,select{*font-size:100%;}legend{color:#000;}
html { background: #eee; padding: 10px }
img { border: 0; }
#sf-resetcontent { width:970px; margin:0 auto; }
.sf-reset { font: 11px Verdana, Arial, sans-serif; color: #333 }
.sf-reset .clear { clear:both; height:0; font-size:0; line-height:0; }
.sf-reset .clear_fix:after { display:block; height:0; clear:both; visibility:hidden; }
.sf-reset .clear_fix { display:inline-block; }
.sf-reset * html .clear_fix { height:1%; }
.sf-reset .clear_fix { display:block; }
.sf-reset, .sf-reset .block { margin: auto }
.sf-reset abbr { border-bottom: 1px dotted #000; cursor: help; }
.sf-reset p { font-size:14px; line-height:20px; color:#868686; padding-bottom:20px }
.sf-reset strong { font-weight:bold; }
.sf-reset a { color:#6c6159; cursor: default; }
.sf-reset a img { border:none; }
.sf-reset a:hover { text-decoration:underline; }
.sf-reset em { font-style:italic; }
.sf-reset h1, .sf-reset h2 { font: 20px Georgia, "Times New Roman", Times, serif }
.sf-reset .exception_counter { background-color: #fff; color: #333; padding: 6px; float: left; margin-right: 10px; float: left; display: block; }
.sf-reset .exception_title { margin-left: 3em; margin-bottom: 0.7em; display: block; }
.sf-reset .exception_message { margin-left: 3em; display: block; }
.sf-reset .traces li { font-size:12px; padding: 2px 4px; list-style-type:decimal; margin-left:20px; }
.sf-reset .block { background-color:#FFFFFF; padding:10px 28px; margin-bottom:20px;
-webkit-border-bottom-right-radius: 16px;
-webkit-border-bottom-left-radius: 16px;
-moz-border-radius-bottomright: 16px;
-moz-border-radius-bottomleft: 16px;
border-bottom-right-radius: 16px;
border-bottom-left-radius: 16px;
border-bottom:1px solid #ccc;
border-right:1px solid #ccc;
border-left:1px solid #ccc;
}
.sf-reset .block_exception { background-color:#ddd; color: #333; padding:20px;
-webkit-border-top-left-radius: 16px;
-webkit-border-top-right-radius: 16px;
-moz-border-radius-topleft: 16px;
-moz-border-radius-topright: 16px;
border-top-left-radius: 16px;
border-top-right-radius: 16px;
border-top:1px solid #ccc;
border-right:1px solid #ccc;
border-left:1px solid #ccc;
overflow: hidden;
word-wrap: break-word;
}
.sf-reset a { background:none; color:#868686; text-decoration:none; }
.sf-reset a:hover { background:none; color:#313131; text-decoration:underline; }
.sf-reset ol { padding: 10px 0; }
.sf-reset h1 { background-color:#FFFFFF; padding: 15px 28px; margin-bottom: 20px;
-webkit-border-radius: 10px;
-moz-border-radius: 10px;
border-radius: 10px;
border: 1px solid #ccc;
}
</style>
</head>
<body>
<div id="sf-resetcontent" class="sf-reset">
<h1>Sorry, the page you are looking for could not be found.</h1>
<h2 class="block_exception clear_fix">
<span class="exception_counter">1/1</span>
<span class="exception_title"><abbr title="Symfony\Component\HttpKernel\Exception\NotFoundHttpException">NotFoundHttpException</abbr> in <a title="/home/matthew/Projects/demoapi/vendor/laravel/lumen-framework/src/Concerns/RoutesRequests.php line 450" ondblclick="var f=this.innerHTML;this.innerHTML=this.title;this.title=f;">RoutesRequests.php line 450</a>:</span>
<span class="exception_message"></span>
</h2>
<div class="block">
<ol class="traces list_exception">
<li> in <a title="/home/matthew/Projects/demoapi/vendor/laravel/lumen-framework/src/Concerns/RoutesRequests.php line 450" ondblclick="var f=this.innerHTML;this.innerHTML=this.title;this.title=f;">RoutesRequests.php line 450</a></li>
<li>at <abbr title="Laravel\Lumen\Application">Application</abbr>->handleDispatcherResponse(<em>array</em>('0')) in <a title="/home/matthew/Projects/demoapi/vendor/laravel/lumen-framework/src/Concerns/RoutesRequests.php line 387" ondblclick="var f=this.innerHTML;this.innerHTML=this.title;this.title=f;">RoutesRequests.php line 387</a></li>
<li>at <abbr title="Laravel\Lumen\Application">Application</abbr>->Laravel\Lumen\Concerns\{closure}() in <a title="/home/matthew/Projects/demoapi/vendor/laravel/lumen-framework/src/Concerns/RoutesRequests.php line 636" ondblclick="var f=this.innerHTML;this.innerHTML=this.title;this.title=f;">RoutesRequests.php line 636</a></li>
<li>at <abbr title="Laravel\Lumen\Application">Application</abbr>->sendThroughPipeline(<em>array</em>(), <em>object</em>(<abbr title="Closure">Closure</abbr>)) in <a title="/home/matthew/Projects/demoapi/vendor/laravel/lumen-framework/src/Concerns/RoutesRequests.php line 389" ondblclick="var f=this.innerHTML;this.innerHTML=this.title;this.title=f;">RoutesRequests.php line 389</a></li>
<li>at <abbr title="Laravel\Lumen\Application">Application</abbr>->dispatch(<em>null</em>) in <a title="/home/matthew/Projects/demoapi/vendor/laravel/lumen-framework/src/Concerns/RoutesRequests.php line 334" ondblclick="var f=this.innerHTML;this.innerHTML=this.title;this.title=f;">RoutesRequests.php line 334</a></li>
<li>at <abbr title="Laravel\Lumen\Application">Application</abbr>->run() in <a title="/home/matthew/Projects/demoapi/public/index.php line 28" ondblclick="var f=this.innerHTML;this.innerHTML=this.title;this.title=f;">index.php line 28</a></li>
</ol>
</div>
</div>
</body>
</html>
complete: 0 passing, 1 failing, 0 errors, 0 skipped, 1 total
complete: Tests took 533ms
[Mon Aug 8 11:30:33 2016] 127.0.0.1:44472 [404]: /api/products
complete: See results in Apiary at: https://app.apiary.io/public/tests/run/0153d5bf-6efa-4fdb-b02a-246ddd75cb14
info: Sending SIGTERM to the backend server
info: Backend server was killed

Our route is returning HTML, not JSON, and is also raising a 404 error. So let’s fix that. First, let’s create our Product model at app/Product.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    //
}

Next, we need to create a migration for the database tables for the Product model:

$ php artisan make:migration create_product_table
Created Migration: 2016_08_08_105737_create_product_table

This will create a new file under database/migrations. Open this file and paste in the following:

<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateProductTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        // Create products table
        Schema::create('products', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name');
            $table->text('description');
            $table->float('price');
            $table->json('attributes');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        // Drop products table
        Schema::drop('products');
    }
}

Note that we create fields that map to the attributes our API exposes. Also, note the use of the JSON field. In databases that support it natively, such as PostgreSQL, it uses that native JSON support; otherwise it behaves like a text field. Next, we run the migration to create the table:

$ php artisan migrate
Migrated: 2016_08_08_105737_create_product_table
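
As an aside, if you’d rather work with the attributes column as a PHP array than as a raw JSON string, Eloquent can encode and decode it for you via the $casts property. This is just an option, not something this tutorial relies on - it would change the API output, and therefore the blueprint, from the JSON-string format used below:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    // Transparently encode/decode the attributes column as JSON
    protected $casts = [
        'attributes' => 'array',
    ];
}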

With our model done, we now need to ensure that when Dredd runs, there is some data in the database, so we’ll create a seeder file at database/seeds/ProductSeeder.php:

<?php

use Illuminate\Database\Seeder;
use Carbon\Carbon;

class ProductSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        // Add product
        DB::table('products')->insert([
            'name' => 'Purple widget',
            'description' => 'A purple widget',
            'price' => 5.99,
            'attributes' => json_encode([
                'colour' => 'purple',
                'size' => 'Small'
            ]),
            'created_at' => Carbon::now(),
            'updated_at' => Carbon::now(),
        ]);
    }
}

You also need to amend database/seeds/DatabaseSeeder to call it:

<?php

use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        $this->call('ProductSeeder');
    }
}

I found I also had to run the following command so that Composer’s autoloader would pick up the new seeder:

$ composer dump-autoload

Then, call the seeder:

$ php artisan db:seed
Seeded: ProductSeeder

We also need to enable Eloquent, as Lumen disables it by default. Uncomment the following line in bootstrap/app.php:

$app->withEloquent();

With that done, we can move onto the controller.

Creating the controller

Create the following file at app/Http/Controllers/ProductController.php:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Product;

class ProductController extends Controller
{
    private $product;

    public function __construct(Product $product) {
        $this->product = $product;
    }

    public function index()
    {
        // Get all products
        $products = $this->product->all();

        // Send response
        return response()->json($products, 200);
    }
}

This implements the index route. Note that we inject the Product instance into the controller. Next, we need to hook it up in app/Http/routes.php:

<?php
/*
|--------------------------------------------------------------------------
| Application Routes
|--------------------------------------------------------------------------
|
| Here is where you can register all of the routes for an application.
| It is a breeze. Simply tell Lumen the URIs it should respond to
| and give it the Closure to call when that URI is requested.
|
*/
$app->get('/api/products', 'ProductController@index');

Then we run Dredd again:

$ dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Using apiary reporter.
info: Starting server with command: php -S localhost:3000 -t public/
info: Waiting 3 seconds for server command to start...
info: Beginning Dredd testing...
[Mon Aug 8 12:36:28 2016] 127.0.0.1:45466 [200]: /api/products
fail: GET /api/products duration: 131ms
info: Displaying failed tests...
fail: GET /api/products duration: 131ms
fail: body: At '' Invalid type: array (expected object)
request:
method: GET
uri: /api/products
headers:
Content-Type: application/json
User-Agent: Dredd/1.5.0 (Linux 4.4.0-31-generic; x64)
body:
expected:
headers:
Content-Type: application/json
body:
{
"id": 1,
"name": "Purple widget",
"description": "A purple widget",
"price": 5.99,
"attributes": {
"colour": "Purple",
"size": "Small"
}
}
statusCode: 200
actual:
statusCode: 200
headers:
host: localhost:3000
connection: close
x-powered-by: PHP/7.0.8-0ubuntu0.16.04.2
cache-control: no-cache
content-type: application/json
date: Mon, 08 Aug 2016 11:36:28 GMT
body:
[
{
"id": 1,
"name": "Purple widget",
"description": "A purple widget",
"price": "5.99",
"attributes": "{\"colour\":\"purple\",\"size\":\"Small\"}",
"created_at": "2016-08-08 11:32:24",
"updated_at": "2016-08-08 11:32:24"
}
]
complete: 0 passing, 1 failing, 0 errors, 0 skipped, 1 total
complete: Tests took 582ms
complete: See results in Apiary at: https://app.apiary.io/public/tests/run/83da2d67-c846-4356-a3b8-4d7c32daa7ef
info: Sending SIGTERM to the backend server
info: Backend server was killed

Whoops, looks like we made a mistake here. The index route returns an array of objects, but we’re looking for a single object in the blueprint. We also need to wrap our attributes in quotes, and add the created_at and updated_at attributes. Let’s fix the blueprint:

FORMAT: 1A

# Demo API

# Products [/api/products]

Product object representation

## Get products [GET /api/products]

Get a list of products

+ Request (application/json)

+ Response 200 (application/json)
    + Body

            [
                {
                    "id": 1,
                    "name": "Purple widget",
                    "description": "A purple widget",
                    "price": 5.99,
                    "attributes": "{\"colour\": \"Purple\",\"size\": \"Small\"}",
                    "created_at": "*",
                    "updated_at": "*"
                }
            ]

Let’s run Dredd again:

$ dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Using apiary reporter.
info: Starting server with command: php -S localhost:3000 -t public/
info: Waiting 3 seconds for server command to start...
info: Beginning Dredd testing...
pass: GET /api/products duration: 65ms
complete: 1 passing, 0 failing, 0 errors, 0 skipped, 1 total
complete: Tests took 501ms
[Mon Aug 8 13:05:54 2016] 127.0.0.1:45618 [200]: /api/products
complete: See results in Apiary at: https://app.apiary.io/public/tests/run/7c23d4ae-aff2-4daf-bbdf-9fd76fc58b97
info: Sending SIGTERM to the backend server
info: Backend server was killed

And now we can see that our test passes.

Next, we’ll implement a test for fetching a single product:

## Get a product [GET /api/products/1]

Get a single product

+ Request (application/json)

+ Response 200 (application/json)
    + Body

            {
                "id": 1,
                "name": "Purple widget",
                "description": "A purple widget",
                "price": 5.99,
                "attributes": "{\"colour\": \"Purple\",\"size\": \"Small\"}",
                "created_at": "*",
                "updated_at": "*"
            }

Note the same basic format - we define the URL that should be fetched, the content of the request, and the response, including the status code.

Let’s hook up our route in app/Http/routes.php:

$app->get('/api/products/{id}', 'ProductController@show');

And add the show() method to the controller:

public function show($id)
{
    // Get individual product
    $product = $this->product->findOrFail($id);

    // Send response
    return response()->json($product, 200);
}

Running Dredd again should show this method has been implemented:

$ dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Using apiary reporter.
info: Starting server with command: php -S localhost:3000 -t public/
info: Waiting 3 seconds for server command to start...
info: Beginning Dredd testing...
pass: GET /api/products duration: 66ms
[Mon Aug 8 13:21:31 2016] 127.0.0.1:45750 [200]: /api/products
pass: GET /api/products/1 duration: 17ms
complete: 2 passing, 0 failing, 0 errors, 0 skipped, 2 total
complete: Tests took 521ms
[Mon Aug 8 13:21:31 2016] 127.0.0.1:45752 [200]: /api/products/1
complete: See results in Apiary at: https://app.apiary.io/public/tests/run/bb6d03c3-8fad-477c-b140-af6e0cc8b96c
info: Sending SIGTERM to the backend server
info: Backend server was killed

That’s our read support done. We just need to add support for POST, PATCH and DELETE methods.

Our remaining methods

Let’s set up the test for our POST method first:

## Create products [POST /api/products]

Create a new product

+ name (string) - The product name
+ description (string) - The product description
+ price (float) - The product price
+ attributes (string) - The product attributes

+ Request (application/json)
    + Body

            {
                "name": "Blue widget",
                "description": "A blue widget",
                "price": 5.99,
                "attributes": "{\"colour\": \"blue\",\"size\": \"Small\"}"
            }

+ Response 201 (application/json)
    + Body

            {
                "id": 2,
                "name": "Blue widget",
                "description": "A blue widget",
                "price": 5.99,
                "attributes": "{\"colour\": \"blue\",\"size\": \"Small\"}",
                "created_at": "*",
                "updated_at": "*"
            }

Note we specify the format of the parameters that should be passed through, and that our status code should be 201, not 200 - this is arguably a more correct choice for creating a resource. Be careful of the whitespace - I had some odd issues with it. Next, we add our route:

$app->post('/api/products', 'ProductController@store');

And the store() method in the controller:

public function store(Request $request)
{
    // Validate request
    $valid = $this->validate($request, [
        'name' => 'required|string',
        'description' => 'required|string',
        'price' => 'required|numeric',
        'attributes' => 'string',
    ]);

    // Create product
    $product = new $this->product;
    $product->name = $request->input('name');
    $product->description = $request->input('description');
    $product->price = $request->input('price');
    $product->attributes = $request->input('attributes');

    // Save product
    $product->save();

    // Send response
    return response()->json($product, 201);
}

Note that we validate the attributes, to ensure they are correct and that the required ones exist. Running Dredd again should show the route is now in place:

$ dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Using apiary reporter.
info: Starting server with command: php -S localhost:3000 -t public/
info: Waiting 3 seconds for server command to start...
info: Beginning Dredd testing...
pass: GET /api/products duration: 69ms
[Mon Aug 8 15:17:35 2016] 127.0.0.1:47316 [200]: /api/products
pass: GET /api/products/1 duration: 18ms
[Mon Aug 8 15:17:35 2016] 127.0.0.1:47318 [200]: /api/products/1
pass: POST /api/products duration: 42ms
complete: 3 passing, 0 failing, 0 errors, 0 skipped, 3 total
complete: Tests took 575ms
[Mon Aug 8 15:17:35 2016] 127.0.0.1:47322 [201]: /api/products
complete: See results in Apiary at: https://app.apiary.io/public/tests/run/cb5971cf-180d-47ed-abf4-002378941134
info: Sending SIGTERM to the backend server
info: Backend server was killed

Next, we’ll implement PATCH. This targets an existing object, but accepts parameters in the same way as POST:

## Update existing products [PATCH /api/products/1]

Update an existing product

+ name (string) - The product name
+ description (string) - The product description
+ price (float) - The product price
+ attributes (string) - The product attributes

+ Request (application/json)
    + Body

            {
                "name": "Blue widget",
                "description": "A blue widget",
                "price": 5.99,
                "attributes": "{\"colour\": \"blue\",\"size\": \"Small\"}"
            }

+ Response 200 (application/json)
    + Body

            {
                "id": 2,
                "name": "Blue widget",
                "description": "A blue widget",
                "price": 5.99,
                "attributes": "{\"colour\": \"blue\",\"size\": \"Small\"}",
                "created_at": "*",
                "updated_at": "*"
            }

We add our new route:

$app->patch('/api/products/{id}', 'ProductController@update');

And our update() method:

public function update(Request $request, $id)
{
    // Validate request
    $valid = $this->validate($request, [
        'name' => 'string',
        'description' => 'string',
        'price' => 'numeric',
        'attributes' => 'string',
    ]);

    // Get product
    $product = $this->product->findOrFail($id);

    // Update it
    if ($request->has('name')) {
        $product->name = $request->input('name');
    }
    if ($request->has('description')) {
        $product->description = $request->input('description');
    }
    if ($request->has('price')) {
        $product->price = $request->input('price');
    }
    if ($request->has('attributes')) {
        $product->attributes = $request->input('attributes');
    }

    // Save product
    $product->save();

    // Send response
    return response()->json($product, 200);
}

Here we can’t guarantee every parameter will exist, so we test for it. We run Dredd again:

$ dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Using apiary reporter.
info: Starting server with command: php -S localhost:3000 -t public/
info: Waiting 3 seconds for server command to start...
info: Beginning Dredd testing...
pass: GET /api/products duration: 74ms
[Mon Aug 8 15:27:14 2016] 127.0.0.1:47464 [200]: /api/products
pass: GET /api/products/1 duration: 19ms
[Mon Aug 8 15:27:14 2016] 127.0.0.1:47466 [200]: /api/products/1
pass: POST /api/products duration: 36ms
[Mon Aug 8 15:27:14 2016] 127.0.0.1:47470 [201]: /api/products
[Mon Aug 8 15:27:14 2016] 127.0.0.1:47474 [200]: /api/products/1
pass: PATCH /api/products/1 duration: 34ms
complete: 4 passing, 0 failing, 0 errors, 0 skipped, 4 total
complete: Tests took 2579ms
complete: See results in Apiary at: https://app.apiary.io/public/tests/run/eae98644-44ad-432f-90fc-5f73fa674f66
info: Sending SIGTERM to the backend server
info: Backend server was killed

One last method to implement - the DELETE method. Add this to apiary.apib:

## Delete products [DELETE /api/products/1]

Delete an existing product

+ Request (application/json)

+ Response 200 (application/json)
    + Body

            {
                "status": "Deleted"
            }

Next, add the route:

$app->delete('/api/products/{id}', 'ProductController@destroy');

And the destroy() method in the controller:

public function destroy($id)
{
    // Get product
    $product = $this->product->findOrFail($id);

    // Delete product
    $product->delete();

    // Confirm the deletion in the response body
    return response()->json(['status' => 'Deleted'], 200);
}

And let’s run Dredd again:

$ dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Using apiary reporter.
info: Starting server with command: php -S localhost:3000 -t public/
info: Waiting 3 seconds for server command to start...
info: Beginning Dredd testing...
pass: GET /api/products duration: 66ms
[Mon Aug 8 15:57:44 2016] 127.0.0.1:48664 [200]: /api/products
pass: GET /api/products/1 duration: 19ms
[Mon Aug 8 15:57:44 2016] 127.0.0.1:48666 [200]: /api/products/1
pass: POST /api/products duration: 45ms
[Mon Aug 8 15:57:44 2016] 127.0.0.1:48670 [201]: /api/products
pass: PATCH /api/products/1 duration: 24ms
[Mon Aug 8 15:57:44 2016] 127.0.0.1:48674 [200]: /api/products/1
pass: DELETE /api/products/1 duration: 27ms
complete: 5 passing, 0 failing, 0 errors, 0 skipped, 5 total
complete: Tests took 713ms
[Mon Aug 8 15:57:44 2016] 127.0.0.1:48678 [200]: /api/products/1
complete: See results in Apiary at: https://app.apiary.io/public/tests/run/a3e11d59-1dad-404b-9319-61ca5c0fcd15
info: Sending SIGTERM to the backend server
info: Backend server was killed

Our REST API is now finished.

Generating an HTML version of your documentation

Now that we have finished documenting and implementing our API, we need to generate an HTML version of it. One way is to use Aglio:

$ aglio -i apiary.apib -o output.html

This will write the documentation to output.html. There’s also scope for choosing different themes if you wish.

You can also use Apiary, which has the advantage that they’ll create a stub of your API so that if you need to work with the API before it’s finished being implemented, you can use that as a placeholder.

Summary

The API Blueprint language is a useful way of documenting your API, and it makes the process simple enough that it’s hard to weasel out of doing so. It’s worth taking a closer look at the specification as it goes into quite a lot of detail. It’s hard to ensure manually that the documentation and implementation remain in sync, so it’s a good idea to use Dredd to ensure that any changes you make don’t invalidate the documentation. With Aglio or Apiary, you can easily convert the documentation into a more attractive format.

You’ll find the source code for this demo API on GitHub, so if you get stuck, take a look at that. I did have a fair few issues with whitespace, so bear that in mind if it behaves oddly. I’ve also noticed a few quirks, such as Dredd not working properly if a route returns a 204 response code, which is why I couldn’t use that for deleting - this appears to be a bug, but hopefully it will be resolved soon.

I’ll say it again, Dredd is not a substitute for proper unit tests, and under no circumstances should you use it as one. However, it can be very useful as a way to plan how your API will work and ensure that it complies with that plan, and to ensure that the implementation and documentation don’t diverge. Used as part of your normal continuous integration setup, Dredd can make sure that any divergence between the docs and the application is picked up on and fixed as quickly as possible, while also making writing documentation less onerous.

5th June 2016 4:32 pm

Using Jenkins Pipelines

I use Jenkins as my main continuous integration solution at work, largely for two reasons:

  • It generally works out cheaper to host it ourselves than to use one of the paid CI solutions for closed-source projects
  • The size of the plugin ecosystem

However, we also use Travis CI for testing one or two open-source projects, and one distinct advantage Travis has is the way you can configure it using a single text file.

With the Pipeline plugin, it’s possible to define the steps required to run your tests in a Jenkinsfile and then set up a Pipeline job which reads that file from the version control system and runs it accordingly. Here’s a sample Jenkinsfile for a Laravel project:

node {
    // Mark the code checkout 'stage'....
    stage 'Checkout'

    // Get some code from a Bitbucket repository
    git credentialsId: '5239c33e-10ab-4c1b-a4a0-91b96a07955e', url: 'git@bitbucket.org:matthewbdaly/my-app.git'

    // Install dependencies
    stage 'Install dependencies'

    // Run Composer
    sh 'composer install'

    // Test stage
    stage 'Test'

    // Run the tests
    sh "vendor/bin/phpunit"
}

Note the steps it’s broken down into:

  • stage defines the start of a new stage in the build
  • git defines a point where we check out the code from the repository
  • sh defines a point where we run a shell command

Using these three commands it’s straightforward to define a fairly simple build process for your application in a way that’s more easily repeatable when creating new projects - for instance, you can copy this over to a new project and change the source repository URL and you’re pretty much ready to go.

Unfortunately, support for the Pipeline plugin is missing from a lot of Jenkins plugins - for instance, I can’t publish the XML coverage reports. This is something of a deal-breaker for most of my projects, as I use these kinds of report plugins a lot - they’re one of the reasons I chose Jenkins over Travis. Still, this is definitely a big step forward, and if you don’t need this kind of reporting then there’s no reason not to consider using the Pipeline plugin for your Jenkins jobs. Hopefully in future more plugins will be amended to work with Pipeline so that it’s more widely usable.

22nd May 2016 11:29 pm

Adding Google AMP Support to My Site

You may have heard of Google’s AMP Project, which allows you to create mobile-optimized pages using a subset of HTML. After seeing the sheer speed at which you can load an AMP page (practically instantaneous in many cases), I was eager to see if I could apply it to my own site.

I still wanted to retain the existing functionality for my site, such as comments and search, so I elected not to rewrite the whole thing to make it AMP-compliant. Instead, I opted to create AMP versions of every blog post, and link to them from the original. This preserves the advantages of AMP since search engines will be able to discover it from the header of the original, while allowing those wanting a richer experience to view the original, where the comments are hosted. You can now view the AMP version of any post by appending amp/ to its URL.
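
The discovery mechanism itself is just a pair of link tags: the canonical page advertises its AMP version, and the AMP version points back at the canonical one. Using a hypothetical URL, that looks something like this:

<!-- In the head of the original post -->
<link rel="amphtml" href="https://example.com/blog/my-post/amp/">

<!-- In the head of the AMP version -->
<link rel="canonical" href="https://example.com/blog/my-post/">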

The biggest problem was the images in the post body, as the <img> tag needs to be replaced by the <amp-img> tag, which also requires an explicit height and width. I wound up amending the renderer for AMP pages to render an image tag as an empty string, since I have only ever used one image in the post body and I think I can live without them.
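
If your build pipeline happens to use the marked library to render Markdown, the override is only a few lines - this is a sketch under that assumption, and other renderers will differ:

var marked = require('marked');

// Custom renderer for AMP pages: render images as an empty string
var renderer = new marked.Renderer();
renderer.image = function (href, title, text) {
    return '';
};

marked.setOptions({ renderer: renderer });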

It’s also a bit of a pain to style, as using Bootstrap would be awkward. I’ve therefore opted to skip Bootstrap for now and write my own fairly basic theme for the AMP pages instead.

It’ll be interesting to see what effect having the AMP versions of the pages available will have on my site in terms of search results. It obviously takes some time before the page gets crawled, and until then the AMP version won’t be served from the CDN used by AMP, so I really can’t guess what effect it will have right now.

14th May 2016 9:00 pm

Broadcasting Events With Laravel and Socket.io

PHP frameworks like Laravel aren’t really set up to handle real-time events properly, so if you want to build a real-time app, you’re generally better off with another platform, such as Node.js. However, if that only forms a small part of your application, you may still prefer to work with PHP. Fortunately it’s fairly straightforward to hand off the real-time aspects of your application to a dedicated microservice written using Node.js and still use Laravel to handle the rest of the functionality.

Here I’ll show you how I built a Laravel app that uses a separate Node.js script to handle sending real-time updates to the user.

Events in Laravel

In this case, I was building a REST API to serve as the back end for a Phonegap app that allowed users to message each other. The API includes an endpoint that allows users to create and fetch messages. Now, in theory, we could just repeatedly poll the endpoint for new messages, but that would be inefficient. What we needed was a way to notify users of new messages in real time, which seemed like the perfect opportunity to use Socket.io.

Laravel comes with a simple but robust system that allows you to broadcast events to a Redis server. Another service can then listen for these events and act on them, and there is no reason why this service has to be written in PHP. This makes it easy to decouple your application into smaller parts. In essence, the functionality we wanted was as follows:

  • Receive message
  • Push message to Redis
  • Have a separate service pick up message on Redis
  • Push message to clients

First off, we need to define an event in our Laravel app. You can create a boilerplate with the following Artisan command:

$ php artisan make:event NewMessage

This will create the file app/Events/NewMessage.php. You can then customise this as follows:

<?php

namespace App\Events;

use App\Events\Event;
use App\Message;
use Illuminate\Queue\SerializesModels;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class NewMessage extends Event implements ShouldBroadcast
{
    use SerializesModels;

    public $message;

    /**
     * Create a new event instance.
     *
     * @return void
     */
    public function __construct(Message $message)
    {
        // Get message
        $this->message = $message;
    }

    /**
     * Get the channels the event should be broadcast on.
     *
     * @return array
     */
    public function broadcastOn()
    {
        return ['room_'.$this->message->room_id];
    }
}

This particular event is a class that accepts a single constructor argument: an instance of the Message model. The model includes a room_id attribute that determines which room the message is posted to - note that it’s used to build the channel name returned by the broadcastOn() method.

When we want to trigger our new event, we can do so as follows:

use App\Events\NewMessage;
Event::fire(new NewMessage($message));

Here, $message is the saved Eloquent object containing the message. Note the use of SerializesModels - this means that the Eloquent model is serialized into JSON when broadcasting the event.
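
For example, in the controller method that stores a new message, you might fire the event straight after saving. This is a hypothetical controller - the field names are assumptions:

<?php

namespace App\Http\Controllers;

use App\Events\NewMessage;
use App\Message;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Event;

class MessageController extends Controller
{
    public function store(Request $request)
    {
        // Save the new message
        $message = Message::create($request->only(['room_id', 'message']));

        // Broadcast it to the room's channel via Redis
        Event::fire(new NewMessage($message));

        return response()->json($message, 201);
    }
}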

We also need to make sure Redis is set as our broadcast driver. Ensure the Composer package predis/predis is installed, and set BROADCAST_DRIVER=redis in your .env file. Also, please note that I found that setting QUEUE_DRIVER=redis in .env as well broke the broadcasting system, so it looks like you can’t use Redis as both a queue and a broadcasting system unless you set up multiple connections.
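
For reference, a second named Redis connection can be defined in config/database.php along these lines - the connection name and database numbers here are arbitrary, and you’d then need to point the queue at it:

'redis' => [
    'cluster' => false,
    'default' => [
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
    'queue' => [
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 1,
    ],
],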

Next, we need another server-side script to handle processing the received events and pushing the messages out. In my case, this was complicated by the fact that we were using HTTPS, courtesy of Let’s Encrypt. I installed the required dependencies for the Node.js script as follows:

$ npm install socket.io socket.io-client ioredis --save-dev

Here’s an example Node.js script for processing the events:

var fs = require('fs');

var pkey = fs.readFileSync('/etc/letsencrypt/live/example.com/privkey.pem');
var pcert = fs.readFileSync('/etc/letsencrypt/live/example.com/fullchain.pem');

var options = {
    key: pkey,
    cert: pcert
};

// Pass the handler so plain HTTPS requests get a response
var app = require('https').createServer(options, handler);
var io = require('socket.io')(app);
var Redis = require('ioredis');
var redis = new Redis();

app.listen(9000, function() {
    console.log('Server is running!');
});

function handler(req, res) {
    res.setHeader('Access-Control-Allow-Origin', '*');
    res.writeHead(200);
    res.end('');
}

io.on('connection', function(socket) {
    //
});

// Subscribe to every channel Laravel publishes to
redis.psubscribe('*', function(err, count) {
    //
});

// Re-emit each Redis message to the matching Socket.io channel
redis.on('pmessage', function(subscribed, channel, message) {
    message = JSON.parse(message);
    console.log('Channel is ' + channel + ' and message is ' + message);
    io.emit(channel, message.data);
});

Note we use the https module instead of the http one, and we pass the key and certificate as options to the server. This server runs on port 9000, but feel free to move it to any arbitrary port you wish. In production, you’d normally use something like Supervisor or systemd to run a script like this as a service.
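
As an illustration, a minimal Supervisor program definition for the script might look like this - the paths and user are assumptions for your own setup:

[program:chat-server]
command=/usr/bin/node /var/www/chat-server.js
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/chat-server.log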

Next, we need a client-side script to connect to the Socket.io instance and handle any incoming messages. Here’s a very basic example that just dumps them to the browser console:

var url = window.location.protocol + '//' + window.location.hostname;
var socket = io(url, {
    'secure': true,
    'reconnect': true,
    'reconnection delay': 500,
    'max reconnection attempts': 10
});

var chosenEvent = 'room_' + room.id;
socket.on(chosenEvent, function (data) {
    console.log(data);
});
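
Note that this snippet assumes two things: a room object with an id property identifying the current chat room, and the Socket.io client library itself, which the Node.js server exposes automatically and which you can include with a standard script tag:

<script src="/socket.io/socket.io.js"></script>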

Finally, we need to configure our web server. I’m using Nginx with PHP-FPM and PHP 7, and this is how I configured it:

upstream websocket {
    server 127.0.0.1:9000;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    client_max_body_size 50M;
    server_tokens off;

    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";

    root /var/www/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip on;
        gzip_proxied any;
        gzip_types text/plain text/css application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript text/js application/json;
        expires 1y;
        charset utf-8;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /.well-known {
        root /var/www/public;
        allow all;
    }

    location /socket.io {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass https://websocket;
    }
}

Any requests to /socket.io are proxied to port 9000, where our chat handling script is listening. Note that we allow the HTTPS connection to be upgraded to a WebSocket one.

Once that’s done, you just need to restart your PHP application and Nginx, and start running your chat script, and everything should be working fine. If it isn’t, the command redis-cli monitor is invaluable in verifying that the event is being published correctly.

Summary

Getting this all working together did take quite a bit of trial and error, but that was mostly a matter of configuration. Actually implementing this is pretty straightforward, and it’s an easy way to add some basic real-time functionality to an existing Laravel application.
