Matthew Daly's Blog

I'm a web developer in Norfolk. This is my blog...

25th September 2017 10:18 pm

A Generic PHP SMS Library

This weekend I published sms-client, a generic PHP library for sending SMS notifications. It’s intended to offer a consistent interface when sending SMS notifications by using swappable drivers. That way, if your SMS service provider suddenly goes out of business or bumps up their prices, it’s easy to switch to a new one.

Out of the box it comes with drivers for the following services:

  • Nexmo
  • ClockworkSMS

In addition, it provides the following test drivers:

  • Null
  • Log
  • RequestBin

Here’s an example of how you might use it with the ClockworkSMS driver:

use GuzzleHttp\Client as GuzzleClient;
use GuzzleHttp\Psr7\Response;
use Matthewbdaly\SMS\Drivers\Clockwork;
use Matthewbdaly\SMS\Client;

$guzzle = new GuzzleClient;
$resp = new Response;
$driver = new Clockwork($guzzle, $resp, [
    'api_key' => 'MY_CLOCKWORK_API_KEY',
]);
$client = new Client($driver);
$msg = [
    'to' => '+44 01234 567890',
    'content' => 'Just testing',
];
$client->send($msg);

If you want to roll your own driver for it, it should be easy - just create a class that implements the Matthewbdaly\SMS\Contracts\Driver interface. Most of the existing drivers work by using Guzzle to send HTTP requests to an API, but you don’t necessarily have to do that - for instance, you could create a driver for a mail-to-SMS gateway using Swiftmailer or PHP’s mail() function. If you create a driver for it, please feel free to submit a pull request so I can add it to the repository.
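
To give a rough idea, a mail-to-SMS driver might look something like the sketch below. The method names here are assumptions for the purposes of illustration - check the Driver interface in the repository for the methods you actually need to implement:

<?php

namespace App\Drivers;

use Matthewbdaly\SMS\Contracts\Driver;

class MailToSms implements Driver
{
    protected $mailer;

    public function __construct(\Swift_Mailer $mailer)
    {
        $this->mailer = $mailer;
    }

    // Assumed method - check the actual interface definition
    public function getDriver(): string
    {
        return 'MailToSms';
    }

    // Assumed method - sends the message content as an email to a
    // hypothetical mail-to-SMS gateway address
    public function sendRequest(array $message): bool
    {
        $email = new \Swift_Message('SMS', $message['content']);
        $email->setTo($message['to'] . '@sms-gateway.example.com');
        return (bool) $this->mailer->send($email);
    }
}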

For Laravel or Lumen users, there’s an integration package that should make it easier to use. For users of other frameworks, it should still be fairly straightforward to integrate.

8th September 2017 10:05 pm

Installing Nginx Unit on Ubuntu

Recently Nginx announced the release of the first beta of Unit, an application server that supports Python, PHP and Go, with support coming for Java, Node.js and Ruby.

The really interesting part is that not only does it support more than one language, but Unit can be configured by making HTTP requests, rather than by editing config files. This makes it potentially very interesting to web developers like myself who have worked in multiple languages - I could use it to serve a Python or PHP web app, simply by making different requests during the setup process. I can see this being a boon for SaaS providers - you could pick up the language from a file, much like the runtime.txt used by Heroku, and set up the application on the fly.

It’s currently in public beta, and there are packages for Ubuntu, so I decided to try it out. I’ve created the Ansible role below to set up Unit on an Ubuntu 16.04 server or VM:

---
- name: Install keys
  apt_key: url=http://nginx.org/keys/nginx_signing.key state=present

- name: Setup main repo
  apt_repository: repo='deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx' state=present

- name: Setup source repo
  apt_repository: repo='deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx' state=present

- name: Update system
  apt: upgrade=full update_cache=yes

- name: Install dependencies
  apt: name={{ item }} state=present
  with_items:
    - nginx
    - unit
    - golang
    - php-dev
    - php7.0-dev
    - libphp-embed
    - libphp7.0-embed
    - python-dev
    - python3
    - python3-dev
    - php7.0-cli
    - php7.0-mcrypt
    - php7.0-pgsql
    - php7.0-sqlite3
    - php7.0-opcache
    - php7.0-curl
    - php7.0-mbstring
    - php7.0-dom
    - php7.0-xml
    - php7.0-zip
    - php7.0-bcmath

- name: Copy over Nginx configuration
  copy: src=nginx.conf dest=/etc/nginx/sites-available/default owner=root group=root mode=0644

Note the section that copies over the Nginx config file. Here is that file:

upstream unit_backend {
    server 127.0.0.1:8300;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    fastcgi_param HTTP_PROXY "";

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    root /var/www/public;
    index index.php index.html index.htm;

    server_name server_domain_or_IP;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        proxy_pass http://unit_backend;
        proxy_set_header Host $host;
    }
}

This setup proxies all dynamic requests to the Unit backend, in much the same way as they would normally be passed to PHP-FPM.

There were still a few little issues. It doesn’t help that the Nginx package provided by this repository isn’t quite the same as the one in Ubuntu by default - not only is it the unstable version, but it doesn’t set up the sites-available and sites-enabled folders, so I had to do that manually. I also had an issue where Systemd started Unit with permissions on the control socket (at /run/control.unit.sock) that didn’t allow Nginx to access it. I’m not that familiar with Systemd, so I wound up setting the permissions of the file manually, but that doesn’t persist between restarts. I expect this isn’t a big deal to someone more familiar with Systemd, but I haven’t been able to resolve it yet.

I decided to try it out with a Laravel application. I created a new Laravel app and set it up with the web root at /var/www. I then saved the following configuration for it as app.json:

{
    "listeners": {
        "*:8300": {
            "application": "myapp"
        }
    },
    "applications": {
        "myapp": {
            "type": "php",
            "workers": 20,
            "user": "www-data",
            "group": "www-data",
            "root": "/var/www/public",
            "index": "index.php"
        }
    }
}

This is fairly basic, but a good example of how you configure an application with Unit. The listeners section maps a port to an application, while the applications section defines an application called myapp. In this case, we specify that the type should be php. Note that each platform has slightly different options - for instance, the Python type doesn’t have the index or root options, instead having the path option, which specifies the path to the wsgi.py file.
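
For instance, a Python application’s configuration might look something like the sketch below (based on the options described above - check the Unit documentation for the full set of Python-specific options):

{
    "listeners": {
        "*:8300": {
            "application": "mypythonapp"
        }
    },
    "applications": {
        "mypythonapp": {
            "type": "python",
            "workers": 20,
            "user": "www-data",
            "group": "www-data",
            "path": "/var/www"
        }
    }
}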

I then ran the following command to upload the file:

$ curl -X PUT -d @app.json --unix-socket /run/control.unit.sock http://localhost

Note that we send it directly to the Unix socket file - this way we don’t have to expose the API to the outside. After this was done, the Laravel app began working as expected.

We can then make a GET request to view the configured applications:

$ curl --unix-socket /run/control.unit.sock http://localhost/
{
    "listeners": {
        "*:8300": {
            "application": "myapp"
        }
    },
    "applications": {
        "myapp": {
            "type": "php",
            "workers": 20,
            "user": "www-data",
            "group": "www-data",
            "root": "/var/www/public",
            "index": "index.php"
        }
    }
}

It’s also possible to update and delete existing applications via the API using PUT and DELETE requests.
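
For instance, deleting the application might look something like this (the URL structure here is my assumption - check the Unit documentation for the exact paths):

$ curl -X DELETE --unix-socket /run/control.unit.sock http://localhost/applications/myapp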

Final thoughts

It’s far too early to seriously consider using Unit in production. It’s only just been released as a public beta, and it’s a bit fiddly to set up. However, it shows an enormous amount of promise.

One thing I can’t see right now is whether it’s possible to use a virtualenv for Python applications. In the Python community it’s standard practice to use Virtualenv to isolate the dependencies of individual applications, and it’s not clear whether, or how, Unit supports this. For deploying Python applications, lack of virtualenv support would be a deal breaker, and I hope this gets clarified soon.

I’d also be curious to see benchmarks of how it compares to something like PHP-FPM - it may well be less performant than existing solutions. Either way, I’ll be keeping a close eye on it in future.

2nd September 2017 2:45 pm

Making Internal Requests With Laravel

Recently I’ve been working on a Phonegap app that needs to work offline. The nature of relational databases can often make this tricky if you’re dealing with related objects and you’re trying to retrofit it to something that wasn’t built with this use case in mind.

Originally my plan was to push each request that would have been made to a queue in WebSQL, and then on reconnect, make every request individually. It quickly became apparent, however, that this approach had a few problems:

  • If one request failed, the remaining requests had to be stopped from executing
  • It didn’t allow for storing the failed transactions in a way that made them easy to retrieve

Instead, I decided to create a single sync endpoint for the API that would accept an object containing all the requests that would be made, and then step through each one. If it failed, it would get the failed request and all subsequent ones in the object, and store them in the database. That way, even if the data didn’t sync correctly, it wasn’t lost, and if necessary it could be resolved manually.

Since the necessary API endpoints already existed, and were thoroughly tested, it was not a good idea to start duplicating that functionality. Instead, I implemented the functionality to carry out internal requests, and I thought I’d share how you can do this.

For any service you may build for your Laravel applications, it’s a good idea to create an interface for it first:

<?php

namespace App\Contracts;

interface MakesInternalRequests
{
    /**
     * Make an internal request
     *
     * @param  string $action   The HTTP verb to use.
     * @param  string $resource The API resource to look up.
     * @param  array  $data     The request body.
     * @return \Illuminate\Http\Response
     */
    public function request(string $action, string $resource, array $data = []);
}

That way you can resolve the service using dependency injection, making it trivial to replace it with a mock when testing.
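
For instance, in a test you might swap in a mock like this (a minimal sketch using Mockery, assuming a standard Laravel test case where $this->app is available; the resource name and payload are hypothetical):

// Bind a mock to the container so anything that type-hints the
// interface receives it instead of the real service
$mock = \Mockery::mock(\App\Contracts\MakesInternalRequests::class);
$mock->shouldReceive('request')
    ->with('POST', 'items', ['name' => 'New item'])
    ->andReturn(new \Illuminate\Http\Response());
$this->app->instance(\App\Contracts\MakesInternalRequests::class, $mock);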

Now, actually making an internal request is pretty easy. You get the app instance (either by resolving it using dependency injection, as I do below, or by calling the app() helper), put together the request you want to make, and pass it as an argument to the app’s handle() method:

<?php

namespace App\Services;

use Illuminate\Http\Request;
use App\Contracts\MakesInternalRequests;
use Illuminate\Foundation\Application;
use App\Exceptions\FailedInternalRequestException;

/**
 * Internal request service
 */
class InternalRequest implements MakesInternalRequests
{
    /**
     * The app instance
     *
     * @var $app
     */
    protected $app;

    /**
     * Constructor
     *
     * @param Application $app The app instance.
     * @return void
     */
    public function __construct(Application $app)
    {
        $this->app = $app;
    }

    /**
     * Make an internal request
     *
     * @param  string $action   The HTTP verb to use.
     * @param  string $resource The API resource to look up.
     * @param  array  $data     The request body.
     * @throws FailedInternalRequestException Request could not be synced.
     * @return \Illuminate\Http\Response
     */
    public function request(string $action, string $resource, array $data = [])
    {
        // Create the request
        $request = Request::create('/api/' . $resource, $action, $data, [], [], [
            'HTTP_Accept' => 'application/json',
        ]);

        // Dispatch the request to the application
        $response = $this->app->handle($request);
        if ($response->getStatusCode() >= 400) {
            throw new FailedInternalRequestException($request, $response);
        }

        return $response;
    }
}

Also note that I’ve created a custom exception, called FailedInternalRequestException. This is thrown if the status code returned from the internal request is greater than or equal to 400 (thus denoting an error):

<?php

namespace App\Exceptions;

use Illuminate\Http\Request;
use Illuminate\Http\Response;

/**
 * Exception for when a bulk sync job fails
 */
class FailedInternalRequestException extends \Exception
{
    /**
     * Request instance
     *
     * @var $request
     */
    protected $request;

    /**
     * Response instance
     *
     * @var $response
     */
    protected $response;

    /**
     * Constructor
     *
     * @param Request  $request  The request object.
     * @param Response $response The response object.
     * @return void
     */
    public function __construct(Request $request, Response $response)
    {
        parent::__construct();
        $this->request = $request;
        $this->response = $response;
    }

    /**
     * Get request object
     *
     * @return Request
     */
    public function getRequest()
    {
        return $this->request;
    }

    /**
     * Get response object
     *
     * @return Response
     */
    public function getResponse()
    {
        return $this->response;
    }
}

You can catch this exception in an appropriate place and handle it as you wish. Now, if you inject the internal request service as $dispatcher, you can just call $dispatcher->request($action, $resource, $data), where $action is the HTTP verb, $resource is the API resource to send to, and $data is the data to send.
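
Putting it all together, usage might look something like this sketch (the binding would normally live in a service provider, and the resource name and payload here are hypothetical):

// In a service provider's register() method, bind the interface
// to the concrete implementation
$this->app->bind(
    \App\Contracts\MakesInternalRequests::class,
    \App\Services\InternalRequest::class
);

// Elsewhere, with $dispatcher resolved from the container
$dispatcher = app(\App\Contracts\MakesInternalRequests::class);
try {
    $response = $dispatcher->request('POST', 'items', ['name' => 'New item']);
} catch (\App\Exceptions\FailedInternalRequestException $e) {
    // The failed request and response are available for storage
    $request = $e->getRequest();
    $response = $e->getResponse();
}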

It’s actually quite rare to have to do this. In this case, because this was a REST API and every request made to it changed the state of the application (there were no GET requests, only POST, PUT, PATCH and DELETE), it made sense to break down the request body and make internal requests against the existing API, since otherwise I’d have had to duplicate the existing functionality. I would not recommend this approach for something like fetching data to render a page on the server side, as there are more efficient ways of accomplishing that. In all honesty I can’t think of any other scenario where this would genuinely be the best option. However, it worked well for my use case and allowed me to implement this functionality quickly and simply.

19th August 2017 3:40 pm

Run Your Tests Locally With Sismo

Continuous integration is a veritable boon when working on any large software project. However, with distributed version control systems like Git, unlike older, more centralised ones like Subversion, your commits don’t necessarily get pushed up to a remote repository immediately. While this is a good thing - you can commit at any stage without worrying about pushing up changes that break everyone else’s build - it has the downside that your tests are only run automatically on every push, not on every commit, so if you get sloppy about running your tests before each commit you can easily get caught out. In addition, a full CI server like Jenkins is a rather large piece of software that you don’t really want to run locally if you can help it, and it has a lot of functionality you don’t need.

Sismo is a small, simple continuous integration server, implemented in PHP, that’s ideal for running locally. You can set it up to run your tests on every commit, and it has an easy-to-use web interface. Although it’s a PHP application, there’s no reason why you couldn’t use it to run tests for projects in other languages, and because it’s focused solely on running your test suite without many of the other features of more advanced CI solutions, it’s a good fit for local use. Here I’ll show you how I use it.

Setting up Sismo

Nowadays I don’t generally install a web server on a computer directly, preferring to use Vagrant or the dev server as appropriate, so Sismo generally doesn’t have to coexist with anything else. I normally install PHP7’s FastCGI implementation and Nginx, along with the SQLite bindings (which Sismo needs):

$ sudo apt-get install nginx php7.0-fpm php7.0-sqlite3

Then we can set up our Nginx config at /etc/nginx/sites-available/default:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    fastcgi_param HTTP_PROXY "";

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    root /var/www/html;
    index sismo.php index.html index.htm;

    server_name server_domain_or_IP;

    location / {
        try_files $uri $uri/ /sismo.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /sismo.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index sismo.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SISMO_DATA_PATH "/home/matthew/.sismo/data";
        fastcgi_param SISMO_CONFIG_PATH "/home/matthew/.sismo/config.php";
        include fastcgi_params;
    }
}

You’ll probably want to adjust the paths as appropriate. Then set up the required folders:

$ mkdir ~/.sismo
$ mkdir ~/.sismo/data
$ touch ~/.sismo/config.php
$ chmod -R a+w ~/.sismo/

Then, download Sismo and put it in your web root (here it’s at /var/www/html/sismo.php).

Now, say you have a project you want to test (I’m using my Laravel ETag middleware for this example). We need to specify the projects we want to test in ~/.sismo/config.php:

<?php
$projects = array();
$notifier = new Sismo\Notifier\DBusNotifier();
Sismo\Project::setDefaultCommand('if [ -f composer.json ]; then composer install; fi && vendor/bin/phpunit');
$projects[] = new Sismo\GithubProject('Laravel ETag Middleware', '/home/matthew/Projects/laravel-etag-middleware', $notifier);
return $projects;

Hopefully this shouldn’t be too difficult to understand. We create an array of projects, then specify a notifier (this one is Linux-specific - refer to the documentation for using Growl on Mac OS). Next, we specify that by default each build should run composer install followed by vendor/bin/phpunit. We then specify that this project is a Github project - Sismo also supports Bitbucket projects, plain SSH repositories, and the default Project class, but in general it shouldn’t be a problem to use it with any repository, as you can just run it against the local copy. Finally, we return the list of projects.

Now, we should be able to run our tests as follows:

$ php /var/www/html/sismo.php build
Building Project "Laravel ETag Middleware" (into "68a087")

That should be working, but it doesn’t get us anything we don’t get by running the tests ourselves. To trigger the build, we need to set up a post-commit hook for our project in .git/hooks/post-commit:

#!/bin/sh
php /var/www/html/sismo.php --quiet --force build laravel-etag-middleware `git log -1 HEAD --pretty="%H"` > /dev/null 2>&1 &

You should now be able to view your project in the Sismo web interface at http://localhost:

[Screenshot: the Sismo web interface]

Clicking on the project should take you through to its build history:

[Screenshot: the Sismo project page]

From here on, it should be straightforward to add new projects as and when necessary. Because you can change the command on a per-project basis, you can quite happily use it to run tests for Python or Node.js projects as well as PHP ones, and it’s not hard to configure it.
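
For example, a Node.js project could override the default command with something like the sketch below (the project path is hypothetical - check the Sismo documentation for the exact per-project API):

// Node.js project with its own test command
$project = new Sismo\GithubProject('My Node App', '/home/matthew/Projects/my-node-app', $notifier);
$project->setCommand('npm install && npm test');
$projects[] = $project;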

I personally find it very useful to have something in place to run my tests on every commit like this. While you could just run the test suite directly from a post-commit hook, this approach is less obtrusive, because the build runs in the background and doesn’t force you to wait around for your test suite to finish.

14th August 2017 12:40 pm

Profiling Your Laravel Application With Clockwork

If you’re building any non-trivial application, it’s always a good idea to profile it to find performance problems. Laravel Debugbar is the usual solution for profiling Laravel web applications, but it isn’t much use for REST APIs or the single-page web apps that consume them.

Recently I was introduced to Clockwork, which is a server-side extension for profiling PHP applications. It’s made it a whole lot easier to track down issues like excessive numbers of queries when building an API, and as a result I’ve been able to dramatically improve the performance of an API I’ve been working on. Here I’ll show you how you can use it on a project.

Installing Clockwork

Clockwork is available via Composer:

$ composer require itsgoingd/clockwork

You also need to register the service provider in config/app.php:

   Clockwork\Support\Laravel\ClockworkServiceProvider::class,

And register the middleware globally in app/Http/Kernel.php:

protected $middleware = [
    \Clockwork\Support\Laravel\ClockworkMiddleware::class,
];

Note that it only works when APP_DEBUG is set to true in your .env file. This means that you can keep it in your application without worrying about exposing too much data in production, as long as debug mode is not active on your production server (which it shouldn’t be anyway).

You will also need to install the Chrome extension in order to actually work with the returned data. Clockwork works by adding its own route to your Laravel application; the extension makes the appropriate request on loading a page, and then displays the data in the dev tools.

Once it’s all installed and your application is running, open the dev tools and you should see the new Clockwork tab in there. On the left of this tab is a list of requests - if you make a request, you’ll see it added to the list. When you click on each request, you’ll see the following tabs, where applicable:

Request

[Screenshot: Request tab]

This is similar to Chrome’s network tab in that it shows all of the headers for a given request. It’s not anything you can’t get using Chrome’s existing dev tools, but because it doesn’t show any static content it’s arguably a bit easier to navigate.

Timeline

[Screenshot: Timeline tab]

This shows how long each request takes to complete, which can be helpful in identifying slower endpoints.

In addition, you can create your own events using the clock() helper, which will appear in the timeline, as in this example:

clock()->startEvent('email_sent', 'Email sent.');
clock()->endEvent('email_sent');

Log

[Screenshot: Log tab]

The log tab is only displayed if you use the clock() helper to log data. You can log text or JSON objects as appropriate:

clock('Message text.'); // 'Message text.' appears in Clockwork log tab
clock(['hello' => 'world']); // logs json representation of the array

This is arguably more convenient than using the Log facade to write to the application log, since it’s kept in the browser and you can easily see what request caused what message to be logged.

Database

[Screenshot: Database tab]

The database tab displays details of the queries made by a request. This is useful for identifying things such as:

  • Repeated queries that should be cached
  • The n+1 problem (which can be resolved by use of eager loading - see the sketch after this list)
  • Slow queries that need to be optimised
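
As a quick illustration of the n+1 problem, here’s a sketch using a hypothetical User model with a posts relation:

// Lazy loading - one query to fetch the users, then one additional
// query per user to fetch that user's posts
$users = App\User::all();
foreach ($users as $user) {
    echo $user->posts->count();
}

// Eager loading - the posts for every user are fetched in a single
// additional query
$users = App\User::with('posts')->get();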

Note that if a particular endpoint does not trigger a query, this tab will not be visible.

Cookies

[Screenshot: Cookies tab]

For a REST API, you shouldn’t really have much use for cookies, but if you do, this tab lets you view the cookies set on the request.

Session

[Screenshot: Session tab]

As with cookies, the session isn’t normally something you’d use for an API, but this tab lets you view it.

Views

[Screenshot: Views tab]

This tab shows the views used on the page, and all of the data passed to them.

Routes

[Screenshot: Routes tab]

This tab shows all of the routes defined within your application.

Clockwork isn’t limited to Laravel - you can also use it with Lumen, Slim 2, and CodeIgniter 2.1, and it’s possible to write your own integration for other frameworks. It’s still fundamentally browser-based, so it’s difficult to use if your API doesn’t have at least some kind of web front end (whether that’s a single-page web app or Phonegap app that consumes the API, or an API that is itself browsable and returns HTML in a web browser), but I’ve found it to be superior to Laravel Debugbar for most of what I do.


About me

I'm a web and mobile app developer based in Norfolk. My skillset includes Python, PHP and Javascript, and I have extensive experience working with CodeIgniter, Laravel, Django, Phonegap and Angular.js.