Matthew Daly's Blog

I'm a web developer in Norfolk. This is my blog...

18th February 2018 6:10 pm

Put Your Laravel Controllers on a Diet

MVC frameworks are a tremendously useful tool for modern web development. They offer easy ways to carry out common tasks, and enforce a certain amount of structure on a project.

However, that doesn’t mean using them makes you immune to bad practices, and it’s quite easy to wind up falling into certain anti-patterns. Probably the most common is the Fat Controller.

What is a fat controller?

When I first started out doing professional web development, CodeIgniter 2 was the first MVC framework I used. While I hadn’t used it before, I was familiar with the general concept of MVC. However, I didn’t appreciate that when people referred to the model layer as the place for business logic, they didn’t necessarily mean just the database models.

As such, my controllers became a dumping ground for anything that didn’t fit into the models. If it didn’t interact with the database, I put it in the controller. They quickly became bloated, with many methods running to hundreds of lines of code. The code base became hard to understand, and when I was under the gun on projects I found myself copying and pasting functionality between controllers, making the situation even worse. Without an experienced senior developer on hand to offer criticism and advice, it took me a long time to realise that this was a problem, let alone how to avoid it.

Why are fat controllers bad?

Controllers are meant to be simple glue code that receives requests and returns responses. Anything else should be handed off to the model layer. As noted above, however, that’s not the same as putting it in the models. Your model layer can consist of many different classes, not just your Eloquent models, and you should not fall into the trap of thinking your application should consist of little more than models, views and controllers.

Placing business logic in controllers can be bad for many reasons:

  • Code in controllers can be difficult to write automated tests for
  • Any logic in a controller method may need to be repeated elsewhere if the same functionality is needed for a different route - unless it’s extracted into a private or protected method called from both places, which is then very hard to test in isolation
  • Placing it in the controller makes it difficult to pull out and re-use on a different project
  • Making your controller methods too large makes them complex and hard to follow

As a general rule of thumb, I find that around ten lines of code in any one controller method is the point where it starts getting a bit much. That said, it’s not a hard and fast rule, and for very small projects it may not be worthwhile. But if a project is large and needs to be maintained for any reasonable period of time, you should take the trouble to ensure your controllers are as skinny as is practical.

Nowadays Laravel is my go-to framework and I’ve put together a number of strategies for avoiding the fat controller anti-pattern. Here are some examples of how I would move various parts of the application out of my controllers.

Validation

Laravel has a nice, easy way of getting validation out of the controllers. Just create a custom form request for your input data, as in this example:

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class CreateRequest extends FormRequest
{
    /**
     * Determine if the user is authorized to make this request.
     *
     * @return bool
     */
    public function authorize()
    {
        return true;
    }

    /**
     * Get the validation rules that apply to the request.
     *
     * @return array
     */
    public function rules()
    {
        return [
            'email' => 'required|email',
        ];
    }
}

Then type-hint the form request in the controller method, instead of Illuminate\Http\Request:

<?php

namespace App\Http\Controllers;

use App\Http\Requests\CreateRequest;

class HomeController extends Controller
{
    public function store(CreateRequest $request)
    {
        // Process request here...
    }
}

Database access and caching

For non-trivial applications I normally use decorated repositories to handle caching and database access in one place. That way my caching and database layers are abstracted out into separate classes, and caching is nearly always handled seamlessly without having to do much work.
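As a rough sketch of the caching half of that setup - assuming a hypothetical Foo repository contract with an all() method, and Laravel’s cache contract - the decorator wraps the underlying database-backed repository:

<?php

namespace App\Repositories\Decorators;

use App\Contracts\Repositories\Foo as Contract;
use Illuminate\Contracts\Cache\Repository as Cache;

class CachedFoo implements Contract
{
    protected $repository;
    protected $cache;

    public function __construct(Contract $repository, Cache $cache)
    {
        $this->repository = $repository;
        $this->cache = $cache;
    }

    public function all()
    {
        // Serve from the cache where possible, falling back to the
        // underlying database-backed repository on a miss
        return $this->cache->remember('foo.all', 60, function () {
            return $this->repository->all();
        });
    }
}

The decorator is then bound to the contract in a service provider, so anything that type-hints the contract gets caching for free, and the write methods can flush the relevant cache entries.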

Complex object creation logic

If I have a form or API endpoint that needs to:

  • Create more than one object
  • Transform the incoming data in some way
  • Or do anything else non-trivial

then I will typically pull that logic out into a separate persister class. First, create an interface for the persister:

<?php

namespace App\Contracts\Persisters;

use Illuminate\Database\Eloquent\Model;

interface Foo
{
    /**
     * Create a new Model
     *
     * @param array $data
     * @return Model
     */
    public function create(array $data);

    /**
     * Update the given Model
     *
     * @param array $data
     * @param Model $model
     * @return Model
     */
    public function update(array $data, Model $model);
}

Then create the persister class itself:

<?php

namespace App\Persisters;

use Illuminate\Database\Eloquent\Model;
use App\Contracts\Repositories\Foo as Repository;
use App\Contracts\Persisters\Foo as FooContract;
use Illuminate\Database\DatabaseManager;
use Carbon\Carbon;

class Foo implements FooContract
{
    protected $repository;
    protected $db;

    public function __construct(DatabaseManager $db, Repository $repository)
    {
        $this->db = $db;
        $this->repository = $repository;
    }

    /**
     * Create a new Model
     *
     * @param array $data
     * @return Model
     */
    public function create(array $data)
    {
        // Wrap the write in a transaction so partial creates can't be left behind
        $this->db->beginTransaction();
        $model = $this->repository->create([
            'date' => Carbon::parse($data['date'])->toDayDateTimeString(),
        ]);
        $this->db->commit();
        return $model;
    }

    /**
     * Update the given Model
     *
     * @param array $data
     * @param Model $model
     * @return Model
     */
    public function update(array $data, Model $model)
    {
        $this->db->beginTransaction();
        $updatedModel = $this->repository->update([
            'date' => Carbon::parse($data['date'])->toDayDateTimeString(),
        ], $model);
        $this->db->commit();
        return $updatedModel;
    }
}

Then you can set up the persister in a service provider so that type-hinting the interface returns the persister:

<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        //
    }

    /**
     * Register any application services.
     *
     * @return void
     */
    public function register()
    {
        $this->app->bind(
            'App\Contracts\Persisters\Foo',
            'App\Persisters\Foo'
        );
    }
}

This approach means that complex logic, such as creating multiple related objects, can be handled in a consistent way, even if it needs to be called from multiple places.
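Tying the pieces together, the controller then reduces to glue code - a sketch using the CreateRequest from earlier and a hypothetical FooController:

<?php

namespace App\Http\Controllers;

use App\Contracts\Persisters\Foo as Persister;
use App\Http\Requests\CreateRequest;

class FooController extends Controller
{
    protected $persister;

    public function __construct(Persister $persister)
    {
        $this->persister = $persister;
    }

    public function store(CreateRequest $request)
    {
        // Validation has already happened in the form request
        $model = $this->persister->create($request->all());
        return response()->json($model, 201);
    }
}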

Triggering actions as a result of something

Events are tailor-made for this use case, and Laravel documents them very well, so I won’t repeat it here. Suffice to say, if something needs to happen, but the response sent by the application doesn’t necessarily depend on it returning something immediately, then it’s probably worth considering making it an event. If it’s going to be called from multiple places, it’s even more worthwhile.

For instance, if you have a contact form, it’s worth taking the time to create an event for when a new contact is received, and handle processing of the contact within the listener for that event. Doing so also means you can queue the listener and have it handled outside the request cycle, so the application responds to the user more quickly. If you’re sending an acknowledgement email for a new user registration, you don’t need to wait for that email to be sent before you return the response, so queueing it can improve response times.
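A minimal sketch of the event for that contact form example, assuming a hypothetical Contact model:

<?php

namespace App\Events;

use App\Contact;
use Illuminate\Queue\SerializesModels;

class ContactReceived
{
    use SerializesModels;

    public $contact;

    public function __construct(Contact $contact)
    {
        $this->contact = $contact;
    }
}

The controller fires it with event(new ContactReceived($contact)), and any listener that implements Illuminate\Contracts\Queue\ShouldQueue will be pushed onto the queue rather than handled during the request.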

Interacting with third-party services

If you have some code that needs to interact with a third-party service or API, it can get quite complex, especially if you need to process the content in some way. It therefore makes sense to pull that functionality out into a separate class.

For instance, say you have some code in your controller that uses an HTTP client to fetch some data from a third-party API and display it in the view:

public function index(Request $request)
{
    // Fetch and decode the raw JSON response
    $response = $this->client->get('http://api.com/api/items');
    $data = json_decode($response->getBody(), true);
    $items = [];
    foreach ($data as $v) {
        $items[] = [
            'name' => $v['name'],
            'description' => $v['data']['description'],
            'tags' => $v['data']['metadata']['tags']
        ];
    }
    return view('template', [
        'items' => $items
    ]);
}

This is a very small example (and a lot simpler than most real-world instances of this issue), but it illustrates the principle. Not only does this code bloat the controller, it might also be used elsewhere in the application, and we don’t want to copy and paste it elsewhere - therefore it makes sense to extract it to a service class.

<?php

namespace App\Services;

use GuzzleHttp\ClientInterface as GuzzleClient;

class Api
{
    protected $client;

    public function __construct(GuzzleClient $client)
    {
        $this->client = $client;
    }

    public function fetch()
    {
        // Fetch and decode the raw JSON response
        $response = $this->client->get('http://api.com/api/items');
        $data = json_decode($response->getBody(), true);
        $items = [];
        foreach ($data as $v) {
            $items[] = [
                'name' => $v['name'],
                'description' => $v['data']['description'],
                'tags' => $v['data']['metadata']['tags']
            ];
        }
        return $items;
    }
}

Our controller can then type-hint the service and refactor that functionality out of the method:

protected $api;

public function __construct(\App\Services\Api $api)
{
    $this->api = $api;
}

public function index(Request $request)
{
    $items = $this->api->fetch();
    return view('template', [
        'items' => $items
    ]);
}

Including common variables in the view

If data is needed in more than one view (eg showing the user’s name on every page when logged in), consider using a view composer to retrieve it rather than fetching it in each controller. That way you’re not repeating that logic in more than one place.
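A minimal sketch of a composer for the logged-in user example - the class name and view wildcard here are just illustrative:

<?php

namespace App\Http\ViewComposers;

use Illuminate\Contracts\Auth\Guard;
use Illuminate\View\View;

class CurrentUserComposer
{
    protected $auth;

    public function __construct(Guard $auth)
    {
        $this->auth = $auth;
    }

    public function compose(View $view)
    {
        // Make the current user available to every view this composer is bound to
        $view->with('currentUser', $this->auth->user());
    }
}

You then register it in a service provider’s boot() method, eg with View::composer('*', CurrentUserComposer::class).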

Formatting content for display

Logically this belongs in the view layer, so you should write a helper to handle things like formatting dates. For more complex stuff, such as formatting HTML, you should be doing this in Blade (or another templating system, if you’re using one) - for instance, when generating an HTML table, consider using a view partial to loop through the rows. For particularly tricky functionality, you have the option of writing a custom Blade directive.
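For instance, a date-formatting directive might look something like this - @formatdate is a made-up name, and this would live in a service provider’s boot() method:

use Illuminate\Support\Facades\Blade;

// Compiles @formatdate($post->created_at) into a PHP echo statement
Blade::directive('formatdate', function ($expression) {
    return "<?php echo \Carbon\Carbon::parse($expression)->format('jS F Y'); ?>";
});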

The same applies for rendering other content - for rendering JSON you should consider using API resources or Fractal to get any non-trivial logic for your API responses out of the controller. Blade templates can also work for non-HTML content such as XML.
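For instance, an API resource for the items example might look like this - a sketch assuming a reasonably recent version of Laravel and a hypothetical Item model:

<?php

namespace App\Http\Resources;

use Illuminate\Http\Resources\Json\JsonResource;

class ItemResource extends JsonResource
{
    /**
     * Transform the resource into an array.
     *
     * @param \Illuminate\Http\Request $request
     * @return array
     */
    public function toArray($request)
    {
        return [
            'name' => $this->name,
            'description' => $this->description,
            'tags' => $this->tags,
        ];
    }
}

A controller can then return ItemResource::collection(Item::all()) and keep the formatting logic out of its own methods.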

Anything else…

These examples are largely to get you started, and there will be occasions where something doesn’t fall into any of the above categories. However, the same principle applies. Your controllers should stick to just receiving requests and sending responses, and anything else should normally be deferred to other classes.

Fat controllers make developers’ lives very difficult, and if you want your code base to be easily maintainable, you should be willing to refactor them ruthlessly. Any functionality you can pull out of the controller becomes easier to reuse and test, and as long as you name your classes and methods sensibly, they’re easier to understand.

4th February 2018 12:12 am

Using Lando As An Alternative to Vagrant

Although Vagrant is very useful for ensuring consistency between development environments, it’s quite demanding on system resources. Running a virtual machine introduces quite a bit of overhead, and it can be troublesome to provision.

This week I was introduced to Lando as an alternative to Vagrant. Rather than running a virtual machine like Vagrant does by default, Lando instead spins up Docker containers for the services you need, meaning it has considerably less overhead than Vagrant. It also includes presets for a number of frameworks and CMS’s, including:

  • Drupal 7
  • Drupal 8
  • Wordpress
  • Laravel

Considering that Vagrant needs quite a bit of boilerplate to set up the server for different types of projects, this gives Lando an obvious advantage. The only issue I’ve had with it is that it’s been unreliable when I’ve had to use it on Windows, which I don’t do much anyway.

Getting started

Lando requires that you have Docker installed. Once that’s done, you can download and install it from the website. Then you can run lando init to set it up:

$ lando init
? What recipe do you want to use? wordpress
? Where is your webroot relative to the init destination? .
? What do you want to call this app? wp-site
NOW WE'RE COOKING WITH FIRE!!!
Your app has been initialized!
Go to the directory where your app was initialized and run
`lando start` to get rolling.
Check the LOCATION printed below if you are unsure where to go.
Here are some vitals:
NAME wp-site
LOCATION /home/matthew/Projects/wp-site
RECIPE wordpress
DOCS https://docs.devwithlando.io/tutorials/wordpress.html

Here I’ve chosen the wordpress recipe, in the current directory, with the name wp-site. This generates the following file as .lando.yml:

name: wp-site
recipe: wordpress
config:
  webroot: .

Then, if we run lando start, it will set up the required services:

$ lando start
landoproxyhyperion5000gandalfedition_proxy_1 is up-to-date
Creating network "wpsite_default" with the default driver
Creating volume "wpsite_appserver" with default driver
Creating volume "wpsite_data" with default driver
Creating volume "wpsite_data_database" with default driver
Creating wpsite_appserver_1 ...
Creating wpsite_database_1 ...
Creating wpsite_database_1
Creating wpsite_appserver_1 ... done
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4454k 100 4454k 0 0 3288k 0 0:00:01 0:00:01 --:--:-- 3290k
OS: Linux 4.13.0-32-generic #35-Ubuntu SMP Thu Jan 25 09:13:46 UTC 2018 x86_64
Shell:
PHP binary: /usr/local/bin/php
PHP version: 7.1.13
php.ini used:
WP-CLI root dir: phar://wp-cli.phar
WP-CLI vendor dir: phar://wp-cli.phar/vendor
WP_CLI phar path: /tmp
WP-CLI packages dir:
WP-CLI global config:
WP-CLI project config:
WP-CLI version: 1.5.0
BOOMSHAKALAKA!!!
Your app has started up correctly.
Here are some vitals:
APPSERVER URLS https://localhost:32802
http://localhost:32803
http://wp-site.lndo.site
https://wp-site.lndo.site

Note the APPSERVER URLS section - the site can be accessed locally via HTTP or HTTPS. For this recipe, it also installs WP-CLI.

If we run docker ps, we can see that it’s running three Docker containers:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e920e152091 devwithlando/php:7.1-apache "/lando-entrypoint.s…" 16 minutes ago Up 16 minutes 0.0.0.0:32803->80/tcp, 0.0.0.0:32802->443/tcp wpsite_appserver_1
82ea60b1214f mysql:latest "/lando-entrypoint.s…" 16 minutes ago Up 16 minutes 0.0.0.0:32801->3306/tcp wpsite_database_1
e51d831199d7 traefik:1.3-alpine "/lando-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:58086->8080/tcp landoproxyhyperion5000gandalfedition_proxy_1

Apache lives in one container, MySQL in another, while the third runs Traefik, a lightweight load balancer, which listens on port 80. Traefik does the work of routing HTTP requests to the right container.

As I’ve been unhappy with the amount of resources Vagrant uses for a while, and I usually run Ubuntu (making using Docker straightforward), I’m planning on using Lando extensively in future. It’s lighter and faster to set up, and has sane defaults for most of the frameworks and CMS’s I use regularly, making it generally quicker and easier to work with.

29th January 2018 10:00 pm

How I Deploy Laravel Apps

A while back I provided details of the web server setup I used for Django applications. Nowadays I tend to use Laravel most of the time, so I thought I’d share an example of the sort of setup I use to deploy that.

Server OS

As before I generally prefer Debian Stable where possible. If that’s not possible for any reason then the current Ubuntu LTS is an acceptable substitute.

Web server

My usual web server these days is Nginx with PHP 7 or better via FPM. I generally use HTTP2 where possible, with SSL via Let’s Encrypt.

Here’s my typical Nginx config:

fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=my-app:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; img-src 'self' https://placehold.it; style-src 'self' https://fonts.googleapis.com; font-src 'self' https://themes.googleusercontent.com; frame-src 'none'; object-src 'none'";
server_tokens off;

server {
    listen 80;
    listen [::]:80;
    server_name my-app.domain;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    include snippets/ssl-my-app.domain.conf;
    include snippets/ssl-params.conf;

    client_max_body_size 50M;
    fastcgi_param HTTP_PROXY "";

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    root /var/www/my-app.domain/current/public;
    index index.php index.html index.htm;
    server_name my-app.domain;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm-my-app.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_cache my-app;
        fastcgi_cache_valid 200 60m;
    }

    location ~ /.well-known {
        allow all;
    }

    location ~* \.(?:manifest|appcache|html?|xml|json)$ {
        expires -1;
        gzip on;
        gzip_vary on;
        gzip_types application/json text/xml application/xml;
    }

    location ~* \.(?:rss|atom)$ {
        expires 1h;
        add_header Cache-Control "public";
        gzip on;
        gzip_vary on;
        gzip_types application/xml+rss;
    }

    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
    }

    location ~* \.(?:css|js)$ {
        expires 1y;
        access_log off;
        add_header Cache-Control "public";
        gzip on;
        gzip_vary on;
        gzip_types text/css application/javascript text/javascript;
    }
}

The times for FastCGI caching tend to vary in practice - sometimes it’s not appropriate to use it at all, while in other cases content can safely be cached for some time.

It’s generally fairly safe to cache CSS and JS for a long time with a Laravel app if you’re using Mix to version those assets, so I feel comfortable caching them for a year. Images are a bit dicier, but still don’t change often so a month seems good enough.

I’ll typically give each application its own pool, which means copying the file at /etc/php/7.0/fpm/pool.d/www.conf to another file in the same directory, amending the pool name and path to set a new location for the socket, and then restarting Nginx and PHP-FPM. Here are the fields that should be changed:

; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[my-app.domain]
...
listen = /var/run/php/php7.0-fpm-my-app.sock

Database

I’m a fan of PostgreSQL - it’s stricter than MySQL/MariaDB, and has some very useful additional field types, so where possible I prefer to use it over MySQL or MariaDB.

Cache and session backend

Redis is my usual choice here - I make heavy use of cache tags, so I need a cache backend that supports them, and Memcached doesn’t seem to have as much momentum as Redis these days. Neither needs much in the way of configuration, but you can get a slight speed boost by using phpiredis.
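For illustration, tagged caching looks something like this - $profile is a placeholder, and note that in Laravel 5.x the third argument to put() is in minutes, while the file and database cache drivers don’t support tags at all:

use Illuminate\Support\Facades\Cache;

// Store a value under a tag for 60 minutes...
Cache::tags(['users'])->put('user.profile.1', $profile, 60);

// ...and later invalidate everything with that tag in one go
Cache::tags(['users'])->flush();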

Queue

I sometimes use Redis for this too, but it can be problematic if you’re using Redis as the queue and broadcast backend, so these days I’m more likely to use Beanstalk and keep Redis for other stuff. I use Supervisor for running the queue worker, and this is an example of the sort of configuration I would use:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=8
redirect_stderr=true
stdout_logfile=/var/log/worker.log

This is fairly standard for Laravel applications.

Scheduler

I often make use of the Laravel scheduled tasks system. Here’s the typical cron job that would be used for that:

* * * * * php /var/www/artisan schedule:run >> /dev/null 2>&1

Again, this is standard for Laravel applications. It runs the scheduler every minute, and the scheduler then determines if it needs to do something.
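The tasks themselves live in the schedule() method of app/Console/Kernel.php - a sketch with a made-up artisan command:

// In app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // The cron entry fires every minute; the scheduler only actually runs
    // this command when its constraint matches
    $schedule->command('emails:send-digest')->dailyAt('09:00');
}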

Provisioning

To set all this up, I’ll generally use Ansible. In addition to this, I’ll generally also set up fail2ban to block various attacks via both HTTP and SSH.

28th January 2018 8:20 pm

Why the Speed of Your MVC Framework Is Usually a Red Herring

Skim through any programming-related forum and you’ll often find statements along the lines of the following:

  • “I chose Lumen for my website because the benchmarks show it’s faster than Laravel”
  • “I’m using raw queries because they’re faster than using an ORM”
  • “I wrote the site in pure PHP to avoid the overhead of a framework”

Making my web apps performant is something I care deeply about. Yet every time I see something like this I cringe. Why? Because statements like these are full of wild misconceptions about the real performance bottlenecks in modern web applications. I don’t blame framework vendors for publishing benchmarks of their applications, since the performance of web apps is a big issue, but they are often misleading even when they’re correct, and it’s all too easy for inexperienced developers to think that performance is a matter of picking the fastest framework, rather than following a methodology of identifying and dealing with performance bottlenecks.

In this post I’ll explain why the performance of the framework, while not a non-issue, should come way down the list of factors involved in choosing a framework (or deciding whether to use one at all), behind functionality and developer productivity, and how many other factors unrelated to the choice of framework are involved.

Benchmarks don’t include real-world optimisations

When benchmarking a number of frameworks together, you’ll typically be testing some fairly basic behaviour such as rendering a view, and maybe making a database query. It’s rare for them to also include things such as caching queries or sending the correct HTTP caching headers.

Also, it’s quite common for the party creating the benchmark to have their own preference they’re more familiar with, in which case they’ll have a better idea of how to optimise that one. If they don’t know how to optimise all of them to the same extent, the end result is going to be biased. For example, in the case of Laravel, running php artisan optimize can significantly improve application performance by caching large chunks of the application.

In addition, the configuration for the web server is quite likely to be suboptimal compared to a production server. For instance, they may not have an opcode cache installed, or Nginx may not set the right headers on static assets. Under these circumstances the benchmarks are very likely to be misleading. Ultimately, if you choose to completely rewrite an entire application from scratch in a new framework to claw back a few milliseconds, how do you know you’ll actually see that translate into better performance in production for your particular use case?

And if you’re even considering running a supposedly performance-critical application on shared hosting, you should hang your head in shame…

Your from-scratch implementation of functionality is probably slower than an existing one

If you’re building some functionality from scratch instead of using an off-the-shelf library on the basis of performance, just stop. Existing libraries have usually had a great deal of attention already, should have working test suites, and depending on how active the developer community around them is, they may well have found and resolved the most egregious performance bottlenecks. Yours, on the other hand, will be new, untested, and could easily have serious bottlenecks if you haven’t profiled it extensively. It’s therefore very, very unlikely that you’ll be able to produce something more performant than the existing solutions, unless those existing solutions are old, barely maintained ones.

The only time this might be worthwhile is if all the existing implementations have boatloads of functionality, and you only need a small portion of that functionality. Even then, you should consider if it’s worth your while for a tiny speed boost. Or if you want to write a new library for it, go ahead - just don’t kid yourself about it being for the sake of performance.

Smaller frameworks are faster because they do less

Microframeworks such as Lumen are generally faster (at least in the artificial world of benchmarks), but that’s because they leave out functionality that’s not necessary for their targeted use case. Lumen is aimed at building microservices, and it leaves out things like templating, file handling, and other functionality not focused solely on building microservices. That means it’s less useful for other use cases. Any code that gets added to the application will make it marginally slower just by virtue of being there.

Under these circumstances it’s blindingly obvious that the framework that has to do less setup (eg instantiate fewer services, perform fewer operations on the request and response) is nearly always going to respond faster, regardless of its suitability for more complex work.

If you start building a site with Lumen, but then discover that you need some functionality that Laravel has and Lumen doesn’t, you have two choices:

  • Switch to Laravel
  • Add that functionality to your application (either through additional packages or rolling it yourself)

I’ve often had plans to use Lumen for a project in the past, but then discovered that it would benefit from some of Laravel’s functionality. Under those circumstances I’ve switched straight over to Laravel - my time is too valuable to my employer to waste reimplementing functionality Laravel already has, and that functionality will inevitably have some overhead. Put it this way - I do a lot of Phonegap work, so building APIs is a big part of what I do, but I’ve only ever finished one project using Lumen (a push notification microservice). Every other time, sooner or later I’ve run into a situation where the additional functionality of Laravel would be useful and switched over.

There are occasions when a lighter framework like Lumen makes sense, but only when I simply don’t need the additional functionality of Laravel. It just doesn’t make sense to go for Lumen and then start adding functionality Laravel already has - any new implementation isn’t likely to be as solid, well-tested and performant as Laravel’s implementation.

Framework performance is often less relevant if you’re using Varnish

In my experience, if you have a site or API that is under heavy load, then if it’s possible to use Varnish with it, that will have a far more significant effect on performance than switching between PHP frameworks.

Because Varnish sits in front of your web server, when you’re serving cached content, anything after Varnish is completely irrelevant to performance - it won’t hit the backend again until the cached content has expired. Varnish is effectively a key-value store, and is written in C, so it’s far more performant than just about any backend in any framework you could possibly write. And it’s configurable enough that with sufficient experience it can usually be made to work well for most applications.

Varnish isn’t appropriate for every use case, and it doesn’t help with uncached requests (except by reducing the load on the application) but where high performance is necessary it can be a very big help indeed. The speed boost from having Varnish in front of your site and properly configured dwarfs any boost of a few milliseconds from switching PHP framework.

There are other HTTP caching servers available too - for instance, it’s possible to use Nginx as a web cache, and Cloudflare is a hosted service that offers similar performance benefits. Regardless, the same applies - if you can handle a request using the caching server rather than the application behind it, the performance will be immensely better, without having to change your application code.

ORM vs raw queries is a drop in the ocean

There will always be some overhead from using any ORM. However, this is nearly always so minor as to be a non-issue.

For example, while there might be some slight performance increase from writing raw SQL instead of using an ORM, it’s generally dwarfed by the cost of making the query in the first place. You can get a far, far bigger improvement in performance by efficiently caching the responses than by rewriting ORM queries in raw SQL.

An ORM does make certain types of slow inefficient queries more likely, as well as making “hidden” queries (such as in Laravel when it fetches the user from the session), but that’s something that can be resolved by using a profiler like Clockwork to identify the slow or unnecessary queries and refactoring them. Most ORM’s have tools to handle things like the N+1 problem - for instance, Eloquent has the with() method to eager-load related tables, which is generally a lot more convenient than explicitly writing a query to do the eager-loading for you.
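To illustrate with a hypothetical Post model and an author relation:

// N+1: one query for the posts, then another per post for its author
$posts = Post::all();
foreach ($posts as $post) {
    echo $post->author->name;
}

// Eager-loaded: two queries in total, however many posts there are
$posts = Post::with('author')->get();
foreach ($posts as $post) {
    echo $post->author->name;
}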

Using an ORM also comes with significant benefits to developers:

  • It’s generally easier to express relations between tables
  • It helps avoid the mental context switch between PHP and SQL
  • It does a lot of the work of sanitizing data for you
  • It helps make your application portable between different databases (eg so you can run your tests using an in-memory SQLite database but use MySQL in production)
  • Where you have logic that can’t be expressed using the ORM, it’s generally easy to drop down to writing raw SQL for that part

In my experience, querying the database is almost always the single biggest bottleneck (the only other thing that can be as bad is if you’re making requests to a slow third-party API), and any overhead from the ORM is a drop in the ocean in comparison. If you have a slow query in a web application, then rewriting it as a raw query is probably the very last thing you should consider doing, after:

  • Refactoring the query or queries to be more efficient/remove unnecessary queries
  • Making sure the appropriate indices are set on your database
  • Caching the responses

Caching in particular is quite hard to do - it’s difficult to come up with a reliable and reusable strategy for caching responses without serving stale content, but once you can do so, it makes a huge difference to application performance.
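At its simplest, that sort of caching can be as little as this - a sketch with a hypothetical Item model, where the lifetime is in minutes on Laravel 5.x:

use Illuminate\Support\Facades\Cache;

// Only hit the database when the cache entry is missing or has expired
$items = Cache::remember('items.index', 60, function () {
    return Item::with('tags')->get();
});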

Writing all your queries as raw queries is a micro-optimisation - it’s a lot of work for not that much payback, and it’s hardly ever worth the bother. Even if you have a single, utterly horrendous query or set of queries that has a huge overhead, there are better ways to deal with it - under those circumstances I’d be inclined to create a stored procedure in a migration and call that rather than making the query directly.
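A rough sketch of that approach, assuming MySQL and a hypothetical orders table - the procedure body here stands in for the horrendous query:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\DB;

class CreateMonthlyTotalsProcedure extends Migration
{
    public function up()
    {
        DB::unprepared('
            CREATE PROCEDURE monthly_totals()
            BEGIN
                SELECT DATE_FORMAT(created_at, "%Y-%m") AS month, COUNT(*) AS total
                FROM orders
                GROUP BY month;
            END
        ');
    }

    public function down()
    {
        DB::unprepared('DROP PROCEDURE IF EXISTS monthly_totals');
    }
}

The application can then call it with DB::select('CALL monthly_totals()').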

Summary

So to sum it up, if someone tells you you should use framework X because it’s faster than framework Y, they might be somewhat right, but that misses the point completely. Benchmarks are so artificial as to be almost useless for determining how your production code will perform. Any half-decent framework will give you the tools you need to optimise performance, and your use of those tools will have a far, far more significant effect on the response time of your application than picking between different frameworks. I’ve never found a single MVC framework whose core is slow enough that I can’t make it fast enough with the capabilities provided.

Also, considering that these days server hardware is dirt cheap (at time of writing US$5 gets you a Digital Ocean droplet with 1GB of RAM for a month), whereas developers are far, far more expensive, it’s more cost-effective to optimise for the developer’s time, not server time, so it makes sense to pick a framework that makes you productive, not one that merely makes the application marginally faster. That’s no excuse for slow, shitty applications, but when all else fails, spinning up additional servers is a far more cost-effective solution than spending days on end rewriting your entire application in a different framework that benchmarks show might perform better by a few milliseconds.

22nd January 2018 12:00 pm

Deploying Your Laravel Application With Deployer

Deployment processes have a nasty tendency to be a mish-mash of cobbled-together scripts or utilities in many web shops, with little or no consistency in practice between them. As a result, it’s all too easy for even the most experienced developer to mess up a deployment.

I personally have used all kinds of bodged-together solutions. For a while I used Envoy scripts to deploy my Laravel apps, but then there was an issue with the SSH library in PHP 7 that made it impractical to use. Then I adopted Fabric, which I’d used before for deploying Django apps and which works fine for deploying PHP apps too, but it wasn’t much more sophisticated than using shell scripts for deployment purposes. There are third-party services like Deploybot, but these are normally quite expensive for what they are.

A while back I heard of Deployer, but I didn’t have the opportunity to try it until recently on a personal project as I was working somewhere that had its own in-house deployment process. It’s a PHP-specific deployment tool with recipes for deploying applications built with various frameworks and CMS’s, including Laravel, Symfony, CodeIgniter and Drupal.

Installing Deployer

Deployer is installed as a .phar file, much like Composer:

$ curl -LO https://deployer.org/deployer.phar
$ mv deployer.phar /usr/local/bin/dep
$ chmod +x /usr/local/bin/dep

With that done, you should be able to run the following command in your project’s directory to create a Deployer script:

$ dep init

In response, you should see a list of project types:

Welcome to the Deployer config generator
This utility will walk you through creating a deploy.php file.
It only covers the most common items, and tries to guess sensible defaults.
Press ^C at any time to quit.
Please select your project type [Common]:
[0] Common
[1] Laravel
[2] Symfony
[3] Yii
[4] Yii2 Basic App
[5] Yii2 Advanced App
[6] Zend Framework
[7] CakePHP
[8] CodeIgniter
[9] Drupal
>

Here I chose Laravel as I was deploying a Laravel project. I was then prompted for the repository URL - this will be filled in with the origin remote if the current folder is already a Git repository:

Repository [git@gitlab.com:Group/Project.git]:
>

You’ll also see a message about contributing anonymous usage data. After answering this, the file deploy.php will be generated:

<?php

namespace Deployer;

require 'recipe/laravel.php';

// Configuration

set('repository', 'git@gitlab.com:Group/Project.git');
set('git_tty', true); // [Optional] Allocate tty for git on first deployment
add('shared_files', []);
add('shared_dirs', []);
add('writable_dirs', []);

// Hosts

host('project.com')
    ->stage('production')
    ->set('deploy_path', '/var/www/project.com');

host('beta.project.com')
    ->stage('beta')
    ->set('deploy_path', '/var/www/project.com');

// Tasks

desc('Restart PHP-FPM service');
task('php-fpm:restart', function () {
    // The user must have rights for restart service
    // /etc/sudoers: username ALL=NOPASSWD:/bin/systemctl restart php-fpm.service
    run('sudo systemctl restart php-fpm.service');
});
after('deploy:symlink', 'php-fpm:restart');

// [Optional] if deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');

// Migrate database before symlink new release.
before('deploy:symlink', 'artisan:migrate');

By default it has two hosts, beta and production, and you can refer to them by these names. You can also add or remove hosts, and amend the existing ones. Note the deploy path as well - this sets the place where the application gets deployed to.

Note that it’s set up to expect the server to be using PHP-FPM and Nginx by default, so if you’re using Apache you may need to amend the command to restart the server. Also, note that if like me you’re using PHP 7 on a distro like Debian that also has PHP 5 around, you’ll probably need to change the references to php-fpm as follows:

desc('Restart PHP-FPM service');
task('php-fpm:restart', function () {
    // The user must have rights for restart service
    // /etc/sudoers: username ALL=NOPASSWD:/bin/systemctl restart php7.0-fpm.service
    run('sudo systemctl restart php7.0-fpm.service');
});
after('deploy:symlink', 'php-fpm:restart');

You will also need to make sure the acl package is installed - on Debian and Ubuntu you can install it as follows:

$ sudo apt-get install acl

Now, the recipe for deploying a Laravel app will include the following:

  • Pulling from the Git remote
  • Updating any Composer dependencies to match composer.json
  • Running the migrations
  • Optimizing the application

In addition, one really great feature Deployer offers is rollbacks. Rather than checking out your application directly into the project root you specify, it numbers each release and deploys it in a separate folder, before symlinking that folder to the project root as current. That way, if a release cannot be deployed successfully, rather than leaving your application in an unfinished state, Deployer will symlink the previous version so that you still have a working version of your application.
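The resulting layout on the server looks roughly like this (the release numbers here are illustrative):

/var/www/project.com/
├── current -> releases/3
├── releases/
│   ├── 2/
│   └── 3/
└── shared/
    ├── .env
    └── storage/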

If you have configured Deployer for that project, you can deploy using the following command where production is the name of the host you’re deploying to:

$ dep deploy production

The output will look something like this:

✔ Executing task deploy:prepare
✔ Executing task deploy:lock
✔ Executing task deploy:release
➤ Executing task deploy:update_code
Counting objects: 761, done.
Compressing objects: 100% (313/313), done.
Writing objects: 100% (761/761), done.
Total 761 (delta 384), reused 757 (delta 380)
Connection to linklater.shellshocked.info closed.
✔ Ok
✔ Executing task deploy:shared
✔ Executing task deploy:vendors
✔ Executing task deploy:writable
✔ Executing task artisan:storage:link
✔ Executing task artisan:view:clear
✔ Executing task artisan:cache:clear
✔ Executing task artisan:config:cache
✔ Executing task artisan:optimize
✔ Executing task artisan:migrate
✔ Executing task deploy:symlink
✔ Executing task php-fpm:restart
✔ Executing task deploy:unlock
✔ Executing task cleanup
✔ Executing task success
Successfully deployed!

As you can see, we first of all lock the application and pull the latest version from the Git remote. Next we copy the files shared between releases (eg the .env file, the storage/ directory etc), update the dependencies, and make sure the permissions are correct. Next we link the storage, clear all the cached content, optimize our app, and migrate the database, before we set up the symlink. Finally we restart the web server and unlock the application.

In the event you discover a problem after deploy and need to rollback manually, you can do so with the following command:

$ dep rollback production

That makes it easy to ensure that in the event of something going wrong, you can quickly switch back to an earlier version with zero downtime.

Deployer has made deployments a lot less painful for me than any other solution I’ve tried. The support for rollbacks means that if something goes wrong it’s trivial to switch back to an earlier revision.

About me

I'm a web and mobile app developer based in Norfolk. My skillset includes Python, PHP and Javascript, and I have extensive experience working with CodeIgniter, Laravel, Django, Phonegap and Angular.js.