Matthew Daly's Blog

I'm a web developer in Norfolk. This is my blog...

29th January 2018 10:00 pm

How I Deploy Laravel Apps

A while back I provided details of the web server setup I used for Django applications. Nowadays I tend to use Laravel most of the time, so I thought I'd share an example of the sort of setup I use to deploy those.

Server OS

As before, I generally prefer Debian Stable where possible. If that's not possible for any reason, the current Ubuntu LTS is an acceptable substitute.

Web server

My usual web server these days is Nginx, with PHP 7 or newer via FPM. I generally use HTTP/2 where possible, with SSL via Let's Encrypt.

Here’s my typical Nginx config:

fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=my-app:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

add_header Content-Security-Policy "default-src 'self'; script-src 'self'; img-src 'self' https://placehold.it; style-src 'self' https://fonts.googleapis.com; font-src 'self' https://themes.googleusercontent.com; frame-src 'none'; object-src 'none'";
server_tokens off;

server {
    listen 80;
    listen [::]:80;
    server_name my-app.domain;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    include snippets/ssl-my-app.domain.conf;
    include snippets/ssl-params.conf;

    client_max_body_size 50M;
    fastcgi_param HTTP_PROXY "";

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    root /var/www/my-app.domain/current/public;
    index index.php index.html index.htm;
    server_name my-app.domain;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm-my-app.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_cache my-app;
        fastcgi_cache_valid 200 60m;
    }

    location ~ /.well-known {
        allow all;
    }

    location ~* \.(?:manifest|appcache|html?|xml|json)$ {
        expires -1;
        gzip on;
        gzip_vary on;
        gzip_types application/json text/xml application/xml;
    }

    location ~* \.(?:rss|atom)$ {
        expires 1h;
        add_header Cache-Control "public";
        gzip on;
        gzip_vary on;
        gzip_types application/xml+rss;
    }

    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
    }

    location ~* \.(?:css|js)$ {
        expires 1y;
        access_log off;
        add_header Cache-Control "public";
        gzip on;
        gzip_vary on;
        gzip_types text/css application/javascript text/javascript;
    }
}

The times for FastCGI caching tend to vary in practice - sometimes it's not appropriate to use it at all, while in other cases responses can safely be cached for some time.
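Where an app serves both cacheable and per-user content, it's often safest to skip the FastCGI cache whenever a session cookie is present. Here's a minimal sketch, assuming Laravel's default laravel_session cookie name, that would sit inside the PHP location block:

# Skip the FastCGI cache for requests that carry a Laravel session cookie
fastcgi_cache_bypass $cookie_laravel_session;
fastcgi_no_cache $cookie_laravel_session;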

It’s generally fairly safe to cache CSS and JS for a long time with a Laravel app if you’re using Mix to version those assets, so I feel comfortable caching them for a year. Images are a bit dicier, but still don’t change often so a month seems good enough.

I’ll typically give each application its own pool, which means copying the file at /etc/php/7.0/fpm/pool.d/www.conf to another file in the same directory, amending the pool name and path to set a new location for the socket, and then restarting Nginx and PHP-FPM. Here are the fields that should be changed:

; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[my-app.domain]
...
listen = /var/run/php/php7.0-fpm-my-app.sock
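With the new pool file in place, restart PHP-FPM and reload Nginx so both pick up the new socket (assuming Debian's service names for PHP 7.0):

$ sudo systemctl restart php7.0-fpm
$ sudo systemctl reload nginx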

Database

I’m a fan of PostgreSQL - it’s stricter than MySQL/MariaDB, and has some very useful additional field types, so where possible I prefer to use it over MySQL or MariaDB.

Cache and session backend

Redis is my usual choice here - I make heavy use of cache tags, so I need a cache backend that supports them, and Memcached doesn't seem to have as much momentum as Redis these days. Neither needs much in the way of configuration, but you can get a slight speed boost by using phpiredis.
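To illustrate why tags are worth having, here's roughly how I use them - note that in Laravel 5.x the cache lifetime is in minutes, and the Post model and posts tag here are just placeholders:

<?php

use Illuminate\Support\Facades\Cache;

// Cache an expensive query for 60 minutes under the 'posts' tag
$posts = Cache::tags(['posts'])->remember('posts.index', 60, function () {
    return \App\Post::latest()->get();
});

// Later, flush everything tagged 'posts' in one go, leaving the rest of the cache alone
Cache::tags(['posts'])->flush();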

Queue

I sometimes use Redis for this too, but it can be problematic if you’re using Redis as the queue and broadcast backend, so these days I’m more likely to use Beanstalk and keep Redis for other stuff. I use Supervisor for running the queue worker, and this is an example of the sort of configuration I would use:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=8
redirect_stderr=true
stdout_logfile=/var/log/worker.log

This is fairly standard for Laravel applications.
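For reference, the matching connection in config/queue.php is essentially Laravel's stock beanstalkd block - point QUEUE_DRIVER at it in your .env and you're done:

'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host' => 'localhost',
    'queue' => 'default',
    'retry_after' => 90,
],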

Scheduler

I often make use of the Laravel scheduled tasks system. Here’s the typical cron job that would be used for that:

* * * * * php /var/www/artisan schedule:run >> /dev/null 2>&1

Again, this is standard for Laravel applications. It runs the scheduler every minute, and the scheduler then determines if it needs to do something.
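The tasks themselves are defined in the schedule() method of app/Console/Kernel.php. A minimal sketch - the command name is hypothetical:

<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // Prune stale data every night at 1am
        $schedule->command('app:prune-stale-data')->dailyAt('01:00');
    }
}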

Provisioning

To set all this up, I’ll generally use Ansible. In addition to this, I’ll generally also set up fail2ban to block various attacks via both HTTP and SSH.

28th January 2018 8:20 pm

Why the Speed of Your MVC Framework Is Usually a Red Herring

Skim through any programming-related forum and you’ll often find statements along the lines of the following:

  • “I chose Lumen for my website because the benchmarks show it’s faster than Laravel”
  • “I’m using raw queries because they’re faster than using an ORM”
  • “I wrote the site in pure PHP to avoid the overhead of a framework”

Making my web apps performant is something I care deeply about. Yet every time I see something like this I cringe. Why? Because statements like these are full of wild misconceptions about the real performance bottlenecks in modern web applications. I don't blame framework vendors for publishing benchmarks of their frameworks, since the performance of web apps is a big issue, but benchmarks are often misleading even when they're correct, and it's all too easy for inexperienced developers to think that performance is a matter of picking the fastest framework, rather than following a methodology of identifying and dealing with performance bottlenecks.

In this post I’ll explain why the performance of the framework, while not a non-issue, should come way down the list of factors involved in choosing a framework (or not to use one at all), behind functionality and developer productivity, and how many other factors not related to the choice of framework are involved.

Benchmarks don’t include real-world optimisations

When benchmarking a number of frameworks together, you’ll typically be testing some fairly basic behaviour such as rendering a view, and maybe making a database query. It’s rare for them to also include things such as caching queries or sending the correct HTTP caching headers.

Also, it’s quite common for the party creating the benchmark to have their own preference they’re more familiar with, in which case they’ll have a better idea of how to optimise that one. If they don’t know how to optimise all of them to the same extent, the end results is going to be biased. For example, in the case of Laravel, running php artisan optimize can significantly improve application performance by caching large chunks of the application.

In addition, the configuration for the web server is quite likely to be suboptimal compared to a production server. For instance, they may not have the opcode cache installed, or Nginx may not set the right headers on static assets. Under these circumstances the benchmarks are very likely to be misleading. Ultimately, if you chose to completely rewrite an entire application from scratch in a new framework to claw back a few milliseconds, how do you know you’ll actually see that translate into better performance in production for your particular use case?

And if you’re even considering running a supposedly performance-critical application on shared hosting, you should hang your head in shame…

Your from-scratch implementation of functionality is probably slower than an existing one

If you’re building some functionality from scratch instead of using an off-the-shelf library on the basis of performance, just stop. Existing libraries have usually had a great deal of attention already, should have working test suites, and depending on how active the developer community around them is, they may well have found and resolved the most egregious performance bottlenecks. Yours, on the other hand, will be new, untested, and could easily have serious bottlenecks if you haven’t profiled it extensively. It’s therefore very, very unlikely that you’ll be able to produce something more performant than the existing solutions, unless those existing solutions are old, barely maintained ones.

The only time this might be worthwhile is if all the existing implementations have boatloads of functionality, and you only need a small portion of that functionality. Even then, you should consider if it’s worth your while for a tiny speed boost. Or if you want to write a new library for it, go ahead - just don’t kid yourself about it being for the sake of performance.

Smaller frameworks are faster because they do less

Microframeworks such as Lumen are generally faster (at least in the artificial world of benchmarks), but that's because they leave out functionality that's not necessary for their targeted use case. Lumen is aimed at building microservices, so it leaves out things like templating, file handling, and other functionality not needed for that use case. That makes it less useful for other kinds of project. Any code that gets added to the application will make it marginally slower just by virtue of being there.

Under these circumstances it's blindingly obvious that the framework that has to do less setup (eg instantiate fewer services, perform fewer operations on the request and response) is nearly always going to respond faster, regardless of suitability for more complex work.

If you start building a site with Lumen, but then discover that you need some functionality that Laravel has and Lumen doesn’t, you have two choices:

  • Switch to Laravel
  • Add that functionality to your application (either through additional packages or rolling it yourself)

I’ve often had plans to use Lumen for a project in the past, but then discovered that it would benefit from some of Laravel’s functionality. Under those circumstances I’ve switched straight over to Laravel - my time is too valuable to my employer to waste reimplementing functionality Laravel already has, and that functionality will inevitably have some overhead. Put it this way - I do a lot of Phonegap work, so building APIs is a big part of what I do, but I’ve only ever finished one project using Lumen (a push notification microservice). Every other time, sooner or later I’ve run into a situation where the additional functionality of Laravel would be useful and switched over.

There are occasions when a lighter framework like Lumen makes sense, but only when I simply don’t need the additional functionality of Laravel. It just doesn’t make sense to go for Lumen and then start adding functionality Laravel already has - any new implementation isn’t likely to be as solid, well-tested and performant as Laravel’s implementation.

Framework performance is often less relevant if you’re using Varnish

In my experience, if you have a site or API under heavy load and it's possible to put Varnish in front of it, that will have a far more significant effect on performance than switching between PHP frameworks.

Because Varnish sits in front of your web server, when you're serving cached content anything behind it is completely irrelevant to performance - it won't hit the backend again until the cached content has expired. Varnish is effectively a key-value store, and is written in C, so it's far more performant than just about any backend you could possibly write in any framework. And it's configurable enough that, with sufficient experience, it can be made to work for most applications.

Varnish isn’t appropriate for every use case, and it doesn’t help with uncached requests (except by reducing the load on the application) but where high performance is necessary it can be a very big help indeed. The speed boost from having Varnish in front of your site and properly configured dwarfs any boost of a few milliseconds from switching PHP framework.

There are other HTTP caching servers available too - for instance, it’s possible to use Nginx as a web cache, and Cloudflare is a hosted service that offers similar performance benefits. Regardless, the same applies - if you can handle a request using the caching server rather than the application behind it, the performance will be immensely better, without having to change your application code.
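Of course, a caching server can only hold what the application says is cacheable, so the app still needs to send suitable headers. Here's a minimal sketch of Laravel middleware that marks guest GET responses as publicly cacheable - the five-minute max-age is just an example value:

<?php

namespace App\Http\Middleware;

use Closure;

class CacheHeaders
{
    /**
     * Mark guest GET responses as cacheable by Varnish or another HTTP cache.
     */
    public function handle($request, Closure $next)
    {
        $response = $next($request);

        // Only cache GET requests for guests - anything per-user stays uncached
        if ($request->isMethod('get') && !$request->user()) {
            $response->header('Cache-Control', 'public, max-age=300');
        }

        return $response;
    }
}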

ORM vs raw queries is a drop in the ocean

There will always be some overhead from using any ORM. However, this is nearly always so minor as to be a non-issue.

For example, while there might be some slight performance increase from writing raw SQL instead of using an ORM, it’s generally dwarfed by the cost of making the query in the first place. You can get a far, far bigger improvement in performance by efficiently caching the responses than by rewriting ORM queries in raw SQL.

An ORM does make certain types of slow, inefficient queries more likely, as well as making "hidden" queries (such as in Laravel when it fetches the user from the session), but that's something that can be resolved by using a profiler like Clockwork to identify the slow or unnecessary queries and refactoring them. Most ORMs have tools to handle things like the N+1 problem - for instance, Eloquent has the with() method to eager-load related tables, which is generally a lot more convenient than writing the eager-loading query explicitly yourself.
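For instance, with Eloquent the difference looks something like this (the models are placeholders) - the eager-loaded version issues two queries rather than one per post:

<?php

use App\Post;

// N+1: one query for the posts, then one more per post for its comments
$posts = Post::all();
foreach ($posts as $post) {
    echo $post->comments->count();
}

// Eager-loaded: one query for the posts, one for all of their comments
$posts = Post::with('comments')->get();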

Using an ORM also comes with significant benefits to developers:

  • It’s generally easier to express relations between tables
  • It helps avoid the mental context switch between PHP and SQL
  • It does a lot of the work of sanitizing data for you
  • It helps make your application portable between different databases (eg so you can run your tests using an in-memory SQLite database but use MySQL in production)
  • Where you have logic that can’t be expressed using the ORM, it’s generally easy to drop down to writing raw SQL for that part

In my experience, querying the database is almost always the single biggest bottleneck (the only other thing that can be as bad is if you’re making requests to a slow third-party API), and any overhead from the ORM is a drop in the ocean in comparison. If you have a slow query in a web application, then rewriting it as a raw query is probably the very last thing you should consider doing, after:

  • Refactoring the query or queries to be more efficient/remove unnecessary queries
  • Making sure the appropriate indices are set on your database
  • Caching the responses

Caching in particular is quite hard to do - it’s difficult to come up with a reliable and reusable strategy for caching responses without serving stale content, but once you can do so, it makes a huge difference to application performance.

Writing all your queries as raw queries is a micro-optimisation - it’s a lot of work for not that much payback, and it’s hardly ever worth the bother. Even if you have a single, utterly horrendous query or set of queries that has a huge overhead, there are better ways to deal with it - under those circumstances I’d be inclined to create a stored procedure in a migration and call that rather than making the query directly.
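As a rough sketch of that approach, assuming a MySQL-family database and an entirely hypothetical procedure:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\DB;

class CreateExpensiveReportProcedure extends Migration
{
    public function up()
    {
        // Define the procedure once, in a migration
        DB::unprepared('
            CREATE PROCEDURE expensive_report(IN start_date DATE)
            BEGIN
                SELECT * FROM orders WHERE created_at >= start_date;
            END
        ');
    }

    public function down()
    {
        DB::unprepared('DROP PROCEDURE IF EXISTS expensive_report');
    }
}

It can then be called from application code with something like DB::select('CALL expensive_report(?)', [$date]).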

Summary

So to sum it up, if someone tells you you should use framework X because it's faster than framework Y, they might be somewhat right, but that misses the point completely. Benchmarks are so artificial as to be almost useless for determining how your production code will perform. Any half-decent framework will give you the tools you need to optimise performance, and your use of those tools will have a far, far more significant effect on the response time of your application than the choice of framework itself. I've never found a single MVC framework whose core is slow enough that I can't make it fast enough with the capabilities provided.

Also, considering that these days server hardware is dirt cheap (at time of writing US$5 gets you a Digital Ocean droplet with 1GB of RAM for a month), whereas developers are far, far more expensive, it's more cost-effective to optimise for the developer's time, not server time, so it makes sense to pick a framework that makes you productive, not just one that makes the application fast. That's no excuse for slow, shitty applications, but when all else fails, spinning up additional servers is a far more cost-effective solution than spending days on end rewriting your entire application in a different framework that benchmarks suggest might shave off a few milliseconds.

22nd January 2018 12:00 pm

Deploying Your Laravel Application With Deployer

Deployment processes have a nasty tendency to be a mish-mash of cobbled-together scripts or utilities in many web shops, with little or no consistency in practice between them. As a result, it’s all too easy for even the most experienced developer to mess up a deployment.

I personally have used all kinds of bodged-together solutions. For a while I used Envoy scripts to deploy my Laravel apps, but then an issue with the SSH library in PHP 7 made that impractical. Then I adopted Fabric, which I'd used before for deploying Django apps and which works fine for deploying PHP apps too, but it wasn't much more sophisticated than using shell scripts for deployment purposes. There are third-party services like Deploybot, but these are normally quite expensive for what they are.

A while back I heard of Deployer, but I didn't have the opportunity to try it until recently on a personal project, as I was working somewhere that had its own in-house deployment process. It's a PHP-specific deployment tool with recipes for deploying applications built with various frameworks and CMSes, including Laravel, Symfony, CodeIgniter and Drupal.

Installing Deployer

Deployer is installed as a .phar file, much like Composer:

$ curl -LO https://deployer.org/deployer.phar
$ mv deployer.phar /usr/local/bin/dep
$ chmod +x /usr/local/bin/dep

With that done, you should be able to run the following command in your project’s directory to create a Deployer script:

$ dep init

In response, you should see a list of project types:

Welcome to the Deployer config generator
This utility will walk you through creating a deploy.php file.
It only covers the most common items, and tries to guess sensible defaults.
Press ^C at any time to quit.
Please select your project type [Common]:
[0] Common
[1] Laravel
[2] Symfony
[3] Yii
[4] Yii2 Basic App
[5] Yii2 Advanced App
[6] Zend Framework
[7] CakePHP
[8] CodeIgniter
[9] Drupal
>

Here I chose Laravel as I was deploying a Laravel project. I was then prompted for the repository URL - this will be filled in with the origin remote if the current folder is already a Git repository:

Repository [git@gitlab.com:Group/Project.git]:
>

You’ll also see a message about contributing anonymous usage data. After answering this, the file deploy.php will be generated:

<?php
namespace Deployer;

require 'recipe/laravel.php';

// Configuration

set('repository', 'git@gitlab.com:Group/Project.git');
set('git_tty', true); // [Optional] Allocate tty for git on first deployment
add('shared_files', []);
add('shared_dirs', []);
add('writable_dirs', []);

// Hosts

host('project.com')
    ->stage('production')
    ->set('deploy_path', '/var/www/project.com');

host('beta.project.com')
    ->stage('beta')
    ->set('deploy_path', '/var/www/project.com');

// Tasks

desc('Restart PHP-FPM service');
task('php-fpm:restart', function () {
    // The user must have rights for restart service
    // /etc/sudoers: username ALL=NOPASSWD:/bin/systemctl restart php-fpm.service
    run('sudo systemctl restart php-fpm.service');
});
after('deploy:symlink', 'php-fpm:restart');

// [Optional] if deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');

// Migrate database before symlink new release.
before('deploy:symlink', 'artisan:migrate');

By default it has two hosts, beta and production, and you can refer to them by these names. You can also add or remove hosts, and amend the existing ones. Note the deploy_path setting as well - this determines where on the server the application will be deployed.

Note that it’s set up to expect the server to be using PHP-FPM and Nginx by default, so if you’re using Apache you may need to amend the command to restart the server. Also, note that if like me you’re using PHP 7 on a distro like Debian that also has PHP 5 around, you’ll probably need to change the references to php-fpm as follows:

desc('Restart PHP-FPM service');
task('php-fpm:restart', function () {
    // The user must have rights to restart the service
    // /etc/sudoers: username ALL=NOPASSWD:/bin/systemctl restart php7.0-fpm.service
    run('sudo systemctl restart php7.0-fpm.service');
});
after('deploy:symlink', 'php-fpm:restart');

You will also need to make sure the acl package is installed - on Debian and Ubuntu you can install it as follows:

$ sudo apt-get install acl

Now, the recipe for deploying a Laravel app will include the following:

  • Pulling from the Git remote
  • Updating any Composer dependencies to match composer.json
  • Running the migrations
  • Optimizing the application

In addition, one really great feature Deployer offers is rollbacks. Rather than checking out your application directly into the project root you specify, it numbers each release and deploys it in a separate folder, before symlinking that folder to the project root as current. That way, if a release cannot be deployed successfully, rather than leaving your application in an unfinished state, Deployer will symlink the previous version so that you still have a working version of your application.

If you have configured Deployer for that project, you can deploy using the following command where production is the name of the host you’re deploying to:

$ dep deploy production

The output will look something like this:

✔ Executing task deploy:prepare
✔ Executing task deploy:lock
✔ Executing task deploy:release
➤ Executing task deploy:update_code
Counting objects: 761, done.
Compressing objects: 100% (313/313), done.
Writing objects: 100% (761/761), done.
Total 761 (delta 384), reused 757 (delta 380)
Connection to linklater.shellshocked.info closed.
✔ Ok
✔ Executing task deploy:shared
✔ Executing task deploy:vendors
✔ Executing task deploy:writable
✔ Executing task artisan:storage:link
✔ Executing task artisan:view:clear
✔ Executing task artisan:cache:clear
✔ Executing task artisan:config:cache
✔ Executing task artisan:optimize
✔ Executing task artisan:migrate
✔ Executing task deploy:symlink
✔ Executing task php-fpm:restart
✔ Executing task deploy:unlock
✔ Executing task cleanup
✔ Executing task success
Successfully deployed!

As you can see, we first of all lock the application and pull the latest version from the Git remote. Next we copy the files shared between releases (eg the .env file, the storage/ directory etc), update the dependencies, and make sure the permissions are correct. Next we link the storage, clear all the cached content, optimize our app, and migrate the database, before we set up the symlink. Finally we restart the web server and unlock the application.

In the event you discover a problem after deploy and need to rollback manually, you can do so with the following command:

$ dep rollback production

That makes it easy to ensure that in the event of something going wrong, you can quickly switch back to an earlier version with zero downtime.

Deployer has made deployments a lot less painful for me than any other solution I’ve tried. The support for rollbacks means that if something goes wrong it’s trivial to switch back to an earlier revision.

12th January 2018 1:16 pm

Creating a Caching User Provider for Laravel

If you have a Laravel application that requires users to log in and you use Clockwork or Laravel DebugBar to examine the queries that take place, you’ll probably notice a query that fetches the user model occurs quite a lot. This is because the user’s ID gets stored in the session, and is then used to retrieve the model.

This query is a good candidate for caching because not only is that query being made often, but it’s also not something that changes all that often. If you’re careful, it’s quite easy to set your application up to cache the user without having to worry about invalidating the cache.

Laravel allows you to define your own user providers in order to fetch the user’s details. These must implement Illuminate\Contracts\Auth\UserProvider and must return a user model from the identifier provided. Out of the box it comes with two implementations, Illuminate\Auth\EloquentUserProvider and Illuminate\Auth\DatabaseUserProvider, with the former being the default. Our caching user provider can extend the Eloquent one as follows:

<?php

namespace App\Auth;

use Illuminate\Auth\EloquentUserProvider;
use Illuminate\Contracts\Cache\Repository;
use Illuminate\Contracts\Hashing\Hasher as HasherContract;

class CachingUserProvider extends EloquentUserProvider
{
    /**
     * The cache instance.
     *
     * @var Repository
     */
    protected $cache;

    /**
     * Create a new caching user provider.
     *
     * @param \Illuminate\Contracts\Hashing\Hasher $hasher
     * @param string $model
     * @param Repository $cache
     * @return void
     */
    public function __construct(HasherContract $hasher, $model, Repository $cache)
    {
        $this->model = $model;
        $this->hasher = $hasher;
        $this->cache = $cache;
    }

    /**
     * Retrieve a user by their unique identifier.
     *
     * @param mixed $identifier
     * @return \Illuminate\Contracts\Auth\Authenticatable|null
     */
    public function retrieveById($identifier)
    {
        // Cache the user for 60 minutes, tagged and keyed so it can be flushed later
        return $this->cache->tags($this->getModel())->remember('user_by_id_'.$identifier, 60, function () use ($identifier) {
            return parent::retrieveById($identifier);
        });
    }
}

Note that we override the constructor to accept a cache instance as well as the other arguments. We also override the retrieveById() method to wrap a call to the parent’s implementation inside a callback that caches the response. I usually tag anything I cache with the model name, but if you need to use a cache backend that doesn’t support tagging this may not be an option. Our cache key also includes the identifier so that it’s unique to that user.

We then need to add our user provider to the auth service provider:

<?php

namespace App\Providers;

use Illuminate\Support\Facades\Gate;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;
use App\Auth\CachingUserProvider;
use Illuminate\Support\Facades\Auth;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * Register any authentication / authorization services.
     *
     * @return void
     */
    public function boot()
    {
        $this->registerPolicies();

        Auth::provider('caching', function ($app, array $config) {
            return new CachingUserProvider(
                $app->make('Illuminate\Contracts\Hashing\Hasher'),
                $config['model'],
                $app->make('Illuminate\Contracts\Cache\Repository')
            );
        });
    }
}

Note here that we call this provider caching, and we pass it the hasher, the model name, and an instance of the cache. Then, we need to update config/auth.php to use this provider:

'providers' => [
    'users' => [
        'driver' => 'caching',
        'model' => App\Eloquent\Models\User::class,
    ],
],

The only issue now is that our user models will continue to be cached, even when they are updated. To be able to flush the cache, we can create a model event that fires whenever the user model is updated:

<?php

namespace App\Eloquent\Models;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;
use App\Events\UserAmended;

class User extends Authenticatable
{
    use Notifiable;

    protected $dispatchesEvents = [
        'saved' => UserAmended::class,
        'deleted' => UserAmended::class,
        'restored' => UserAmended::class,
    ];
}

This will fire the UserAmended event when a user model is saved (covering both creation and updates), deleted or restored. Then we can define that event:

<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Queue\SerializesModels;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use App\Eloquent\Models\User;

class UserAmended
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    /**
     * The user model that was amended.
     *
     * @var User
     */
    public $model;

    /**
     * Create a new event instance.
     *
     * @return void
     */
    public function __construct(User $model)
    {
        $this->model = $model;
    }
}

Note our event contains an instance of the user model. Then we set up a listener to do the work of clearing the cache:

<?php

namespace App\Listeners;

use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;
use App\Events\UserAmended;
use Illuminate\Contracts\Cache\Repository;

class ClearUserId
{
    /**
     * The cache instance.
     *
     * @var Repository
     */
    protected $cache;

    /**
     * Create the event listener.
     *
     * @return void
     */
    public function __construct(Repository $cache)
    {
        $this->cache = $cache;
    }

    /**
     * Handle the event.
     *
     * @param UserAmended $event
     * @return void
     */
    public function handle(UserAmended $event)
    {
        // Forget the cached entry for this user, using the same tag and key as the provider
        $this->cache->tags(get_class($event->model))->forget('user_by_id_'.$event->model->id);
    }
}

Here, we get the user model’s class again, and clear the cache entry for that user model.

Finally, we hook up the event and listener in the event service provider:

<?php

namespace App\Providers;

use Illuminate\Support\Facades\Event;
use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider;

class EventServiceProvider extends ServiceProvider
{
    /**
     * The event listener mappings for the application.
     *
     * @var array
     */
    protected $listen = [
        'App\Events\UserAmended' => [
            'App\Listeners\ClearUserId',
        ],
    ];

    /**
     * Register any events for your application.
     *
     * @return void
     */
    public function boot()
    {
        parent::boot();

        //
    }
}

With that done, our user should be cached after the first load, and flushed when the model is amended.

Handling eager-loaded data

It may be that you’re pulling in additional data from the user model in your application, such as roles, permissions, or a separate profile model. Under those circumstances it makes sense to treat that data in the same way by eager-loading it along with your user model.

<?php

namespace App\Auth;

use Illuminate\Auth\EloquentUserProvider;
use Illuminate\Contracts\Cache\Repository;
use Illuminate\Contracts\Hashing\Hasher as HasherContract;

class CachingUserProvider extends EloquentUserProvider
{
    /**
     * The cache instance.
     *
     * @var Repository
     */
    protected $cache;

    /**
     * Create a new caching user provider.
     *
     * @param \Illuminate\Contracts\Hashing\Hasher $hasher
     * @param string $model
     * @param Repository $cache
     * @return void
     */
    public function __construct(HasherContract $hasher, $model, Repository $cache)
    {
        $this->model = $model;
        $this->hasher = $hasher;
        $this->cache = $cache;
    }

    /**
     * Retrieve a user by their unique identifier.
     *
     * @param mixed $identifier
     * @return \Illuminate\Contracts\Auth\Authenticatable|null
     */
    public function retrieveById($identifier)
    {
        return $this->cache->tags($this->getModel())->remember('user_by_id_'.$identifier, 60, function () use ($identifier) {
            // Eager-load the related data along with the user model itself
            $model = $this->createModel();

            return $model->newQuery()
                ->with('roles', 'permissions', 'profile')
                ->where($model->getAuthIdentifierName(), $identifier)
                ->first();
        });
    }
}

Because we need to amend the query itself, we can’t just defer to the parent implementation like we did above and must instead copy it over and amend it to eager-load the data.

You’ll also need to set up model events to clear the cache whenever one of the related fields is updated, but it should be fairly straightforward to do so.

Summary

Fetching a user model (and possibly some relations) on every page load while logged in can be a bit much, and it makes sense to cache as much as you can without risking serving stale data. Using this technique you can potentially cache a lot of repetitive, unnecessary queries and make your application faster.

This technique will also work in cases where you’re using other methods of maintaining user state, such as JWT, as long as you’re making use of a guard for authentication purposes, since all of these guards will still be using the same user provider. In fact, I first used this technique on a REST API that used JWT for authentication, and it’s worked well in that case.

10th January 2018 10:07 pm

Adding OpenSearch Support to Your Site

For the uninitiated, OpenSearch is the technology that lets you enter a site’s URL, and then press Tab to start searching on that site - you can see it in action on this site. It’s really useful, and quite easy to implement if you know how.

OpenSearch relies on having a particular XML file available. Here’s the opensearch.xml file for this site:

<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns:moz="http://www.mozilla.org/2006/browser/search/"
                       xmlns="http://a9.com/-/spec/opensearch/1.1/">
    <ShortName>matthewdaly.co.uk</ShortName>
    <Description>Search matthewdaly.co.uk</Description>
    <InputEncoding>UTF-8</InputEncoding>
    <Url method="get" type="text/html"
         template="http://www.google.com/search?q={searchTerms}&amp;sitesearch=matthewdaly.co.uk"/>
</OpenSearchDescription>

In this case, as this site uses a static site generator I can’t really do the search on the site, so it’s handed off to a Google site-specific search, but the principle is the same. The three relevant fields are as follows:

  • ShortName - The short name of the site (this should usually just be the domain name)
  • Description - A human-readable description such as Search mysite.com
  • Url - Specifies the HTTP method that should be used to search (GET or POST), and a template for the URL. The search is automatically inserted where {searchTerms} appears

A more typical example of the Url field might be as follows:

<Url method="get" type="text/html"
template="http://www.example.com/search?q={searchTerms}"/>

Normally you will be pointing the template to your site’s own search page. Note that OpenSearch doesn’t actually do any searching itself - it just tells your browser where to send your search request.
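For a Laravel application, the template would normally point at whatever search route you already have. As a minimal sketch, where the Post model and the view name are placeholders:

<?php

use Illuminate\Http\Request;

// routes/web.php - the endpoint the OpenSearch template points at
Route::get('/search', function (Request $request) {
    $query = $request->query('q');

    // Placeholder search logic - swap in your own
    $results = \App\Post::where('title', 'like', '%'.$query.'%')->get();

    return view('search', ['query' => $query, 'results' => $results]);
});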

With that file saved as opensearch.xml, all you have to do is add it to the <head> in your HTML:

<link href="/opensearch.xml" rel="search" title="Search title" type="application/opensearchdescription+xml">

And that should be all you need to do to get OpenSearch working.

For Laravel sites, I've recently created a package for implementing OpenSearch that should help as well. With that, you need only install the package and set the fields in the config to point at your existing search page to get OpenSearch working.
