Matthew Daly's Blog

I'm a web developer in Norfolk. This is my blog...

12th April 2018 11:57 pm

Making Wordpress Less Shit

I’m not going to sugarcoat it. As a developer, I think Wordpress is shit, and I’m not alone in that opinion. Its code base dates from a time before many of the developments of the last few years that have hugely improved PHP as a language, as well as the surrounding ecosystem such as Composer and the PHP-FIG’s PSR standards, and it’s likely it couldn’t adopt many of those without making backward-incompatible changes that would affect its own ecosystem of plugins and themes. It actively forces you to write code that is far less elegant and efficient than what you might write with a proper framework such as Laravel, and the quality of many of the plugins and themes around is dire.

Unfortunately, it’s also difficult to avoid. Over a quarter of all websites run Wordpress, and most developers will have to work with it at some point in their careers. However, there are ways that you can improve your experience when working with Wordpress somewhat. In this post I’m going to share some methods you can use to make Wordpress less painful to use.

This isn’t a post about the obvious things like “Use the most recent version of PHP you can”, “Use SSL”, “Install this plugin”, “Use Vagrant/Lando” etc - I’m assuming you already know stuff like that for bog standard Wordpress development. Nor is it about actually developing Wordpress plugins or themes. Instead, this post is about bringing your Wordpress development workflow more into line with how you develop with MVC frameworks like Laravel, so that you have a better experience working with and maintaining Wordpress sites. We can’t solve the fundamental issues with Wordpress, but we can take some steps to make it easier to work with.

Use Bedrock

Bedrock is still Wordpress, but reorganized so that:

  • The Wordpress core, plugins and themes can be managed with Composer for easier updates
  • The configuration can be done with a .env file that can be kept out of version control, rather than putting it in wp-config.php
  • The web root is isolated to limit access to the files

In short, it optimizes Wordpress for how modern developers work. Arguably that’s at the expense of site owners, since it makes it harder for non-developers to manage the site; however, for any Wordpress site that’s sufficiently complex to need development work done, that’s a trade-off worth making. I’ve been involved in projects where Wordpress was used alongside an MVC framework for some custom functionality, and in my experience updating plugins and themes caused a world of problems because version control would get out of sync, so managing them with Composer instead would have been a huge win.

Using Bedrock means that if you have a parent theme you use all the time, or custom plugins of your own, you can install them using Composer by adding the Git repositories to your composer.json, making it easier to re-use functionality you’ve already developed. It also makes recovery easier in the event of the site being compromised, because the files outside the vendor directory will be in version control, and you can delete the vendor directory and re-run composer install to replace the rest. By comparison, with a regular Wordpress install, if it’s compromised you can’t always be certain you’ve got all of the files that have been changed. Also, keeping Wordpress up to date becomes a simple matter of running composer update regularly, verifying it hasn’t broken anything, and then deploying it to production.

Bedrock uses WPackagist, which regularly scans the Wordpress Subversion repository for plugins and themes, so at least for plugins and themes published on the Wordpress site, it’s easy to install them. Paid plugins may be more difficult - I’d be inclined to put those in a private Git repository and install them from there, although I’d be interested to know if anyone else uses another method for that.
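For instance, pulling in a plugin from WPackagist is just another Composer dependency - a fragment like this in Bedrock’s composer.json is all it takes (the plugin shown is an arbitrary example):

```json
{
    "require": {
        "wpackagist-plugin/akismet": "^4.0"
    }
}
```

Themes work the same way under the `wpackagist-theme` vendor prefix.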

If you can’t use Bedrock, use WP CLI

If for any reason you can’t use Bedrock for a site, then have a look at WP CLI. On the server, you can use it to install and manage both plugins and themes, as well as the Wordpress core.

It’s arguably even more useful locally, as it can be used to generate scaffolding for plugins, themes (including child themes based on an existing theme), and components such as custom post types or taxonomies. In short, if you do any non-trivial amount of development with Wordpress you’ll probably find a use for it. Even if you can use Bedrock, you’re likely to find WP CLI handy for the scaffolding.
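For example, a few of the management and scaffolding commands look like this (the plugin and theme names are purely illustrative):

```shell
# Update the Wordpress core on the server
wp core update

# Install and activate a plugin from the Wordpress plugin directory
wp plugin install akismet --activate

# Generate scaffolding for a child theme based on an existing theme
wp scaffold child-theme my-child-theme --parent_theme=twentyseventeen

# Generate scaffolding for a custom post type
wp scaffold post-type book
```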

Upgrade the password encryption

I said this wouldn’t be about using a particular plugin, but this one is too important. Wordpress’s password hashing still relies on salted MD5, which is far too weak to be considered safe these days. Unfortunately, Wordpress still supports PHP versions as old as 5.2, and until it drops them it can’t really switch to something more secure.

wp-password-bcrypt overrides the password functionality of Wordpress to use Bcrypt, which is what modern PHP applications use. As a result, the hashes are considerably stronger. Given that Wordpress is a common target for hackers, it’s prudent to ensure your website is as secure as you can possibly make it.

If you use Bedrock, it uses this plugin by default, so it’s already taken care of for you.

Use a proper templating system

PHP is a weird hybrid of a programming language and a templating system. As such, it’s all too easy to wind up with too much logic in your view layer, so it’s a good idea to use a proper templating system if you can. Unfortunately, Wordpress doesn’t support that out of the box.

However, there are some third-party solutions for this. Sage uses Laravel’s Blade templating system (and also comes with Webpack preconfigured), while Timber lets you use Twig.

Use the Wordpress REST API for AJAX where you can

Version 4.7 of Wordpress introduced the Wordpress REST API, allowing the site’s data to be exposed via RESTful endpoints. As a result, it’s now possible to build more complex and powerful user interfaces for that data. For instance, if you were using Wordpress to build a site listing items for sale, you could create a single-page web app for the front end using React.js and Redux, and use the API to submit new items and display the existing listings.

I’m not a fan of the idea the Wordpress developers seem to have of trying to make it some kind of all-singing, all-dancing universal platform for the web, and the REST API seems to be part of that idea, but it does make it a lot easier than it was in the past to do something a bit out of the ordinary with Wordpress. In some cases it might be worth using Wordpress as the backend for a headless CMS, and the REST API makes that a practical approach. For simpler applications that just need to make a few AJAX calls, using the REST API is generally going to be more elegant and practical than any other approach to AJAX with Wordpress. It’s never going to perform as well or be as elegant as a custom-built REST API, but it’s definitely a step forward compared to the hoops you used to have to jump through to handle AJAX requests in Wordpress.
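As a quick illustration of the kind of data the REST API exposes, here’s a sketch of consuming the posts endpoint’s JSON. The payload is hardcoded and heavily truncated here; in a real application it would come from an HTTP request to /wp-json/wp/v2/posts:

```php
<?php

// A sample of the JSON structure returned by Wordpress's built-in
// /wp-json/wp/v2/posts endpoint (truncated to the fields used here).
$json = <<<'JSON'
[
    {
        "id": 1,
        "title": {"rendered": "Hello world!"},
        "link": "https://example.com/hello-world/"
    }
]
JSON;

// In a real application this payload would come from an HTTP client
// such as Guzzle, or fetch() in a JavaScript front end.
$posts = json_decode($json, true);

foreach ($posts as $post) {
    echo $post['title']['rendered'] . "\n";
}
```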

Summary

Wordpress is, and will remain for the foreseeable future, a pain in the backside to develop for compared to something like Laravel, and I remain completely mystified by the number of people who seem to think it’s the greatest thing since sliced bread. However, it is possible to make things better if you know how - it’s just that some of this stuff seems to be relatively obscure. In particular, discovering Bedrock is potentially game-changing because it makes it so much easier to keep the site under version control.

10th March 2018 3:10 pm

Using Stored Procedures in Your Web App

In the last few days I’ve done something I’ve never done before, namely written a stored procedure for a web app. Like most web developers, I know enough about SQL to be able to formulate some fairly complex queries, but I hadn’t really touched on control flow functions or stored procedures, and in my experience they tend to be the province of the dedicated database administrator, not us web devs, who will typically delegate more complex functionality to our application code.

In this case, there were a number of factors influencing my decision to use a stored procedure for this:

  • The application was a legacy application which had been worked on by developers of, shall we say, varying skill levels. As a result the database schema was badly designed, with no prospect of changing it without causing huge numbers of breakages
  • The query in question was used to generate a complex report that was quite time-consuming, therefore the optimisations from using a stored procedure were worthwhile.
  • The report required that data be grouped by a set of categories which were stored in a separate table, which meant the table had to be pivoted (transformed from rows to columns), resulting in an incredibly complex dynamic query that had to be constructed on the fly by concatenating different SQL strings. In PostgreSQL, this can be done fairly easily using the crosstab function, but MySQL doesn’t have native support for anything like this.
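To illustrate what that dynamic query construction looks like, here’s a minimal sketch in PHP of building a pivot query with one conditional aggregate per category. The table and column names are hypothetical, the category list is hardcoded where the real application read it from a table, and a production version would need to escape the category values properly:

```php
<?php

/**
 * Build a MySQL pivot query that turns rows of (item, category, amount)
 * into one column per category, using a SUM(CASE ...) expression
 * for each category.
 */
function buildPivotQuery(array $categories): string
{
    $columns = [];
    foreach ($categories as $category) {
        // One conditional aggregate column per category
        $columns[] = sprintf(
            "SUM(CASE WHEN category = '%s' THEN amount ELSE 0 END) AS `%s`",
            $category,
            $category
        );
    }
    return 'SELECT item, ' . implode(', ', $columns)
        . ' FROM sales GROUP BY item';
}

echo buildPivotQuery(['books', 'music']);
```

Concatenating strings like this quickly becomes unwieldy for a complex report, which is why moving it into a stored procedure was attractive.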

Historically, one issue with using stored procedures has been that it kept business logic out of the application code, meaning they are not stored in version control. However, most modern frameworks provide some support for migrations, and since they are intended to be used to make changes to the database, they are the obvious place to define the stored procedure. This particular application was built with an older framework that didn’t come with migrations, so we’d installed Phinx to handle those for us. Initially, I defined the stored procedure inside a migration that ran a raw query to create the stored procedure, as in this example:

public function up()
{
    $query = <<<EOF
CREATE PROCEDURE IF NOT EXISTS foo()
BEGIN
    SELECT * FROM foo;
END
EOF;
    $this->execute($query);
}

public function down()
{
    $this->execute('DROP PROCEDURE IF EXISTS foo');
}

Once this is done, you can then use your framework’s particular support for raw queries to call CALL foo() whenever your stored procedure needs to be executed.

However, we soon ran into an issue: it turns out mysqldump doesn’t export stored procedures by default, so there was a risk that anyone working on the code base might import the database from an SQL dump and not get the procedure. I’d used the Symfony Console component to create a simple command-line tool, reminiscent of Laravel’s Artisan, so I added a command to it that created the stored procedure, amended the migration to call that command, and placed a check at the point where the procedure was called, so that if it wasn’t defined the command would be run to create it. With those safeguards in place, a missing procedure shouldn’t be an issue in most cases.

Having now had experience using stored procedures in a web application, there are a number of issues they raise:

  • It’s hard to make queries flexible, whereas with something like Eloquent it’s straightforward to conditionally apply WHERE statements.
  • While storing them in migrations is a practical solution, if the database is likely to be imported rather than created from scratch during development it can be problematic.
  • They aren’t easily portable, not just between database management systems, but between different versions - the production server was using an older version of MySQL, and it failed to create the procedure. It’s therefore good practice for your migrations to check the procedure was created successfully and raise a noisy exception if it wasn’t.

Conversely, they do bring certain benefits:

  • For particularly complex transactions that don’t change, such as generating reports, they are a good fit since they reduce the amount of data that needs to be sent to the database and allow the query to be pre-optimised somewhat.
  • If a particular query is unusually expensive, is called often, and can’t be cached, it may improve performance to make it a stored procedure.
  • Doing a query in a for loop is usually a very big no-no. However, if there really is no way to avoid it (and this should almost never happen), it would make sense to try to do it in a stored procedure using SQL rather than in application code since that would minimise the overhead.
  • If multiple applications need to work with the same database, using stored procedures for queries in more than one application removes the need to reimplement or copy over the code for the query in the second application - they can just call the same procedure, and if it needs to be changed it need only be done once.

Honestly, I’m not sure I’ll ever come across another scenario where using a stored procedure in a web application would be beneficial, but it’s been very interesting delving into aspects of SQL that I don’t normally touch, and I’ve picked up some SQL constructs I hadn’t used before, such as GROUP_CONCAT() and CASE. With the widespread adoption of migrations in most frameworks, I think the argument that stored procedures keep application logic out of version control no longer holds any water - developers can generally be trusted to keep changes to the database structure in their migrations, and the same applies to stored procedures. Report generation seems to be the ideal use case, since it invariably involves complex queries that run regularly and don’t change often, and that’s where I’d be most likely to have cause to use them again.

25th February 2018 5:22 pm

Check Your Code Base Is PHP 7 Ready With PHP Compatibility

I’ve recently started a new job and as part of that I’m working on a rather substantial legacy code base. In fact, it was so legacy that it was still in Subversion - needless to say the very first thing I did was migrate it to Git. One of the jobs on our radar for this code base is to migrate it from PHP 5.4 to 5.6, and subsequently to PHP 7. I’ve been using it locally in 5.6 without issue so far, but I’ve also been looking around for an automated tool to help catch potential problems.

I recently discovered PHP Compatibility, a set of sniffs for PHP CodeSniffer that can be used to detect code that will be problematic in a particular PHP version. As I already use CodeSniffer extensively, it’s a good fit for my existing toolset.

To install it, add the following dependencies to your composer.json:

"require-dev": {
    "dealerdirect/phpcodesniffer-composer-installer": "^0.4.3",
    "squizlabs/php_codesniffer": "^2.5",
    "wimg/php-compatibility": "^8.1"
},

Then update your phpcs.xml to look something like this:

<ruleset name="PHP_CodeSniffer">
    <description>The coding standard for my app.</description>
    <file>./</file>
    <arg value="np"/>
    <rule ref="PSR2"/>
    <rule ref="PHPCompatibility"/>
    <config name="testVersion" value="7.2-"/>
</ruleset>

As you can see, it’s possible to use it alongside existing coding standards such as PSR2. Note the testVersion config key - its value specifies the PHP version(s) to test against, and the trailing hyphen makes it an open-ended range, so 7.2- means PHP 7.2 and above.
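With that configuration in place, the compatibility sniffs run as part of your normal CodeSniffer checks:

```shell
# Reports violations of both PSR2 and the compatibility sniffs
# for the files specified in phpcs.xml
vendor/bin/phpcs
```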

Obviously, the very best way to guard against breakages in newer versions of PHP is to have a comprehensive test suite, but legacy code bases by definition tend to have little or no tests. By using PHP Compatibility, you should at least be able to catch syntax problems without having to audit the code base manually.

25th February 2018 3:50 pm

Unit Testing Your Laravel Controllers

In my previous post I mentioned some strategies for refactoring Laravel controllers to move unnecessary functionality elsewhere. However, I didn’t cover testing them. In this post I will demonstrate the methodology I use for testing Laravel controllers.

Say we have the following method in a controller:

public function store(Request $request)
{
    $document = new Document($request->only([
        'title',
        'text',
    ]));
    $document->save();
    event(new DocumentCreated($document));
    return redirect('/');
}

This controller method does three things:

  • Return a response
  • Create a model instance
  • Fire an event

Our tests therefore need to pass it all its external dependencies and check it carries out the required actions.

First we fake the event facade:

    Event::fake();

Next, we create an instance of Illuminate\Http\Request to represent the HTTP request passed to the controller:

$request = Request::create('/store', 'POST', [
    'title' => 'foo',
    'text' => 'bar',
]);

If you’re using a custom form request class, you should instantiate that in exactly the same way.

Then, instantiate the controller, and call the method, passing it the request object:

$controller = new MyController();
$response = $controller->store($request);

You can then test the response from the controller. You can test the status code like this:

    $this->assertEquals(302, $response->getStatusCode());

You may also need to check the content of the response matches what you expect to see, which you can retrieve with $response->getContent().

Next, retrieve the newly created model instance, and verify it exists:

$document = Document::where('title', 'foo')->first();
$this->assertNotNull($document);

You can also use assertEquals() to check the attributes on the model if appropriate. Finally, you check the event was fired:

Event::assertDispatched(DocumentCreated::class, function ($event) use ($document) {
    return $event->document->id === $document->id;
});

This test should not concern itself with any functionality triggered by the event, only that the event gets triggered. The event should have separate unit tests in which the event is triggered, and then the test verifies it carried out the required actions.

Technically, these don’t quite qualify as unit tests because they hit the database, but they should cover the controller adequately. To make them true unit tests, you’d need to implement the repository pattern for the database queries rather than using Eloquent directly, then mock the repository, so you can assert that the mocked repository receives the right data and have it return the expected response.

Here is how you might do that with Mockery:

$mock = Mockery::mock('App\Contracts\Repositories\Document');
$mock->shouldReceive('create')->with([
    'title' => 'foo',
    'text' => 'bar',
])->once()->andReturn(true);
$controller = new MyController($mock);

As long as your controllers are kept as small as possible, it’s generally not too hard to test them. Unfortunately, fat controllers become almost impossible to test, which is another good reason to avoid them.

18th February 2018 6:10 pm

Put Your Laravel Controllers on a Diet

MVC frameworks are a tremendously useful tool for modern web development. They offer easy ways to carry out common tasks, and enforce a certain amount of structure on a project.

However, that doesn’t mean using them makes you immune to bad practices, and it’s quite easy to wind up falling into certain anti-patterns. Probably the most common is the Fat Controller.

What is a fat controller?

When I first started out doing professional web development, CodeIgniter 2 was the first MVC framework I used. While I hadn’t used it before, I was familiar with the general concept of MVC. However, I didn’t appreciate that when referring to the model layer as a place for business logic, that wasn’t necessarily the same thing as the database models.

As such, my controllers became a dumping ground for anything that didn’t fit into the models. If it didn’t interact with the database, I put it in the controller. They quickly became bloated, with many methods running to hundreds of lines of code. The code base became hard to understand, and when I was under the gun on projects I found myself copying and pasting functionality between controllers, making the situation even worse. Not having an experienced senior developer on hand to offer criticism and advice, it was a long time before I realised that this was a problem or how to avoid it.

Why are fat controllers bad?

Controllers are meant to be simple glue code that receives requests and returns responses. Anything else should be handed off to the model layer. As noted above, however, that’s not the same as putting it in the models. Your model layer can consist of many different classes, not just your Eloquent models, and you should not fall into the trap of thinking your application should consist of little more than models, views and controllers.

Placing business logic in controllers can be bad for many reasons:

  • Code in controllers can be difficult to write automated tests for
  • Any logic in a controller method may need to be repeated elsewhere if the same functionality is needed for a different route, unless it’s in a private or protected method that is called from elsewhere, in which case it’s very hard to test in isolation
  • Placing it in the controller makes it difficult to pull out and re-use on a different project
  • Making your controller methods too large makes them complex and hard to follow

As a general rule of thumb, I find that around ten lines of code in any one controller method is where it starts getting a bit much. That said, it’s not a hard and fast rule, and for very small projects it may not be worthwhile. But if a project is large and needs to be maintained for any reasonable period of time, you should take the trouble to ensure your controllers are as skinny as is practical.

Nowadays Laravel is my go-to framework and I’ve put together a number of strategies for avoiding the fat controller anti-pattern. Here’s some examples of how I would move various parts of the application out of my controllers.

Validation

Laravel has a nice, easy way of getting validation out of the controllers. Just create a custom form request for your input data, as in this example:

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class CreateRequest extends FormRequest
{
    /**
     * Determine if the user is authorized to make this request.
     *
     * @return bool
     */
    public function authorize()
    {
        return true;
    }

    /**
     * Get the validation rules that apply to the request.
     *
     * @return array
     */
    public function rules()
    {
        return [
            'email' => 'required|email'
        ];
    }
}

Then type-hint the form request in the controller method, instead of Illuminate\Http\Request:

<?php

namespace App\Http\Controllers;

use App\Http\Requests\CreateRequest;

class HomeController extends Controller
{
    public function store(CreateRequest $request)
    {
        // Process request here..
    }
}

Database access and caching

For non-trivial applications I normally use decorated repositories to handle caching and database access in one place. That way my caching and database layers are abstracted out into separate classes, and caching is nearly always handled seamlessly without having to do much work.

Complex object creation logic

If I have a form or API endpoint that needs to:

  • Create more than one object
  • Transform the incoming data in some way
  • Or is non-trivial in any other way

I will typically pull it out into a separate persister class. First, you should create an interface for this persister class:

<?php

namespace App\Contracts\Persisters;

use Illuminate\Database\Eloquent\Model;

interface Foo
{
    /**
     * Create a new Model
     *
     * @param array $data
     * @return Model
     */
    public function create(array $data);

    /**
     * Update the given Model
     *
     * @param array $data
     * @param Model $model
     * @return Model
     */
    public function update(array $data, Model $model);
}

Then create the persister class itself:

<?php

namespace App\Persisters;

use Illuminate\Database\Eloquent\Model;
use App\Contracts\Repositories\Foo as Repository;
use App\Contracts\Persisters\Foo as FooContract;
use Illuminate\Database\DatabaseManager;
use Carbon\Carbon;

class Foo implements FooContract
{
    protected $repository;

    protected $db;

    public function __construct(DatabaseManager $db, Repository $repository)
    {
        $this->db = $db;
        $this->repository = $repository;
    }

    /**
     * Create a new Model
     *
     * @param array $data
     * @return Model
     */
    public function create(array $data)
    {
        $this->db->beginTransaction();
        $model = $this->repository->create([
            'date' => Carbon::parse($data['date'])->toDayDateTimeString(),
        ]);
        $this->db->commit();
        return $model;
    }

    /**
     * Update the given Model
     *
     * @param array $data
     * @param Model $model
     * @return Model
     */
    public function update(array $data, Model $model)
    {
        $this->db->beginTransaction();
        $updatedModel = $this->repository->update([
            'date' => Carbon::parse($data['date'])->toDayDateTimeString(),
        ], $model);
        $this->db->commit();
        return $updatedModel;
    }
}

Then you can set up the persister in a service provider so that type-hinting the interface returns the persister:

<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        //
    }

    /**
     * Register any application services.
     *
     * @return void
     */
    public function register()
    {
        $this->app->bind(
            'App\Contracts\Persisters\Foo',
            'App\Persisters\Foo'
        );
    }
}

This approach means that complex logic, such as creating multiple related objects, can be handled in a consistent way, even if it needs to be called from multiple places.

Triggering actions as a result of something

Events are tailor-made for this use case, and Laravel documents them very well, so I won’t repeat it here. Suffice to say, if something needs to happen, but the response sent by the application doesn’t necessarily depend on it returning something immediately, then it’s probably worth considering making it an event. If it’s going to be called from multiple places, it’s even more worthwhile.

For instance, if you have a contact form, it’s worth taking the time to create an event for when a new contact is received, and handle processing the contact within the listener for that event. Doing so also means you can queue the listener and have it handled outside the context of the request, so that the application responds to the user more quickly. If you’re sending an acknowledgement email for a new user registration, you don’t need to wait for that email to be sent before you return the response, so queueing it can improve response times.

Interacting with third-party services

If you have some code that needs to interact with a third-party service or API, it can get quite complex, especially if you need to process the content in some way. It therefore makes sense to pull that functionality out into a separate class.

For instance, say you have some code in your controller that uses an HTTP client to fetch some data from a third-party API and display it in the view:

public function index(Request $request)
{
    $response = $this->client->get('http://api.com/api/items');
    $data = json_decode($response->getBody()->getContents(), true);
    $items = [];
    foreach ($data as $k => $v) {
        $items[] = [
            'name' => $v['name'],
            'description' => $v['data']['description'],
            'tags' => $v['data']['metadata']['tags']
        ];
    }
    return view('template', [
        'items' => $items
    ]);
}

This is a very small example (and a lot simpler than most real-world instances of this issue), but it illustrates the principle. Not only does this code bloat the controller, it might also be used elsewhere in the application, and we don’t want to copy and paste it elsewhere - therefore it makes sense to extract it to a service class.

<?php

namespace App\Services;

use GuzzleHttp\ClientInterface as GuzzleClient;

class Api
{
    protected $client;

    public function __construct(GuzzleClient $client)
    {
        $this->client = $client;
    }

    public function fetch()
    {
        $response = $this->client->get('http://api.com/api/items');
        $data = json_decode($response->getBody()->getContents(), true);
        $items = [];
        foreach ($data as $k => $v) {
            $items[] = [
                'name' => $v['name'],
                'description' => $v['data']['description'],
                'tags' => $v['data']['metadata']['tags']
            ];
        }
        return $items;
    }
}

Our controller can then type-hint the service and refactor that functionality out of the method:

public function __construct(\App\Services\Api $api)
{
    $this->api = $api;
}

public function index(Request $request)
{
    $items = $this->api->fetch();
    return view('template', [
        'items' => $items
    ]);
}

Including common variables in the view

If data is needed in more than one view (eg showing the user’s name on every page when logged in), consider using view composers to retrieve it rather than fetching it in the controller. That way you don’t have to repeat that logic in more than one place.

Formatting content for display

Logically this belongs in the view layer, so you should write a helper to handle things like formatting dates. For more complex stuff, such as formatting HTML, you should be doing this in Blade (or another templating system, if you’re using one) - for instance, when generating an HTML table, you should consider using a view partial to loop through the rows. For particularly tricky functionality, you have the option of writing a custom Blade directive.
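As a sketch of the kind of date-formatting helper I mean (the function name and format string are just examples):

```php
<?php

// A trivial view helper to keep date formatting out of controllers
// and templates. The format string is an arbitrary choice.
function format_date(string $date): string
{
    return (new DateTime($date))->format('jS F Y');
}

echo format_date('2018-02-25'); // 25th February 2018
```

In a Laravel project you’d more likely lean on Carbon for this, but the principle - a single, well-named helper owning the formatting - is the same.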

The same applies for rendering other content - for rendering JSON you should consider using API resources or Fractal to get any non-trivial logic for your API responses out of the controller. Blade templates can also work for non-HTML content such as XML.

Anything else…

These examples are largely to get you started, and there will be occasions where something doesn’t fall into any of the above categories. However, the same principle applies. Your controllers should stick to just receiving requests and sending responses, and anything else should normally be deferred to other classes.

Fat controllers make developers’ lives very difficult, and if you want your code base to be easily maintainable, you should be willing to refactor them ruthlessly. Any functionality you can pull out of the controller becomes easier to reuse and test, and as long as you name your classes and methods sensibly, they’re easier to understand.


About me

I'm a web and mobile app developer based in Norfolk. My skillset includes Python, PHP and Javascript, and I have extensive experience working with CodeIgniter, Laravel, Django, Phonegap and Angular.js.