Matthew Daly's Blog

I'm a web developer in Norfolk. This is my blog...

10th March 2018 3:10 pm

Using Stored Procedures in Your Web App

In the last few days I’ve done something I’ve never done before, namely written a stored procedure for a web app. Like most web developers, I know enough about SQL to be able to formulate some fairly complex queries, but I hadn’t really touched on control flow functions or stored procedures, and in my experience they tend to be the province of the dedicated database administrator, not us web devs, who will typically delegate more complex functionality to our application code.

In this case, there were a number of factors influencing my decision to use a stored procedure for this:

  • The application was a legacy application which had been worked on by developers of, shall we say, varying skill levels. As a result, the database schema was badly designed, with no realistic prospect of changing it without causing a huge number of breakages.
  • The query in question was used to generate a complex report that was quite time-consuming, therefore the optimisations from using a stored procedure were worthwhile.
  • The report required that data be grouped by a set of categories which were stored in a separate table, which meant the table had to be pivoted (transformed from rows to columns), resulting in an incredibly complex dynamic query that had to be constructed on the fly by concatenating different SQL strings. In PostgreSQL, this can be done fairly easily using the crosstab function, but MySQL doesn’t have native support for anything like this.

Historically, one issue with using stored procedures has been that they move business logic out of the application code, meaning it isn’t stored in version control. However, most modern frameworks provide some support for migrations, and since migrations are intended to make changes to the database, they are the obvious place to define a stored procedure. This particular application was built with an older framework that didn’t come with migrations, so we’d installed Phinx to handle them for us. Initially, I defined the stored procedure inside a migration that ran a raw query to create it, as in this example:

public function up()
{
    $query = <<<EOF
CREATE PROCEDURE IF NOT EXISTS foo()
BEGIN
    SELECT * FROM foo;
END
EOF;
    $this->execute($query);
}

public function down()
{
    $this->execute('DROP PROCEDURE IF EXISTS foo');
}

Once this is done, you can then use your framework’s particular support for raw queries to run CALL foo() whenever your stored procedure needs to be executed.
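For instance, here’s a minimal sketch using plain PDO (assuming $pdo is an existing connection - substitute your framework’s own raw query method as appropriate):

// Execute the stored procedure and fetch its result set
$statement = $pdo->query('CALL foo()');
$results = $statement->fetchAll(PDO::FETCH_ASSOC);
// You may need to call $statement->closeCursor() before issuing further queries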

However, we soon ran into an issue. It turns out mysqldump doesn’t export stored procedures by default (you have to pass the --routines flag), so there was a risk that anyone working on the code base might import the database from an SQL dump rather than running the migrations and wind up without the stored procedure. I’d used the Symfony Console component to create a simple command-line tool, reminiscent of Laravel’s Artisan, so I used that to create a command to set up the stored procedure, amended the migration to call that command, and placed a check in the application where the procedure was called so that if it was not defined, the command would be run and the procedure created. With that check in place, a missing procedure shouldn’t be an issue in most cases.
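As a rough sketch, such a check might look something like this - the command name, and the $pdo and $console variables (a PDO connection and the Symfony Console application respectively), are illustrative rather than taken from the real code base:

use Symfony\Component\Console\Input\StringInput;
use Symfony\Component\Console\Output\NullOutput;

// Check whether the procedure exists in the current database
$statement = $pdo->prepare(
    'SELECT COUNT(*) FROM information_schema.ROUTINES
     WHERE ROUTINE_SCHEMA = DATABASE() AND ROUTINE_NAME = ?'
);
$statement->execute(['foo']);

if ((int) $statement->fetchColumn() === 0) {
    // The procedure is missing, e.g. because the database was imported from a
    // dump taken without --routines, so run the command that creates it
    $console->run(new StringInput('create-procedure'), new NullOutput());
}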

Having now had experience using stored procedures in a web application, there are a number of issues they raise:

  • It’s hard to make queries flexible, whereas with something like Eloquent it’s straightforward to conditionally apply WHERE statements.
  • While storing them in migrations is a practical solution, if the database is likely to be imported rather than created from scratch during development it can be problematic.
  • They aren’t easily portable, not just between database management systems, but between different versions - the production server was using an older version of MySQL, and it failed to create the procedure. It’s therefore good practice for your migrations to check that the procedure was created successfully and raise a noisy exception if it wasn’t.

Conversely, they do bring certain benefits:

  • For particularly complex transactions that don’t change, such as generating reports, they are a good fit since they reduce the amount of data that needs to be sent to the database and allow the query to be pre-optimised somewhat.
  • If a particular query is unusually expensive, is called often, and can’t be cached, it may improve performance to make it a stored procedure.
  • Doing a query in a for loop is usually a very big no-no. However, if there really is no way to avoid it (and this should almost never happen), it would make sense to do it in a stored procedure using SQL rather than in application code, since keeping the loop on the database server avoids a round trip per iteration.
  • If multiple applications need to work with the same database, using stored procedures for queries in more than one application removes the need to reimplement or copy over the code for the query in the second application - they can just call the same procedure, and if it needs to be changed it need only be done once.

Honestly, I’m not sure I’m ever likely to come across another scenario where using a stored procedure in a web application would be beneficial, but it’s been very interesting delving into aspects of SQL that I don’t normally touch, and I’ve picked up some constructs I hadn’t used before, such as GROUP_CONCAT() and CASE. With the widespread adoption of migrations in most frameworks, I don’t think the argument that stored procedures keep application logic out of version control holds water any more: developers can generally be trusted to record changes to the database structure in their migrations rather than making ad-hoc changes, and the same applies to stored procedures. Report generation seems to be the ideal use case, since it invariably involves complex queries that run regularly and don’t change often, and that’s where I’d be most likely to have cause to use them again.

25th February 2018 5:22 pm

Check Your Code Base Is PHP 7 Ready With PHP Compatibility

I’ve recently started a new job and as part of that I’m working on a rather substantial legacy code base. In fact, it was so legacy that it was still in Subversion - needless to say, the very first thing I did was migrate it to Git. One of the jobs on our radar for this code base is to migrate it from PHP 5.4 to 5.6, and subsequently to PHP 7. I’ve been using it locally on 5.6 without issue so far, but I’ve also been looking around for an automated tool to help catch potential problems.

I recently discovered PHP Compatibility, which is a set of sniffs for PHP CodeSniffer that can be used to detect code that will be problematic in a particular PHP version. As I already use CodeSniffer extensively, it’s a good fit for my existing toolset.

To install it, add the following dependencies to your composer.json:

"require-dev": {
"dealerdirect/phpcodesniffer-composer-installer": "^0.4.3",
"squizlabs/php_codesniffer": "^2.5",
"wimg/php-compatibility": "^8.1"
},

Then update your phpcs.xml to look something like this:

<ruleset name="PHP_CodeSniffer">
    <description>The coding standard for my app.</description>
    <file>./</file>
    <arg value="np"/>
    <rule ref="PSR2"/>
    <rule ref="PHPCompatibility"/>
    <config name="testVersion" value="7.2-"/>
</ruleset>

As you can see, it’s possible to use it alongside existing coding standards such as PSR2. Note the testVersion config key - the value is the PHP version you’re testing against, and a trailing hyphen, as in 7.2-, means that version and above. With that in place, running vendor/bin/phpcs as usual will report compatibility issues alongside your normal coding standard violations.

Obviously, the very best way to guard against breakages in newer versions of PHP is to have a comprehensive test suite, but legacy code bases by definition tend to have few or no tests. By using PHP Compatibility, you should at least be able to catch syntax problems without having to audit the code base manually.

25th February 2018 3:50 pm

Unit Testing Your Laravel Controllers

In my previous post I mentioned some strategies for refactoring Laravel controllers to move unnecessary functionality elsewhere. However, I didn’t cover testing them. In this post I will demonstrate the methodology I use for testing Laravel controllers.

Say we have the following method in a controller:

public function store(Request $request)
{
    $document = new Document($request->only([
        'title',
        'text',
    ]));
    $document->save();

    event(new DocumentCreated($document));

    return redirect()->route('/');
}

This controller method does three things:

  • Return a response
  • Create a model instance
  • Fire an event

Our tests therefore need to pass it all its external dependencies and check it carries out the required actions.

First we fake the event facade:

    Event::fake();

Next, we create an instance of Illuminate\Http\Request to represent the HTTP request passed to the controller:

$request = Request::create('/store', 'POST', [
    'title' => 'foo',
    'text' => 'bar',
]);

If you’re using a custom form request class, you should instantiate that in exactly the same way.
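For example, for a hypothetical form request class named StoreDocumentRequest, the equivalent would be:

$request = StoreDocumentRequest::create('/store', 'POST', [
    'title' => 'foo',
    'text' => 'bar',
]);

Note that building it this way doesn’t run the validation rules - it just gives you a request object of the right type to pass to the controller method.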

Then, instantiate the controller, and call the method, passing it the request object:

$controller = new MyController();
$response = $controller->store($request);

You can then test the response from the controller. You can test the status code like this:

    $this->assertEquals(302, $response->getStatusCode());

You may also need to check the content of the response matches what you expect to see, which you can retrieve with $response->getContent().

Next, retrieve the newly created model instance, and verify it exists:

$document = Document::where('title', 'foo')->first();
$this->assertNotNull($document);

You can also use assertEquals() to check the attributes on the model if appropriate. Finally, you check the event was fired:

Event::assertDispatched(DocumentCreated::class, function ($event) use ($document) {
    return $event->document->id === $document->id;
});

This test should not concern itself with any functionality triggered by the event, only that the event gets triggered. The event should have separate unit tests in which the event is triggered, and then the test verifies it carried out the required actions.
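Putting all of the above together, the complete test might look something like this sketch (the namespaces are assumptions based on a default Laravel 5 application, with the Document model living directly under App):

<?php

namespace Tests\Unit;

use App\Document;
use App\Events\DocumentCreated;
use App\Http\Controllers\MyController;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Event;
use Tests\TestCase;

class MyControllerTest extends TestCase
{
    public function testStore()
    {
        // Fake events so no listeners actually run
        Event::fake();

        // Build the request the controller method expects
        $request = Request::create('/store', 'POST', [
            'title' => 'foo',
            'text' => 'bar',
        ]);

        // Call the controller method directly and check the response
        $controller = new MyController();
        $response = $controller->store($request);
        $this->assertEquals(302, $response->getStatusCode());

        // The model should have been persisted
        $document = Document::where('title', 'foo')->first();
        $this->assertNotNull($document);

        // And the event should have been dispatched for that model
        Event::assertDispatched(DocumentCreated::class, function ($event) use ($document) {
            return $event->document->id === $document->id;
        });
    }
}

Since this test touches the database, you’ll also want to wrap it in a transaction or use a database refresh trait so that test runs don’t interfere with one another.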

Technically, these don’t quite qualify as unit tests because they hit the database, but they should cover the controller adequately. To make them true unit tests, you’d need to implement the repository pattern for the database queries rather than using Eloquent directly, and mock the repository, so that you can assert the mocked repository receives the right data and have it return a canned response.

Here is how you might do that with Mockery:

$mock = Mockery::mock('App\Contracts\Repositories\Document');
$mock->shouldReceive('create')->with([
    'title' => 'foo',
    'text' => 'bar',
])->once()->andReturn(true);

$controller = new MyController($mock);

As long as your controllers are kept as small as possible, it’s generally not too hard to test them. Unfortunately, fat controllers become almost impossible to test, which is another good reason to avoid them.

18th February 2018 6:10 pm

Put Your Laravel Controllers on a Diet

MVC frameworks are a tremendously useful tool for modern web development. They offer easy ways to carry out common tasks, and enforce a certain amount of structure on a project.

However, that doesn’t mean using them makes you immune to bad practices, and it’s quite easy to wind up falling into certain anti-patterns. Probably the most common is the Fat Controller.

What is a fat controller?

When I first started out doing professional web development, CodeIgniter 2 was the first MVC framework I used. While I hadn’t used it before, I was familiar with the general concept of MVC. However, I didn’t appreciate that when referring to the model layer as a place for business logic, that wasn’t necessarily the same thing as the database models.

As such, my controllers became a dumping ground for anything that didn’t fit into the models. If it didn’t interact with the database, I put it in the controller. They quickly became bloated, with many methods running to hundreds of lines of code. The code base became hard to understand, and when I was under the gun on projects I found myself copying and pasting functionality between controllers, making the situation even worse. Not having an experienced senior developer on hand to offer criticism and advice, it was a long time before I realised that this was a problem or how to avoid it.

Why are fat controllers bad?

Controllers are meant to be simple glue code that receives requests and returns responses. Anything else should be handed off to the model layer. As noted above, however, that’s not the same as putting it in the models. Your model layer can consist of many different classes, not just your Eloquent models, and you should not fall into the trap of thinking your application should consist of little more than models, views and controllers.

Placing business logic in controllers can be bad for many reasons:

  • Code in controllers can be difficult to write automated tests for
  • Any logic in a controller method may need to be repeated elsewhere if the same functionality is needed for a different route, unless it’s in a private or protected method that is called from elsewhere, in which case it’s very hard to test in isolation
  • Placing it in the controller makes it difficult to pull out and re-use on a different project
  • Making your controller methods too large makes them complex and hard to follow

As a general rule of thumb, I find that once any one controller method passes about 10 lines of code, it’s starting to get too big. That said, it’s not a hard and fast rule, and for very small projects it may not be worth worrying about. But if a project is large and needs to be maintained for any reasonable period of time, you should take the trouble to ensure your controllers are as skinny as is practical.

Nowadays Laravel is my go-to framework, and I’ve put together a number of strategies for avoiding the fat controller anti-pattern. Here are some examples of how I would move various parts of the application out of my controllers.

Validation

Laravel has a nice, easy way of getting validation out of the controllers. Just create a custom form request for your input data, as in this example:

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class CreateRequest extends FormRequest
{
    /**
     * Determine if the user is authorized to make this request.
     *
     * @return bool
     */
    public function authorize()
    {
        return true;
    }

    /**
     * Get the validation rules that apply to the request.
     *
     * @return array
     */
    public function rules()
    {
        return [
            'email' => 'required|email'
        ];
    }
}

Then type-hint the form request in the controller method, instead of Illuminate\Http\Request:

<?php

namespace App\Http\Controllers;

use App\Http\Requests\CreateRequest;

class HomeController extends Controller
{
    public function store(CreateRequest $request)
    {
        // Process request here..
    }
}

Database access and caching

For non-trivial applications I normally use decorated repositories to handle caching and database access in one place. That way my caching and database layers are abstracted out into separate classes, and caching is nearly always handled seamlessly without having to do much work.
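To illustrate the idea, here’s a minimal sketch of a decorated repository - the interface, class and cache key names are made up for this example, and everything is shown in one file for brevity:

<?php

namespace App\Repositories;

use App\Document;
use Illuminate\Contracts\Cache\Repository as Cache;

interface DocumentRepositoryInterface
{
    public function all();
}

// Plain Eloquent implementation - knows nothing about caching
class EloquentDocumentRepository implements DocumentRepositoryInterface
{
    public function all()
    {
        return Document::all();
    }
}

// Decorator that wraps any implementation and caches its results
class CachingDocumentRepository implements DocumentRepositoryInterface
{
    protected $repository;

    protected $cache;

    public function __construct(DocumentRepositoryInterface $repository, Cache $cache)
    {
        $this->repository = $repository;
        $this->cache = $cache;
    }

    public function all()
    {
        // Serve from the cache, delegating to the wrapped repository on a miss
        return $this->cache->remember('documents.all', 60, function () {
            return $this->repository->all();
        });
    }
}

You’d then bind the interface in a service provider so that resolving it returns the caching decorator wrapping the Eloquent implementation, and type-hint the interface in your controllers.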

Complex object creation logic

If I have a form or API endpoint that needs to:

  • Create more than one object
  • Transform the incoming data in some way
  • Or do anything else non-trivial

I will typically pull it out into a separate persister class. First, you should create an interface for this persister class:

<?php

namespace App\Contracts\Persisters;

use Illuminate\Database\Eloquent\Model;

interface Foo
{
    /**
     * Create a new Model
     *
     * @param array $data
     * @return Model
     */
    public function create(array $data);

    /**
     * Update the given Model
     *
     * @param array $data
     * @param Model $model
     * @return Model
     */
    public function update(array $data, Model $model);
}

Then create the persister class itself:

<?php

namespace App\Persisters;

use Illuminate\Database\Eloquent\Model;
use App\Contracts\Repositories\Foo as Repository;
use App\Contracts\Persisters\Foo as FooContract;
use Illuminate\Database\DatabaseManager;
use Carbon\Carbon;

class Foo implements FooContract
{
    protected $repository;

    protected $db;

    public function __construct(DatabaseManager $db, Repository $repository)
    {
        $this->db = $db;
        $this->repository = $repository;
    }

    /**
     * Create a new Model
     *
     * @param array $data
     * @return Model
     */
    public function create(array $data)
    {
        $this->db->beginTransaction();
        $model = $this->repository->create([
            'date' => Carbon::parse($data['date'])->toDayDateTimeString(),
        ]);
        $this->db->commit();
        return $model;
    }

    /**
     * Update the given Model
     *
     * @param array $data
     * @param Model $model
     * @return Model
     */
    public function update(array $data, Model $model)
    {
        $this->db->beginTransaction();
        $updatedModel = $this->repository->update([
            'date' => Carbon::parse($data['date'])->toDayDateTimeString(),
        ], $model);
        $this->db->commit();
        return $updatedModel;
    }
}

Then you can set up the persister in a service provider so that type-hinting the interface returns the persister:

<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        //
    }

    /**
     * Register any application services.
     *
     * @return void
     */
    public function register()
    {
        $this->app->bind(
            'App\Contracts\Persisters\Foo',
            'App\Persisters\Foo'
        );
    }
}

This approach means that complex logic, such as creating multiple related objects, can be handled in a consistent way, even if it needs to be called from multiple places.
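For example, the controller that uses the persister might end up looking something like this - a sketch in which the route name is made up, and validated() assumes Laravel 5.5 or above:

<?php

namespace App\Http\Controllers;

use App\Contracts\Persisters\Foo as FooPersister;
use App\Http\Requests\CreateRequest;

class FooController extends Controller
{
    protected $persister;

    public function __construct(FooPersister $persister)
    {
        $this->persister = $persister;
    }

    public function store(CreateRequest $request)
    {
        // All of the creation logic, including the transaction, lives in the persister
        $model = $this->persister->create($request->validated());

        return redirect()->route('home');
    }
}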

Triggering actions as a result of something

Events are tailor-made for this use case, and Laravel documents them very well, so I won’t repeat it here. Suffice to say, if something needs to happen, but the response sent by the application doesn’t necessarily depend on it returning something immediately, then it’s probably worth considering making it an event. If it’s going to be called from multiple places, it’s even more worthwhile.

For instance, if you have a contact form, it’s worth taking the time to create an event for when a new contact is received, and handle processing the contact within the listener for that event. Doing so also means you can queue that event and have it handled outside the context of the application, so that it responds to the user more quickly. If you’re sending an acknowledgement email for a new user registration, you don’t need to wait for that email to be sent before you return the response, so queueing it can improve response times.

Interacting with third-party services

If you have some code that needs to interact with a third-party service or API, it can get quite complex, especially if you need to process the content in some way. It therefore makes sense to pull that functionality out into a separate class.

For instance, say you have some code in your controller that uses an HTTP client to fetch some data from a third-party API and display it in the view:

public function index(Request $request)
{
    // Fetch the data and decode the JSON response body
    $response = $this->client->get('http://api.com/api/items');
    $data = json_decode((string) $response->getBody(), true);

    $items = [];
    foreach ($data as $k => $v) {
        $item = [
            'name' => $v['name'],
            'description' => $v['data']['description'],
            'tags' => $v['data']['metadata']['tags']
        ];
        $items[] = $item;
    }

    return view('template', [
        'items' => $items
    ]);
}

This is a very small example (and a lot simpler than most real-world instances of this issue), but it illustrates the principle. Not only does this code bloat the controller, it might also be used elsewhere in the application, and we don’t want to copy and paste it elsewhere - therefore it makes sense to extract it to a service class.

<?php

namespace App\Services;

use GuzzleHttp\ClientInterface as GuzzleClient;

class Api
{
    protected $client;

    public function __construct(GuzzleClient $client)
    {
        $this->client = $client;
    }

    public function fetch()
    {
        // Fetch the data and decode the JSON response body
        $response = $this->client->get('http://api.com/api/items');
        $data = json_decode((string) $response->getBody(), true);

        $items = [];
        foreach ($data as $k => $v) {
            $item = [
                'name' => $v['name'],
                'description' => $v['data']['description'],
                'tags' => $v['data']['metadata']['tags']
            ];
            $items[] = $item;
        }

        return $items;
    }
}

Our controller can then type-hint the service and refactor that functionality out of the method:

public function __construct(\App\Services\Api $api)
{
    $this->api = $api;
}

public function index(Request $request)
{
    $items = $this->api->fetch();

    return view('template', [
        'items' => $items
    ]);
}

Including common variables in the view

If data is needed in more than one view (e.g. showing the user’s name on every page when logged in), consider using view composers to retrieve that data rather than fetching it in the controller. That way you’re not repeating that logic in more than one place.
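As a rough sketch, a view composer for sharing the current user with a layout might look like this (the class and view names are made up):

<?php

namespace App\Http\ViewComposers;

use Illuminate\Contracts\Auth\Guard;
use Illuminate\View\View;

class CurrentUserComposer
{
    protected $auth;

    public function __construct(Guard $auth)
    {
        $this->auth = $auth;
    }

    public function compose(View $view)
    {
        // Share the logged-in user with every view this composer is attached to
        $view->with('currentUser', $this->auth->user());
    }
}

You’d then register it in a service provider’s boot() method with View::composer('layouts.app', CurrentUserComposer::class), and $currentUser becomes available in that view without any controller involvement.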

Formatting content for display

Logically this belongs in the view layer, so you should write a helper to handle things like formatting dates. For more complex stuff, such as formatting HTML, you should do this in Blade (or another templating system, if you’re using one) - for instance, when generating an HTML table, consider using a view partial to loop through the rows. For particularly tricky functionality, you have the option of writing a custom Blade directive.
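For example, a simple date-formatting directive, registered in a service provider’s boot() method, might look something like this (the directive name and format are made up):

use Illuminate\Support\Facades\Blade;

Blade::directive('formatDate', function ($expression) {
    // Compiles @formatDate($someDate) into PHP that formats the date at render time
    return "<?php echo \Carbon\Carbon::parse($expression)->format('jS F Y'); ?>";
});

In a template you’d then write @formatDate($post->created_at) rather than formatting the date in the controller or the model.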

The same applies for rendering other content - for rendering JSON you should consider using API resources or Fractal to get any non-trivial logic for your API responses out of the controller. Blade templates can also work for non-HTML content such as XML.
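For instance, an API resource class keeps that transformation logic in one place. Here’s a sketch using a made-up Document model - in current Laravel versions the base class is JsonResource, while older 5.x releases call it Resource:

<?php

namespace App\Http\Resources;

use Illuminate\Http\Resources\Json\JsonResource;

class DocumentResource extends JsonResource
{
    /**
     * Transform the resource into an array.
     *
     * @param \Illuminate\Http\Request $request
     * @return array
     */
    public function toArray($request)
    {
        return [
            'id' => $this->id,
            'title' => $this->title,
            'text' => $this->text,
        ];
    }
}

The controller then simply returns new DocumentResource($document), and the JSON structure lives entirely in the resource class.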

Anything else…

These examples are largely to get you started, and there will be occasions where something doesn’t fall into any of the above categories. However, the same principle applies. Your controllers should stick to just receiving requests and sending responses, and anything else should normally be deferred to other classes.

Fat controllers make developers’ lives very difficult, and if you want your code base to be easily maintainable, you should be willing to refactor them ruthlessly. Any functionality you can pull out of the controller becomes easier to reuse and test, and as long as you name your classes and methods sensibly, it’s easier to understand too.

4th February 2018 12:12 am

Using Lando As An Alternative to Vagrant

Although Vagrant is very useful for ensuring consistency between development environments, it’s quite demanding on system resources. Running a virtual machine introduces quite a bit of overhead, and it can be troublesome to provision.

This week I was introduced to Lando as an alternative to Vagrant. Rather than running a virtual machine like Vagrant does by default, Lando instead spins up Docker containers for the services you need, meaning it has considerably less overhead than Vagrant. It also includes presets for a number of frameworks and CMS’s, including:

  • Drupal 7
  • Drupal 8
  • WordPress
  • Laravel

Considering that Vagrant needs quite a bit of boilerplate to set up the server for different types of projects, this gives Lando an obvious advantage. The only issue I’ve had with it is that it’s been unreliable when I’ve had to use it on Windows, which I don’t do much anyway.

Getting started

Lando requires that you have Docker installed. Once that’s done, you can download and install Lando from the website. Then you can run lando init to set it up:

$ lando init
? What recipe do you want to use? wordpress
? Where is your webroot relative to the init destination? .
? What do you want to call this app? wp-site
NOW WE'RE COOKING WITH FIRE!!!
Your app has been initialized!
Go to the directory where your app was initialized and run
`lando start` to get rolling.
Check the LOCATION printed below if you are unsure where to go.
Here are some vitals:
NAME wp-site
LOCATION /home/matthew/Projects/wp-site
RECIPE wordpress
DOCS https://docs.devwithlando.io/tutorials/wordpress.html

Here I’ve chosen the wordpress recipe, in the current directory, with the name wp-site. This generates the following file as .lando.yml:

name: wp-site
recipe: wordpress
config:
  webroot: .

Then, if we run lando start, it will set up the required services:

$ lando start
landoproxyhyperion5000gandalfedition_proxy_1 is up-to-date
Creating network "wpsite_default" with the default driver
Creating volume "wpsite_appserver" with default driver
Creating volume "wpsite_data" with default driver
Creating volume "wpsite_data_database" with default driver
Creating wpsite_appserver_1 ...
Creating wpsite_database_1 ...
Creating wpsite_database_1
Creating wpsite_appserver_1 ... done
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4454k 100 4454k 0 0 3288k 0 0:00:01 0:00:01 --:--:-- 3290k
OS: Linux 4.13.0-32-generic #35-Ubuntu SMP Thu Jan 25 09:13:46 UTC 2018 x86_64
Shell:
PHP binary: /usr/local/bin/php
PHP version: 7.1.13
php.ini used:
WP-CLI root dir: phar://wp-cli.phar
WP-CLI vendor dir: phar://wp-cli.phar/vendor
WP_CLI phar path: /tmp
WP-CLI packages dir:
WP-CLI global config:
WP-CLI project config:
WP-CLI version: 1.5.0
BOOMSHAKALAKA!!!
Your app has started up correctly.
Here are some vitals:
APPSERVER URLS https://localhost:32802
http://localhost:32803
http://wp-site.lndo.site
https://wp-site.lndo.site

Note the APPSERVER URLS section - the site can be accessed locally via HTTP or HTTPS. For this recipe, it also installs WP CLI.

If we run docker ps, we can see that it’s running three Docker containers:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e920e152091 devwithlando/php:7.1-apache "/lando-entrypoint.s…" 16 minutes ago Up 16 minutes 0.0.0.0:32803->80/tcp, 0.0.0.0:32802->443/tcp wpsite_appserver_1
82ea60b1214f mysql:latest "/lando-entrypoint.s…" 16 minutes ago Up 16 minutes 0.0.0.0:32801->3306/tcp wpsite_database_1
e51d831199d7 traefik:1.3-alpine "/lando-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:58086->8080/tcp landoproxyhyperion5000gandalfedition_proxy_1

Apache lives in one container, MySQL in another, while the third runs Traefik, a lightweight reverse proxy and load balancer, which listens on ports 80 and 443. Traefik does the work of routing incoming HTTP requests to the right container.

As I’ve been unhappy with the amount of resources Vagrant uses for a while, and I usually run Ubuntu (making using Docker straightforward), I’m planning on using Lando extensively in future. It’s lighter and faster to set up, and has sane defaults for most of the frameworks and CMS’s I use regularly, making it generally quicker and easier to work with.

