Matthew Daly's Blog

I'm a web developer in Norfolk. This is my blog...

11th October 2018 9:21 am

Do You Still Need jQuery?

There was a time not so long ago when jQuery was ubiquitous. It was used on almost every website as a matter of course, to the point that many HTML boilerplates included a reference to the CDN.

However, I increasingly think jQuery is the wrong choice in two main situations:

jQuery is probably unnecessary for many web apps with simple Javascript

When jQuery first appeared, IE6 was commonplace, and browser APIs were notoriously inconsistent. jQuery was very useful in ironing out those inconsistencies and making the developer's experience a bit better.

Nowadays, that’s no longer the case. Internet Explorer is on its way out, with IE11 being the only version still supported by Microsoft, and it’s becoming increasingly hard to justify supporting older versions, especially with mobile browsers forming a bigger-than-ever chunk of the market. We’ll probably need to continue supporting IE11 for a good long while, and possibly IE10 for some time too, but these aren’t anything like as bad to work with as IE6 was. It’s also worth noting that newer versions of jQuery have dropped support for these older browsers, so in many ways jQuery now does less than it used to.

This is the usual thrust of articles on whether you should still be using jQuery, so I won’t labour the point: for many smaller web apps, jQuery is no longer necessary, yet a lot of developers have a tendency to keep reaching for it when it’s probably not required.

jQuery is insufficient for web apps with complex Javascript

Nowadays, there are a lot of web applications that have moved big chunks of functionality from the server side to the client side. Beyond a certain (and quite small) level of complexity, jQuery simply doesn’t cut it. For me personally, the nature of the projects I work on means that this is a far, far bigger issue than the first one.

I used to work predominantly with Phonegap, which meant that a lot of functionality traditionally done on the server side had to be moved to the client side, and for that jQuery was never sufficient. My first Phonegap app started out using jQuery, but it quickly became obvious that this was going to be problematic. It wound up as a huge mass of jQuery callbacks and Handlebars templates, which was almost impossible to test and hard to maintain. Given that experience, I resolved to switch to a full-fledged Javascript framework the next time I built a mobile app. For the next one I chose Backbone.js, which still used jQuery as a dependency, but made things more maintainable by imposing a structure the first app had lacked - and that structure was the crucial difference.

The more modern generation of Javascript frameworks, such as Vue and React, go further in making jQuery redundant. Both implement a so-called Virtual DOM, which is used to calculate the minimum changes required to re-render the element in question. Using jQuery to mutate the DOM alongside them would cause problems, because the real DOM would get out of sync with the Virtual DOM - in fact, to get a jQuery plugin working in the context of a React component, you have to actively prevent React from touching the DOM, thereby losing most of the benefits of using React in the first place. You’ll usually see better results from using a React component designed for that purpose (or writing one, which React makes surprisingly simple) than from trying to shoehorn a jQuery plugin into a component.

They also make a lot of things jQuery is commonly used for trivially easy. For instance, to conditionally show and hide content in a React component, you just have it render (or not render) that content based on a particular value in the props or state, and filtering a list is just a case of applying a filter to the array containing the data and setting the state as appropriate.

In short, for single-page web apps, or any app with a lot of Javascript, you should look at other solutions first, and not just blithely assume jQuery will be up to the task. It’s technically possible to build this sort of web app using jQuery, but it’s apt to turn into a morass of spaghetti code unless approached with a level of discipline that sadly many developers don’t have, and jQuery doesn’t exactly make code reuse easy. These days, I prefer React for complex web apps, because it makes it extremely intuitive to break my user interface up into reusable components and test them individually. Using React would be overkill on brochure-style sites (unless you wanted to build one with something like Gatsby), but for more complex apps it’s often a better fit than jQuery.

So when should you use jQuery?

In truth, I’m finding it harder and harder to justify using it at all on new builds. I use it on my personal site because that’s built on Bootstrap 3, which depends on jQuery, but for bigger web apps I’m generally finding myself moving to React, which makes jQuery not just unnecessary for DOM manipulation, but actively counter-productive. Most of what I do is big enough to justify something like React, and it generally results in code that is more declarative, easier to test and reason about, and less repetitive. Using jQuery alone for an application like that is probably a bad idea, because it’s difficult (not impossible, mind, if you follow some of the advice here, use a linter and consider using a proper client-side templating system alongside jQuery) to build an elegant and maintainable Javascript-heavy application with it.

As a rule of thumb, I find that anything likely to require more than a few hundred lines of Javascript is probably complex enough that jQuery isn’t sufficient, and I should consider something like React instead.

I doubt it’d be worth the bother of ripping jQuery out of a legacy application and rewriting the whole thing to not require it, but for new builds I would think very hard about:

  • Whether jQuery is sufficient, or you’d be better off using something like React, Vue or Angular
  • If it is sufficient, whether it’s actually necessary

In all honesty, I don’t think using it when it’s technically unnecessary is as big a deal as using it when it’s not really sufficient. Yes, downloading a library you don’t strictly need for a page is bad practice, and it does make your site slower and heavier for users on slow mobile connections, but there are ways to mitigate that, such as CDNs, caching and minification. If you build a web app using jQuery alone when React, Vue or Angular would be more suitable, you’re probably going to have to write a lot more code that will be difficult to maintain, test and understand. Things like React were created to solve the problems that arose when developers built complex client-side applications with jQuery, and they’re therefore a good fit for bigger applications. Their more involved setup does mean there’s a threshold below which they’re not worth the bother, but past that threshold they result in better, more maintainable, more testable and more reusable code.

Now React is cool, you hate jQuery, you hipster…

Don’t be a prat. Bitter experience has taught me that for a lot of my own use cases, jQuery is insufficient. It doesn’t suck; it’s just insufficient. If jQuery is sufficient for your use case, that’s fine. All I’m saying is that when a web app becomes sufficiently complex, jQuery can begin to cause more problems than it solves, and at that point you should consider other solutions.

I currently maintain a legacy application that includes thousands of lines of Javascript. Most of it is done with jQuery and some plugins, and it’s resulted in some extremely repetitive jQuery callbacks that are hard to maintain and understand, and impossible to test. Recently I was asked to add a couple of modals to the admin interface, and rather than continuing to add them using jQuery and adding more spaghetti code, I instead opted to build them with React. During the process of building the first modal, I produced a number of components for different elements of the UI. Then, when I built the second one, I refactored those components to be more generic, and moved some common functionality into a higher-order component so that it could be reused. Now, if I need to add another modal, it will be trivial because I already have those components available, and I can just create a new component for the modal, import those components that I need, wrap it in the higher-order component if necessary, and that’s all. I can also easily test those components in isolation. In short, I’ve saved myself some work in the long run by writing it to use a library that was a better fit.

It’s not like using jQuery inevitably results in unmaintainable code, but it does require a certain amount of discipline to avoid it. A more opinionated library such as React makes it far, far harder to create spaghetti code, and makes code reuse natural in a way that jQuery doesn’t.

8th October 2018 11:20 am

An Approach to Writing Golden Master Tests for PHP Web Applications

Apologies if some of the spelling or formatting on this post is off - I wrote it on a long train journey down to London, with sunlight at an inconvenient angle.

Recently I had to carry out some substantial changes to the legacy web app I maintain as the lion’s share of my current job. The client has several channels that represent different parts of the business that would expect to see different content on the home page, and access to content is limited first by channel, and then by location. The client wanted an additional channel added. Due to bad design earlier in the application’s lifetime that isn’t yet practical to refactor away, each type of location has its own model, so it was necessary to add a new location model. It also had to work seamlessly, in the same way as the other location types. Unfortunately, these location types didn’t use polymorphism, relying instead on large switch statements, and it wasn’t practical to refactor all of that away in one go. This was therefore quite a high-risk job, especially given the paucity of tests on a legacy code base.

I’d heard of the concept of a golden master test before. If you haven’t come across it, the idea is that you run a process, capture its output, and then compare the output of that known-good version against future runs. It’s very much a test of last resort: in the context of a web app it’s potentially very brittle, because it depends on the state of the application remaining the same between runs to avoid false positives. I needed a set of simple “snapshot tests”, similar to how snapshot testing works with Jest, to catch unexpected breakages in a large number of pages, and this approach seemed to fit the bill. Unfortunately, I hadn’t been able to find a good example of how to do this for PHP applications, so it took a while to figure out something that worked.

Here is an example base test case I used for this approach:

<?php

namespace Tests;

use PHPUnit_Framework_TestCase as BaseTestCase;
use Behat\Mink\Driver\GoutteDriver;
use Behat\Mink\Session;

class GoldenMasterTestCase extends BaseTestCase
{
    protected $driver;

    protected $session;

    protected $baseUrl = 'http://localhost:8000';

    protected $snapshotDir = "tests/snapshots/";

    public function setUp()
    {
        $this->driver = new GoutteDriver();
        $this->session = new Session($this->driver);
    }

    public function tearDown()
    {
        $this->session = null;
        $this->driver = null;
    }

    public function loginAs($username, $password)
    {
        $this->session->visit($this->baseUrl.'/login');
        $page = $this->session->getPage();
        $page->fillField("username", $username);
        $page->fillField("password", $password);
        $page->pressButton("Sign In");
        return $this;
    }

    public function goto($path)
    {
        $this->session->visit($this->baseUrl.$path);
        $this->assertNotEquals(404, $this->session->getStatusCode());
        return $this;
    }

    public function saveHtml()
    {
        if (!$this->snapshotExists()) {
            $this->saveSnapshot();
        }
        return $this;
    }

    public function assertSnapshotsMatch()
    {
        $path = $this->getPath();
        $newHtml = $this->processHtml($this->getHtml());
        $oldHtml = $this->getOldHtml();
        $diff = "";
        if (function_exists('xdiff_string_diff')) {
            $diff = xdiff_string_diff($oldHtml, $newHtml);
        }
        $message = "The path $path does not match the snapshot\n$diff";
        self::assertThat($newHtml == $oldHtml, self::isTrue(), $message);
    }

    protected function getHtml()
    {
        return $this->session->getPage()->getHtml();
    }

    protected function getPath()
    {
        $url = $this->session->getCurrentUrl();
        $path = parse_url($url, PHP_URL_PATH);
        $query = parse_url($url, PHP_URL_QUERY);
        $frag = parse_url($url, PHP_URL_FRAGMENT);
        return $path.$query.$frag;
    }

    protected function getEscapedPath()
    {
        return $this->snapshotDir.str_replace('/', '_', $this->getPath()).'.snap';
    }

    protected function snapshotExists()
    {
        return file_exists($this->getEscapedPath());
    }

    protected function processHtml($html)
    {
        // Strip hidden inputs (e.g. CSRF tokens), which change between runs
        return preg_replace('/<input type="hidden"[^>]+\>/i', '', $html);
    }

    protected function saveSnapshot()
    {
        $html = $this->processHtml($this->getHtml());
        file_put_contents($this->getEscapedPath(), $html);
    }

    protected function getOldHtml()
    {
        return file_get_contents($this->getEscapedPath());
    }
}

Because this application is built with Zend 1 and doesn’t offer an easy way to get the HTML response without running the full application, I was forced to use an HTTP client to fetch the content while the web server was running. I’ve used Mink together with Behat many times in the past, and the Goutte driver is fast and doesn’t rely on Javascript, so that was the best bet for a simple way of retrieving the HTML. Had I been taking this approach with a Laravel application, I could have populated the testing database with a common set of fixtures, then passed a request object through the application and captured the response object’s output rather than using an HTTP client, thereby eliminating the need to run a web server and making the tests faster and less brittle.
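As a rough illustration, a Laravel equivalent of the goto() helper might look something like the sketch below. This is hypothetical (the class and property names are mine, not from a real project); it uses Laravel’s built-in HTTP testing helpers, and the snapshot methods would work the same way as above, just reading from a stored property instead of a Mink session:

<?php

namespace Tests;

use Tests\TestCase;

class LaravelGoldenMasterTestCase extends TestCase
{
    protected $snapshotDir = "tests/snapshots/";

    protected $html;

    public function goto($path)
    {
        // Pass a request straight through the application - no web server needed
        $response = $this->get($path);
        $this->assertNotEquals(404, $response->getStatusCode());
        $this->html = $response->getContent();
        return $this;
    }

    // saveHtml(), assertSnapshotsMatch() and so on would be implemented as in
    // the Mink-based class, reading from $this->html instead of the session
}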

Another issue was CSRF handling. A CSRF token is, by definition, generated randomly each time the page is loaded, and so it broke those pages that had forms with CSRF tokens. The solution I came up with was to strip out the hidden input fields.
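The stripping itself is the one-line preg_replace you can see in processHtml() above. Given a form containing a CSRF token, it behaves like this:

<?php

$html = '<form><input type="hidden" name="_token" value="abc123"><button>Save</button></form>';

// Remove hidden inputs, such as CSRF tokens, which change on every request
$stripped = preg_replace('/<input type="hidden"[^>]+\>/i', '', $html);

echo $stripped; // <form><button>Save</button></form>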

When each page is tested, the first step is to fetch the content of that page. The test case then checks to see if there’s an existing snapshot. If not, the content is saved as a new snapshot file. Otherwise, the two snapshots are compared, and the test fails if they do not match.

Once that base test case was in place, it was then straightforward to extend it to test multiple pages. I wrote one test to check pages that did not require login, and another to check pages that did require login, and the paths for those pages were passed through using a data provider method, as shown below:

<?php

namespace Tests\GoldenMaster;

use Tests\GoldenMasterTestCase;

class GoldenMasterTest extends GoldenMasterTestCase
{
    /**
     * @dataProvider nonAuthDataProvider
     */
    public function testNonAuthPages($data)
    {
        $this->goto($data)
            ->saveHtml()
            ->assertSnapshotsMatch();
    }

    public function nonAuthDataProvider()
    {
        return [
            ['/login'],
        ];
    }

    /**
     * @dataProvider dataProvider
     */
    public function testPages($data)
    {
        $this->loginAs('foo', 'bar')
            ->goto($data)
            ->saveHtml()
            ->assertSnapshotsMatch();
    }

    public function dataProvider()
    {
        return [
            ['/foo'],
            ['/bar'],
        ];
    }
}

Be warned, this is not an approach I would advocate as a matter of course, and it should only ever be a last resort as an alternative to onerous manual testing for things that can’t be tested in their current form. It’s extremely brittle, and I’ve had to deal with a lot of false positives, although that would be easier if I could populate a testing database beforehand and use that as the basis of the tests. It’s also very slow, with each test taking three or four seconds to run, although again this would be less of an issue if I could pass through a request object and get the response HTML directly. Nonetheless, I’ve found it to be a useful technique as a test of last resort for legacy applications.

5th October 2018 7:36 pm

Understanding the Pipeline Pattern

In a previous post, I used the pipeline pattern to demonstrate processing letters using optical recognition and machine learning. The pipeline pattern is something I’ve found very useful in recent months. For a sequential series of tasks, this approach can make your code easier to understand by allowing you to break it up into simple, logical steps which are easy to test and understand individually. If you’re familiar with pipes and redirection in Unix, you’ll be aware of how you can chain together multiple, relatively simple commands to carry out some very complex transformations on data.

A few months back, I was asked to build a webhook for a Facebook lead form at work. One of my colleagues was having to manually export CSV data from Facebook, then import it into a MySQL database and a Campaign Monitor mailing list, which was an onerous task, so they asked me to look at more automated solutions. I wound up building a webhook with Lumen that would go through the following steps:

  • Get the lead IDs from the webhook
  • Pull the leads from the Facebook API using those IDs
  • Process the raw data into a more suitable format
  • Save the data to the database
  • Push the data to Campaign Monitor

Since this involved a number of discrete steps, I chose to implement each step as a separate stage. That way, each stage was easy to test in isolation and easily reusable. As it turned out, this approach saved us: the app needed Facebook’s approval, and they ended up rejecting it (their documentation at the time wasn’t clear on implementing server-to-server apps, making it hard to meet their guidelines), so we needed an interim solution. I instead wrote an Artisan task for importing the data from a CSV file, which involved the following steps:

  • Read the rows from the CSV file
  • Format the CSV data into the desired format
  • Save the data to the database
  • Push the data to Campaign Monitor

This meant that two of the existing steps could be reused, as is, without touching the code or tests. I just added two new classes to read the data and format the data, and the Artisan command, which simply called the various pipeline stages, and that was all. In this post, I’ll demonstrate how I implemented this.

While there is more than one implementation of this available, and it wouldn’t be hard to roll your own, I generally use the PHP League’s Pipeline package, since it’s simple, solid and well-tested. Let’s say our application has three steps:

  • Format the request data
  • Save the data
  • Push it to a third party service.

We therefore need to write a stage for each step in the process. Each one must be a callable, such as a closure, a callback, or a class that implements the __invoke() magic method. I usually go for the latter as it allows you to more easily inject dependencies into the stage via its constructor, making it easier to use and test. Here’s what our first stage might look like:

<?php

namespace App\Stages;

use Illuminate\Support\Collection;

class FormatData
{
    public function __invoke(Collection $data): Collection
    {
        return $data->map(function ($item) {
            return [
                'name' => $item->fullname,
                'email' => $item->email
            ];
        });
    }
}

This class does nothing more than receive a collection and format the data as expected. We could have it accept a request object instead, but I opted not to, because passing the data in as a collection means the stage isn’t tied to an HTTP request. That way, it can also handle data passed through from a CSV file by an Artisan task, and the details of how the data arrives are deferred to whatever class invokes the pipeline. Note that this stage also returns a collection, for handling by the next step:

<?php

namespace App\Stages;

use App\Lead;
use Illuminate\Support\Collection;

class SaveData
{
    public function __invoke(Collection $data): Collection
    {
        return $data->map(function ($item) {
            $lead = new Lead;
            $lead->name = $item->name;
            $lead->email = $item->email;
            $lead->save();
            return $lead;
        });
    }
}

This step saves each lead as an Eloquent model, and returns a collection of the saved models, which are passed to the final step:

<?php

namespace App\Stages;

use App\Contracts\Services\MailingList;
use Illuminate\Support\Collection;

class AddDataToList
{
    protected $list;

    public function __construct(MailingList $list)
    {
        $this->list = $list;
    }

    public function __invoke(Collection $data)
    {
        return $data->each(function ($item) {
            $this->list->add([
                'name' => $item->name,
                'email' => $item->email
            ]);
        });
    }
}

This step uses a wrapper class for a mailing service, which is passed through as a dependency in the constructor. The __invoke() method then loops through each Eloquent model and uses it to fetch the data, which is then added to the list. With our stages complete, we can now put them together in our controller:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Stages\FormatData;
use App\Stages\SaveData;
use App\Stages\AddDataToList;
use League\Pipeline\Pipeline;
use Illuminate\Support\Collection;

class WebhookController extends Controller
{
    public function store(Request $request, Pipeline $pipeline, FormatData $formatData, SaveData $saveData, AddDataToList $addData)
    {
        try {
            $data = Collection::make($request->get('data'));
            $pipe = $pipeline->pipe($formatData)
                ->pipe($saveData)
                ->pipe($addData);
            $pipe->process($data);
        } catch (\Exception $e) {
            // Handle the exception, e.g. log it and return an error response
        }
    }
}

As mentioned above, we extract the request data (assumed to be an array of data for a webhook), and convert it into a collection. Then, we put together our pipeline. Note that we use dependency injection to fetch the steps - feel free to use method or constructor injection as appropriate. We instantiate our pipeline, and call the pipe() method multiple times to add new stages.

Finally we pass the data through to our pipe for processing by calling the process() method, passing in the initial data. Note that we can wrap the whole thing in a try...catch statement to handle exceptions, so if something happens that would mean we would want to cease processing at that point, we can throw an exception in the stage and handle it outside the pipeline.

This means that our controller is kept very simple. It just gets the data as a collection, then puts the pipeline together and passes the data through. If we subsequently had to write an Artisan task to do something similar from the command line, we could fetch the data via a CSV reader class, and then pass it to the same pipeline. If we needed to change the format of the initial data, we could replace the FormatData class with a single separate class with very little trouble.
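For illustration, such an Artisan command might look something like the sketch below. The ReadCsv and FormatCsvData stages are hypothetical stand-ins (names are mine) for the two new classes that read and format the CSV data; the command name and signature are likewise assumptions:

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use League\Pipeline\Pipeline;
use App\Stages\ReadCsv;        // hypothetical: reads the file into a collection
use App\Stages\FormatCsvData;  // hypothetical: replaces FormatData for CSV rows
use App\Stages\SaveData;
use App\Stages\AddDataToList;

class ImportLeads extends Command
{
    protected $signature = 'leads:import {file}';

    protected $description = 'Import leads from a CSV file';

    public function handle(Pipeline $pipeline, ReadCsv $readCsv, FormatCsvData $formatData, SaveData $saveData, AddDataToList $addData)
    {
        // Only the first two stages differ from the webhook pipeline
        $pipeline->pipe($readCsv)
            ->pipe($formatData)
            ->pipe($saveData)
            ->pipe($addData)
            ->process($this->argument('file'));
    }
}

The SaveData and AddDataToList stages are reused untouched; the command itself is little more than glue.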

Another thing you can do with the League pipeline package, though I haven’t yet had occasion to try it, is use League\Pipeline\PipelineBuilder to build pipelines in a more dynamic fashion. You can make steps conditional, as in this example:

<?php

use League\Pipeline\PipelineBuilder;

$builder = (new PipelineBuilder)
    ->add(new FormatData);

if ($data['type'] == 'foo') {
    $builder->add(new HandleFooType);
}

$builder->add(new SaveData);

$pipeline = $builder->build();
$pipeline->process($data);

The pipeline pattern isn’t appropriate for every situation, but for anything that involves a set of operations on the same data, it makes a lot of sense, and can make it easy to break larger operations into smaller steps that are easier to understand, test, and re-use.

3rd October 2018 11:07 pm

Replacing Switch Statements With Polymorphism in PHP

For the last few months, I’ve been making a point of picking up on certain antipatterns, and ways to avoid or remove them. One I’ve seen a lot recently is the unnecessarily large switch-case or if-else statement. For instance, here is a simplified example of one of these, which renders links to different objects:

<?php

switch ($item->getType()) {
    case 'audio':
        $media = new stdClass;
        $media->type = 'audio';
        $media->duration = $item->getLength();
        $media->name = $item->getName();
        $media->url = $item->getUrl();
        break;
    case 'video':
        $media = new stdClass;
        $media->type = 'video';
        $media->duration = $item->getVideoLength();
        $media->name = $item->getTitle();
        $media->url = $item->getUrl();
        break;
}

return '<a href="'.$media->url.'" class="'.$media->type.'" data-duration="'.$media->duration.'">'.$media->name.'</a>';

There are a number of problems with this, most notably the fact that it’s doing a lot of work to try and create a new set of objects that behave consistently. Instead, your objects should be polymorphic - in other words, you should be able to treat the original objects the same.

While strictly speaking you don’t need one, it’s a good idea to create an interface that defines the required methods. That way, you can have those objects implement that interface, and be certain that they have all the required methods:

<?php

namespace App\Contracts;

interface MediaItem
{
    public function getLength(): int;
    public function getName(): string;
    public function getType(): string;
    public function getUrl(): string;
}

Then, you need to implement that interface in your objects. It doesn’t matter if the implementations are different, as long as the methods exist. That way, objects can define how they return a particular value, which is simpler and more logical than defining it in a large switch-case statement elsewhere. It also helps to prevent duplication. Here’s what the audio object might look like:

<?php

namespace App\Models;

use App\Contracts\MediaItem;

class Audio implements MediaItem
{
    public function getLength(): int
    {
        return $this->length;
    }

    public function getName(): string
    {
        return $this->name;
    }

    public function getType(): string
    {
        return $this->type;
    }

    public function getUrl(): string
    {
        return $this->url;
    }
}

And here’s a similar example of the video object:

<?php

namespace App\Models;

use App\Contracts\MediaItem;

class Video implements MediaItem
{
    public function getLength(): int
    {
        return $this->getVideoLength();
    }

    public function getName(): string
    {
        return $this->getTitle();
    }

    public function getType(): string
    {
        return $this->type;
    }

    public function getUrl(): string
    {
        return $this->url;
    }
}

With that done, the code to render the links can be greatly simplified:

<?php

return '<a href="'.$item->getUrl().'" class="'.$item->getType().'" data-duration="'.$item->getLength().'">'.$item->getName().'</a>';

Because we can use the exact same methods and get consistent responses, yet also allow for the different implementations within the objects, this approach allows for much more elegant and readable code. Different objects can be treated in the same way without the need for writing extensive if or switch statements.
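To make the extensibility concrete, here’s a sketch of a small helper that can render any MediaItem, assuming the interface and classes above:

<?php

use App\Contracts\MediaItem;

function renderLink(MediaItem $item): string
{
    // The type hint guarantees these methods exist, whatever the concrete class
    return '<a href="'.$item->getUrl().'" class="'.$item->getType().'" data-duration="'.$item->getLength().'">'.$item->getName().'</a>';
}

Adding a new media type then means adding a class that implements MediaItem, rather than another branch in a switch statement.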

I haven’t had the occasion to do so, but in theory this approach is applicable in other languages, such as Javascript or Python (although those languages don’t have formal interfaces in the same way). Since learning to recognise the switch statement antipattern and how to replace it with polymorphism, I’ve been able to remove a lot of overly complex code.

25th September 2018 10:03 pm

Career Direction After Seven Years

Earlier this month, I passed the seven year anniversary of starting my first web dev job. That job never really worked out, for various reasons, but since then I’ve had an interesting time of it. I’ve diversified into app development via Phonegap, and I’ve worked with frameworks that didn’t exist when I first started. So it seems a good opportunity to take stock and think about where I want to head next.

Sometimes these posts are where someone announces they’re leaving their current role, but that’s not the case here - I’m pretty happy where I am right now. I am maintaining a legacy project, but I do feel like I’m making a difference and it’s slowly becoming more pleasant to work with, and I’m learning a lot about applying design patterns, so I think where I am right now is a good place for me. However, it’s a useful exercise to think about what I want to do, where I want to concentrate my efforts, and what I want to learn about.

So, here are my thoughts about where I want to go in future:

  • I really enjoy working with React, and I want to do so much more than I have in the past, possibly including React Native. Ditto with Redux.
  • Much as I love Django, it’s unlikely I’ll be using it again in the future, as it’s simply not in much demand where I live. In 2015, I was working at a small agency with a dev team of three, including me, and it became apparent that we needed to standardise on a single framework. I’d been using CodeIgniter on and off for several years, but it was tired and dated, yet I couldn’t justify using Django because no-one else was familiar with Python, so we settled on Laravel. Ever since, Laravel has been my go-to framework - Django does some things better (Django REST Framework remains the best way I’ve ever found to create a REST API), but Laravel does enough stuff well enough that I can use it for most things I need, so it’s a good default option.
  • I really don’t want to work with Wordpress often, and if I do, I’d feel a lot better about it if I used Bedrock. Just churning out boilerplate sites is anathema to me - I’d much rather do something more interesting, even if it were paid worse.
  • PHP is actually pretty nice these days (as long as you’re not dealing with a legacy application), and I generally don’t mind working with it, as long as it’s fairly modern.
  • I enjoy mentoring and coaching others, and I’d like to do that a lot more often than I have been doing. Mentoring and coaching is a big part of being a senior developer, since a good mentor can quickly bring inexperienced developers up to a much higher standard, and hugely reduces the amount of horrible legacy code that needs to be maintained. I was without an experienced mentor for much of my career, and in retrospect it held me back - having someone around to teach me about TDD and design patterns earlier would have helped no end. Also, I find it the single most rewarding part of my job.
  • I have absolutely no desire whatsoever to go into management, or leave coding behind in any way, shape or form. I’ve heard it said before that Microsoft have two separate career tracks for developers, one through people management, the other into a software architect role, and were I there, I would definitely opt for the latter.
  • I’m now less interested in learning new frameworks or languages than I am in picking up and applying new design patterns, and avoiding antipatterns - they’re the best way to improve your code quality. I’ve learned the hard way that the hallmark of a skilled developer’s code is not the complexity, but the simplicity - I can now recognise the convoluted code I wrote earlier in my career as painful to maintain, and can identify it in legacy projects.
  • I’ve always used linters and other code quality tools, and I’m eager to evangelise their usage.
  • I’ve been a proponent of TDD for several years now, and that’s going to continue - I’ve not only seen how many things it catches when you have tests, but also how painful it is when you have a large legacy project with no tests at all, and I’m absolutely staggered that anyone ever continues to write non-trivial production code without any sort of tests.
  • I want to understand the frameworks I use at a deeper level - it’s all too easy to just treat them as magic, when there are huge benefits to understanding how your framework works under the bonnet, and how to swap out the framework’s functionality for alternative implementations.
  • I’d like to get involved in more IoT-related projects - I guess the 3 Raspberry Pis and the Arduino I have gathering dust at home need to get some more use…
  • Chat interfaces are interesting - I built an Alexa skill recently, which was fun and useful, and I’d like to do stuff like that more often.

So, after seven years, that’s where I see myself going in future. I think I’m in a good place to do that right now, and I’ll probably stay where I am for a good long while yet. The first seven years of my web dev career have been interesting, and I’m eager to see what the next seven bring.
