A while back I provided details of the web server setup I used for Django applications. Nowadays I tend to use Laravel most of the time, so I thought I'd share an example of the sort of setup I use to deploy that.
As before I generally prefer Debian Stable where possible. If that's not possible for any reason then the current Ubuntu LTS is an acceptable substitute.
My usual web server these days is Nginx with PHP 7 or better via FPM. I generally use HTTP/2 where possible, with SSL via Let's Encrypt.
Here's my typical Nginx config:
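A minimal sketch of the sort of server block described - the domain, document root, cache zone sizing, and socket path here are assumptions, and the Let's Encrypt certificate paths assume certbot defaults:

```nginx
# Hypothetical FastCGI cache zone - path and sizing are assumptions
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=myapp:10m inactive=60m;

server {
    listen 443 ssl http2;
    server_name my-app.domain;

    # Let's Encrypt certificates (paths assume certbot defaults)
    ssl_certificate /etc/letsencrypt/live/my-app.domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my-app.domain/privkey.pem;

    # Laravel serves from the public/ directory
    root /var/www/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm-my-app.sock;

        # FastCGI caching - appropriate durations vary per application
        fastcgi_cache myapp;
        fastcgi_cache_valid 200 5m;
    }
}
```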
The times for FastCGI caching tend to vary in practice - for some responses it's not appropriate to cache at all, while others can be cached for some time.
It's generally fairly safe to cache CSS and JS for a long time with a Laravel app if you're using Mix to version those assets, so I feel comfortable caching them for a year. Images are a bit dicier, but still don't change often so a month seems good enough.
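In the Nginx config, that caching policy might look something like the following - the exact location patterns are an assumption:

```nginx
# Versioned CSS/JS from Laravel Mix - safe to cache for a year
location ~* \.(?:css|js)$ {
    expires 1y;
    add_header Cache-Control "public";
}

# Images change rarely, so a month is a reasonable compromise
location ~* \.(?:jpg|jpeg|png|gif|svg|webp)$ {
    expires 1M;
    add_header Cache-Control "public";
}
```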
I'll typically give each application its own pool, which means copying the file at /etc/php/7.0/fpm/pool.d/www.conf to another file in the same directory, amending the pool name and the socket path, pointing the Nginx config at the new socket, and then restarting Nginx and PHP-FPM. Here are the fields that should be changed:
```ini
; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[my-app.domain]
...
listen = /var/run/php/php7.0-fpm-my-app.sock
```
I'm a fan of PostgreSQL - it's stricter than MySQL/MariaDB and has some very useful additional field types, so I prefer it where possible.
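One of those field types, jsonb, is exposed directly in Laravel's schema builder; a sketch, with a hypothetical table and column:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddMetadataToOrdersTable extends Migration
{
    public function up()
    {
        Schema::table('orders', function (Blueprint $table) {
            // jsonb is PostgreSQL-specific and supports indexing,
            // unlike plain text columns holding serialized JSON
            $table->jsonb('metadata')->nullable();
        });
    }

    public function down()
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropColumn('metadata');
        });
    }
}
```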
Cache and session backend
Redis is my usual choice here - I make heavy use of cache tags so I need a backend for the cache that supports them, and Memcached doesn't seem to have as much momentum as Redis these days. Neither needs much in the way of configuration, but you can get a slight speed boost by using phpiredis.
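Cache tags in Laravel look something like this (the tag and key names are hypothetical):

```php
<?php

use Illuminate\Support\Facades\Cache;

// Store a value under one or more tags
Cache::tags(['users', 'profiles'])->put('user:1:profile', $profile, 3600);

// Retrieve it via the same tags
$profile = Cache::tags(['users', 'profiles'])->get('user:1:profile');

// Invalidate everything under a tag in one go - this is what the
// file and database cache drivers can't do, hence Redis here
Cache::tags(['users'])->flush();
```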
I sometimes use Redis for this too, but it can be problematic if you're using Redis as the queue and broadcast backend, so these days I'm more likely to use Beanstalk and keep Redis for other stuff. I use Supervisor for running the queue worker, and this is an example of the sort of configuration I would use:
```ini
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=8
redirect_stderr=true
stdout_logfile=/var/log/worker.log
```
This is fairly standard for Laravel applications.
I often make use of the Laravel scheduled tasks system. Here's the typical cron job that would be used for that:
```
* * * * * php /var/www/artisan schedule:run >> /dev/null 2>&1
```
Again, this is standard for Laravel applications. It runs the scheduler every minute, and the scheduler then determines if it needs to do something.
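The tasks themselves are defined in the application's console kernel. A sketch, with hypothetical command names:

```php
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // Hypothetical command - prune stale sessions nightly
        $schedule->command('sessions:prune')->daily();

        // The cron entry fires the scheduler every minute, but each
        // task only runs when its own constraint matches
        $schedule->command('emails:send-digest')->weeklyOn(1, '8:00');
    }
}
```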
To set all this up, I'll generally use Ansible. In addition to this, I'll generally also set up fail2ban to block various attacks via both HTTP and SSH.