Mistakes I Made Deploying Laravel on a Cheap VPS

When I started deploying Laravel applications on budget VPS instances, I thought I knew what I was doing. I had developed locally, tested everything, and was ready to ship. What followed was a series of hard lessons that I want to share so you can avoid the same pitfalls.

The Queue Worker That Silently Died

My first major issue was with Laravel's queue system. Locally, everything worked perfectly. In production, jobs would just... disappear. No errors, no logs, nothing.

The problem? I was running the queue worker with php artisan queue:work in a basic screen session. When the server ran low on memory, the OOM killer would terminate my queue worker, and I'd have no idea until users complained.
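Before reaching for a fix, it's worth confirming the OOM killer is actually the culprit. On a live server you'd grep the kernel log with sudo dmesg -T or journalctl -k; the sample line below is an illustrative stand-in, but the pattern matches what the kernel actually logs:

```shell
# On a real box: sudo dmesg -T | grep -iE 'out of memory|killed process'
# The sample line stands in for real kernel output so the pattern is visible.
sample='Out of memory: Killed process 1234 (php) total-vm:524288kB, anon-rss:262144kB'
echo "$sample" | grep -icE 'out of memory|killed process'   # prints 1 (one matching line)
```

If that grep turns up your worker's PID, the process didn't crash; the kernel shot it.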

The fix:

# Use supervisor to manage queue workers
sudo apt install supervisor

# /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work --sleep=3 --tries=3 --max-time=3600
user=www-data
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/app/storage/logs/worker.log
stopwaitsecs=3600

Supervisor restarts your workers automatically and gives you real logs. After adding the config, load it with sudo supervisorctl reread && sudo supervisorctl update, and check on the workers with sudo supervisorctl status. On a cheap VPS I also recommend setting --max-time: the worker exits cleanly after an hour and Supervisor starts a fresh one, so slow memory leaks never get the chance to build up.

Cron Jobs That Never Ran

Laravel's task scheduler is elegant—one cron entry to rule them all. Except when that entry doesn't work.

I added the cron job exactly as the docs said:

* * * * * cd /var/www/app && php artisan schedule:run >> /dev/null 2>&1

Nothing happened. The issue? The cron job was running as root, while my Laravel app ran as www-data. File permissions broke, cache directories were inaccessible, and the scheduler failed silently.

The fix:

# Edit the correct user's crontab
sudo crontab -u www-data -e

# Add this line
* * * * * cd /var/www/app && php artisan schedule:run >> /var/www/app/storage/logs/cron.log 2>&1

Always log your cron output during debugging. That >> /dev/null pattern hides problems.
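One detail worth spelling out: in >> file 2>&1 the order matters, and it's why the logging version captures errors that >> /dev/null 2>&1 was hiding. A small sketch of what gets captured (the temp file stands in for storage/logs/cron.log):

```shell
# Simulate a scheduled command that writes to both stdout and stderr,
# using the same redirection as the cron entry above.
log=$(mktemp)
( echo "schedule:run ran"; echo "permission denied" >&2 ) >> "$log" 2>&1
wc -l < "$log" | tr -d ' '   # prints 2: both streams landed in the log
```

With /dev/null in place of "$log", that "permission denied" line simply vanishes, which is exactly what happened to me.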

Memory Limits Everywhere

A 1GB VPS sounds reasonable until you realize:

  • Each PHP-FPM worker can grow to your memory_limit (256MB, as I had it set)
  • MySQL wants its buffer pool
  • Your queue workers need memory too
  • The OS needs some to function

I was running 4 PHP-FPM workers, each capable of using 256MB. Do the math—that's potentially 1GB just for PHP.
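A back-of-the-envelope way to do that math, assuming the rough reservations below (they're illustrative, not measurements; swap in what your server actually uses):

```shell
# Rough sizing for pm.max_children on a 1 GB VPS. The reservations are
# assumptions; measure your own workloads before trusting them.
total_mb=1024
mysql_mb=256        # InnoDB buffer pool + overhead (assumed)
workers_mb=128      # two queue workers (assumed)
os_mb=128           # kernel, sshd, cron, etc. (assumed)
per_fpm_worker=128  # matches a 128M memory_limit
echo $(( (total_mb - mysql_mb - workers_mb - os_mb) / per_fpm_worker ))   # prints 4
```

That arithmetic allows four FPM workers at most; I run three to leave headroom for spikes.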

The fix:

; /etc/php/8.2/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 3
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2
pm.max_requests = 500

; memory_limit belongs in /etc/php/8.2/fpm/php.ini, not the pool file
; (or set it per pool with php_admin_value[memory_limit] = 128M)
memory_limit = 128M

On constrained servers, tune these values aggressively. Use pm.max_requests to recycle workers and prevent memory leaks.
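Rather than guessing per-worker memory, you can measure it. The one-liner below averages the resident set size of running php-fpm processes; RSS counts shared pages, so it somewhat overstates the true per-process cost, and it prints 0 when no workers are running:

```shell
# Average RSS of php-fpm workers in MB (0 when none are running).
ps -C php-fpm -o rss= | awk '{sum += $1; n++} END {print n ? int(sum / n / 1024) : 0}'
```

Plug the measured number into the max_children math instead of a guess.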

No Monitoring Until It's Too Late

For months, I had no visibility into my server's health. I'd find out about problems when users complained or when I couldn't SSH in because the server was frozen.

Minimum viable monitoring:

  1. Server metrics: Set up Netdata or a simple monitoring agent
  2. Application errors: Use Laravel's logging with a service like Sentry or, at minimum, email notifications
  3. Uptime checks: Free services like UptimeRobot or Freshping

// In your exception handler (app/Exceptions/Handler.php)
public function register()
{
    $this->reportable(function (Throwable $e) {
        if (app()->bound('sentry')) {
            app('sentry')->captureException($e);
        }
    });
}
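If you'd rather not sign up for yet another service, even a cron-driven curl beats nothing for uptime checks. A sketch, assuming a working MTA for mail and a health-check route; the URL and email address are placeholders:

```
# /etc/cron.d/app-healthcheck -- placeholders throughout; adapt before use
*/5 * * * * www-data curl -fsS --max-time 10 https://example.com/up > /dev/null || echo "health check failed" | mail -s "app down" you@example.com
```

curl -f makes HTTP error statuses count as failures, so a 500 page triggers the alert just like a dead server does.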

Lessons Learned

Running Laravel on a cheap VPS taught me more about systems administration than any course could. The constraints forced me to understand:

  • How PHP-FPM actually works
  • Why process managers like Supervisor exist
  • The importance of proper logging
  • That monitoring isn't optional, even for small projects

If you're deploying Laravel on budget infrastructure, embrace the constraints. They'll make you a better developer.