For three days in a row GitHub Actions kept emailing me the same failure:

Process completed with exit code 23.

At first it looked like a typical rsync glitch, but the log was very specific:

  • failed to set times
  • mkstemp ... Permission denied
  • failures inside /var/www/zinchuk.online/, and later even inside /tmp/zinchuk.online-dist/

So this was not a flaky network issue. It was permissions, every time.
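
For reference, rsync reserves exit code 23 for "partial transfer due to error". A tiny helper like this (the hint wording is my own; the codes are from rsync's manual) makes the next failure easier to read at a glance:

```shell
#!/bin/sh
# Map the rsync exit codes that showed up in this incident to hints.
# Codes come from rsync(1); the hint text is just my interpretation.
explain_rsync_rc() {
  case "$1" in
    0)  echo "success" ;;
    23) echo "partial transfer due to error (often permissions)" ;;
    30) echo "timeout in data send/receive (likely network)" ;;
    *)  echo "other failure (code $1)" ;;
  esac
}

explain_rsync_rc 23   # -> partial transfer due to error (often permissions)
```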

What was actually happening

The deploy pipeline was doing the right thing in theory:

  1. Build the site.
  2. rsync to /tmp/zinchuk.online-dist/.
  3. sudo rsync into /var/www/zinchuk.online/.
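
In shell terms, those three steps looked roughly like this. The build command is an assumption; DRY_RUN=1 (the default here) only prints each command, so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Sketch of the deploy steps. DRY_RUN=1 prints commands instead of running
# them; set DRY_RUN=0 on a real runner to execute for real.
set -eu
: "${DRY_RUN:=1}"

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run npm run build                                                              # 1. build (assumed command)
run rsync -a --delete dist/ /tmp/zinchuk.online-dist/                          # 2. stage in /tmp
run sudo rsync -a --delete /tmp/zinchuk.online-dist/ /var/www/zinchuk.online/  # 3. publish
```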

But I had two subtle problems:

  • /var/www/zinchuk.online was not always owned by zinchuk.
  • /tmp/zinchuk.online-dist sometimes ended up owned by root after a manual run.

Once either directory flipped ownership, the scheduled job started failing every morning.
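
A one-line ownership check would have caught the flip immediately. A sketch, using GNU stat and demonstrated on a throwaway directory rather than the real paths:

```shell
#!/bin/sh
# Check that a directory is owned by the user we expect. On the server the
# arguments would be /var/www/zinchuk.online and zinchuk; here we demo on a
# temp directory we just created, which is guaranteed to be ours.
set -eu

dir_owned_by() {
  [ "$(stat -c '%U' "$1")" = "$2" ]
}

demo=$(mktemp -d)
if dir_owned_by "$demo" "$(id -un)"; then
  echo "ownership ok"
fi
```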

Why CI could not fix it by itself

GitHub Actions runs each step in a non-interactive shell, so sudo has no terminal to prompt on. That means:

  • sudo cannot prompt for a password
  • any command that needs a password fails, no matter how correct that password is

So my attempt to “just fix the permissions in CI” was always going to fail.
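
You can reproduce what the runner sees with sudo's -n flag, which makes sudo fail instead of prompting. The probe is safe to run anywhere; which branch you land in depends on the machine:

```shell
#!/bin/sh
# -n ("non-interactive") tells sudo to fail rather than ask for a password,
# which is exactly the behaviour a CI runner gets.
if sudo -n true 2>/dev/null; then
  echo "passwordless sudo available: CI would be fine"
else
  echo "sudo would prompt: this is where CI dies"
fi
```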

The fix that finally stuck

I created a tiny deploy script on the server and allowed it via sudoers with NOPASSWD, but only for that one script.
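
The sudoers entry for that looks something like this. The filename is my guess; the rule itself is the standard one-command NOPASSWD form, and it should be validated with visudo -cf before installing:

```
# /etc/sudoers.d/deploy-zinchuk-online   (hypothetical filename)
# Let the deploy user run exactly one script as root, with no password.
zinchuk ALL=(root) NOPASSWD: /usr/local/bin/deploy_zinchuk_online.sh
```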

Then CI does this:

  • sudo /usr/local/bin/deploy_zinchuk_online.sh prepare
  • rsync into /tmp/zinchuk.online-dist/
  • sudo /usr/local/bin/deploy_zinchuk_online.sh publish

This keeps ownership stable and makes the scheduled deploys predictable.
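
The script itself can stay tiny. The real one lives on the server; this sketch reconstructs it from the steps above, with the dispatch written as a function so it is easy to see. The paths and owner are from this post, but the implementation details are assumptions:

```shell
#!/bin/sh
# Sketch of /usr/local/bin/deploy_zinchuk_online.sh. Runs as root via sudo.
set -eu

STAGE=/tmp/zinchuk.online-dist
DOCROOT=/var/www/zinchuk.online
OWNER=zinchuk

deploy() {
  case "${1:-}" in
    prepare)
      # Make sure both directories exist and are owned by the deploy user,
      # so the unprivileged rsync into $STAGE cannot hit "Permission denied".
      install -d -o "$OWNER" -g "$OWNER" "$STAGE" "$DOCROOT"
      ;;
    publish)
      # Copy the staged build into the docroot, keeping ownership stable.
      rsync -a --delete --chown="$OWNER:$OWNER" "$STAGE"/ "$DOCROOT"/
      ;;
    *)
      echo "usage: deploy_zinchuk_online.sh prepare|publish" >&2
      return 64
      ;;
  esac
}

# On the server, the script body would end with just:  deploy "$1"
```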

I also added a simple health check to the workflow so failures are obvious immediately:

curl -fsS https://www.zinchuk.online/ > /dev/null
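
If the very first request after publish is flaky (cold caches, a slow reload), a small retry wrapper keeps the check honest without hiding real failures. A sketch; the attempt count and delay are arbitrary:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, pausing between attempts;
# fail only if every attempt fails.
retry() {
  attempts=$1; shift
  i=1
  while ! "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    i=$((i + 1))
    sleep 2
  done
}

# In the workflow:  retry 5 curl -fsS https://www.zinchuk.online/ > /dev/null
```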

One small side note from this incident: I decided this post does not need a hero image, so I updated the blog schema to allow no image when a post is better as pure text.

What I learned

  • Exit code 23 is boring but precise: it is rsync's "partial transfer due to error", and in practice it usually means permissions.
  • If you run manual deploys, you can accidentally poison the next CI run.
  • A tiny server-side script with strict sudoers is sometimes safer than trying to do everything in CI.

If the next few scheduled runs are green, I will consider this one closed.