Building an FFmpeg Processing Service with Go and Redis
Published January 19th, 2026
I’ve been working on a side project that stitches together videos from multiple rendering APIs. Think of it as a pipeline where different services generate clips, and I need something to combine them all into a final output and upload it somewhere.
At first, the FFmpeg logic lived inside the main project. But as I added more rendering sources, the code got messy. So I did what any reasonable person would do — I pulled it out into its own service.
This post is about that service: a queue-backed FFmpeg processor written in Go. It handles long-running jobs, retries failures, and has a neat adapter system for fetching and storing files from different places.
Why a separate service?
The main project calls out to various video APIs. Some return URLs, some upload directly to storage. I needed a single place to say “here are some input files, run FFmpeg, put the result here.” Having that logic separate means:
- The main app stays focused on orchestration
- I can scale FFmpeg workers independently
- Other projects can use the same service
Plus, I get to keep experimenting with Go, which I’m enjoying a lot lately.
The adapter system
One of the bits I’m most happy with is the adapter system. It uses URL protocols to figure out where files come from and where they go.
Want to fetch a video from a public URL? Just pass an http:// or https:// URL. Need to pull from Bunny storage? Use storage://zone-name/path/to/file.mp4. Uploading to Bunny Stream? stream://library-id/video-id. S3? s3://bucket/key.
The service looks at the protocol and picks the right adapter automatically. Adding a new storage backend is just implementing a simple interface. No config changes, no special flags — just URLs.
input: https://example.com/clip1.mp4
input: storage://my-zone/clips/intro.mp4
output: stream://library-123/final-video
This keeps the API dead simple. You don’t need to know how authentication works for each provider. The service handles it.
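To make that concrete, here's a minimal sketch of what a scheme-keyed adapter registry can look like in Go. The Adapter interface and the Register and For functions are names I'm using for illustration, not the service's actual API:

// Illustrative sketch, not the service's real code.
// Assumes imports: context, fmt, io, net/url.
type Adapter interface {
    Fetch(ctx context.Context, u *url.URL) (io.ReadCloser, error)
    Store(ctx context.Context, u *url.URL, r io.Reader) error
}

// registry maps a URL scheme ("https", "storage", "stream", "s3") to an adapter.
var registry = map[string]Adapter{}

func Register(scheme string, a Adapter) { registry[scheme] = a }

// For parses the URL and returns the adapter for its scheme.
func For(raw string) (Adapter, *url.URL, error) {
    u, err := url.Parse(raw)
    if err != nil {
        return nil, nil, err
    }
    a, ok := registry[u.Scheme]
    if !ok {
        return nil, nil, fmt.Errorf("no adapter registered for scheme %q", u.Scheme)
    }
    return a, u, nil
}

Registering a new backend is then a one-liner at startup, something like Register("s3", s3Adapter{}), where s3Adapter is whatever type implements the interface. Nothing else in the service has to change.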
The architecture
Four components, each doing one thing:
API --> Redis --> Worker --> Webhooks
API — Accepts FFmpeg commands via HTTP, validates them, queues them for processing.
Redis — Message broker and result store, using asynq for the queue.
Worker — Pulls jobs, fetches inputs via adapters, runs FFmpeg, uploads results via adapters.
Webhooks — Delivers callbacks when jobs complete, with retries.
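To make the worker's job concrete, here's roughly the shape a job handler could take. This is a sketch: FFmpegCommand, fetchInputs, buildArgs, and storeOutput are placeholder names, not the service's real ones.

// Sketch of a worker job; assumes imports: context, encoding/json, os/exec,
// github.com/hibiken/asynq. All names below are placeholders.
func handleFFmpegRun(ctx context.Context, t *asynq.Task) error {
    var cmd FFmpegCommand
    if err := json.Unmarshal(t.Payload(), &cmd); err != nil {
        return err
    }

    // 1. Fetch each input through the adapter matching its URL scheme.
    localInputs, err := fetchInputs(ctx, cmd.Inputs)
    if err != nil {
        return err
    }

    // 2. Run FFmpeg against the local copies.
    if err := exec.CommandContext(ctx, "ffmpeg", buildArgs(cmd, localInputs)...).Run(); err != nil {
        return err
    }

    // 3. Push the result through the output adapter, then report status.
    return storeOutput(ctx, cmd.Output)
}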
Async by default
Video processing is slow. A 30-second clip might take minutes to encode. You can’t hold an HTTP connection open that long.
So the API returns immediately with a job ID:
{
  "command_id": "abc123",
  "status": "PENDING"
}
Clients can poll for status or register a webhook to get notified when it’s done.
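In handler terms that's just decode, enqueue, respond. A sketch of the shape, assuming asynq; CommandRequest, asynqClient, and the "ffmpeg:run" task name are placeholders, and whether the command ID comes from asynq or is generated separately doesn't matter for the pattern:

// Sketch only; assumes imports: encoding/json, net/http, github.com/hibiken/asynq.
func handleCommand(w http.ResponseWriter, r *http.Request) {
    var req CommandRequest
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, "invalid request", http.StatusBadRequest)
        return
    }

    payload, _ := json.Marshal(req)
    info, err := asynqClient.Enqueue(asynq.NewTask("ffmpeg:run", payload))
    if err != nil {
        http.Error(w, "failed to queue job", http.StatusInternalServerError)
        return
    }

    // Respond immediately; the actual work happens in the worker.
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusAccepted)
    json.NewEncoder(w).Encode(map[string]string{
        "command_id": info.ID,
        "status":     "PENDING",
    })
}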
Why Redis and asynq?
I went with Redis over RabbitMQ or Kafka because I didn’t need the complexity. For this workload, Redis is more than enough.
asynq gives me a clean API for queuing tasks, plus a CLI and web UI to inspect what’s happening. Jobs persist to disk with Redis AOF, so they survive restarts. If I ever outgrow Redis, the abstraction makes swapping it out straightforward.
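On the worker side, the asynq wiring is pleasantly small. Something like this, with the Redis address and task name as assumptions:

// Sketch of the FFmpeg worker wiring; assumes imports: log, github.com/hibiken/asynq.
srv := asynq.NewServer(
    asynq.RedisClientOpt{Addr: "localhost:6379"},
    asynq.Config{Concurrency: 2}, // the CONCURRENCY setting from the config section below
)

mux := asynq.NewServeMux()
mux.HandleFunc("ffmpeg:run", handleFFmpegRun) // the job handler sketched earlier

if err := srv.Run(mux); err != nil {
    log.Fatal(err)
}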
The webhook problem
My first webhook implementation was naive:
if req.Webhook != "" {
    go sendWebhook(req.Webhook, commandID, &result)
}
Fire and forget. If the endpoint is down, the notification is gone. If the service crashes, gone. No retries, no visibility.
That’s bad. The video already encoded successfully — we just failed to tell anyone. Having to re-encode because a webhook failed is wasteful and frustrating.
Webhooks as a queue
Now webhooks go through their own queue:
task := asynq.NewTask("webhook:deliver", payload)
client.Enqueue(task,
    asynq.MaxRetry(5),
    asynq.Queue("webhooks"),
    asynq.Retention(72*time.Hour),
)
A dedicated service handles delivery with retries and exponential backoff. After 5 failures, it moves to the archived state where I can inspect and retry manually.
func handleWebhookDeliver(ctx context.Context, t *asynq.Task) error {
    // req is built from the task payload (construction elided here)
    resp, err := httpClient.Do(req)
    if err != nil {
        // returning an error tells asynq to retry the task
        return fmt.Errorf("webhook request failed: %w", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 300 {
        return fmt.Errorf("webhook failed with status %d", resp.StatusCode)
    }
    return nil
}
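Wiring that handler into its own worker is a few more lines; the delivery logic stays small because asynq owns the retry schedule. Again a sketch, with the Redis address and concurrency as assumptions:

// Sketch: a worker that only consumes the "webhooks" queue. asynq's default
// retry policy is exponential backoff; once MaxRetry is exhausted the task
// lands in the archived state mentioned above.
webhookSrv := asynq.NewServer(
    asynq.RedisClientOpt{Addr: "localhost:6379"},
    asynq.Config{
        Concurrency: 5, // assumption; delivering webhooks is cheap compared to encoding
        Queues:      map[string]int{"webhooks": 1},
    },
)

mux := asynq.NewServeMux()
mux.HandleFunc("webhook:deliver", handleWebhookDeliver)

if err := webhookSrv.Run(mux); err != nil {
    log.Fatal(err)
}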
Configuration
Everything is configurable via environment variables:
| Variable | Default | Purpose |
|---|---|---|
| CONCURRENCY | 2 | Parallel FFmpeg jobs |
| TASK_TIMEOUT_MINUTES | 30 | Max time per job |
| WEBHOOK_MAX_RETRY | 5 | Delivery attempts |
| HTTP_TIMEOUT | 10 | Webhook request timeout |
Running on a beefy machine? Crank up CONCURRENCY. Webhook endpoints flaky? Increase WEBHOOK_MAX_RETRY.
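Loading these is nothing fancy; a small helper with a default does the job. A sketch (I'm assuming HTTP_TIMEOUT is in seconds):

// Sketch of env-based config; assumes imports: os, strconv, time.
func envInt(key string, def int) int {
    if v := os.Getenv(key); v != "" {
        if n, err := strconv.Atoi(v); err == nil {
            return n
        }
    }
    return def
}

var (
    concurrency     = envInt("CONCURRENCY", 2)
    taskTimeout     = time.Duration(envInt("TASK_TIMEOUT_MINUTES", 30)) * time.Minute
    webhookMaxRetry = envInt("WEBHOOK_MAX_RETRY", 5)
    httpTimeout     = time.Duration(envInt("HTTP_TIMEOUT", 10)) * time.Second // assuming seconds
)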
Trade-offs
Nothing is free:
- Complexity — Four services instead of one.
- Latency — Queue-based processing adds a few milliseconds.
- Redis dependency — If Redis dies, everything stops.
For video files that take seconds to minutes to process, these trade-offs are easy to accept.
What I like about this setup
The adapter system is the part I keep coming back to. It makes the service feel like a tool I can point at anything. HTTP, S3, bunny.net, whatever — just give it URLs and it figures out the rest.
The queue-based architecture isn’t specific to FFmpeg either. Any long-running operation benefits from this pattern: accept work, return immediately, process async, deliver notifications reliably.
I’ll probably write more about the adapters in a future post. For now, I’m just happy to have a service I can throw video processing at without worrying about the details.