Render Engine Rewrite

From serial bottlenecks to parallel job types

I rewrote a production render pipeline used for post-event video delivery. The legacy flow forced users to wait for every processing step to finish before a video could be marked ready. The rewrite split critical and non-critical work into separate job paths, cut time-to-ready from over 20 minutes to under two, and moved the runtime from a Windows service to Dockerized workers.

Key Metrics

Render time before

20+ min

Render time after

< 2 min

Deployment model

Windows service -> Docker

Before vs After

Before

Synchronous
  • Download raw sources from live stream
  • Normalize and stitch multiple files
  • Generate intro slate (if selected)
  • Compress for player delivery
  • Upload player version to storage
  • Compress for archive storage
  • Upload archive version
  • Run copyright detection on audio
  • Extract and store flagged segments

-> Video marked ready only after all 9 steps

After

Parallel job types

Standard render (critical path)

  • Download and stitch sources
  • Normalize audio and bitrates
  • Generate intro slate (if selected)
  • Compress and upload player version

-> Video marked ready here

Archive render (background)

  • Compress and upload archive version
  • Copyright audio detection
  • Extract and store flagged segments
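The split above can be sketched as a critical-path handler that marks the video ready as soon as the player version is uploaded, then hands the remaining work to a background queue. This is an illustrative Python sketch, not the production C#/.NET code; the step functions and the in-memory queue stand in for the real FFmpeg-backed steps and message broker.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    id: str
    ready: bool = False

@dataclass
class JobQueue:
    """In-memory stand-in for the real message queue."""
    jobs: list = field(default_factory=list)

    def enqueue(self, job_type: str, video: "Video") -> None:
        self.jobs.append((job_type, video.id))

# Stubs for the actual FFmpeg-backed processing steps.
def download_and_stitch(video): pass
def normalize(video): pass
def compress_and_upload_player(video): pass

def standard_render(video: Video, queue: JobQueue) -> None:
    # Critical path: only the work viewers are waiting on.
    download_and_stitch(video)
    normalize(video)
    compress_and_upload_player(video)
    video.ready = True                  # the "ready to watch" milestone
    queue.enqueue("archive", video)     # archive + compliance work is deferred
```

The key property is that `ready` flips before archive compression or copyright detection ever runs, so those steps can no longer delay delivery.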

System Flow

Diagram: render workflow from the live stream and queue into the Dockerized render engine, then a split into the standard critical path and deferred archive rendering.
The ingest path stays unified through queueing and worker orchestration, then execution splits by urgency: fast player readiness on the critical path and deferred archive/compliance processing in the background.

Key Decisions

Split on urgency, not process type

Archive compression and copyright detection are valuable, but they do not share the same deadline as player readiness. Reframing work by urgency removed non-critical tasks from the blocking path and made the "ready to watch" milestone both faster and more predictable.

Treat job types as an extension point

I introduced an explicit job model so new render variants could be added without changing core orchestration. That abstraction later supported ISO and audio-only outputs while preserving the existing pipeline contracts.
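One way to sketch that extension point (again illustrative Python, not the production C# job model): each job type registers a handler against a name, so adding an ISO or audio-only variant means writing one new handler while the dispatch logic stays untouched.

```python
from typing import Callable, Dict

# Registry mapping job-type names to render handlers.
JOB_HANDLERS: Dict[str, Callable[[str], str]] = {}

def job_type(name: str):
    """Register a render handler under a job-type name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        JOB_HANDLERS[name] = fn
        return fn
    return register

@job_type("standard")
def render_standard(video_id: str) -> str:
    return f"player output for {video_id}"

@job_type("audio-only")
def render_audio_only(video_id: str) -> str:
    return f"audio track for {video_id}"

def dispatch(name: str, video_id: str) -> str:
    # Core orchestration never changes when a new job type is added.
    return JOB_HANDLERS[name](video_id)
```

New variants plug into the registry without touching `dispatch`, which is what preserved the existing pipeline contracts.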

Docker over Windows service runtime

Moving FFmpeg and codec dependencies into the container image aligned dev, staging, and production behavior. Operations shifted from VM-level provisioning to container-level scaling, with cleaner rollback and better incident recovery characteristics.
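A worker image along those lines might look like the following sketch. The base images are real Microsoft .NET images, but the project name `RenderWorker` and the overall layout are assumptions, not the production Dockerfile.

```dockerfile
# Build the .NET worker, then ship it with FFmpeg baked into the image.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish RenderWorker -c Release -o /app

FROM mcr.microsoft.com/dotnet/runtime:8.0
# Codec dependencies live in the image, not on the host VM,
# so dev, staging, and production run identical binaries.
RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /app /app
ENTRYPOINT ["dotnet", "/app/RenderWorker.dll"]
```

Because FFmpeg ships inside the image, a rollback is just redeploying the previous tag rather than reprovisioning a VM.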

C# / .NET · FFmpeg · Docker · Azure Blob Storage · HLS · NVENC · Worker Service