Worker services are background processing services that don’t expose HTTP endpoints. They’re ideal for queue consumers, background jobs, and long-running processes. This is a complete reference for all fields that can be set for a worker service in porter.yaml.

Field Reference

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Service identifier (max 31 chars) |
| type | string | Yes | Must be worker |
| run | string | Yes | Command to execute |
| cpuCores | number | Yes | CPU allocation |
| ramMegabytes | integer | Yes | Memory allocation in MB |
| instances | integer | No | Number of replicas (default: 1) |
| autoscaling | object | No | Autoscaling configuration |
| healthCheck | object | No | Combined health check config |
| livenessCheck | object | No | Liveness probe config |
| readinessCheck | object | No | Readiness probe config |
| startupCheck | object | No | Startup probe config |
| connections | array | No | External cloud connections |
| serviceMeshEnabled | boolean | No | Enable service mesh |
| terminationGracePeriodSeconds | integer | No | Graceful shutdown timeout |
| gpuCoresNvidia | integer | No | NVIDIA GPU cores |
| nodeGroup | string | No | Node group UUID |

Basic Example

services:
  - name: queue-worker
    type: worker
    run: npm run worker
    cpuCores: 0.5
    ramMegabytes: 512
    instances: 3

autoscaling

Type: object - Optional

Configure horizontal pod autoscaling based on CPU and memory utilization.

| Field | Type | Description |
|---|---|---|
| enabled | boolean | Enable autoscaling |
| minInstances | integer | Minimum number of replicas |
| maxInstances | integer | Maximum number of replicas |
| cpuThresholdPercent | integer | CPU usage threshold (0-100) |
| memoryThresholdPercent | integer | Memory usage threshold (0-100) |

autoscaling:
  enabled: true
  minInstances: 1
  maxInstances: 10
  cpuThresholdPercent: 80
  memoryThresholdPercent: 80
When autoscaling is enabled, the instances field is ignored.
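For example, in the following sketch (illustrative values only, reusing the fields shown above), the worker always scales between 2 and 8 replicas based on utilization; setting instances on this service would have no effect once autoscaling is enabled:

services:
  - name: queue-worker
    type: worker
    run: npm run worker
    cpuCores: 0.5
    ramMegabytes: 512
    # instances is omitted; with autoscaling enabled it would be ignored anyway
    autoscaling:
      enabled: true
      minInstances: 2
      maxInstances: 8
      cpuThresholdPercent: 75
      memoryThresholdPercent: 80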

healthCheck

Type: object - Optional

Configure a combined health check that applies to liveness, readiness, and startup probes. Worker services use command-based health checks since they don’t expose HTTP endpoints.

| Field | Type | Description |
|---|---|---|
| enabled | boolean | Enable health checks |
| command | string | Command to run for health check |
| timeoutSeconds | integer | Command timeout (min: 1) |
| initialDelaySeconds | integer | Initial delay before checking (min: 0) |

healthCheck:
  enabled: true
  command: ./healthcheck.sh
  timeoutSeconds: 5
  initialDelaySeconds: 15
Cannot be used together with livenessCheck, readinessCheck, or startupCheck. Use either the combined healthCheck or the individual checks.

Advanced Health Checks

For fine-grained control, configure liveness, readiness, and startup probes separately.

livenessCheck

Type: object - Optional

Determines if the container should be restarted.
livenessCheck:
  enabled: true
  command: ./livez.sh
  timeoutSeconds: 5
  initialDelaySeconds: 15

readinessCheck

Type: object - Optional

Determines if the container is ready to receive work.
readinessCheck:
  enabled: true
  command: ./readyz.sh
  timeoutSeconds: 5
  initialDelaySeconds: 5

startupCheck

Type: object - Optional

Used for slow-starting containers. Other probes are disabled until this passes.
startupCheck:
  enabled: true
  command: ./startupz.sh
  timeoutSeconds: 10
  initialDelaySeconds: 0
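The three probes can also be combined on a single worker. The sketch below (hypothetical service name and scripts, following the field shapes documented above) uses a startup check to hold back the liveness and readiness probes until a slow-starting worker has finished initializing:

services:
  - name: slow-start-worker
    type: worker
    run: python worker.py
    cpuCores: 1
    ramMegabytes: 1024
    # liveness and readiness stay disabled until the startup check passes
    startupCheck:
      enabled: true
      command: ./startupz.sh
      timeoutSeconds: 10
      initialDelaySeconds: 0
    livenessCheck:
      enabled: true
      command: ./livez.sh
      timeoutSeconds: 5
    readinessCheck:
      enabled: true
      command: ./readyz.sh
      timeoutSeconds: 5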

connections

Type: array - Optional

Connect to external cloud services. See the connections reference for full documentation.
connections:
  - type: awsRole
    role: my-iam-role

serviceMeshEnabled

Type: boolean - Optional

Enable service mesh for enhanced inter-service communication with improved performance, reliability, and monitoring.
serviceMeshEnabled: true
Useful for workers that need to communicate with other services in your cluster.

terminationGracePeriodSeconds

Type: integer - Optional

Seconds to wait for graceful shutdown before forcefully terminating the container.
terminationGracePeriodSeconds: 120
Set this to a value higher than your longest expected job. This gives workers time to complete in-progress work before shutdown.
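As a rough sizing sketch (hypothetical service name and numbers), a worker whose longest job runs for about five minutes could be given a slightly larger grace period so in-flight jobs can finish before the container is killed:

services:
  - name: report-builder
    type: worker
    run: npm run build-reports
    cpuCores: 1
    ramMegabytes: 1024
    # longest job is ~300 seconds, so allow a little extra headroom
    terminationGracePeriodSeconds: 360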

gpuCoresNvidia

Type: integer - Optional

Allocate NVIDIA GPU cores for ML workloads or GPU-accelerated processing.
gpuCoresNvidia: 1
nodeGroup: gpu-node-group-uuid
Requires a node group with GPU-enabled instances.

Complete Example

services:
  - name: queue-processor
    type: worker
    run: npm run worker
    cpuCores: 1
    ramMegabytes: 2048

    # Autoscaling
    autoscaling:
      enabled: true
      minInstances: 2
      maxInstances: 20
      cpuThresholdPercent: 70
      memoryThresholdPercent: 80

    # Health checks
    livenessCheck:
      enabled: true
      command: ./healthcheck.sh
      timeoutSeconds: 5
    readinessCheck:
      enabled: true
      command: ./ready.sh
      timeoutSeconds: 3

    # Cloud connections
    connections:
      - type: awsRole
        role: worker-sqs-access

    # Service mesh
    serviceMeshEnabled: true

    # Graceful shutdown (allow 2 minutes for jobs to complete)
    terminationGracePeriodSeconds: 120

Common Use Cases

Queue Consumer

services:
  - name: sqs-consumer
    type: worker
    run: node src/workers/sqs-consumer.js
    cpuCores: 0.5
    ramMegabytes: 512
    instances: 5
    terminationGracePeriodSeconds: 60
    connections:
      - type: awsRole
        role: sqs-consumer-role

Background Job Processor

services:
  - name: job-processor
    type: worker
    run: bundle exec sidekiq
    cpuCores: 1
    ramMegabytes: 1024
    autoscaling:
      enabled: true
      minInstances: 2
      maxInstances: 10
      cpuThresholdPercent: 70
    terminationGracePeriodSeconds: 300

ML Inference Worker

services:
  - name: ml-worker
    type: worker
    run: python worker.py
    cpuCores: 4
    ramMegabytes: 8192
    gpuCoresNvidia: 1
    nodeGroup: gpu-node-group-uuid
    instances: 2