This status indicates a Docker container is in its health check start period and hasn't yet passed a health check. The fix involves configuring an appropriate start_period, verifying the health check command works, and ensuring your application starts within the expected timeframe.
The "Container is starting (health: starting)" status in Docker indicates that a container with a configured HEALTHCHECK is still in its initialization phase. When Docker starts a container with a health check defined, the health status begins as "starting" and remains in this state until either a health check passes (changing to "healthy") or the start_period expires and enough consecutive checks fail (changing to "unhealthy"). This is normal behavior during container startup. The "starting" status exists specifically to give applications time to initialize before Docker begins counting health check failures. However, if a container stays stuck in "starting" status for an extended period, it typically indicates one of several problems: the health check command itself is failing, the application inside the container isn't starting properly, or the configured start_period and timing values need adjustment. The start_period parameter is particularly important for slow-starting applications like Java applications, databases, or services that need to load large datasets or establish connections before being ready to serve requests.
First, inspect the container's health check settings and current state:
# View container health status
docker ps
# Get detailed health configuration and status
docker inspect <container_name> --format='{{json .State.Health}}' | jq
# See the complete health check configuration
docker inspect <container_name> --format='{{json .Config.Healthcheck}}' | jq
Example output showing a container stuck in "starting":
{
  "Status": "starting",
  "FailingStreak": 0,
  "Log": []
}
Check the Healthcheck config to see current timing values:
{
  "Test": ["CMD-SHELL", "curl -f http://localhost:8080/health"],
  "Interval": 30000000000,
  "Timeout": 10000000000,
  "StartPeriod": 0,
  "Retries": 3
}
Note: Times are in nanoseconds (30000000000 = 30 seconds).
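To read those values in seconds rather than nanoseconds, a small jq transformation works (a convenience sketch; the field names match the docker inspect output above):
# Convert the nanosecond values from docker inspect into seconds
docker inspect <container_name> --format='{{json .Config.Healthcheck}}' \
  | jq '{Interval: (.Interval / 1e9), Timeout: (.Timeout / 1e9), StartPeriod: (.StartPeriod / 1e9), Retries: .Retries}'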
The start_period gives your application a grace period to initialize before health check failures count against it. Set this based on how long your application typically takes to start.
In Dockerfile:
# For a typical web application (30-60 second startup)
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
# For slow-starting apps like Java/databases (2-5 minute startup)
HEALTHCHECK --interval=30s --timeout=10s --start-period=300s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
In docker-compose.yml:
services:
  web:
    image: myapp
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"]
      interval: 30s
      timeout: 10s
      start_period: 60s
      retries: 3
  database:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      start_period: 120s  # Databases need more time
      retries: 5
Recommended start_period values:
- Simple web apps: 30-60 seconds
- Node.js/Python apps: 30-90 seconds
- Java/Spring Boot apps: 120-300 seconds
- Databases (PostgreSQL, MySQL): 60-120 seconds
- Elasticsearch/Kafka: 180-300 seconds
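If you're unsure where your service falls in this range, a rough measurement can help. The sketch below assumes curl is available inside the image and that the app answers on http://localhost:8080/health; adjust both for your service:
# Roughly measure real startup time to pick a start_period value
docker rm -f timing-test 2>/dev/null
start=$(date +%s)
docker run -d --name timing-test myapp:latest
until docker exec timing-test curl -sf http://localhost:8080/health >/dev/null 2>&1; do
  sleep 1
done
echo "Health endpoint first responded after $(( $(date +%s) - start ))s"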
Docker Engine 25.0 introduced the start_interval parameter, which allows more frequent health checks during the start period for faster detection when a service becomes ready:
In Dockerfile:
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --start-interval=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
In docker-compose.yml (Compose file version 3.9+):
services:
  web:
    image: myapp
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"]
      interval: 30s
      timeout: 10s
      start_period: 60s
      start_interval: 5s  # Check every 5s during start_period
      retries: 3
This configuration means:
- During the 60s start_period: health checks run every 5 seconds
- After start_period ends: health checks run every 30 seconds
- Container becomes "healthy" as soon as the first check passes
Note: start_interval requires start_period to be set. If you omit start_period, you'll get an error.
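The same behavior can also be configured at run time; Docker Engine 25.0+ adds a --health-start-interval flag to docker run. A sketch, again with myapp:latest as a placeholder:
# Docker 25.0+: fast checks during the start period, normal interval afterwards
docker run -d --name web \
  --health-cmd="curl -f http://localhost:8080/health || exit 1" \
  --health-interval=30s \
  --health-timeout=10s \
  --health-start-period=60s \
  --health-start-interval=5s \
  --health-retries=3 \
  myapp:latest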
Verify that your health check command actually works inside the container:
# Execute the health check command directly
docker exec <container_name> curl -f http://localhost:8080/health
echo "Exit code: $?"
# Or get a shell and test interactively
docker exec -it <container_name> sh
# Inside the container:
curl -f http://localhost:8080/health
echo $?
Common issues:
1. curl/wget not found: Use wget (often pre-installed) or add health check tools:
# Alpine - use wget instead of curl
HEALTHCHECK CMD wget --spider -q http://localhost:8080/health || exit 1
# Or install curl
RUN apk add --no-cache curl
2. Connection refused: Check the port and that your app binds to 0.0.0.0:
docker exec <container_name> netstat -tlnp
3. IPv6 issues: Use 127.0.0.1 instead of localhost:
HEALTHCHECK CMD curl -f http://127.0.0.1:8080/health || exit 1
Watch the health check status in real-time to understand what's happening:
# Watch container status continuously
watch -n 2 'docker ps --format "table {{.Names}} {{.Status}}"'
# Monitor Docker events for health status changes
docker events --filter container=<container_name> --filter event=health_status
# Check health check logs as they accumulate
watch -n 5 'docker inspect <container_name> --format="{{json .State.Health}}" | jq'
Understanding the health check log:
During the start_period, health checks run but failures don't count against the retry limit. Once start_period ends, failed checks increment the FailingStreak counter:
{
  "Status": "starting",
  "FailingStreak": 0,
  "Log": [
    {
      "Start": "2024-01-15T10:00:00.000Z",
      "End": "2024-01-15T10:00:01.000Z",
      "ExitCode": 1,
      "Output": "curl: (7) Failed to connect"
    },
    {
      "Start": "2024-01-15T10:00:30.000Z",
      "End": "2024-01-15T10:00:30.500Z",
      "ExitCode": 0,
      "Output": "OK"
    }
  ]
}
When ExitCode becomes 0, the container transitions to "healthy".
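To pull out just the most recent attempt while debugging, jq can index the log array (a convenience one-liner; .State.Health.Log is the same array shown above):
# Show only the latest health check attempt, its exit code, and its output
docker inspect <container_name> --format='{{json .State.Health.Log}}' | jq '.[-1]'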
When using Docker Compose with health check dependencies, services won't start until dependencies are healthy:
services:
  web:
    image: myapp
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      start_period: 60s
      retries: 5
  redis:
    image: redis:7
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      start_period: 30s
      retries: 3
Known issue (Docker Compose): dependent services may not start until the full start_period has elapsed, even if the health check passes earlier. This was reported in docker/compose#11131.
Workaround: Use a shorter start_period with start_interval to balance fast healthy detection with tolerance for slow starts:
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U postgres"]
  interval: 30s
  timeout: 5s
  start_period: 30s
  start_interval: 2s  # Quick checks during startup
  retries: 5
If your application genuinely takes a long time to start, consider optimizing it:
1. Lazy initialization: Defer non-critical startup tasks
// Node.js - lazy database connection
const express = require('express');
const app = express();

let dbConnection;

app.get('/health', (req, res) => {
  res.status(200).send('OK'); // Basic health check, passes before the DB is connected
});

// Connect on first use instead of blocking startup
async function getDb() {
  if (!dbConnection) {
    dbConnection = await connectToDatabase(); // app-specific connection helper
  }
  return dbConnection;
}
2. Separate readiness from liveness:
# Basic health check that passes early
healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost:8080/health/live || exit 1"]
  interval: 10s
  start_period: 10s
3. Pre-warm in development/CI:
# Wait script for CI pipelines
until docker inspect --format='{{.State.Health.Status}}' mycontainer | grep -q "healthy"; do
  echo "Waiting for container to be healthy..."
  sleep 2
done
4. Use multi-stage builds to reduce image size and startup time:
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]How start_period actually works: During the start_period, Docker still runs health checks at the specified interval, but failed checks don't count toward the retry limit. The first successful check immediately transitions the container to "healthy" status, regardless of how much time remains in the start_period.
Difference between start_period and start_interval:
- start_period: The grace period during which health check failures don't count against retries
- start_interval: (Docker 25.0+) The frequency of health checks during the start_period
Without start_interval, checks during the start_period run at the regular interval (e.g., every 30s). With start_interval set to 5s, checks run every 5 seconds during startup for faster healthy detection.
Health check timing math: Consider a container with this configuration:
interval: 30s
timeout: 10s
start_period: 60s
retries: 3
- First health check runs at T+30s (interval after start)
- During start_period (0-60s): failures don't count
- After start_period: 3 consecutive failures = unhealthy
- Worst case to unhealthy: 60s + (30s * 3) = 150s from container start
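The same worst-case figure can be computed straight from a running container's configuration; this is a sketch that assumes the nanosecond fields shown earlier are present:
# Worst-case seconds from container start until "unhealthy"
docker inspect <container_name> --format='{{json .Config.Healthcheck}}' \
  | jq '((.StartPeriod // 0) + .Interval * .Retries) / 1e9'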
Compose version compatibility:
- start_period: Supported in Compose file version 2.3+
- start_interval: Supported in Compose file version 3.9+ with Docker Engine 25.0+
Debugging in orchestrators: In Docker Swarm or Kubernetes, a container stuck in "starting" can affect service availability:
- Swarm: Traffic is not routed to a task until its container reports healthy (see the command after this list)
- Kubernetes: Docker HEALTHCHECK instructions are ignored; use a livenessProbe or readinessProbe, where initialDelaySeconds plays the role of start_period
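For the Swarm case, a quick way to see whether tasks are stuck before becoming healthy (service name is a placeholder):
# List tasks for a service with their current state and any startup error
docker service ps <service_name> --format 'table {{.Name}}\t{{.CurrentState}}\t{{.Error}}'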
When containers never become healthy: If a container sits in "starting" for the entire start_period and then transitions to "unhealthy", the usual causes are:
1. The health check command is fundamentally broken
2. The application never fully starts (crash loop, missing deps)
3. The start_period is too short for the application
Check docker logs <container_name> for application errors that might explain why the service isn't ready.
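As a quick combined check, correlate recent application output with the current health state (standard docker commands; adjust the time window as needed):
# Recent application logs alongside the current health status
docker logs --since 5m <container_name>
docker inspect <container_name> --format='{{.State.Health.Status}} (failing streak: {{.State.Health.FailingStreak}})'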