This error occurs when Docker's gcplogs logging driver cannot authenticate with Google Cloud Logging. The most common cause is that the Docker daemon itself lacks access to Google Cloud credentials, which must be set via environment variables in the daemon's systemd configuration, not in the container or shell environment.
When you configure Docker to use the gcplogs logging driver, the Docker daemon attempts to authenticate with Google Cloud Logging to send container logs. This authentication requires valid Google Cloud credentials. The "failed to get GCP credentials" error indicates that Docker cannot locate or use valid authentication credentials. This happens because:

1. **The Docker daemon needs credentials, not the container**: A common misconception is that setting `GOOGLE_APPLICATION_CREDENTIALS` in your shell, container environment, or docker-compose.yml is sufficient. However, the gcplogs driver runs within the Docker daemon process itself, so the daemon needs access to the credentials.
2. **Automatic credential discovery fails**: When running on Google Cloud (GCE/GKE), Docker discovers credentials from the instance metadata service. Outside of Google Cloud, you must explicitly provide a service account key file and point the daemon to it.
3. **Service account permissions missing**: Even with valid credentials, the service account must have the "Logs Writer" role (`roles/logging.logWriter`) or equivalent permissions to write to Cloud Logging.

This is a critical error because containers configured to use gcplogs will fail to start until authentication is properly configured.
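The core issue is ordinary process-environment isolation: a variable set in one process is invisible to another process that did not inherit it. A minimal sketch of this behavior (the path is illustrative), mirroring why a systemd-started daemon never sees your login shell's variables:

```shell
# A variable assigned in the current shell (but not exported to the child)
# is invisible to other processes -- just as the Docker daemon, started by
# systemd, never sees variables from your login shell.
GOOGLE_APPLICATION_CREDENTIALS=/tmp/key.json   # set, but not exported
sh -c 'echo "child process sees: ${GOOGLE_APPLICATION_CREDENTIALS:-nothing}"'
# -> child process sees: nothing
```

This is why the fix below targets the daemon's own environment via a systemd override rather than your shell profile.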
First, confirm that the gcplogs driver is causing the issue by checking your logging configuration:
Check container logging configuration:
# Check Docker daemon default logging driver
docker info | grep "Logging Driver"
# Check specific container configuration
docker inspect <container_name> | grep -A 10 "LogConfig"

Check docker-compose.yml:
Look for gcplogs driver configuration:
services:
  myapp:
    image: myapp:latest
    logging:
      driver: gcplogs
      options:
        gcp-project: "my-project-id"

Check daemon.json:
cat /etc/docker/daemon.json

If it contains "log-driver": "gcplogs", this confirms the configuration.
If you don't already have a service account key file, create one:
Create a service account:
# Create service account
gcloud iam service-accounts create docker-logging \
--description="Service account for Docker gcplogs driver" \
--display-name="Docker Logging"
# Grant Logs Writer permission
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:docker-logging@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/logging.logWriter"

Generate the key file:
gcloud iam service-accounts keys create /etc/docker/gcp-logging-key.json \
--iam-account=docker-logging@YOUR_PROJECT_ID.iam.gserviceaccount.com

Set proper permissions on the key file:
sudo chmod 600 /etc/docker/gcp-logging-key.json
sudo chown root:root /etc/docker/gcp-logging-key.json

Important: Do not use the Owner role for the service account. The "Logs Writer" role is sufficient and follows the principle of least privilege.
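A truncated download or a wrong file type is a common source of this error, and a quick structural check rules it out. A sketch using a dummy key written to a temporary path (the values are fake; a real key comes from `gcloud iam service-accounts keys create`):

```shell
# Write a dummy key file to illustrate the fields a service-account key
# must contain (all values here are fake).
cat > /tmp/demo-key.json <<'EOF'
{
  "type": "service_account",
  "project_id": "my-project-id",
  "client_email": "docker-logging@my-project-id.iam.gserviceaccount.com",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
}
EOF

# Sanity check: the file is valid JSON and its "type" is "service_account".
python3 -c 'import json; k = json.load(open("/tmp/demo-key.json")); print(k["type"], k["client_email"])'
# -> service_account docker-logging@my-project-id.iam.gserviceaccount.com
```

Run the same JSON check against your real key path before pointing the daemon at it.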
The key fix is to set the GOOGLE_APPLICATION_CREDENTIALS environment variable for the Docker daemon itself, not for your shell or containers.
Create a systemd override file:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo nano /etc/systemd/system/docker.service.d/gcp-credentials.conf

Add the following content:
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/etc/docker/gcp-logging-key.json"

Reload systemd and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker

Verify the environment variable is set:
sudo systemctl show docker --property=Environment

You should see:
Environment=GOOGLE_APPLICATION_CREDENTIALS=/etc/docker/gcp-logging-key.json

When running outside of Google Cloud, you must also specify the project ID:
Set as default in daemon.json:
sudo nano /etc/docker/daemon.json

{
  "log-driver": "gcplogs",
  "log-opts": {
    "gcp-project": "your-gcp-project-id",
    "gcp-meta-name": "docker-host-name"
  }
}

Or per container in docker-compose.yml:
services:
  myapp:
    image: myapp:latest
    logging:
      driver: gcplogs
      options:
        gcp-project: "your-gcp-project-id"
        gcp-log-cmd: "true"

Or at runtime:
docker run -d \
--log-driver=gcplogs \
--log-opt gcp-project=your-gcp-project-id \
your-image

Restart Docker after daemon.json changes:
sudo systemctl restart docker

Verify that the gcplogs driver works correctly:
Run a test container:
docker run --rm \
--log-driver=gcplogs \
--log-opt gcp-project=your-gcp-project-id \
alpine echo "Test log message to GCP"

If the container runs successfully, the gcplogs driver is working.
Check Google Cloud Console:
1. Go to [Cloud Logging](https://console.cloud.google.com/logs)
2. Select your project
3. Filter by:
- Resource type: "Global"
- Log name: Search for logs with your container name
Use gcloud to verify logs:
gcloud logging read "resource.type=global" --limit=5 --format=json

You should see the "Test log message to GCP" entry.
If you're running on Google Cloud infrastructure but still getting this error, check the VM's service account configuration:
Check VM service account and scopes:
# From inside the GCE VM
curl -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
curl -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes

Required scopes:
The VM needs one of these OAuth scopes:
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/cloud-platform
Update VM scopes if needed:
# Stop the VM first
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
# Set the required scope
gcloud compute instances set-service-account INSTANCE_NAME \
--zone=ZONE \
--scopes=logging-write,monitoring-write
# Start the VM
gcloud compute instances start INSTANCE_NAME --zone=ZONE

Verify the service account has the Logs Writer role:
gcloud projects get-iam-policy YOUR_PROJECT_ID \
--flatten="bindings[].members" \
--filter="bindings.role:roles/logging.logWriter"

On Container-Optimized OS (used by GKE nodes and GCE container VMs), the /etc directory doesn't persist across reboots. Use cloud-init instead:
Create a cloud-init config:
#cloud-config
write_files:
  - path: /etc/docker/daemon.json
    content: |
      {
        "log-driver": "gcplogs"
      }
runcmd:
  - systemctl restart docker

Set via instance metadata:
gcloud compute instances add-metadata INSTANCE_NAME \
--zone=ZONE \
--metadata-from-file user-data=cloud-init.yaml

For new instances:
gcloud compute instances create INSTANCE_NAME \
--zone=ZONE \
--image-family=cos-stable \
--image-project=cos-cloud \
--metadata-from-file user-data=cloud-init.yaml \
--scopes=logging-write

Note: The VM needs to be rebooted for cloud-init changes to take effect.
If you need containers running immediately while fixing the gcplogs authentication, switch to the default json-file driver:
Override per container:
docker run --log-driver=json-file myimage

Override in docker-compose.yml:
services:
  myapp:
    image: myapp:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

Change daemon default:
sudo nano /etc/docker/daemon.json

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

sudo systemctl restart docker

Important: This is a temporary workaround. Return to gcplogs once authentication is properly configured so logging remains centralized.
### Common Mistakes When Configuring gcplogs
Mistake 1: Setting credentials in the wrong place
# WRONG - This sets the variable for the container, not the Docker daemon
services:
  myapp:
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/key.json
    volumes:
      - ./key.json:/key.json

The gcplogs driver runs in the Docker daemon process, not in the container. The daemon doesn't see environment variables set in container configuration.
Mistake 2: Setting credentials in your shell only
# WRONG - Only affects your shell session, not the Docker daemon
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
docker run --log-driver=gcplogs myimage  # Still fails

The Docker daemon is a separate process (usually started by systemd) and doesn't inherit your shell's environment.
### Alternative: Use Docker Desktop on macOS/Windows
Docker Desktop for macOS and Windows handles gcplogs differently. You can set the environment variable in Docker Desktop settings:
1. Open Docker Desktop
2. Go to Settings/Preferences
3. Navigate to Docker Engine
4. The daemon configuration is shown as JSON
For credentials, you may need to mount the credentials file in a shared location and reference it in Docker Desktop's daemon configuration.
### Debugging Credential Issues
Check if the key file is valid:
# Test authentication with the key file
export GOOGLE_APPLICATION_CREDENTIALS=/etc/docker/gcp-logging-key.json
gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS
gcloud logging write test-log "Test from CLI"

Check Docker daemon logs:
sudo journalctl -u docker.service -f

Look for authentication-related errors when starting containers.
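The exact wording of the daemon message varies by Docker version, so a case-insensitive filter is more reliable than an exact match. A sketch of the filtering approach, run here against sample text rather than live journalctl output:

```shell
# Sample daemon log lines (illustrative text, not captured output),
# filtered the same way you would filter `journalctl -u docker.service`.
printf '%s\n' \
  'level=error msg="failed to initialize logging driver: failed to get GCP credentials"' \
  'level=info msg="API listen on /var/run/docker.sock"' \
  | grep -i "credentials"
```

Against the live daemon, the equivalent is piping `sudo journalctl -u docker.service` through the same `grep -i "credentials"`.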
### Using Workload Identity on GKE
On GKE, Workload Identity is the preferred method for authenticating to Google Cloud services:
# Enable Workload Identity on the cluster
gcloud container clusters update CLUSTER_NAME \
--zone=ZONE \
--workload-pool=PROJECT_ID.svc.id.goog
# Configure the Kubernetes service account
kubectl annotate serviceaccount default \
iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

However, note that gcplogs runs at the node level, not the pod level, so Workload Identity doesn't directly apply. Nodes still use their node service account for logging.
### Performance Considerations
The gcplogs driver sends logs synchronously by default. For high-throughput containers, consider:
{
  "log-driver": "gcplogs",
  "log-opts": {
    "gcp-project": "your-project",
    "mode": "non-blocking",
    "max-buffer-size": "4m"
  }
}

Warning: In non-blocking mode, logs may be dropped if the buffer fills up faster than logs can be sent to Cloud Logging.
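The same delivery-mode options can also be set per service instead of daemon-wide. A docker-compose.yml sketch (service name and image are placeholders):

```yaml
services:
  myapp:
    image: myapp:latest
    logging:
      driver: gcplogs
      options:
        gcp-project: "your-project"
        mode: "non-blocking"
        max-buffer-size: "4m"
```

Per-service settings are useful when only one chatty container needs the larger buffer.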
### Verifying Logs in Cloud Logging
Use this filter in Cloud Logging console to find Docker logs:
resource.type="global"
jsonPayload.container.name="/your-container-name"

Or via gcloud:
gcloud logging read 'resource.type="global"' \
--project=YOUR_PROJECT \
--limit=10 \
--format="table(timestamp,jsonPayload.container.name,jsonPayload.data)"