Logs are one of the first places teams look when something breaks in Amazon EKS. They help you understand pod crashes, restarts, failed deployments, node issues, and application errors.
But EKS does not automatically keep container logs in a durable place. Most container logs stay on the worker node where the pod ran, which means they can disappear when pods move, nodes scale down, or instances are replaced.
This guide explains how to collect and ship EKS logs properly. You will learn how EKS logging works, how to enable control plane logs, how to use Fluent Bit to collect container logs from every node, and how to send those logs to a backend.
Key Takeaways
- EKS logging has three layers: control plane, node system logs, and container/application logs
- Container logs write to stdout/stderr and land at /var/log/pods/ on the host node
- Fluent Bit is AWS’s recommended DaemonSet agent for container log collection
- Control plane logging is disabled by default and must be enabled per cluster
- On Fargate, use the built-in Fluent Bit log router. DaemonSets are not supported
- IRSA grants Fluent Bit CloudWatch access without hardcoding credentials
The Three Layers of EKS Logging
Understanding what you are logging helps you pick the right tool for each layer.
- Control plane logs: API server, audit, authenticator, controller manager, scheduler. These run on AWS-managed nodes you cannot SSH into. Once enabled, they stream directly to CloudWatch. No agent needed.
- Node system logs: kubelet, kube-proxy, and containerd logs written to the node filesystem via systemd. Captured by Fluent Bit alongside container logs on EC2 nodes.
- Container/application logs: Your app writes to stdout/stderr. The container runtime redirects this to /var/log/pods/ on the host node (see the listing below). Fluent Bit reads from here and forwards to your backend. These vanish when the pod is gone, which is why forwarding matters.
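If you have shell access to a worker node (for example via SSM Session Manager), you can inspect this layout directly. A sketch of what you might see; the pod names and IDs below are illustrative:

```
# Per-pod log directories: <namespace>_<pod-name>_<pod-uid>/<container>/
ls /var/log/pods/
# e.g. kube-system_aws-node-x4k2j_1a2b.../  production_api-6f7d9_9e8f.../

# Symlinks the runtime maintains, one file per container
ls /var/log/containers/
# e.g. api-6f7d9_production_api-<container-id>.log -> /var/log/pods/.../api/0.log
```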
Enable Control Plane Logging
Disabled by default. Enable via AWS CLI or Terraform. At minimum, turn on api and audit for any production cluster.
AWS CLI:
```
aws eks update-cluster-config \
  --region your-region \
  --name your-cluster-name \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'
```

Terraform:
resource "aws_eks_cluster" "main" { name = "my-cluster" role_arn = aws_iam_role.eks_role.arn enabled_cluster_log_types = ["api", "audit", "authenticator"]}Logs appear in CloudWatch under /aws/eks/{cluster-name}/cluster within a few minutes.
💰 Cost Note
CloudWatch charges $0.50/GB ingested. Audit logs are high volume. Enable only what you actively use, set retention periods (the default is forever), and filter DEBUG entries at the Fluent Bit layer before ingestion.
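For example, capping retention on the control plane log group to 30 days with put-retention-policy (the group name follows the pattern above; pick a window that fits your compliance needs):

```
aws logs put-retention-policy \
  --log-group-name /aws/eks/your-cluster-name/cluster \
  --retention-in-days 30
```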
Set Up EKS Container Logging with Fluent Bit
Fluent Bit runs as a DaemonSet: one pod per node reads container logs from /var/log/containers/ and forwards them to your backend. The setup has four steps.
Step 1: Create the namespace
```
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cloudwatch-namespace.yaml
```

Step 2: Create the ConfigMap
```
kubectl create configmap fluent-bit-cluster-info \
  --from-literal=cluster.name=your-cluster \
  --from-literal=logs.region=your-region \
  --from-literal=http.server=On \
  --from-literal=http.port=2020 \
  --from-literal=read.head=Off \
  --from-literal=read.tail=On \
  -n amazon-cloudwatch
```

Step 3: Set up IAM permissions with IRSA
Fluent Bit needs CloudWatch write permissions. IAM Roles for Service Accounts (IRSA) attaches an IAM role directly to the Kubernetes service account, so no credentials are stored on the nodes.
```
eksctl create iamserviceaccount \
  --name fluent-bit \
  --namespace amazon-cloudwatch \
  --cluster your-cluster-name \
  --region your-region \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --approve
```

Step 4: Deploy the DaemonSet
```
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml

# Verify — expect one pod per node
kubectl get daemonset fluent-bit -n amazon-cloudwatch
```

Container logs appear in CloudWatch under /aws/containerinsights/{cluster}/application.
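To spot-check that logs are flowing, you can tail the application log group from the CLI (requires AWS CLI v2; the cluster name is a placeholder):

```
aws logs tail /aws/containerinsights/your-cluster/application --follow --since 10m
```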
CloudWatch Log Group Structure
Fluent Bit creates log groups with a predictable naming pattern. Log streams within /application follow the format {hostname}.{namespace}.{container-name}.
| Log Group | Contents |
| --- | --- |
| /aws/eks/{cluster}/cluster | Control plane: API server, audit, authenticator |
| /aws/containerinsights/{cluster}/application | All container stdout/stderr from all pods |
| /aws/containerinsights/{cluster}/host | Node-level: kubelet, kube-proxy, syslog |
| /aws/containerinsights/{cluster}/dataplane | EKS data plane component logs |
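To confirm these groups exist after setup, list them by prefix (cluster name is a placeholder):

```
aws logs describe-log-groups \
  --log-group-name-prefix /aws/containerinsights/your-cluster
```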
Sample CloudWatch Log Insights query to find errors in a namespace:
```
fields @timestamp, @message, kubernetes.namespace_name, kubernetes.container_name
| filter kubernetes.namespace_name = 'production'
| filter @message like /ERROR/
| sort @timestamp desc
| limit 100
```

Shipping Logs to a Third-Party Backend
By default Fluent Bit ships to CloudWatch. To send to any HTTP backend (CubeAPM, Datadog, OpenSearch, Elasticsearch), update the OUTPUT block in your Fluent Bit ConfigMap:
```
[OUTPUT]
    Name        http
    Match       *
    Host        your-backend-host
    Port        443
    URI         /api/v1/logs
    Format      json
    tls         On
    Header      Authorization Bearer YOUR_API_TOKEN
    Retry_Limit 3
```

Add multiple OUTPUT blocks to send logs to two destinations simultaneously (see the sketch below): for example, CloudWatch for retention compliance and CubeAPM for live analysis.
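A minimal dual-destination sketch, assuming the stock cloudwatch_logs plugin alongside the HTTP output above; the region, group name, host, and token are placeholders:

```
# Destination 1: CloudWatch for retention compliance
[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            your-region
    log_group_name    /aws/containerinsights/your-cluster/application
    log_stream_prefix fluent-bit-
    auto_create_group On

# Destination 2: HTTP backend for live analysis
[OUTPUT]
    Name   http
    Match  *
    Host   your-backend-host
    Port   443
    URI    /api/v1/logs
    Format json
    tls    On
    Header Authorization Bearer YOUR_API_TOKEN
```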
EKS Container Logging on Fargate
Fargate does not support DaemonSets. Use the built-in Fluent Bit log router by creating an aws-observability namespace and ConfigMap:
```
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name /eks-fargate/my-cluster
        auto_create_group On
```

Restart your Fargate pods after applying. The pod execution role needs the same CloudWatch IAM permissions as the IRSA role above.
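Fargate applies the log router config only when a pod launches, so running workloads need a restart. For a Deployment-managed app (the name and namespace are placeholders):

```
kubectl rollout restart deployment your-app -n your-namespace
```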
Filtering and Parsing Logs
Use Fluent Bit filters to drop noise, parse JSON, and enrich logs with Kubernetes metadata before they reach your backend.
Add Kubernetes metadata to every log record
```
[FILTER]
    Name               kubernetes
    Match              kube.*
    Merge_Log          On
    Keep_Log           Off
    K8S-Logging.Parser On
```

Drop health check and readiness probe noise
```
[FILTER]
    Name    grep
    Match   application.*
    Exclude log /health

[FILTER]
    Name    grep
    Match   application.*
    Exclude log /readyz
```

Reassemble multi-line stack traces
```
[FILTER]
    Name                  multiline
    Match                 application.*
    Multiline.Key_Content log
    Multiline.Parser      java,python,go
```

Troubleshooting Missing Logs
Check Fluent Bit pod logs first
```
kubectl logs -n amazon-cloudwatch -l name=fluent-bit --tail=50

# Confirm IAM role is attached to the service account
kubectl describe sa fluent-bit -n amazon-cloudwatch

# Confirm ConfigMap exists
kubectl get configmap fluent-bit-cluster-info -n amazon-cloudwatch -o yaml
```

- No outbound HTTPS (port 443): a missing security group rule or VPC endpoint for CloudWatch is the most common cause of missing logs
- Logs stop after node replacement: expected behavior; each new node starts reading from the current position. Set Read_from_Head On if you need existing log files read from the beginning (see the sketch after this list)
- Short-lived pods or init containers: Fluent Bit may not flush before the files are cleaned up. Increase the container runtime log retention or write to a persistent volume
- Bottlerocket nodes: the log path may differ. Verify with: kubectl exec -it {fluent-bit-pod} -n amazon-cloudwatch -- ls /var/log/containers/
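For reference, a minimal sketch of a tail INPUT with Read_from_Head enabled. In the stock AWS manifest this is typically wired to the read.head value in the fluent-bit-cluster-info ConfigMap, so check there before editing the INPUT directly:

```
[INPUT]
    Name           tail
    Path           /var/log/containers/*.log
    Tag            application.*
    Read_from_Head On
```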
🚀 Ship EKS Logs to CubeAPM
CubeAPM is an OpenTelemetry-native observability platform that correlates EKS container logs with distributed traces out of the box. Use the standard Fluent Bit HTTP output to send logs directly, no custom agents, no proprietary SDKs.
- Full-text log search correlated with distributed traces
- Works with EC2 node groups and Fargate
- Drop-in Fluent Bit HTTP output config
Disclaimer: This article contains pricing estimates based on publicly available AWS CloudWatch Logs rates as of May 2026. Actual costs may vary by AWS region, account type, and usage patterns. Always verify current pricing before making infrastructure decisions.
FAQs
1. Do I need to change my application code to enable EKS logging?
No. Write to stdout or stderr and Fluent Bit handles the rest. No SDK, no sidecar changes needed for standard container logs.
2. Fluent Bit or Fluentd: which should I use for EKS?
Fluent Bit. It uses around 450KB of memory versus 40MB+ for Fluentd, and AWS ships it as the default agent in Container Insights. Use Fluentd only if you need a specific plugin Fluent Bit does not support.
3. What IAM permissions does Fluent Bit need?
logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents, logs:DescribeLogGroups, and logs:DescribeLogStreams. Attach via IRSA to scope permissions to the Fluent Bit service account only.
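If you prefer a scoped custom policy over the broader CloudWatchAgentServerPolicy, a minimal sketch (tighten Resource to your log group ARNs in production):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "*"
    }
  ]
}
```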
4. Can I send EKS logs to multiple backends at the same time?
Yes. Add multiple OUTPUT blocks in your Fluent Bit config. Each block runs independently so you can send to CloudWatch and a second backend simultaneously without duplicating your pipeline.
5. Why are my EKS control plane logs not appearing in CloudWatch?
Control plane logging is off by default. Enable it explicitly per cluster using the AWS CLI, Terraform, or the Console under Cluster > Logging. Logs land in /aws/eks/{cluster}/cluster.
6. How do I reduce EKS CloudWatch logging costs?
Set retention periods on all log groups (default is forever), use a grep filter in Fluent Bit to drop DEBUG/TRACE records before ingestion, and only enable the control plane log types you actually query. Audit and API server logs carry the most signal; scheduler and controller manager are usually low priority.
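For example, a grep filter sketch that drops records whose log field contains DEBUG or TRACE, assuming your application includes the level in the message:

```
[FILTER]
    Name    grep
    Match   application.*
    Exclude log (DEBUG|TRACE)
```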