Logging Architecture
Docker containers in Kubernetes write logs to the standard output (stdout) and standard error (stderr) streams. Docker redirects these streams to a logging driver, which Kubernetes configures to write to a file in JSON format. Kubernetes then exposes the log files to users via the `kubectl logs` command. Users can also get logs from a previous instantiation of a container by setting the `--previous` flag of this command. That way they can still read a container's logs after it crashed and was restarted.
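For example (the pod and container names below are placeholders):

```
# logs from the current container instance
kubectl logs example

# logs from a specific container in a multi-container pod
kubectl logs example -c sidecar

# logs from the previous instance of a crashed/restarted container
kubectl logs example --previous
```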
Ways to collect logs
- using a logging sidecar container running inside the app's pod (a different sidecar per app)
```
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      while true;
      do
        echo "$(date)" >> /var/log/example.log;
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -f /var/log/example.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```
- using a node-level logging agent that runs on every node (only captures stdout/stderr); agents read the per-container log files under /var/log/containers on each node
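A minimal sketch of a node-level agent deployed as a DaemonSet (the image tag is a placeholder and the fluentd configuration is omitted; not a production setup). It mounts the host's /var/log, where Kubernetes keeps the per-container log symlinks, and /var/lib/docker/containers, where the Docker json-file driver writes the actual files:

```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16   # placeholder tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: dockerlogs
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockerlogs
        hostPath:
          path: /var/lib/docker/containers
```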
- pushing logs directly from within the application to some backend (not recommended: it couples the app to the logging backend)
Best practice
- container logs should have a separate shipper, storage, and lifecycle that are independent of pods and nodes
AWS
- fluentd (as a DaemonSet) on each node to collect logs (system, containers, ...)
- Kinesis as the transport/buffer
- kinesis-splunk-reader to consume from Kinesis (region, stream name, shard) + fluentd-hec to send events to Splunk via the HTTP Event Collector (HEC)
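On the fluentd side, shipping to Kinesis is typically done with the fluent-plugin-kinesis output plugin; a sketch (the tag pattern, stream name, and region are placeholders):

```
<match kubernetes.**>
  @type kinesis_streams        # from fluent-plugin-kinesis
  stream_name example-logs     # placeholder stream name
  region us-east-1             # placeholder region
</match>
```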