
Monitoring Components

  • Kubernetes dashboard: provides an overview of the resources running on your cluster, along with a basic means of deploying and interacting with those resources.
    1. Admin view
    2. Workload view
  • cAdvisor: an open-source agent that monitors resource usage and analyzes the performance of containers.
  • Liveness and Readiness Probes: actively monitor the health of a container.
    1. Readiness probes let Kubernetes know when an app is ready to serve traffic. Kubernetes only allows a Service to send traffic to a pod once its readiness probe passes; if the probe fails, Kubernetes stops sending traffic to that pod until it passes again. These probes are useful for applications that take an appreciable amount of time to start: even though the process is already running, the Service will not route traffic to the pod until the probe succeeds.
    2. Liveness probes let Kubernetes know whether an app is still alive. If it is alive, no action is taken. If it is not, Kubernetes restarts the failing container (according to the pod's restart policy). These probes are useful when you have an app that may hang indefinitely and stop serving requests.

When configuring probes, the following parameters can be provided (a sample manifest follows the list below):

  • initialDelaySeconds: the time to wait before sending the first readiness/liveness probe after a container starts. For liveness checks, make sure the probe only starts once the app is ready, or the app will be restarted repeatedly.
  • periodSeconds: how often the probe is performed (default is 10).
  • timeoutSeconds: the number of seconds for a probe to timeout (default is 1).
  • successThreshold: the minimum number of consecutive successful checks for the probe to be considered successful after it has failed (default is 1).
  • failureThreshold: the number of consecutive failed probe attempts before giving up (default is 3). Giving up on a liveness probe causes Kubernetes to restart the container; for readiness probes, the pod is marked as not ready.
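
As an illustration, here is a minimal pod manifest sketch that wires these parameters into HTTP readiness and liveness probes. The pod name, image, port, and probe paths are assumptions for the example, not values from these notes.

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25             # placeholder image; substitute your own
    ports:
    - containerPort: 80
    readinessProbe:               # gates Service traffic until the app can serve requests
      httpGet:
        path: /                   # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 1
      successThreshold: 1
      failureThreshold: 3
    livenessProbe:                # restarts the container if the app stops responding
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3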

  • Horizontal Pod Autoscaler: increases or decreases the number of pods as needed, based on observed metrics.
    The Horizontal Pod Autoscaler, or HPA, is a Kubernetes feature that automatically scales the number of pods for a deployment, replication controller, or replica set based on observed metrics. In practice, CPU utilization is the most common trigger, but custom metrics are also supported. The commands below create and inspect an HPA; a declarative manifest sketch follows them.
    kubectl autoscale deployment hpa-demo --cpu-percent=50 --min=1 --max=10
    kubectl get hpa
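
The imperative command above can also be written declaratively. Below is a rough autoscaling/v2 manifest sketch equivalent to that command, assuming the target Deployment is named hpa-demo.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-demo                 # the deployment targeted above
  minReplicas: 1                   # matches --min=1
  maxReplicas: 10                  # matches --max=10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50     # matches --cpu-percent=50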
    


EKS

  • CNI: the AWS VPC CNI plugin (the aws-node daemonset) exposes Prometheus metrics, for example:
    awscni_assigned_ip_addresses
    awscni_del_ip_req_count
    awscni_aws_api_error_count