Installing the Kubernetes agent enables Blue Matador to collect and analyze server metrics on your Kubernetes nodes as well as the state of many Kubernetes resources. After installation, the following events will be detected on your servers without configuring any thresholds:

  • Node CPU Anomalies
  • Node Swapping
  • Server Time Drift
  • Kubernetes API Health
  • Kubernetes Component Statuses
  • Kubernetes DaemonSet Health
  • Kubernetes Deployment Health
  • Kubernetes Pod Failures
  • Kubernetes Node Capacity
  • Kubernetes OOM Containers & Container Restarts
  • Kubernetes Service Health
  • Kubernetes Job Failures
  • Kubernetes Events

Requirements

Installing the Blue Matador Linux Agent requires either an active trial or paid account. Contact sales to get started.  You will also need to meet the following requirements:

  • Kubernetes version 1.10+
  • kubectl access to your cluster
  • Ability to set up RBAC (Role-Based Access Control) with your credentials

Installation Process

1. Log in to the app and navigate to the Integrations page via Setup > Integrations

2. Expand the Kubernetes installation by clicking on the Install button on the Kubernetes tile

3. Specify a name for your Kubernetes cluster.  This will help you identify your Kubernetes resources more easily in a multi-cluster environment, and helps the agent know which Kubernetes nodes belong in the same cluster.

4. Specify the namespace you would like the agent and related resources to be created in. If the namespace does not already exist, you must first create it using kubectl create namespace <name>.  If you are unsure what to use, use the default namespace.

5. Create a file named bluematador-rbac.yaml with the contents of the Role-Based Access Control yaml which has been configured with your namespace.

6. Create a file named bluematador-agent.yaml with the contents of the DaemonSet config which will run the Blue Matador agent and has been generated with your namespace, cluster name, Blue Matador account ID, and API key.

7. Create the resources to run the agent in your Kubernetes cluster using kubectl:

kubectl create -f bluematador-rbac.yaml,bluematador-agent.yaml 

8. Verify the installation by checking the number of connected nodes against the number of nodes in your cluster.
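One way to make this comparison from the command line is to check the agent DaemonSet against the node count. A sketch, assuming the generated DaemonSet is named bluematador-agent and was installed in the default namespace (substitute your own namespace):

    # Count the nodes in the cluster
    kubectl get nodes --no-headers | wc -l

    # Compare with the DESIRED and READY counts for the agent DaemonSet
    kubectl get daemonset bluematador-agent --namespace default

If DESIRED matches your node count and READY matches DESIRED, every node is running a connected agent.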

Troubleshooting

If you are unable to verify that the agent is connected, first make sure that the pods for the agent are running.

kubectl get pods --all-namespaces --selector name=bluematador-agent

You should see one agent pod per node that the DaemonSet is scheduled on, each with a status of Running.

If there are no pods, the scheduler may not have been able to place them. Ensure that there are no taints that would prevent the pod from being scheduled, or, if there are, modify the DaemonSet config to tolerate them. Also ensure that each node has sufficient CPU and memory to run the agent:

  • Small clusters (fewer than 10 nodes and 100 pods): as little as 64Mi memory and 50m CPU
  • Medium-sized clusters: at least 128Mi memory and 200m CPU
  • Large clusters (more than 50 nodes or 5000 pods): at least 256Mi memory and 500m CPU
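If you need to reserve these resources explicitly, they can be set as requests on the agent container in the DaemonSet. A sketch using the medium-cluster sizing (adjust the values to your cluster size):

      containers:
      - name: bluematador-agent
        image: bluematador/agent:2.0.3
        resources:
          requests:
            memory: "128Mi"   # medium-sized cluster sizing
            cpu: "200m"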

If the pods are created but some are not in the Running state, describe the pod to see if an event indicates what is wrong. There may have been an issue pulling the Docker image for the agent, in which case a network policy may need to be changed to allow the image to be pulled.

kubectl describe pod <pod name>

Once all the containers are running, you can verify that the agent is connected by viewing the logs for a pod. Typically the logs will be sparse, but look out for any errors related to Kubernetes data collection or authentication.
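For example, using the namespace chosen at install time and a pod name from the kubectl get pods output:

    kubectl logs --namespace <namespace> <pod name>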

Verbose Logging

Verbose logging for the agent can be enabled by modifying the DaemonSet to add the environment variable BLUEMATADOR_VERBOSE with a value of 5. This will enable debug logging in the agent and can help provide more information if resources and metrics are not being properly collected.

      containers:
      - name: bluematador-agent
        image: bluematador/agent:2.0.3
        imagePullPolicy: Always
        env:
          - name: BLUEMATADOR_VERBOSE
            value: "5"

Then, update the DaemonSet in your cluster:

kubectl apply -f bluematador-agent.yaml
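To confirm the variable took effect, you can inspect the environment of the running DaemonSet. A sketch, assuming it is named bluematador-agent:

    kubectl get daemonset bluematador-agent --namespace <namespace> \
      -o jsonpath='{.spec.template.spec.containers[0].env}'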

Network Issues

In order for the agent to connect to Blue Matador’s servers, ensure it has outgoing network connectivity to app.bluematador.com:443 and bluematador-flint-modules.s3.amazonaws.com:443.
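One way to check this from inside the cluster is a throwaway pod. A sketch assuming the curlimages/curl image is available from your nodes (the pod name nettest is arbitrary):

    # Prints an HTTP status code if app.bluematador.com:443 is reachable
    kubectl run nettest --rm -it --restart=Never --image=curlimages/curl \
      --command -- curl -sS -o /dev/null -w '%{http_code}\n' https://app.bluematador.com

If the command hangs or reports a connection error, outgoing traffic on port 443 is likely being blocked.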

Proxy Setup

You may configure the Kubernetes agent to connect to Blue Matador's servers via an HTTP proxy. To do this, add the following lines to your agent manifest file, replacing the example proxy endpoint with your proxy's endpoint:

        env:
          - name: HTTP_PROXY
            value: "http://myproxy.example.co:3128"
          - name: HTTPS_PROXY
            value: "http://myproxy.example.co:3128"

After updating the manifest file, make sure your proxy has whitelisted traffic to app.bluematador.com:443 and bluematador-flint-modules.s3.amazonaws.com:443, then apply your changes to the DaemonSet.

kubectl apply -f bluematador-agent.yaml

Google Cloud Permissions

If you are installing the Kubernetes agent in Google Cloud Platform, your user may not have enough privileges in the Kubernetes cluster to create the necessary resources to run the Blue Matador agent. You can run the following command to give your user privileges:

kubectl create clusterrolebinding cluster-admin-binding  \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)

Frequently Asked Questions

Does the agent need to run on every master and worker node? Configuring the DaemonSet to run on every node is preferred because each agent only reports the pods running on its node to Blue Matador. This allows us to distribute resource usage in larger clusters that may have thousands of pods running, but it also means that any node that does not have the agent will not have its pods monitored. You can modify the tolerations portion of the DaemonSet configuration to allow our agent to run on every node.
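For example, a catch-all toleration in the pod template of the DaemonSet lets the agent schedule onto any tainted node, including masters. A sketch:

      tolerations:
      - operator: "Exists"   # tolerate every taint on every node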

Should I install the Kubernetes agent if I am using the AWS integration with EC2 Instances? While the AWS integration collects many EC2 metrics, the Kubernetes agent provides value on top of that.  The Kubernetes agent is able to collect data from the Kubernetes API that is critical to monitoring the health of your Kubernetes cluster. This data is not available from the AWS integration alone.

Should I install the Linux agent directly onto a Kubernetes node? The preferred method of monitoring Kubernetes is to install the Blue Matador agent as a DaemonSet in your Kubernetes cluster using the instructions above. Installing the Linux agent directly on a node will not provide any Kubernetes events, but will still provide the Linux server events. Using both installations simultaneously can result in duplicate events, and many of the Linux events are less relevant to Kubernetes workloads, which are more dynamic in nature.

Where is the Dockerfile for the agent’s Docker image? You can view the Dockerfile here.
