- CPU requests specify the amount of CPU a Pod will be allocated
- If there aren’t enough resources on your cluster to satisfy the CPU requests, the pod won’t be scheduled
- That means you should probably think of the CPU request as 100% of the CPU guaranteed to the pod
- Anything beyond the request can either run at full speed or be throttled (if the node is already saturated)
- Of course anything beyond the CPU limit (if specified) will be artificially throttled
- The beauty of Docker and Kubernetes is that you can slice the CPU power into uneven pieces instead of allocating one machine per application
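The request/limit mechanics above, as a minimal Pod spec sketch (name, image and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo               # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"      # guaranteed share; the scheduler only picks a node with 250m free
        limits:
          cpu: "500m"      # hard cap; usage above this is artificially throttled
```

Between 250m and 500m the container runs at full speed only if the node isn’t saturated.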
- You can’t do the rate of a sum because you can’t use range selectors (i.e. `[5m]`) on the output of a function: `rate` takes a range vector, so if one writes `rate( sum(http_requests_total)[5m] )` the query is not allowed
- Using subqueries one can still compute the rate of a sum
- Doing so is discouraged anyway: if a server restarts and that causes a counter reset, the sum over all the counters will drop, and rate will show a decrease for the metric that didn’t actually happen
- If you instead take the individual rates and sum them, counter resets won’t show (at least not for more than one time window) and your graph will no longer show a decrease that didn’t happen
Ref: https://www.robustperception.io/rate-then-sum-never-sum-then-rate/
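The three variants as PromQL, using the metric from the note above:

```promql
# Not allowed: rate() takes a range vector, but sum() returns an instant vector
rate(sum(http_requests_total)[5m])

# Allowed thanks to a subquery, but discouraged: counter resets hide inside the sum
rate(sum(http_requests_total)[5m:1m])

# Preferred: take the individual rates first, then sum
sum(rate(http_requests_total[5m]))
```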
- I developed a Chrome extension. Developer experience is OK, but it can be frustrating at first. Writing it in JavaScript without TypeScript was a mistake, and doing the frontend without React was a mistake too. I should’ve known that.
- An extension is: a `popup/` containing a web application. You can use anything you want as long…
- If you look at your kubeconfig you’ll notice for each of your clusters there’s a CA certificate
- If you try to remove it, kubectl will complain: “Unable to connect to the server: x509: certificate signed by unknown authority”
- Note that your clients (browsers, curl, …) always have a set of CA certs to trust. They’re stored somewhere in the filesystem, depending on the operating system. kubectl is no exception
- The CA cert you’re providing to kubectl is the cert of the root CA that signed the other K8s certificates (e.g. the TLS certificate of the API server)
- When you connect to the API server, it returns a certificate signed by that root CA. If you don’t provide kubectl with the root certificate, your client can’t verify that the API server is legit
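A sketch of the relevant kubeconfig fields (server address and cluster name are illustrative; the data is truncated):

```yaml
clusters:
  - name: my-cluster
    cluster:
      server: https://203.0.113.10:6443
      # base64-encoded PEM of the root CA that signed the API server's TLS cert;
      # remove this and kubectl fails with the x509 error
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJU...
```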
- K8s allows you to implement authentication using OIDC. K8s doesn't provide any tool to generate OIDC id-tokens: you need another tool, e.g. Dex
- You connect to the Dex interface and sign in with Google. A typical OAuth 2.0 sign-in window opens. Insert email, password, … and Google returns a JWT (called the id-token) signed by them
- You can provide the JWT to kubectl (either on the CLI or via the KUBECONFIG)
- The API server will use the JWT to identify the user
- If a RoleBinding (or ClusterRoleBinding) exists for the user, the Role (or ClusterRole) will be used to check whether they’re authorised to perform the required action (e.g. `get pods`)
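A sketch of a ClusterRoleBinding for the OIDC user (names and the username claim are assumptions; `view` is a built-in read-only ClusterRole, enough for `get pods`):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dev-view              # hypothetical name
subjects:
  - kind: User
    name: dev@example.com     # must match the username the API server extracts from the JWT
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                  # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```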
- nginx returns 404 with an HTML page saying 404
- turns out the DNS record was pointing to the wrong LB
- the wrong LB was forwarding to the wrong nginx pods
- the wrong nginx pods didn’t have any server block for the hostname I was querying
- so those nginx pods return 404
- Fix? Point the DNS record to the right LB
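One way to compare the two (hostname and the ingress-nginx namespace/Service names are assumptions):

```shell
# What DNS currently resolves to
dig +short app.example.com

# The hostname of the LB actually fronting the right nginx pods
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

If the two don’t match, the record points at the wrong LB.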
- You install the ingress nginx controller
- The controller will add a new AWS load balancer through a `LoadBalancer` Service (an NLB if specified in the values; a Classic ELB otherwise)
- The load balancer listens on port 443 and forwards to the nodes’ port 34720, which forwards to the pods matching the selector `app=nginx`. Depending on whether this is a network load balancer or a classic one, the load balancer may do TLS termination here
- The controller watches for new `Ingress` resources and updates the nginx configuration file injected into the actual nginx pods (not the controller itself)
- Then comes the request: `req -> LB -> nginx pod -> target pod per uservice`
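A minimal `Ingress` the controller would pick up and translate into an nginx server block (host and Service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
spec:
  ingressClassName: nginx        # handled by the ingress nginx controller
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service # the uservice's ClusterIP Service
                port:
                  number: 80
```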