Comprehensive Guide to Using Kubernetes
July 11, 2025
4 min read

- Comprehensive Guide to Using Kubernetes
- What is Kubernetes?
- Key Concepts in Kubernetes
- Getting Started with Kubernetes
- Step 1: Install Kubernetes
- Installing Minikube
- Step 2: Start a Minikube Cluster
- Step 3: Interacting with Kubernetes
- Deploying an Application
- Scaling the Application
- Updating the Application
- Deleting Resources
- Step 4: Advanced Kubernetes Concepts
- ConfigMaps and Secrets
- Namespaces
- Step 5: Kubernetes Dashboard
- Troubleshooting Kubernetes Issues
- 1. Pods Not Starting (CrashLoopBackOff)
- 2. Pods Stuck in Pending State
- 3. Unable to Access Service (404 or Connection Refused)
- 4. Unable to Connect to the Cluster
- 5. Resource Limits Not Working Properly
- Conclusion
Comprehensive Guide to Using Kubernetes
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications. Kubernetes is designed to make it easy to manage applications in a microservices architecture, and it has become the standard for deploying containerized applications in production environments.
In this blog, we will cover the essential aspects of using Kubernetes, from installation to managing clusters, and also include troubleshooting tips to help resolve common issues.
What is Kubernetes?
Kubernetes is a platform that abstracts away the underlying infrastructure and allows developers to deploy and manage applications in a highly automated and scalable way. It allows you to run containers across clusters of machines (whether on-premise or in the cloud) and manage their lifecycle, scaling, and updates with ease.
Key Concepts in Kubernetes
- Pod: The smallest and simplest Kubernetes object. A Pod represents a set of running containers that share the same network namespace.
- Node: A physical or virtual machine in a Kubernetes cluster. A Node runs one or more Pods.
- Cluster: A group of nodes that run containerized applications and are managed by Kubernetes.
- Deployment: A Kubernetes resource that provides declarative updates to Pods and ReplicaSets, helping to manage the lifecycle of an application (a minimal example manifest appears after this list).
- Service: A Kubernetes resource that exposes an application running on a set of Pods as a network service.
- Namespace: A way to divide cluster resources between multiple users (or teams) via logical partitions.
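To make these objects concrete, here is a minimal Deployment manifest. This is a sketch for illustration only; the name nginx-demo and the label app: nginx-demo are arbitrary choices, not values required by Kubernetes. It declares three replicas of a Pod template running the nginx image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo            # hypothetical name, used only for this example
spec:
  replicas: 3                 # desired number of Pod copies
  selector:
    matchLabels:
      app: nginx-demo
  template:                   # the Pod template managed by this Deployment
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Applying this file with kubectl apply -f deployment.yaml asks Kubernetes to create and maintain three Pods matching the template; a Service would then select those Pods by the app: nginx-demo label.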
Getting Started with Kubernetes
Step 1: Install Kubernetes
Kubernetes can run on various platforms, including Linux, macOS, and Windows. Below, we will install a local cluster using Minikube, a tool that runs a Kubernetes cluster on your own machine and is the easiest way to get started.
Installing Minikube
- Install Minikube:
  - On macOS (using Homebrew):
    brew install minikube
  - On Windows (using Chocolatey):
    choco install minikube
  - On Linux (download the binary directly):
    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    chmod +x minikube
    sudo mv minikube /usr/local/bin
- Install kubectl (the Kubernetes CLI tool):
  - On macOS:
    brew install kubectl
  - On Windows:
    choco install kubernetes-cli
  - On Linux:
    curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin
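After installation, you can confirm that both tools are on your PATH (the exact version numbers printed will vary):
minikube version
kubectl version --client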
Step 2: Start a Minikube Cluster
Once Minikube and kubectl are installed, you can start your local Kubernetes cluster:
- Start the Kubernetes cluster:
  minikube start
  Minikube will download the necessary Kubernetes components and create a local cluster. Once it’s up and running, you can interact with it using kubectl.
- Verify the cluster status:
  kubectl cluster-info
  This should display information about the Kubernetes cluster, confirming it’s running.
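You can also list the nodes in the cluster; with Minikube you should see a single node in the Ready state:
kubectl get nodes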
Step 3: Interacting with Kubernetes
Now that your Kubernetes cluster is up and running, you can start deploying applications and managing resources.
Deploying an Application
- Create a simple deployment: Kubernetes uses Deployments to manage and scale applications. Let’s deploy a basic Nginx web server:
  kubectl create deployment nginx --image=nginx
- Expose the deployment as a service: To access the Nginx deployment from outside the cluster, expose it as a service:
  kubectl expose deployment nginx --type=LoadBalancer --port=80
- Get the service URL: After exposing the service, Minikube provides a way to access it through a local URL. Run the following command:
  minikube service nginx --url
  This should return a URL (e.g., http://192.168.99.100:30080) where you can access your Nginx server.
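To confirm that the Deployment, its Pods, and the Service were created (the nginx names come from the commands above), you can list them:
kubectl get deployments
kubectl get pods
kubectl get services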
Scaling the Application
Kubernetes makes it easy to scale applications. To scale the Nginx deployment to 3 replicas, run:
kubectl scale deployment nginx --replicas=3
To check the number of replicas running:
kubectl get pods
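If the new Pods are still starting, you can watch them come up in real time (press Ctrl+C to stop watching):
kubectl get pods --watch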
Updating the Application
Kubernetes allows you to update applications in a declarative way. For example, you can change the Nginx version by updating the deployment:
kubectl set image deployment/nginx nginx=nginx:1.19.0
Kubernetes will perform a rolling update and replace the old pods with new ones running the updated version of Nginx.
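You can follow the rollout and, if the new version misbehaves, roll back to the previous one using the standard kubectl rollout subcommands:
kubectl rollout status deployment/nginx
kubectl rollout undo deployment/nginx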
Deleting Resources
To clean up the resources you’ve created, use the following commands:
- Delete the deployment:
  kubectl delete deployment nginx
- Delete the service:
  kubectl delete service nginx
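If you are done experimenting, you can also stop or delete the local Minikube cluster itself:
minikube stop
minikube delete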
Step 4: Advanced Kubernetes Concepts
ConfigMaps and Secrets
- ConfigMap: A ConfigMap allows you to decouple configuration data from your application code. This makes it easy to manage and modify configurations without rebuilding the application container. Example of creating a ConfigMap:
  kubectl create configmap my-config --from-literal=key=value
- Secret: Kubernetes Secrets are intended to store sensitive data, such as passwords, OAuth tokens, and SSH keys. Example of creating a Secret (a sketch of how a Pod consumes both objects follows this list):
  kubectl create secret generic my-secret --from-literal=password=secretpassword
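As a minimal sketch of how these objects are consumed (the Pod name demo-pod and the environment variable names are hypothetical, chosen only for this example), a Pod can read the ConfigMap and Secret created above as environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # hypothetical name for illustration
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: MY_KEY        # value injected from the ConfigMap created above
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: key
        - name: APP_PASSWORD  # value injected from the Secret created above
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: password
Referencing the Secret this way keeps the sensitive value out of the container image and the Pod spec; it is injected at runtime.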
Namespaces
Namespaces allow you to partition resources within a Kubernetes cluster, which is useful for managing multiple applications or teams.
To create a namespace:
kubectl create namespace my-namespace
To run an application in a specific namespace (note that with recent kubectl versions, kubectl run creates a single Pod rather than a Deployment):
kubectl run my-app --image=nginx --namespace=my-namespace
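To list the resources in that namespace, or to make it the default namespace for subsequent kubectl commands in the current context:
kubectl get pods --namespace=my-namespace
kubectl config set-context --current --namespace=my-namespace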
Step 5: Kubernetes Dashboard
Kubernetes provides a web-based Dashboard to manage and monitor the cluster. Once the Dashboard is installed in your cluster, you can reach it through the API server proxy:
kubectl proxy
Then navigate to the Dashboard through the proxy in your web browser; for a default installation the URL is http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ (the old http://localhost:8001/ui shortcut has been removed in recent versions).
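On Minikube specifically, there is a simpler route: the dashboard addon can be enabled and opened directly, and the --url flag prints the address instead of opening a browser:
minikube dashboard
minikube dashboard --url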
Troubleshooting Kubernetes Issues
Kubernetes is a complex system, and issues may arise during deployment or management of clusters. Here are some common problems and troubleshooting steps:
1. Pods Not Starting (CrashLoopBackOff)
- Cause: This is often caused by misconfigurations or issues in the application container itself.
- Solution:
  - Check the logs of the pod to identify the issue:
    kubectl logs <pod-name>
  - Inspect the events in the cluster:
    kubectl describe pod <pod-name>
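If the container keeps restarting, its current logs may be empty; the --previous flag shows the logs from the last terminated instance instead:
kubectl logs <pod-name> --previous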
2. Pods Stuck in Pending State
- Cause: Kubernetes is unable to find resources (like CPU or memory) to schedule the pod.
- Solution:
  - Check if your cluster has enough resources:
    kubectl describe pod <pod-name>
  - Verify that you have sufficient resources available (e.g., CPU, memory) or scale down other workloads.
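To see how much CPU and memory each node can allocate and how much is already requested, inspect the nodes and look at the Allocatable and Allocated resources sections of the output:
kubectl describe nodes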
3. Unable to Access Service (404 or Connection Refused)
- Cause: The service may not be correctly exposed or the application inside the pod may not be working.
- Solution:
  - Check if the service is correctly exposed:
    kubectl get services
  - Verify that the application is running and responding inside the container:
    kubectl logs <pod-name>
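Two further checks often help (using the nginx names from the earlier example): confirm that the Service actually has endpoints, and bypass the Service with a port-forward to test the Pods directly:
kubectl get endpoints nginx
kubectl port-forward deployment/nginx 8080:80
With the port-forward running, http://localhost:8080 should reach Nginx directly; if it does, the problem lies with the Service configuration rather than the application.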
4. Unable to Connect to the Cluster
- Cause: This might be due to issues with your kubeconfig file or misconfigured contexts.
- Solution:
  - Verify that your kubectl is configured to use the correct context:
    kubectl config get-contexts
    kubectl config use-context <context-name>
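To inspect what kubectl is actually using, print the active context and the configuration for it, and check whether the KUBECONFIG environment variable points at the file you expect:
kubectl config current-context
kubectl config view --minify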
5. Resource Limits Not Working Properly
- Cause: Kubernetes uses resource requests and limits to manage CPU and memory. If these values are too low or too high, Kubernetes may struggle to manage resources effectively.
- Solution:
  - Make sure resource requests and limits are appropriately defined in your deployment YAML or manifest. Example:
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
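For an existing Deployment such as the nginx example above, requests and limits can also be set imperatively; note that this triggers a rolling update of the Pods:
kubectl set resources deployment nginx --requests=cpu=250m,memory=64Mi --limits=cpu=500m,memory=128Mi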
Conclusion
Kubernetes provides a robust, scalable, and efficient way to manage containerized applications, especially in microservices environments. By using Kubernetes, developers can automate much of the deployment and management process, allowing them to focus on building features instead of worrying about infrastructure.
In this guide, we covered everything from installing Kubernetes to deploying and scaling applications, and we discussed some common issues and troubleshooting tips. Kubernetes is a deep topic, but these fundamentals are enough to start running, updating, and troubleshooting your own workloads with confidence.