No More kubectl Commands — Just Talk to Your Kubernetes Cluster in Natural Language

Ever typed something like:
kubectl get pods -n default -l app=nginx
And then paused… “Wait, what was the label again?”

What if you could just ask:

>>> how's my nginx app doing?

…and your terminal figured out the rest?

Thanks to kubectl-ai, Kubernetes can now understand natural language. You type what you want in plain English, and it responds by executing real, context-aware kubectl commands — without needing to memorize flags, write YAML, or guess namespaces.

Let’s explore how this works in action.

Getting Started

Before you can chat with your cluster, make sure you have the basics in place:

  • kubectl must be installed and configured for your environment. If you can already run kubectl get pods, you’re good to go.
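  • kubectl-ai also needs an API key for an LLM provider (the examples later in this post use OpenAI’s GPT-4o). As a sketch, you would typically export the key before launching; check the project README for the exact variable and defaults your version expects:

export GEMINI_API_KEY="your-api-key"
# or, for OpenAI (as used in the examples below):
export OPENAI_API_KEY="your-api-key"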

Install kubectl-ai in Seconds

The fastest way (Linux & macOS):
Skip the manual steps and install in one line:

curl -sSL https://raw.githubusercontent.com/GoogleCloudPlatform/kubectl-ai/main/install.sh | bash

You’ll be ready to talk to your cluster in under a minute.

Prefer to do it yourself? Manual install (Linux, macOS, Windows):

  1. Download the latest release for your system from the official releases page.
  2. Extract the archive (the filename below is the macOS Apple Silicon build; grab the archive that matches your OS and architecture):
tar -zxvf kubectl-ai_Darwin_arm64.tar.gz
  3. Make it executable:
chmod a+x kubectl-ai
  4. Move it somewhere in your $PATH:
sudo mv kubectl-ai /usr/local/bin/
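
Once the binary is on your PATH, a quick sanity check (no cluster access needed) confirms the install:

kubectl-ai --help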

Already a kubectl power user? Install with Krew (Linux/macOS/Windows):

If you use Krew (the plugin manager for kubectl), installation is a breeze:

kubectl krew install ai

Now, just invoke it as a kubectl plugin:

kubectl ai

No matter which method you choose, you’ll be ready to type natural language commands and let kubectl-ai handle the rest. No more memorizing flags, no more YAML wrangling—just pure, conversational Kubernetes.


Ready to see kubectl-ai in action? Now that setup is out of the way, it’s time to experience the magic of conversational Kubernetes firsthand.

Let’s dive in: Instead of juggling flags and memorizing resource names, just type your question—kubectl-ai will handle the rest.
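
How you ask is up to you: launch kubectl-ai with no arguments for an interactive prompt, or pass a question directly for a one-off answer. The provider and model flags below are a sketch matching the setup described in the note that follows; adjust them for whichever backend you use:

kubectl-ai --llm-provider=openai --model=gpt-4o
kubectl-ai --llm-provider=openai --model=gpt-4o "how's my nginx app doing?"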

Note: The following examples are run on a Windows system using the OpenAI GPT-4o model, but kubectl-ai works just as smoothly across platforms and with other LLM providers.

Example 1: Get the Pod Count — Just Ask

Let’s say you want to find out how many pods are running in your cluster. Normally, you’d be reaching for kubectl get pods --all-namespaces and parsing through the output. But with kubectl-ai, you can just ask—no commands, no guesswork. It’s as simple as having a conversation with your cluster:

How many pods are running in the cluster?

kubectl-ai instantly returns the number of currently running pods in your cluster—and, for full transparency, it also shows you the exact kubectl command it ran behind the scenes. No more guessing or digging through documentation; you get answers and learn the underlying commands at the same time.
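
For comparison, getting that count by hand in a Unix-like shell usually means a pipeline like the one below; it isn’t necessarily the exact command kubectl-ai chose, but it’s functionally equivalent:

kubectl get pods --all-namespaces --field-selector=status.phase=Running --no-headers | wc -l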

Example 2: Launch an Nginx Pod in Seconds

Now, let’s say you need to create a Kubernetes resource—like an nginx pod. No need to remember commands or write YAML. Just ask, and kubectl-ai handles the rest, showing you exactly what it will run before you confirm.

Create nginx pod

kubectl-ai shows the exact command it intends to run and asks how to proceed; you simply pick your choice from the list, quick and straightforward.
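
The suggested command depends on the model and your cluster state, but for a request this simple it’s usually a one-liner along these lines, shown here so you know what you’re confirming:

kubectl run nginx --image=nginx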

Now, let’s check the pod count again—just ask, and kubectl-ai will show you the latest numbers instantly.

Now you can see there are 21 pods running in the cluster, which means your nginx pod was created successfully. kubectl-ai not only makes the creation process seamless, but also lets you instantly confirm the result, so you know your deployment worked as expected.

Example 3: Ask for Nginx Pod Details—No Commands Needed

How often do you need to dig into the details of a specific pod? With kubectl-ai, there’s no need to remember the exact command or worry about the right flags. Just type your question in plain English—like this—and get all the info you need, instantly.

I want to see the details of the nginx pod.

As you can see, kubectl-ai ran the kubectl describe command behind the scenes and presented the full details of the nginx pod—just like you’d expect from the CLI. The best part? You haven’t typed a single kubectl command yourself. Everything so far has been done in plain English.
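
If you ever want to rerun that lookup yourself, the underlying call is the familiar one below (assuming the pod is literally named nginx, as created in the previous example):

kubectl describe pod nginx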

Why This Changes the Game

Capability | Advantage
Natural language queries | No CLI memorization or flag guessing
Context-aware responses | Works across namespaces, deployments, and pods
YAML generation | Builds and applies configs instantly
Troubleshooting logic | Smart fallback if a resource isn’t found
Local + cloud models | Supports Gemini, OpenAI, or Ollama (offline)
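
That last row is worth calling out: you’re not limited to hosted models. As a rough sketch (flag names per the project’s documentation; swap in whatever model you’ve pulled locally, and note that some local models may need extra flags described in the README), pointing kubectl-ai at a local Ollama instance looks like:

kubectl-ai --llm-provider=ollama --model=llama3 "how many pods are running?"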

Key Takeaway

kubectl-ai transforms Kubernetes from a syntax-heavy platform into a conversational system.

No more:

❌ Rechecking flags
❌ Copy-pasting YAML
❌ Searching through namespaces

Just ask what you need — and Kubernetes responds.

Curious to explore more AI-powered DevOps tools?
Check out KodeKloud AI for hands-on labs, tutorials, and the latest in cloud-native automation.

Ready to explore further?
Check out the official GitHub repo by Google Cloud for setup, usage examples, and contribution details:
👉 github.com/GoogleCloudPlatform/kubectl-ai