I remember the first time I saw a “generative AI” question in the AI-900 style.
It looked simple.
It was not.
The question was not asking what an LLM is.
It was asking which Azure option fits the goal with the least drama.
That is why this topic feels harder than the rest. It is newer. The terms are similar. The services overlap in how people describe them. And the options often look “kind of right.”
This guide is built to fix that.
The story the exam is testing
AI-900 does not need you to build a model.
It needs you to explain what generative AI does and how Azure supports it.
So the exam keeps returning to one story.
A user has a goal.
They want new content.
You pick the right approach and the right Azure capability.
What generative AI means in exam language
Generative AI is when a model produces new output based on patterns learned from data.
In the exam you will mostly see these outputs:
Text, such as summaries, emails, and chat answers
Code, such as snippets and explanations
Images, in some examples
The key difference from classic AI tasks is this:
Classic AI often classifies or detects
Generative AI creates new content
The three things you must be able to do
Microsoft’s study guide groups this area under “Describe features of generative AI workloads on Azure.”
So your job is mainly three skills.
1 Recognize generative AI scenarios
If the prompt says
Write, summarize, rewrite, translate, draft, explain, generate
You are in generative AI territory.
If the prompt says
Detect, classify, extract, recognize, label
That is usually not generative AI.
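The verb test above can be sketched as a toy filter. This is illustrative only: the function name and the exact verb sets are mine, taken from the lists in this guide, and real exam questions need judgment, not keyword matching.

```python
# Toy heuristic mirroring the exam's verb signal (illustrative only).
GENERATIVE_VERBS = {"write", "summarize", "rewrite", "translate",
                    "draft", "explain", "generate"}
ANALYTIC_VERBS = {"detect", "classify", "extract", "recognize", "label"}

def scenario_signal(prompt: str) -> str:
    """Return a rough signal based on which verbs appear in the prompt."""
    words = {w.strip(".,?!").lower() for w in prompt.split()}
    if words & GENERATIVE_VERBS:
        return "generative AI"
    if words & ANALYTIC_VERBS:
        return "not generative AI"
    return "unclear"

print(scenario_signal("Summarize this long support thread"))  # generative AI
print(scenario_signal("Extract key phrases from reviews"))    # not generative AI
```

The point is not the code; it is that the output verb, not the buzzword, decides the category.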
2 Explain what makes a generative model useful
You do not need math. You need plain logic.
Generative models are strong at
Understanding instructions
Producing human-like text
Adapting tone and format
Using examples and patterns
In exam questions the “why” is often about productivity:
Support agents drafting replies
Analysts summarizing long content
Developers getting code assistance
3 Pick the right Azure capability at a high level
The study guide expects you to recognize Azure OpenAI Service and Azure AI Foundry-related capabilities.
You do not need deep setup steps.
You need mapping.
Goal-to-service mapping
Build generative AI apps with foundation models
Think Azure OpenAI Service
Explore and choose from available models
Think Azure AI Foundry and its model catalog concept
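The mapping above can be written down as a simple lookup table. A minimal sketch, assuming nothing beyond the two pairings in this guide; the dict, function name, and fallback string are illustrative, not official exam terms.

```python
# The guide's goal-to-capability mapping as a lookup table (illustrative).
GOAL_TO_CAPABILITY = {
    "build generative ai apps with foundation models": "Azure OpenAI Service",
    "explore and choose from available models": "Azure AI Foundry model catalog",
}

def pick_capability(goal: str) -> str:
    # Normalize casing so the lookup is forgiving about how the goal is phrased.
    return GOAL_TO_CAPABILITY.get(goal.lower(), "re-read the scenario")

print(pick_capability("Build generative AI apps with foundation models"))
# Azure OpenAI Service
```

Treat it as a memory aid: the exam rewards knowing which capability a goal points at, not setup details.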
The part that tricks people
Similar options and small wording differences
Here are the common “looks right” traps.
Trap 1 Confusing model vs service
A model is the engine.
A service is how you use it on Azure.
Exam wording often mixes these casually.
Your job is to choose the Azure capability that provides access and management.
Trap 2 Thinking every AI task is generative now
A scenario about extracting key phrases is not generative AI.
It is NLP analytics.
A scenario about reading invoices is not generative AI.
It is document processing.
Generative AI appears when the output is new content.
Trap 3 Ignoring responsible AI in generative scenarios
The official objectives call out responsible AI considerations for generative AI.
So when you see words like
harmful content, privacy, bias, safety, grounded answers
The question is often testing responsible use choices not model trivia.
A simple mental model that works
Use this three step check on every generative AI question.
Step 1: What is being produced?
New text, code, or other content?
Then it is generative AI.
Labels, categories, or extracted fields?
Then it is not generative AI.
Step 2: What is the user actually asking for?
A chat assistant
A summarizer
A content generator
A coding helper
This prevents you from choosing a random service name.
Step 3: What is the risk?
Private data
Hallucination risk
Policy compliance
Safety needs
If the scenario hints at risk, the best answer is usually the one that adds guardrails.
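The three-step check can be condensed into a sketch. The keyword set and return strings are my own wording for illustration, not exam answers; Step 2 (reading the user's actual goal) is left as a comment because it resists automation.

```python
# The three-step check as code (illustrative wording, not exam material).
RISK_HINTS = {"private", "privacy", "hallucination", "compliance", "safety"}

def three_step_check(output_kind: str, scenario: str) -> str:
    # Step 1: is new content being produced?
    if output_kind not in {"text", "code", "image"}:
        return "not generative AI"
    # Step 2: decide what the user is really asking for (chat, summary,
    # generation, coding help) by reading the scenario, not by keyword.
    # Step 3: does the wording hint at risk?
    if set(scenario.lower().split()) & RISK_HINTS:
        return "generative AI with guardrails"
    return "generative AI"

print(three_step_check("text", "draft replies without leaking private data"))
# generative AI with guardrails
```

Running the check mentally in this order keeps you from picking a service name before you know what is produced.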
Mini story example
How the exam wants you to think
Imagine this exam style situation.
A team wants a tool that drafts support replies from past tickets.
They want consistent tone.
They also want to reduce the chance of the tool inventing details.
Most people jump to the first generative AI term they remember.
That is where they lose points.
Better thinking
The workload is generating replies, so it is generative AI.
The risk is invented details, so we need grounding and safety thinking.
Pick the Azure generative AI capability and the approach that supports safer responses.
The exam is not checking if you can code it.
It is checking if you can choose the right direction.
The vocabulary you should be comfortable with
This is the minimum set that shows up a lot.
Prompt
The instruction you give the model.
Completion
The model output.
Grounding
Reducing invented answers by using trusted source content.
Responsible AI for generative systems
Concepts like safety, privacy, and transparency.
Model selection idea
Choosing a model based on capability and fit.
This connects to the idea of a model catalog in Azure AI Foundry.
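Grounding, from the vocabulary above, is the term most worth seeing concretely. A minimal sketch, assuming a plain prompt-stuffing approach: the trusted source text is placed in the prompt so the completion is constrained to it. The function name and template wording are invented for illustration; this is a pattern, not an Azure API.

```python
# Minimal grounding sketch (illustrative pattern, not an Azure API):
# put trusted source content into the prompt so the model answers from
# it instead of inventing details.

def grounded_prompt(question: str, source: str) -> str:
    return (
        "Answer using only the source below. "
        "If the source does not contain the answer, say so.\n\n"
        f"Source:\n{source}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt(
    "What is the refund window?",
    "Refunds are accepted within 30 days of purchase.",
))
```

For AI-900 you only need the idea: grounded answers come from supplied source content, which directly reduces hallucination risk.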
A quick study routine for this topic
If you only have a short time do this.
1 Read the official objective list for generative AI once.
2 Write a one line definition of generative AI.
3 Make a list of 10 scenario verbs that signal generative AI.
4 Practice mapping goals to Azure OpenAI Service vs other Azure AI services.
5 Do a set of scenario questions, like those from Pass4success, and rewrite your wrong answers as rules.
The goal is speed and confidence.
Not memorizing marketing names.
What you should be able to answer easily
If you can answer these without hesitation you are in good shape.
1 Is this a generative AI scenario or not?
2 What is the output type?
3 Which Azure capability fits at a high level?
4 What responsible AI concern is hinted by the wording?
Once those become automatic this topic stops feeling like a guessing game.