
The Complete AWS Certified AI Practitioner (AIF‑C01) Study Guide


Highlights

  • This is a foundational certification, not a deep engineering exam. The AIF‑C01 tests whether you understand what AI and ML are, how generative AI works, which AWS services handle which use cases, and how to apply AI responsibly. You will not write code, build models, or configure SageMaker training jobs.
  • Domain 3 carries the highest weight at 28% of the exam. Applications of Foundation Models is the largest domain and covers practical use cases including RAG architecture, Bedrock Agents, Guardrails, and model selection. If you are going to overinvest your study time anywhere, this is the domain.
  • The exam has 50 scored questions plus 15 unscored questions. You have 85 minutes to complete all 65 questions, which gives you roughly 1.3 minutes per question. The passing score is 700 out of 1000 and the exam costs USD 150. Question types include multiple choice, multiple response, ordering, and matching.
  • Service to use case mapping is heavily tested throughout the exam. You need to know that Amazon Comprehend handles sentiment analysis, Amazon Rekognition handles image and video analysis, Amazon Textract handles document extraction, and Amazon Bedrock provides managed access to foundation models. Getting these mappings wrong accounts for many candidate failures.
  • RAG and Amazon Bedrock Knowledge Bases appear across multiple domains. Understand the complete flow from document chunking and embedding into a vector store through retrieval and generation. This topic shows up in both Domain 2 and Domain 3 and represents one of the highest yield study areas for the exam.
  • Three weeks of focused study is sufficient for most candidates. AWS recommends at least six months of exposure to AI and ML on AWS. The exam rewards breadth over depth, and the biggest risk is overconfidence from daily AI usage without studying the specific AWS service mappings and responsible AI principles.

Introduction

Here is the situation. Your team has started using Amazon Bedrock. Someone mentioned SageMaker in a meeting and you nodded like you understood. You have been prompting Claude and ChatGPT for months and it feels like you should be able to put a certification behind all that experience. The AWS Certified AI Practitioner exists exactly for this moment.

This is not a deep ML engineering exam. You will not be asked to write Python, tune hyperparameters, or build training pipelines. This is a foundational certification that tests whether you understand what AI and ML are, how generative AI works, which AWS services do what, and how to use all of it responsibly. Think of it as AWS saying: “Do you understand AI well enough to make good decisions about it in a business context?”

50 scored questions. 85 minutes. Passing score of 700 out of 1000. Let us walk through every domain.

The Exam at a Glance

Detail                   Value
Exam Code                AIF‑C01
Format                   50 scored + 15 unscored questions
Time                     85 minutes
Passing Score            700 / 1000
Cost                     USD 150
Question Types           Multiple choice, multiple response, ordering, matching
Recommended Experience   6 months exposure to AI/ML on AWS

⚠️ What This Exam Is NOT

You will not write code. You will not build models. You will not configure SageMaker training jobs. If you are looking for that, the Machine Learning Engineer Associate (MLA-C01) or the Generative AI Developer Professional (AIP-C01) is where you should be heading.

The Five Domains and What They Actually Test

#   Domain                                  Weight   ~Questions
1   Fundamentals of AI and ML               20%      ~10
2   Fundamentals of Generative AI           24%      ~12
3   Applications of Foundation Models       28%      ~14
4   Guidelines for Responsible AI           14%      ~7
5   Security, Compliance, and Governance    14%      ~7

Domain 3 is the heaviest at 28%. If you are going to overinvest your study time anywhere, that is where.

Domain 1: Fundamentals of AI and ML (20%)

What You Need to Know

This domain tests whether you can tell AI apart from ML, ML apart from deep learning, and supervised learning apart from unsupervised learning. It sounds simple until the exam gives you a scenario and asks you to classify it.

Scenario: A retail company wants to predict which customers will churn next quarter based on their purchase history. Which type of ML is this?

The answer is supervised learning, specifically a classification problem, because you have labeled historical data (customers who did and did not churn) and you are predicting a category.

Key concepts to nail down:

  • The difference between supervised (labeled data, regression, classification), unsupervised (clustering, anomaly detection), and reinforcement learning (reward based, sequential decisions)
  • What inference means versus training
  • When to use ML versus when traditional rules based systems are sufficient
  • The ML pipeline stages: data collection, feature engineering, training, evaluation, deployment, monitoring
  • AWS services: Amazon SageMaker for training and hosting, Amazon Rekognition for vision, Amazon Comprehend for NLP, Amazon Translate, Amazon Polly, Amazon Transcribe
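The training-versus-inference distinction can be made concrete with a toy classifier. This is a pure-Python sketch, not anything you would run on AWS; the purchase counts, ticket counts, and churn labels are invented for illustration:

```python
# Toy nearest-centroid classifier: supervised learning on labeled churn data.
# Training learns from labeled examples; inference predicts for new customers.

def train(examples):
    """Training: compute the mean feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(centroids, features):
    """Inference: assign the label of the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Labeled historical data: (purchases_last_quarter, support_tickets) -> outcome
history = [
    ([1.0, 5.0], "churn"), ([2.0, 4.0], "churn"),
    ([9.0, 1.0], "stay"),  ([8.0, 0.0], "stay"),
]
model = train(history)             # training phase: uses labeled data
print(predict(model, [1.5, 4.5]))  # inference phase -> "churn"
```

The point is the split: training consumes labeled history once; inference runs repeatedly on new, unlabeled inputs.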

🎯 Exam Focus

The exam loves asking about the right AWS service for a use case. "The company wants to extract sentiment from customer reviews." That is Amazon Comprehend, not Rekognition (vision) or Textract (document extraction). Know which service maps to which problem type.

Domain 2: Fundamentals of Generative AI (24%)

What You Need to Know

This is where the exam tests whether you understand what happened when LLMs entered the picture. Foundation models, transformers, tokens, context windows, temperature, top‑p sampling, hallucinations, prompt engineering basics.

Scenario: A developer is using Amazon Bedrock to generate marketing copy but the outputs are too creative and sometimes factually wrong. What should they adjust?

Reduce the temperature. Lower temperature values make the model more deterministic and focused. Higher values introduce more randomness and creativity.
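The effect of temperature can be shown in a few lines of Python. This is a conceptual sketch of temperature-scaled softmax, not Bedrock's actual implementation, and the logits for the three candidate tokens are invented:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into token probabilities. Lower temperature
    sharpens the distribution (more deterministic); higher flattens it
    (more random)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens

low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 2.0)  # more creative/random

print([round(p, 3) for p in low])   # top token dominates
print([round(p, 3) for p in high])  # probability spread across tokens
```

At low temperature the top token takes nearly all the probability mass, which is exactly why lowering temperature tames "too creative" marketing copy.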

Key concepts to nail down:

  • What a foundation model is and why it is different from a task specific model
  • Transformers architecture at a conceptual level: attention mechanism, how tokens flow through the model
  • What tokens are, how context windows work, and why they matter for cost and performance
  • Prompt engineering techniques: zero shot, few shot, chain of thought, system prompts
  • Inference parameters: temperature, top‑p, top‑k, stop sequences, max tokens
  • Hallucinations: what causes them, how to mitigate them (grounding with RAG, lower temperature, explicit instructions)
  • Amazon Bedrock as the managed service to access foundation models (Claude, Llama, Titan, Mistral, Stability AI)
  • Amazon Q as the enterprise AI assistant (Q Business for enterprise knowledge, Q Developer for code)

💡 High Yield Topic

Amazon Bedrock Knowledge Bases and RAG come up heavily across Domains 2 and 3. Understand the flow: documents get chunked and embedded into a vector store (Amazon OpenSearch Serverless or Pinecone), then at query time the user question gets embedded, similar chunks are retrieved, and they are injected into the prompt so the model answers from your data instead of its training data.
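The retrieval step of that flow can be sketched in pure Python. This is a toy illustration of the concept, not the Bedrock Knowledge Bases API: the "embeddings" are tiny hand-made vectors rather than output from a real embedding model.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend each chunk has already been embedded into a vector store.
vector_store = [
    ("Refunds are processed within 5 business days.", [0.9, 0.1, 0.0]),
    ("Our office is closed on public holidays.",      [0.1, 0.9, 0.0]),
    ("Shipping costs depend on the destination.",     [0.0, 0.2, 0.9]),
]

def retrieve(query_embedding, top_k=1):
    """Return the top_k chunks most similar to the query embedding."""
    ranked = sorted(vector_store,
                    key=lambda item: cosine_similarity(item[1], query_embedding),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

# "How long do refunds take?" embedded at query time (by hand, for illustration):
query = [0.95, 0.05, 0.0]
context = retrieve(query)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How long do refunds take?"
print(prompt)  # the retrieved chunk is injected into the prompt before generation
```

Everything the exam asks about RAG reduces to this loop: embed the question, rank stored chunks by similarity, inject the winners into the prompt.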

Domain 3: Applications of Foundation Models (28%)

What You Need to Know

This is the largest domain and the most practical. It tests whether you can take a business problem and map it to the right foundation model application, the right AWS service, and the right architecture pattern.

Scenario: A legal firm wants their lawyers to ask natural language questions about thousands of internal contracts and get answers with citations. Which architecture?

RAG with Amazon Bedrock Knowledge Bases. The contracts get ingested into a vector store. When a lawyer asks a question, the system retrieves the most relevant contract clauses and the model generates an answer grounded in those documents, with source attribution.

Key concepts to nail down:

  • RAG architecture end to end: ingestion pipeline, chunking strategies, embedding models, vector databases, retrieval, generation
  • When to use RAG versus fine tuning versus prompt engineering (the decision framework from the earlier article in this series applies directly)
  • Amazon Bedrock Agents for agentic workflows that can call APIs and take actions
  • Amazon Bedrock Guardrails for filtering harmful content, PII, and off topic responses
  • Model selection: understanding when to pick Claude versus Llama versus Titan based on use case, cost, and latency
  • Multimodal models: models that handle text, images, and documents together
  • Amazon SageMaker JumpStart as an alternative way to deploy foundation models
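Chunking is named on the exam but never implemented there; still, seeing a minimal fixed-size chunker with overlap (a common default strategy, sketched here in plain Python) makes the idea concrete:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character chunks with overlap, so a
    sentence cut at one boundary still appears intact in the next chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping 'overlap' chars
    return chunks

doc = "A" * 500  # stand-in for a long contract document
pieces = chunk_text(doc, chunk_size=200, overlap=50)
print(len(pieces))  # 4 chunks, starting at offsets 0, 150, 300, 450
```

Smaller chunks give more precise retrieval but less context per hit; larger chunks the reverse. That trade-off is the part worth remembering.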

Scenario: A customer support team wants the AI to look up order status in their CRM before responding. What feature enables this?

Amazon Bedrock Agents. Agents can be configured with action groups that call external APIs (like a CRM lookup) as part of the response generation process.
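Conceptually, an action group maps a model-identified intent to an API the agent is allowed to call. The toy dispatcher below illustrates only the pattern; real Bedrock Agents define action groups via OpenAPI schemas or function definitions, and `lookup_order_status` is a hypothetical stand-in for a CRM API:

```python
# Hypothetical CRM lookup the agent can call while building its answer.
def lookup_order_status(order_id):
    fake_crm = {"A100": "shipped", "A101": "processing"}  # invented data
    return fake_crm.get(order_id, "not found")

# "Action group": the set of tools the agent is permitted to invoke.
ACTIONS = {"get_order_status": lookup_order_status}

def handle(intent, **params):
    """Dispatch a model-identified intent to the matching action."""
    if intent not in ACTIONS:
        return "I can't help with that."
    return ACTIONS[intent](**params)

# The model decides the user asked about order A100, so the agent
# calls the CRM action before generating its reply.
print(handle("get_order_status", order_id="A100"))  # shipped
```

The exam-relevant takeaway: the model does not call the CRM itself; the agent orchestrates the call and feeds the result back into generation.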

Domain 4: Guidelines for Responsible AI (14%)

What You Need to Know

14% sounds small, but these questions are often the easiest to get right if you study them, and the easiest to get wrong if you guess. AWS takes responsible AI seriously and the exam reflects that.

Scenario: A hiring platform is using an ML model to screen resumes. Early testing shows the model favors candidates from certain universities. What is this and how should it be addressed?

This is bias in the training data. The model learned patterns from historical hiring decisions that reflected human biases. The mitigation involves auditing the training data, using Amazon SageMaker Clarify to detect bias metrics, rebalancing the dataset, and implementing human review of model decisions.

Key concepts to nail down:

  • Types of bias: training data bias, selection bias, measurement bias, confirmation bias
  • Fairness metrics and how SageMaker Clarify detects them
  • Transparency and explainability: why stakeholders need to understand how the model makes decisions
  • Amazon Bedrock Guardrails for content filtering, topic restrictions, and PII redaction
  • Human in the loop: Amazon Augmented AI (A2I) for human review of model predictions
  • AWS responsible AI principles: fairness, explainability, privacy, robustness, governance, transparency
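You will not compute bias metrics on the exam, but seeing one helps the concepts stick. The sketch below computes disparate impact, the ratio of positive-outcome rates between two groups, which is among the bias metrics SageMaker Clarify reports; the resume-screening numbers are invented:

```python
def disparate_impact(outcomes_a, outcomes_b):
    """Ratio of positive-outcome rates, group B over group A.
    A value near 1.0 suggests parity; far below 1.0 suggests the model
    disadvantages group B."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_b / rate_a

# 1 = resume advanced by the model, 0 = rejected (made-up data).
university_x = [1, 1, 1, 0, 1, 1, 1, 1]   # 87.5% advance
university_y = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% advance

print(round(disparate_impact(university_x, university_y), 3))
```

A ratio this far below 1.0 is the quantitative version of "the model favors candidates from certain universities", and it is what a Clarify-style audit would surface.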

Domain 5: Security, Compliance, and Governance (14%)

What You Need to Know

This domain tests whether you understand the security wrapper around AI workloads on AWS. It is not about building security controls from scratch. It is about knowing which AWS service handles which security concern.

Scenario: A healthcare company wants to use Amazon Bedrock but needs to ensure that patient data in prompts is encrypted and never leaves a specific AWS Region. What controls apply?

Data encryption in transit and at rest (AWS KMS for key management), VPC endpoints to keep traffic off the public internet, and AWS Region selection with data residency policies. Amazon Bedrock does not store or log prompts by default, and you can use AWS PrivateLink for private connectivity.

Key concepts to nail down:

  • IAM policies for controlling who can invoke which Bedrock models
  • AWS KMS for encryption of data at rest, TLS for encryption in transit
  • VPC endpoints and AWS PrivateLink for private connectivity to AI services
  • AWS CloudTrail for auditing API calls to Bedrock and SageMaker
  • Data residency and compliance: understanding that some models are only available in certain Regions
  • The shared responsibility model as it applies to AI: AWS secures the infrastructure, you secure the data, prompts, and model configurations
  • Model governance: versioning models, tracking lineage, approval workflows
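For IAM, the shape of a policy matters more than memorizing one. The sketch below builds a least-privilege policy allowing invocation of a single Bedrock model; `bedrock:InvokeModel` is the real IAM action name, while the Region and model ID in the ARN are illustrative and would be adjusted for your account:

```python
import json

# Minimal IAM policy: allow invoking one specific Bedrock foundation model.
# The ARN is illustrative; substitute your Region and model ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": [
                "arn:aws:bedrock:us-east-1::foundation-model/"
                "anthropic.claude-3-haiku-20240307-v1:0"
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Scoping `Resource` to one model ARN (instead of `*`) is the kind of "who can invoke which model" control the exam expects you to recognize.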

The 3 Week Study Plan

Here is a realistic timeline for someone with basic AWS and AI familiarity:

Week 1: Build the Foundation

Cover Domains 1 and 2. Understand AI/ML fundamentals, types of learning, generative AI concepts, and AWS AI services. Watch the AWS Skill Builder AI Practitioner course. Make flashcards for service to use case mappings.

Week 2: Go Deep on Applications and Responsibility

Cover Domains 3 and 4. This is where you spend the most time. Build a mental model of when to use RAG versus fine tuning versus prompt engineering. Understand Bedrock Knowledge Bases, Agents, and Guardrails. Study responsible AI principles until they feel obvious.

Week 3: Security, Practice Exams, and Gap Filling

Cover Domain 5. Then spend the rest of the week on practice exams. Every wrong answer is a study opportunity. Do not just memorize answers. Understand why the correct option is correct and why each distractor is wrong.

Exam Day: Ten Things to Remember

  1. There is no penalty for guessing. Never leave a question blank.
  2. 15 of the 65 total questions are unscored. You cannot tell which ones. Treat every question seriously.
  3. Read the scenario carefully. The answer often hides in one specific phrase like “minimize operational overhead” or “real time predictions.”
  4. When two answers seem correct, pick the one that is more managed. AWS almost always prefers Bedrock over a custom SageMaker deployment for this exam level.
  5. RAG is the answer to almost every “how do we ground the model in our data” question.
  6. SageMaker Clarify is the answer to almost every “how do we detect bias” question.
  7. Bedrock Guardrails is the answer to almost every “how do we filter harmful content” question.
  8. Amazon Q Business is the answer to “enterprise knowledge assistant” scenarios. Amazon Q Developer is for code assistance.
  9. For security questions, think IAM + KMS + CloudTrail + VPC endpoints. That combination covers most scenarios.
  10. Flag uncertain questions and come back. You have 85 minutes for 65 questions. That is roughly 1.3 minutes per question. Use the time.

The AIF‑C01 is not a difficult exam if you study with intention. It rewards breadth over depth. You do not need to know how transformers work mathematically. You need to know what they do, when to use them, which AWS service wraps them, and how to do it responsibly.

The biggest risk is overconfidence. People who use AI daily assume they will pass without studying and then get tripped up by service mapping questions, responsible AI nuances, or security controls they never thought about. Give it three focused weeks and you will walk out with a passing score.

FAQs

Q1: What is the AWS AIF‑C01 exam and who is it designed for?

The AWS Certified AI Practitioner (AIF‑C01) is a foundational level certification that validates your understanding of AI and ML concepts, generative AI fundamentals, and AWS AI services. It is designed for anyone who works with AI in a business context, including developers, solutions architects, project managers, and business analysts who need to make informed decisions about AI on AWS. You do not need to write code or build models. If you are looking for a hands on engineering certification, the Machine Learning Engineer Associate (MLA‑C01) or the Generative AI Developer Professional (AIP‑C01) would be more appropriate.

Q2: What are the five exam domains and how are they weighted?

The five domains are Fundamentals of AI and ML at 20%, Fundamentals of Generative AI at 24%, Applications of Foundation Models at 28%, Guidelines for Responsible AI at 14%, and Security, Compliance, and Governance at 14%. Domain 3 carries the most weight and covers practical application of foundation models including RAG, Bedrock Agents, Guardrails, and model selection. Domains 4 and 5 together account for 28% and cover responsible AI principles, bias detection with SageMaker Clarify, IAM policies, encryption with KMS, and compliance controls.

Q3: What is the difference between temperature, top p, and top k in the context of this exam?

These are inference parameters that control how foundation models generate output. Temperature controls randomness, where lower values produce more deterministic and focused responses and higher values produce more creative and varied outputs. Top p (nucleus sampling) limits the model's token selection to a cumulative probability threshold, so a top p of 0.9 means the model considers only the smallest set of tokens whose combined probability reaches 90%. Top k limits selection to the k most likely next tokens. The exam tests whether you can identify which parameter to adjust for specific scenarios, such as reducing temperature when outputs are too creative or factually unreliable.
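Top-p can also be illustrated in a few lines. This is a conceptual sketch of nucleus sampling's filtering step, not any model's actual code, and the token probabilities are invented:

```python
def top_p_filter(token_probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p; the model then samples only from this set."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append(token)
        cumulative += prob
        if cumulative >= p:
            break
    return kept

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "newt": 0.05}
print(top_p_filter(probs, p=0.9))  # ['cat', 'dog', 'fish']
print(top_p_filter(probs, p=0.5))  # ['cat'] -- top token alone reaches 0.5
```

Lowering p shrinks the candidate set, which is why reducing top-p, like reducing temperature, makes output more focused.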

Q4: How should I approach the responsible AI questions on the exam?

Responsible AI carries 14% of the exam and the questions are straightforward if you study them but easy to get wrong through guessing. Focus on understanding types of bias including training data bias, selection bias, and measurement bias. Know that Amazon SageMaker Clarify is the answer to nearly every bias detection question and that Amazon Bedrock Guardrails handles content filtering, topic restrictions, and PII redaction. Understand the concept of human in the loop using Amazon Augmented AI (A2I) for human review of model predictions. Learn the six AWS responsible AI principles which are fairness, explainability, privacy, robustness, governance, and transparency.

Q5: What are the best resources and how long should I study?

A three week study plan works well for most candidates with basic AWS and AI familiarity. Week one covers Domains 1 and 2 focusing on AI and ML fundamentals plus generative AI concepts. Week two is the heaviest, covering Domains 3 and 4 with deep focus on Bedrock Knowledge Bases, Agents, Guardrails, and responsible AI principles. Week three covers Domain 5 security topics followed by practice exams for gap filling. Use the AWS Certified AI Practitioner Course on KodeKloud, AWS Skill Builder official course and practice exam. Aim to score 80% or higher consistently on practice exams before booking the real one.

Pramodh Kumar M
