
AIF-C01 — AWS Certified AI Practitioner Study Guide

326 practice questions · Updated 2026-02-18 · $19 (70% off) · HTML + PDF formats

AIF-C01 Exam Overview

Prepare for the Amazon AIF-C01 certification exam with our comprehensive study guide. This study material contains 326 practice questions sourced from real exams and expert-verified for accuracy. Each question includes the correct answer and a detailed explanation to help you understand the material thoroughly.

The AIF-C01 exam — AWS Certified AI Practitioner — is offered by Amazon Web Services (AWS). Our study materials were last updated on 2026-02-18 to reflect the most recent exam objectives and content.

What You Get

326 Practice Questions

Complete question bank covering all exam domains and objectives.

HTML + PDF Formats

Interactive HTML file (recommended) for screen study and a print-ready PDF.

Instant Download

Access your study materials immediately after purchase.

Email with Permanent Download Links

You will receive a confirmation email with permanent download links, so you can re-download the files at any time.

Why Choose CheapestExamDumps?

Lowest Price Available

Only $19 per exam — competitors charge $50-$300 for similar content.

Updated Monthly

Study materials refreshed within 30 days of any exam content changes.

Free Preview

Try 15 real practice questions before you buy — no signup required.

Instant Access

Download HTML + PDF immediately after payment. No waiting, no account needed.

$19 (was $63)

One-time payment · HTML + PDF · Instant download · 326 questions

Free Sample — 15 Practice Questions

Preview 15 of 326 questions from the AIF-C01 exam. Try before you buy — purchase the full study guide for all 326 questions with answers and explanations.

Question 11

A company wants to use Amazon Q Business for its data. The company needs to ensure the security and privacy of the data. Which combination of steps will meet these requirements? (Choose two.)

A. Enable AWS Key Management Service (AWS KMS) keys for the Amazon Q Business Enterprise index.
B. Set up cross-account access to the Amazon Q index.
C. Configure Amazon Inspector for authentication.
D. Allow public access to the Amazon Q index.
E. Configure AWS Identity and Access Management (IAM) for authentication.
Show Answer
Correct Answer: A, E
Explanation:
To secure and protect data in Amazon Q Business, encryption and access control are required. Enabling AWS KMS keys provides encryption at rest for the Amazon Q Business Enterprise index, protecting sensitive data. Configuring AWS IAM enables proper authentication and authorization, ensuring that only approved users and roles can access the data. The other options either reduce security or are not relevant to Amazon Q Business authentication.

Question 39

A research group wants to test different generative AI models to create research papers. The research group has defined a prompt and needs a method to assess the models’ output. The research group wants to use a team of scientists to perform the output assessments. Which solution will meet these requirements?

A. Use automatic evaluation on Amazon Personalize.
B. Use content moderation on Amazon Rekognition.
C. Use model evaluation on Amazon Bedrock.
D. Use sentiment analysis on Amazon Comprehend.
Show Answer
Correct Answer: C
Explanation:
Amazon Bedrock model evaluation supports both automated and human-in-the-loop evaluation workflows. It allows teams of subject-matter experts, such as scientists, to review, score, and compare generative model outputs against defined prompts, which directly meets the requirement for human assessment of generated research papers.

Question 321

An AI practitioner wants to use a foundation model (FM) to design a search application. The search application must handle queries that have text and images. Which type of FM should the AI practitioner use to power the search application?

A. Multi-modal embedding model
B. Text embedding model
C. Multi-modal generation model
D. Image generation model
Show Answer
Correct Answer: A
Explanation:
A search application that must handle queries containing both text and images needs a model that can represent different modalities in a shared semantic space. Multi-modal embedding models encode text and images into comparable vector embeddings, enabling efficient similarity search and retrieval across modalities. Text-only embeddings cannot process images, and generation models focus on creating content rather than powering search and retrieval.
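As a rough illustration of why a shared embedding space enables cross-modal search, the sketch below uses hand-made toy vectors (a real system would get these from a multi-modal embedding model) and ranks indexed images against a text query by cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": a multi-modal model maps both images and text
# queries into the same vector space, so they are directly comparable.
index = {
    "photo_of_red_shoe.jpg": [0.9, 0.1, 0.2],
    "photo_of_blue_hat.jpg": [0.1, 0.8, 0.3],
}
query_embedding = [0.85, 0.15, 0.25]  # embedding of the text query "red shoe"

# Retrieve the image whose embedding is closest to the text query.
best = max(index, key=lambda name: cosine(query_embedding, index[name]))
print(best)
```

Because both modalities live in one vector space, the same nearest-neighbor lookup serves text-to-image, image-to-text, and image-to-image queries.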

Question 104

A company has set up a translation tool to help its customer service team handle issues from customers around the world. The company wants to evaluate the performance of the translation tool. The company sets up a parallel data process that compares the responses from the tool to responses from actual humans. Both sets of responses are generated on the same set of documents. Which strategy should the company use to evaluate the translation tool?

A. Use the Bilingual Evaluation Understudy (BLEU) score to estimate the absolute translation quality of the two methods.
B. Use the Bilingual Evaluation Understudy (BLEU) score to estimate the relative translation quality of the two methods.
C. Use the BERTScore to estimate the absolute translation quality of the two methods.
D. Use the BERTScore to estimate the relative translation quality of the two methods.
Show Answer
Correct Answer: B
Explanation:
The setup compares outputs from the translation tool against human translations on the same documents, which is a relative evaluation between two methods. BLEU is a standard metric designed to compare machine translation outputs to reference (human) translations and is commonly used to assess relative performance rather than absolute quality.
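The relative comparison can be sketched in a few lines. This is a simplified unigram-only BLEU (real BLEU averages 1-gram through 4-gram precisions); both systems' outputs are scored against the same human reference, and the scores are compared to each other rather than read as absolute quality:

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    """Modified unigram precision with brevity penalty (simplified BLEU)."""
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    # Clip each candidate word's count by its count in the reference.
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

reference = "the cat is on the mat"
machine   = "the cat sat on the mat"
human     = "the cat is on the mat"

# Relative evaluation: score both methods against the same reference.
score_machine = unigram_bleu(machine, reference)
score_human = unigram_bleu(human, reference)
print(score_machine, score_human)
```

The two scores are meaningful in comparison to each other on the same document set, which is exactly the parallel-data setup the question describes.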

Question 280

An AI practitioner has built a deep learning model to classify the types of materials in images. The AI practitioner now wants to measure the model performance. Which metric will help the AI practitioner evaluate the performance of the model?

A. Confusion matrix
B. Correlation matrix
C. R2 score
D. Mean squared error (MSE)
Show Answer
Correct Answer: A
Explanation:
The task is image classification. A confusion matrix is specifically used to evaluate classification models by summarizing correct and incorrect predictions per class (true/false positives and negatives). The other options are regression metrics or unrelated to classification performance.
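A minimal sketch of a confusion matrix, using made-up material labels and predictions: each row is an actual class, each column a predicted class, and the diagonal counts correct predictions:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows = actual class, columns = predicted class."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

labels = ["metal", "wood", "glass"]
y_true = ["metal", "wood", "glass", "metal", "wood"]
y_pred = ["metal", "wood", "wood",  "metal", "glass"]

cm = confusion_matrix(y_true, y_pred, labels)
for row_label, row in zip(labels, cm):
    print(row_label, row)
```

Per-class metrics such as precision and recall fall straight out of the matrix, which is why it is the standard starting point for evaluating classifiers.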

Question 92

A company is working on a large language model (LLM) and noticed that the LLM’s outputs are not as diverse as expected. Which parameter should the company adjust?

A. Temperature
B. Batch size
C. Learning rate
D. Optimizer type
Show Answer
Correct Answer: A
Explanation:
Output diversity during text generation is controlled by the temperature parameter. Increasing temperature increases randomness in token sampling, leading to more varied and creative outputs, while lower temperature makes responses more deterministic. The other options relate to training dynamics, not inference-time diversity.
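The effect of temperature on sampling can be seen directly in a temperature-scaled softmax over toy token logits (the logit values here are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then apply a numerically stable softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # raw scores for three candidate tokens

low = softmax_with_temperature(logits, 0.2)   # sharp: nearly deterministic
high = softmax_with_temperature(logits, 2.0)  # flat: more diverse sampling

# At low temperature the top token dominates; at high temperature the
# probability mass spreads out, so sampling yields more varied outputs.
print(low, high)
```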

Question 109

A company uses a third-party model on Amazon Bedrock to analyze confidential documents. The company is concerned about data privacy. Which statement describes how Amazon Bedrock protects data privacy?

A. User inputs and model outputs are anonymized and shared with third-party model providers.
B. User inputs and model outputs are not shared with any third-party model providers.
C. User inputs are kept confidential, but model outputs are shared with third-party model providers.
D. User inputs and model outputs are redacted before the inputs and outputs are shared with third-party model providers.
Show Answer
Correct Answer: B
Explanation:
Amazon Bedrock is designed so that customer prompts and model responses remain within the customer’s AWS environment. Inputs and outputs are not shared with third-party model providers and are not used to train or improve the underlying foundation models unless the customer explicitly opts in. This ensures confidentiality when analyzing sensitive or confidential data.

Question 254

A company is building a chatbot to improve user experience. The company is using a large language model (LLM) from Amazon Bedrock for intent detection. The company wants to use few-shot learning to improve intent detection accuracy. Which additional data does the company need to meet these requirements?

A. Pairs of chatbot responses and correct user intents
B. Pairs of user messages and correct chatbot responses
C. Pairs of user messages and correct user intents
D. Pairs of user intents and correct chatbot responses
Show Answer
Correct Answer: C
Explanation:
Few-shot learning for intent detection requires labeled examples that map inputs to the desired classification output. In this case, the input is the user message and the target label is the user intent. Providing pairs of user messages and correct user intents allows the LLM to learn how different messages correspond to specific intents, improving intent detection accuracy.
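A sketch of how those message/intent pairs might be assembled into a few-shot prompt; the example messages and intent labels are hypothetical, and in practice the resulting string would be sent to a Bedrock model:

```python
# Hypothetical labeled examples: (user message, correct intent) pairs.
examples = [
    ("Where is my package?", "track_order"),
    ("I want my money back.", "refund_request"),
    ("Do you ship to Canada?", "shipping_info"),
]

def build_few_shot_prompt(examples, new_message):
    """Prepend labeled examples, then leave the new message's intent blank."""
    lines = ["Classify the user's intent.", ""]
    for message, intent in examples:
        lines.append(f"Message: {message}")
        lines.append(f"Intent: {intent}")
        lines.append("")
    lines.append(f"Message: {new_message}")
    lines.append("Intent:")  # the model completes this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "Has my order left the warehouse?")
print(prompt)
```

The demonstrations show the model the input-to-label mapping in context, which is what improves intent detection without any fine-tuning.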

Question 199

An AI practitioner is developing a prompt for an Amazon Titan model. The model is hosted on Amazon Bedrock. The AI practitioner is using the model to solve numerical reasoning challenges. The AI practitioner adds the following phrase to the end of the prompt: “Ask the model to show its work by explaining its reasoning step by step.” Which prompt engineering technique is the AI practitioner using?

A. Chain-of-thought prompting
B. Prompt injection
C. Few-shot prompting
D. Prompt templating
Show Answer
Correct Answer: A
Explanation:
Adding an instruction to "show its work" or explain reasoning step by step explicitly prompts the model to generate intermediate reasoning steps. This is the defining characteristic of chain-of-thought prompting, commonly used to improve performance on numerical and logical reasoning tasks.
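For illustration, a chain-of-thought prompt is simply the task plus an instruction to reason step by step (the arithmetic question here is made up):

```python
base_prompt = "A train travels 60 km in 45 minutes. What is its speed in km/h?"
cot_suffix = "Show your work and explain your reasoning step by step."

# Chain-of-thought prompting: append the reasoning instruction to the task.
prompt = f"{base_prompt}\n{cot_suffix}"
print(prompt)
```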

Question 305

Which option is a use case for generative AI models?

A. Improving network security by using intrusion detection systems
B. Creating photorealistic images from text descriptions for digital marketing
C. Enhancing database performance by using optimized indexing
D. Analyzing financial data to forecast stock market trends
Show Answer
Correct Answer: B
Explanation:
Generative AI models are designed to create new content (e.g., images, text, audio) from prompts or data. Creating photorealistic images from text descriptions is a core and well-known generative AI use case. The other options focus on detection, optimization, or prediction rather than content generation.

Question 146

HOTSPOT - A company is designing a customer service chatbot by using a fine-tuned large language model (LLM). The company wants to ensure that the chatbot uses responsible AI characteristics. Select the correct responsible AI characteristic from the following list for each application design action. Each responsible AI characteristic should be selected one time or not at all.

Illustration for AIF-C01 question 146
Show Answer
Correct Answer: Privacy and security; Transparency; Safety
Explanation:
Anonymizing personal data protects user information (privacy and security). Providing explainable decisions ensures users understand how outcomes are produced (transparency). Using guardrails to block harmful or abusive content reduces risk and misuse (safety).

Question 172

A publishing company built a Retrieval Augmented Generation (RAG) based solution to give its users the ability to interact with published content. New content is published daily. The company wants to provide a near real-time experience to users. Which steps in the RAG pipeline should the company implement by using offline batch processing to meet these requirements? (Choose two.)

A. Generation of content embeddings
B. Generation of embeddings for user queries
C. Creation of the search index
D. Retrieval of relevant content
E. Response generation for the user
Show Answer
Correct Answer: A, C
Explanation:
Offline batch processing is best suited for steps that are computationally intensive and do not need to run per user request. Generating embeddings for newly published content can be done in batches as content is added. Creating or updating the search (vector) index from those embeddings is also an offline task. Query embeddings, retrieval, and response generation must occur in near real time to support interactive user queries.
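The offline/online split can be sketched with a toy pipeline. Here a bag-of-words set stands in for a real embedding model, and Jaccard overlap stands in for vector similarity; the document titles are invented:

```python
# A toy stand-in for an embedding model; a real RAG pipeline would call
# an embedding model and store vectors in a vector database.
def embed(text):
    return set(text.lower().split())

def similarity(a, b):
    """Jaccard overlap between two token sets."""
    return len(a & b) / len(a | b)

# Offline batch steps (run as new content is published, e.g. a daily job):
# embed the documents and build the search index.
documents = ["daily stock market report", "new science fiction releases"]
index = [(doc, embed(doc)) for doc in documents]

# Online per-request steps: embed the user query, retrieve the closest
# content, then generate a response from it (generation omitted here).
query_vec = embed("today's stock market report")
best_doc, _ = max(index, key=lambda item: similarity(query_vec, item[1]))
print(best_doc)
```

Embedding and indexing are the expensive, content-driven steps, so batching them keeps the per-request path (query embedding, retrieval, generation) fast enough for a near real-time experience.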

Question 75

An AI practitioner who has minimal ML knowledge wants to predict employee attrition without writing code. Which Amazon SageMaker feature meets this requirement?

A. SageMaker Canvas
B. SageMaker Clarify
C. SageMaker Model Monitor
D. SageMaker Data Wrangler
Show Answer
Correct Answer: A
Explanation:
The requirement is to predict employee attrition without writing code and with minimal ML knowledge. Amazon SageMaker Canvas is a no-code/low-code ML service designed for business users to build models and generate predictions via a visual interface. The other options focus on bias detection (Clarify), monitoring deployed models (Model Monitor), or data preparation (Data Wrangler), not end-to-end no-code prediction.

Question 294

How can companies use large language models (LLMs) securely on Amazon Bedrock?

A. Design clear and specific prompts. Configure AWS Identity and Access Management (IAM) roles and policies by using least privilege access.
B. Enable AWS Audit Manager for automatic model evaluation jobs.
C. Enable Amazon Bedrock automatic model evaluation jobs.
D. Use Amazon CloudWatch Logs to make models explainable and to monitor for bias.
Show Answer
Correct Answer: A
Explanation:
Secure use of LLMs on Amazon Bedrock focuses on controlling access and reducing unintended behavior. Designing clear, specific prompts helps limit unexpected or unsafe outputs, while configuring IAM roles and policies with least-privilege access ensures only authorized users and services can invoke models or access data. The other options relate to evaluation, auditing, or logging, not core security controls.

Question 157

A company that uses multiple ML models wants to identify changes in original model quality so that the company can resolve any issues. Which AWS service or feature meets these requirements?

A. Amazon SageMaker JumpStart
B. Amazon SageMaker HyperPod
C. Amazon SageMaker Data Wrangler
D. Amazon SageMaker Model Monitor
Show Answer
Correct Answer: D
Explanation:
The requirement is to identify changes in original model quality across multiple ML models in production. Amazon SageMaker Model Monitor is specifically built for this purpose: it continuously monitors deployed models, detects data drift and model quality drift by comparing live inference data against baselines, and alerts when performance degrades. The other options focus on model deployment templates (JumpStart), large-scale training infrastructure (HyperPod), or data preparation (Data Wrangler), not ongoing model quality monitoring.

$19 (was $63)

Get all 326 questions with detailed answers and explanations

AIF-C01 — Frequently Asked Questions

What is the Amazon AIF-C01 exam?

The Amazon AIF-C01 exam — AWS Certified AI Practitioner — is a professional IT certification exam offered by Amazon Web Services (AWS).

How many practice questions are included?

This study guide contains 326 practice questions, each with an expert-verified correct answer and a detailed explanation. Questions cover all exam domains and objectives.

Is there a free sample available?

Yes! We provide a free sample of 15 practice questions from the AIF-C01 exam right on this page. Scroll up to preview them and evaluate the quality of our materials before purchasing.

When was this AIF-C01 study guide last updated?

This study guide was last updated on 2026-02-18. We regularly refresh our materials to reflect the latest exam content and objectives so you're always studying current material.

What file formats do I receive?

After purchase you receive two files: an interactive HTML file with show/hide answer toggles (ideal for studying on screen) and a PDF file (ideal for printing or offline study). Both work on any device — desktop, tablet, or phone.

How much does the AIF-C01 study guide cost?

The Amazon AIF-C01 study guide costs $19 (discounted from $63). This is a one-time payment with no subscriptions or hidden fees.

How do I get my files after payment?

After successful payment via Stripe, you are immediately redirected to a download page with links to your HTML and PDF files. We also send the download links to your email address as a backup, so you'll always have access.

Why choose CheapestExamDumps over other providers?

CheapestExamDumps offers the lowest price at $19 per exam — competitors charge $50-$300 for similar content. All study materials are expert-verified, updated monthly, and include a free 15-question preview with no signup required. You get instant access to both HTML and PDF formats after payment.