Microsoft Exam Syllabus

DP-700 syllabus, skills measured, and exam topics

The DP-700 exam measures three skill domains: implementing and managing an analytics solution, ingesting and transforming data, and monitoring and optimizing an analytics solution. Use this page to review the current official syllabus, major domains, and source links before exam day.

Skills measured by domain

Use the weighting table to decide where to spend the most study time.

Domain                                        Weight
Implement and manage an analytics solution    30–35%
Ingest and transform data                     30–35%
Monitor and optimize an analytics solution    30–35%

What to know before you study

These sections explain the role, audience, and exam framing behind the outline.

Purpose of this document

  • This study guide explains what to expect on the exam, summarizes the topics it might cover, and links to additional resources, so you can focus your studies as you prepare.
  • How to earn the certification: Some certifications only require passing one exam, while others require passing multiple exams.
  • Certification renewal: Microsoft associate, expert, and specialty certifications expire annually. You can renew by passing a free online assessment on Microsoft Learn.
  • Your Microsoft Learn profile: Connecting your certification profile to Microsoft Learn allows you to schedule and renew exams and share and print certificates.
  • Exam scoring and score reports: A score of 700 or greater is required to pass.
  • Exam sandbox: You can explore the exam environment by visiting our exam sandbox.
  • Request accommodations: If you use assistive devices, require extra time, or need a modification to any part of the exam experience, you can request an accommodation.
  • Take a free Practice Assessment: Test your skills with practice questions to help you prepare for the exam.

Languages and exam notes

  • Some exams are localized into other languages, and those are updated approximately eight weeks after the English version is updated. If the exam isn't available in your preferred language, you can request an additional 30 minutes to complete the exam.
  • The bullets that follow each of the skills measured are intended to illustrate how we are assessing that skill. Related topics may be covered in the exam.
  • Most questions cover features that are general availability (GA). The exam may contain questions on Preview features if those features are commonly used.

Audience profile

  • As a candidate for this exam, you should have subject matter expertise with data loading patterns, data architectures, and orchestration processes. Your responsibilities for this role include:
      • Ingesting and transforming data.
      • Securing and managing an analytics solution.
      • Monitoring and optimizing an analytics solution.
  • You work closely with analytics engineers, architects, analysts, and administrators to design and deploy data engineering solutions for analytics.
  • You should be skilled at manipulating and transforming data by using Structured Query Language (SQL), PySpark, and Kusto Query Language (KQL).
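The "same query, three dialects" fluency described above can be sketched with a hypothetical row-count-per-region aggregation. The T-SQL, KQL, and PySpark forms appear as comments, and the logic is simulated in plain Python; the table name `Sales` and the column names are invented for this sketch, not taken from the exam.

```python
from collections import Counter

# Hypothetical source rows; in Fabric these would live in a lakehouse or warehouse table.
sales = [
    {"region": "EMEA", "amount": 120},
    {"region": "EMEA", "amount": 80},
    {"region": "APAC", "amount": 200},
]

# Equivalent aggregations in the three query languages the exam names:
# T-SQL:   SELECT region, COUNT(*) AS orders FROM Sales GROUP BY region;
# KQL:     Sales | summarize orders = count() by region
# PySpark: df.groupBy("region").count()
orders_by_region = Counter(row["region"] for row in sales)

print(dict(orders_by_region))  # {'EMEA': 2, 'APAC': 1}
```

The point of the comparison is that all three dialects express the same group-and-count operation; the exam expects you to move between them comfortably.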

Detailed outline

Scan each section as a working study checklist instead of one long wall of text.

Implement and manage an analytics solution (30–35%)

  • Configure Spark workspace settings
  • Configure domain workspace settings
  • Configure OneLake workspace settings
  • Configure Dataflows Gen2 workspace settings
  • Configure version control
  • Implement database projects
  • Create and configure deployment pipelines
  • Implement workspace-level access controls
  • Implement item-level access controls
  • Implement row-level, column-level, object-level, and folder/file-level access controls
  • Implement dynamic data masking
  • Apply sensitivity labels to items

Ingest and transform data (30–35%)

  • Design and implement full and incremental data loads
  • Prepare data for loading into a dimensional model
  • Design and implement a loading pattern for streaming data
  • Choose an appropriate data store
  • Choose between Dataflows Gen2, notebooks, KQL, and T-SQL for data transformation
  • Create and manage OneLake shortcuts
  • Implement mirroring
  • Ingest data by using pipelines
  • Transform data by using PySpark, SQL, and KQL
  • Denormalize data
  • Group and aggregate data
  • Handle duplicate, missing, and late-arriving data
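The last three bullets (deduplication, missing values, late-arriving data) can be illustrated with a minimal stdlib-Python sketch. The field names, the watermark rule, and the "keep latest record per id" policy are all assumptions made for the example; in practice you would express the same logic in PySpark, T-SQL, or KQL against real tables.

```python
# Minimal sketch: clean a batch of events before loading.
# Assumed rules for illustration:
#  - drop records missing the required "id" or "ts" fields,
#  - route records with a timestamp before the watermark to a late-arrivals bucket,
#  - deduplicate by keeping only the latest record per id.

WATERMARK = 100  # assumed cutoff timestamp separating on-time from late data

raw = [
    {"id": "a", "ts": 110, "value": 1},
    {"id": "a", "ts": 120, "value": 2},   # newer duplicate of "a"
    {"id": "b", "ts": 90,  "value": 3},   # arrives before the watermark -> late
    {"id": None, "ts": 115, "value": 4},  # missing key -> dropped
]

valid = [r for r in raw if r["id"] is not None and r["ts"] is not None]
late = [r for r in valid if r["ts"] < WATERMARK]
on_time = [r for r in valid if r["ts"] >= WATERMARK]

# Deduplicate: keep the record with the highest timestamp per id.
latest = {}
for r in on_time:
    if r["id"] not in latest or r["ts"] > latest[r["id"]]["ts"]:
        latest[r["id"]] = r

print(sorted(latest))        # ['a']
print(latest["a"]["value"])  # 2
print(len(late))             # 1
```

Late-arriving records are parked rather than discarded so a subsequent incremental load can merge them, which is the usual pattern behind the "full and incremental data loads" bullet above.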

Monitor and optimize an analytics solution (30–35%)

  • Monitor data ingestion
  • Monitor data transformation
  • Monitor semantic model refresh
  • Configure alerts
  • Identify and resolve pipeline errors
  • Identify and resolve Dataflow Gen2 errors
  • Identify and resolve notebook errors
  • Identify and resolve Eventhouse errors
  • Identify and resolve Eventstream errors
  • Identify and resolve T-SQL errors
  • Identify and resolve OneLake shortcut errors
  • Optimize a Lakehouse table
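To make the "Configure alerts" bullet concrete, here is a hypothetical sketch of the kind of rule such an alert evaluates: flag failed runs and successful runs that exceed a duration threshold. The pipeline names, statuses, and threshold are invented for the sketch; in Fabric you would configure this through the product's alerting features rather than hand-rolled code.

```python
# Hypothetical sketch of a duration-threshold alert over pipeline run metadata.
# This only illustrates the rule being evaluated, not a real Fabric API.

THRESHOLD_SECONDS = 600  # assumed SLA: flag runs longer than 10 minutes

runs = [
    {"pipeline": "ingest_sales",  "status": "Succeeded", "duration_s": 420},
    {"pipeline": "ingest_sales",  "status": "Failed",    "duration_s": 35},
    {"pipeline": "refresh_model", "status": "Succeeded", "duration_s": 910},
]

def needs_alert(run):
    """Alert on failures, or on successful runs that exceed the duration SLA."""
    return run["status"] == "Failed" or run["duration_s"] > THRESHOLD_SECONDS

alerts = [r["pipeline"] for r in runs if needs_alert(r)]
print(alerts)  # ['ingest_sales', 'refresh_model']
```

Alerting on both failures and slow successes matters for the monitoring bullets above: a semantic model refresh that succeeds but takes far longer than usual is often the first sign of an ingestion or optimization problem.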