
DP-100 Azure Data Scientist Associate Sample Questions & Practice Exam

  • CertiMaan
  • Oct 27
  • 21 min read


Strengthen your preparation for the Microsoft DP-100 Azure Data Scientist Associate certification with these expertly designed sample questions. Covering real-world machine learning scenarios, MLOps workflows, and Azure ML service integrations, these questions reflect the 2025 DP-100 exam pattern. Ideal for practicing data scientists and ML engineers, this resource provides hands-on practice with Azure Machine Learning studio, model training, deployment, and responsible AI. Whether you're looking to assess your readiness or refine your approach, these DP-100 sample questions are a powerful tool to boost your exam confidence and performance.



DP-100 Azure Data Scientist Associate Sample Questions List:


1. This question is one of several that present the identical set-up; each question, however, proposes a distinct recommendation. Determine whether the recommendation satisfies the requirements. You have been tasked with evaluating your model on a partial data sample via k-fold cross-validation. You have already configured a k parameter as the number of splits. You now have to set the k parameter for the cross-validation to the usual value. Recommendation: You configure the use of the value k=3. Will the requirements be satisfied?

  1. Yes

  2. No

2. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure Machine Learning workspace. You plan to tune model hyperparameters by using a sweep job. You need to find a sampling method that supports early termination of low-performance jobs and continuous hyperparameters. Solution: Use the Bayesian sampling method over the hyperparameter space. Does the solution meet the goal?

  1. No

  2. Yes
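Question 2 (and its siblings later in this list) centers on how a sweep job pairs a sampling algorithm with an early-termination policy. The following is a rough sketch of the SDK v2 pattern, not a definitive recipe — the script name, environment, compute name, metric, and search-space bounds are illustrative assumptions:

```python
from azure.ai.ml import command
from azure.ai.ml.sweep import Uniform, BanditPolicy

# Base training job; code path, command, environment, and compute are hypothetical.
job = command(
    code="./src",
    command="python train.py --learning_rate ${{inputs.learning_rate}}",
    environment="AzureML-sklearn-1.0@latest",  # assumed curated environment name
    compute="cpu-cluster",
    inputs={"learning_rate": 0.01},
)

# Sweep over a continuous hyperparameter with random sampling.
job_for_sweep = job(learning_rate=Uniform(min_value=0.001, max_value=0.1))
sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="accuracy",
    goal="Maximize",
)

# Early termination cancels low-performing trials; note that Bayesian
# sampling does not support an early-termination policy.
sweep_job.early_termination = BanditPolicy(slack_factor=0.1, evaluation_interval=1)
```

Submitting this job configuration requires a live workspace, so treat it as a configuration sketch rather than runnable code.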

3. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are creating a model to predict the price of a student's artwork depending on the following variables: the student's length of education, degree type, and art form. You start by creating a linear regression model. You need to evaluate the linear regression model. Solution: Use the following metrics: Relative Squared Error, Coefficient of Determination, Accuracy, Precision, Recall, F1 score, and AUC. Does the solution meet the goal?

  1. Yes

  2. No
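Question 3 turns on which metrics apply to a regression model: quantities such as Relative Squared Error and the Coefficient of Determination describe regression fits, whereas accuracy, precision, recall, F1 score, and AUC describe classifiers. A minimal, dependency-free sketch of the two regression metrics:

```python
def regression_metrics(y_true, y_pred):
    """Return Relative Squared Error and Coefficient of Determination (R^2)."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    rse = ss_res / ss_tot
    return rse, 1 - rse  # R^2 = 1 - RSE


# A perfect prediction gives RSE = 0 and R^2 = 1.
print(regression_metrics([1, 2, 3, 4], [1, 2, 3, 4]))  # (0.0, 1.0)
```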

4. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are using Azure Machine Learning Studio to perform feature engineering on a dataset. You need to normalize values to produce a feature column grouped into bins. Solution: Apply an Entropy Minimum Description Length (MDL) binning mode. Does the solution meet the goal?

  1. Yes

  2. No

5. You are moving a large dataset from Azure Machine Learning Studio to a Weka environment. You need to format the data for the Weka environment. Which module should you use?

  1. Convert to SVMLight

  2. Convert to ARFF

  3. Convert to CSV

  4. Convert to Dataset

6. You are developing a machine learning model. You must run inferencing against the machine learning model for testing. You need to use a minimal-cost compute target. Which two compute targets should you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

  1. Azure Machine Learning Kubernetes

  2. Azure Databricks

  3. Remote VM

  4. Local web service

  5. Azure Container Instances

7. You plan to deliver a hands-on workshop to several students. The workshop will focus on creating data visualizations using Python. Each student will use a device that has internet access. Student devices are not configured for Python development. Students do not have administrator access to install software on their devices. Azure subscriptions are not available for students. You need to ensure that students can run Python-based data visualization code. Which Azure tool should you use?

  1. Azure BatchAI

  2. Azure Machine Learning Service

  3. Azure Notebooks

  4. Anaconda Data Science Platform

8. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure Machine Learning workspace. You plan to tune model hyperparameters by using a sweep job. You need to find a sampling method that supports early termination of low-performance jobs and continuous hyperparameters. Solution: Use the Sobol sampling method over the hyperparameter space. Does the solution meet the goal?

  1. Yes

  2. No

9. You create a multi-class image classification model with automated machine learning in Azure Machine Learning. You need to prepare labeled image data as input for model training in the form of an Azure Machine Learning tabular dataset. Which data format should you use?

  1. JSON

  2. Pascal VOC

  3. JSONL

  4. COCO

10. You need to implement a model development strategy to determine a user's tendency to respond to an ad. Which technique should you use?

  1. Use a Split Rows module to partition the data based on centroid distance

  2. Use a Split Rows module to partition the data based on distance travelled to the event

  3. Use a Relative Expression Split module to partition the data based on centroid distance

  4. Use a Relative Expression Split module to partition the data based on distance travelled to the event

11. You are implementing hyperparameter tuning for a model training from a notebook. The notebook is in an Azure Machine Learning workspace. You must configure a grid sampling method over the search space for the num_hidden_layers and batch_size hyperparameters. You need to identify the hyperparameters for the grid sampling. Which hyperparameter sampling approach should you use?

  1. qlognormal

  2. uniform

  3. choice

  4. normal
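Question 11 relies on the fact that grid sampling enumerates every combination of discrete values, so each swept hyperparameter must be expressed as a choice over discrete options. The enumeration itself is just a Cartesian product; the specific values below are assumptions for illustration:

```python
from itertools import product

# Discrete choice values for each hyperparameter (illustrative).
search_space = {
    "num_hidden_layers": [1, 2, 3],
    "batch_size": [16, 32],
}

# Grid sampling tries every combination: 3 x 2 = 6 trial configurations.
grid = [dict(zip(search_space, combo)) for combo in product(*search_space.values())]
for trial in grid:
    print(trial)
```

This is also why continuous expressions such as uniform or normal cannot be grid-sampled: there is no finite set of values to enumerate.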

12. You develop a machine learning project on a local machine. The project uses the Azure Machine Learning SDK for Python. You use Git as version control for scripts. You submit a training run that returns a Run object. You need to retrieve the active Git branch for the training run. Which two code segments should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  1. details = run.get_details()

  2. details.properties['azureml.git.commit']

  3. details = run.get_environment()

  4. details.properties['azureml.git.branch']
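Question 12 hinges on how `Run.get_details()` in the SDK v1 exposes Git metadata. In practice you would call `details = run.get_details()` on a live Run object; the dictionary below mocks the relevant fragment of that result (key names follow the Git-tracking properties, values are invented for illustration):

```python
# Mocked fragment of the dictionary returned by Run.get_details();
# the branch and commit values here are invented for illustration.
details = {
    "properties": {
        "azureml.git.branch": "feature/train-v2",
        "azureml.git.commit": "3f9c0ab",
    }
}

# Retrieving the active Git branch recorded for the training run.
branch = details["properties"]["azureml.git.branch"]
print(branch)  # feature/train-v2
```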

13. You create an Azure Machine Learning workspace. You train an MLflow-formatted regression model by using tabular structured data. You must use a Responsible AI dashboard to access the model. You need to use the Azure Machine Learning studio UI to generate the Responsible AI dashboard. What should you do first?

  1. Register the model with the workspace

  2. Deploy the model to a managed online endpoint

  3. Create the model explanations

  4. Convert the model from the MLflow format to a custom format

14. You are developing a machine learning model by using Azure Machine Learning. You are using multiple text files in tabular format for model data. You have the following requirements: • You must use AutoML jobs to train the model. • You must use data from specified columns. • The data concept must support lazy evaluation. You need to load data into a Pandas dataframe. Which data concept should you use?

  1. Data asset

  2. MLTable

  3. Datastore

  4. URI

15. You create a deep learning model for image recognition on Azure Machine Learning service using GPU-based training. You must deploy the model to a context that allows for real-time GPU-based inferencing. You need to configure compute resources for model inferencing. Which compute type should you use?

  1. Azure Kubernetes Service

  2. Field Programmable Gate Array

  3. Machine Learning Compute

  4. Azure Container Instance

16. You register a model that you plan to use in a batch inference pipeline. The batch inference pipeline must use a ParallelRunStep step to process files in a file dataset. The script that the ParallelRunStep step runs must process six input files each time the inferencing function is called. You need to configure the pipeline. Which configuration setting should you specify in the ParallelRunConfig object for the ParallelRunStep step?

  1. node_count= "6"

  2. process_count_per_node= "6"

  3. mini_batch_size= "6"

  4. error_threshold= "6"

17. You use the Azure Machine Learning SDK for Python v1 and notebooks to train a model. You create a compute target, an environment, and a training script by using Python code. You need to prepare information to submit a training run. Which class should you use?

  1. Run

  2. ScriptRun

  3. ScriptRunConfig

  4. RunConfiguration
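Question 17 concerns the class that packages everything needed to submit a run in SDK v1: ScriptRunConfig bundles the source directory, script, compute target, and environment. A sketch of the pattern — the directory, script, and resource names are assumptions:

```python
from azureml.core import Workspace, Experiment, ScriptRunConfig, Environment

ws = Workspace.from_config()  # requires a config.json for a real workspace
env = Environment.get(ws, name="AzureML-sklearn-1.0")  # assumed environment name

# ScriptRunConfig gathers the information needed to submit a training run.
src = ScriptRunConfig(
    source_directory="./src",
    script="train.py",
    compute_target="cpu-cluster",  # assumed compute cluster name
    environment=env,
)

run = Experiment(ws, "train-experiment").submit(src)
```

Because this depends on a live workspace, treat it as a configuration sketch rather than runnable code.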

18. You create a machine learning model by using the Azure Machine Learning designer. You publish the model as a real-time service on an Azure Kubernetes Service (AKS) inference compute cluster. You make no changes to the deployed endpoint configuration. You need to provide application developers with the information they need to consume the endpoint. Which two values should you provide to application developers? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  1. The name of the AKS cluster where the endpoint is hosted

  2. The URL of the endpoint

  3. The name of the inference pipeline for the endpoint

  4. The key for the endpoint

  5. The run ID of the inference pipeline experiment for the endpoint

19. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are creating a model to predict the price of a student's artwork depending on the following variables: the student's length of education, degree type, and art form. You start by creating a linear regression model. You need to evaluate the linear regression model. Solution: Use the following metrics: Accuracy, Precision, Recall, F1 score, and AUC. Does the solution meet the goal?

  1. Yes

  2. No

20. You use the Azure Machine Learning Python SDK to define a pipeline to train a model. The data used to train the model is read from a folder in a datastore. You need to ensure the pipeline runs automatically whenever the data in the folder changes. What should you do?

  1. Create a Schedule for the pipeline. Specify the datastore in the datastore property, and the folder containing the training data in the path_on_datastore property

  2. Set the regenerate_outputs property of the pipeline to True

  3. Create a PipelineParameter with a default value that references the location where the training data is stored

4. Create a ScheduleRecurrence object with a Frequency of auto. Use the object to create a Schedule for the pipeline
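Question 20 is about reactive (data-change) schedules in SDK v1: a Schedule created with a datastore and a path_on_datastore polls that folder and triggers the published pipeline when its contents change. A sketch, with the workspace, pipeline ID, and paths as assumptions:

```python
from azureml.core import Workspace, Datastore
from azureml.pipeline.core import Schedule

ws = Workspace.from_config()  # requires a real workspace config
datastore = Datastore.get(ws, "workspaceblobstore")

# The schedule polls the folder and runs the pipeline when files change.
schedule = Schedule.create(
    ws,
    name="retrain-on-new-data",
    pipeline_id="<published-pipeline-id>",  # placeholder
    experiment_name="retraining",
    datastore=datastore,
    path_on_datastore="training-data/",     # folder to watch (assumed)
    polling_interval=5,                     # minutes between checks
)
```

This configuration sketch cannot run without a published pipeline and workspace, so the identifiers above are placeholders.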

21. You manage an Azure Machine Learning workspace. You choose the uri_folder data type as an output of a pipeline component. You need to define the data access mode that is supported by your configuration. Which mode should you define?

  1. rw_mount

  2. eval_upload

  3. download

  4. ro_mount

22. You construct a machine learning experiment via Azure Machine Learning Studio. You would like to split data into two separate datasets. Which of the following actions should you take?

  1. You should make use of the Clip Values module

  2. You should make use of the Group Categorical Values module

  3. You should make use of the Split Data module

  4. You should make use of the Group Data into Bins module

23. You need to select a feature extraction method. Which method should you use?

  1. Fisher Linear Discriminant Analysis

  2. Pearson's correlation

  3. Spearman correlation

  4. Mutual information

24. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train and register a machine learning model. You plan to deploy the model as a real-time web service. Applications must use key-based authentication to use the model. You need to deploy the web service. Solution: Create an AciWebservice instance. Set the value of the ssl_enabled property to True. Deploy the model to the service. Does the solution meet the goal?

  1. No

  2. Yes

25. You manage an Azure Machine Learning workspace. You have an environment for training jobs which uses an existing Docker image. A new version of the Docker image is available. You need to use the latest version of the Docker image for the environment configuration by using the Azure Machine Learning SDK v2. What should you do?

  1. Modify the conda_file to specify the new version of the Docker image

  2. Use the create_or_update method to change the tag of the image

  3. Change the description parameter of the environment configuration

  4. Use the Environment class to create a new version of the environment

26. You are preparing to train a regression model via automated machine learning. The data available to you has features with missing values, as well as categorical features with few discrete values. You want to make sure that automated machine learning is configured as follows: ✑ missing values must be automatically imputed. ✑ categorical features must be encoded as part of the training task. Which of the following actions should you take?

  1. You should make use of the featurization parameter with the 'off' value pair

  2. You should make use of the featurization parameter with the 'on' value pair

  3. You should make use of the featurization parameter with the 'FeaturizationConfig' value pair

  4. You should make use of the featurization parameter with the 'auto' value pair

27. You train and register a model in your Azure Machine Learning workspace. You must publish a pipeline that enables client applications to use the model for batch inferencing. You must use a pipeline with a single ParallelRunStep step that runs a Python inferencing script to get predictions from the input data. You need to create the inferencing script for the ParallelRunStep pipeline step. Which two functions should you include? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  1. batch()

  2. main()

  3. score(mini_batch)

  4. run(mini_batch)

  5. init()
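Question 27 refers to the two functions that a ParallelRunStep inferencing script must define: init(), called once per worker to do setup such as loading the model, and run(mini_batch), called repeatedly with a batch of inputs (file paths, for a file dataset). A stdlib-only sketch of that contract, with a stand-in "model":

```python
# Sketch of a ParallelRunStep entry script. The framework calls init() once
# per worker process, then run(mini_batch) repeatedly; for a file dataset,
# mini_batch is a list of file paths.

model = None

def init():
    # In a real script this would load the registered model, e.g. from the
    # path given by the AZUREML_MODEL_DIR environment variable.
    global model
    model = lambda path: f"{path}: processed"  # hypothetical stand-in

def run(mini_batch):
    # Return one result per input file; ParallelRunStep aggregates the results.
    return [model(file_path) for file_path in mini_batch]

init()
print(run(["img1.png", "img2.png"]))
```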

28. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are creating a new experiment in Azure Machine Learning Studio. One class has a much smaller number of observations than the other classes in the training set. You need to select an appropriate data sampling strategy to compensate for the class imbalance. Solution: You use the Synthetic Minority Oversampling Technique (SMOTE) sampling mode. Does the solution meet the goal?

  1. Yes

  2. No
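Question 28 involves SMOTE, which balances classes by synthesizing new minority-class points along the line segments between a minority sample and one of its nearest minority neighbours, rather than duplicating rows. A dependency-free sketch of that core idea (not the Studio module itself):

```python
import random

def smote_like_oversample(minority, n_new, k=2, seed=0):
    """Create n_new synthetic points by interpolating between a minority
    sample and one of its k nearest minority neighbours (the SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance, excluding a itself
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)),
        )[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # position along the segment between a and b
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(smote_like_oversample(minority, 3))
```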

29. You create an Azure Machine Learning workspace. You must configure an event handler to send an email notification when data drift is detected in the workspace datasets. You must minimize development efforts. You need to configure an Azure service to send the notification. Which Azure service should you use?

  1. Azure Automation runbook

  2. Azure Function apps

  3. Azure Logic Apps

  4. Azure DevOps pipeline

30. You use Azure Machine Learning studio to analyze a dataset containing a decimal column named column1. You need to verify that the column1 values are normally distributed. Which statistic should you use?

  1. Mean

  2. Profile

  3. Type

  4. Max

31. You need to consider the underlined segment to establish whether it is accurate. To increase the number of low-incidence cases in a dataset, you should make use of the SMOTE module. Select `No adjustment required` if the underlined segment is accurate. If the underlined segment is inaccurate, select the accurate option.

  1. Remove Duplicate Rows

  2. No adjustment required

  3. Edit Metadata

  4. Join Data

32. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train a classification model by using a logistic regression algorithm. You must be able to explain the model's predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values. Solution: Create a TabularExplainer. Does the solution meet the goal?

  1. No

  2. Yes

33. You train a model and register it in your Azure Machine Learning workspace. You are ready to deploy the model as a real-time web service. You deploy the model to an Azure Kubernetes Service (AKS) inference cluster, but the deployment fails because an error occurs when the service runs the entry script that is associated with the model deployment. You need to debug the error by iteratively modifying the code and reloading the service, without requiring a re-deployment of the service for each code update. What should you do?

  1. Modify the AKS service deployment configuration to enable application insights and re-deploy to AKS

  2. Create a local web service deployment configuration and deploy the model to a local Docker container

  3. Create an Azure Container Instances (ACI) web service deployment configuration and deploy the model on ACI

  4. Add a breakpoint to the first line of the entry script and redeploy the service to AKS

  5. Register a new version of the model and update the entry script to load the new version of the model from its registered path

34. You are developing a data science workspace that uses an Azure Machine Learning service. You need to select a compute target to deploy the workspace. What should you use?

  1. Apache Spark for HDInsight

  2. Azure Databricks

  3. Azure Data Lake Analytics

  4. Azure Container Service

35. You manage an Azure Machine Learning workspace. You develop a machine learning model. You must deploy the model to use a low-priority VM with a pricing discount. You need to deploy the model. Which compute target should you use?

  1. Local deployment

  2. Azure Machine Learning compute clusters

  3. Azure Container Instances (ACI)

  4. Azure Kubernetes Service (AKS)

36. You have an Azure Machine Learning workspace named WS1. You plan to use the Responsible AI dashboard to assess MLflow models that you will register in WS1. You need to identify the library you should use to register the MLflow models. Which library should you use?

  1. TensorFlow

  2. mlpy

  3. scikit-learn

  4. PyTorch

37. You are designing a training job in an Azure Machine Learning workspace by using Automated ML. During training, the compute resource must scale up to handle larger datasets. You need to select the compute resource that has a multi-node cluster that automatically scales. Which Azure Machine Learning compute target should you use?

  1. Endpoints

  2. Kubernetes cluster

  3. Compute instance

  4. Serverless compute

38. You create an Azure Machine Learning workspace. You must configure an event-driven workflow to automatically trigger upon completion of training runs in the workspace. The solution must minimize the administrative effort to configure the trigger. You need to configure an Azure service to automatically trigger the workflow. Which Azure service should you use?

  1. Event Hubs consumer

  2. Event Hubs Capture

  3. Event Grid subscription

  4. Azure Automation runbook

39. You have an Azure Machine Learning workspace. You plan to tune a model hyperparameter when you train the model. You need to define a search space that returns a normally distributed value. Which parameter should you use?

  1. Uniform

  2. LogNormal

  3. QUniform

  4. LogUniform

40. You are solving a classification task. You must evaluate your model on a limited data sample by using k-fold cross-validation. You start by configuring a k parameter as the number of splits. You need to configure the k parameter for the cross-validation. Which value should you use?

  1. k=0.01

  2. k=1

  3. k=5

  4. k=0.5
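Questions 1 and 40 both concern the conventional choice of k in k-fold cross-validation (k=5 or k=10 are the typical values). Mechanically, the data is split into k folds and each fold serves once as the validation set; a dependency-free sketch of the split:

```python
def kfold_indices(n, k):
    """Split n sample indices into k near-equal contiguous folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# With k=5 and 10 samples, each fold holds 2 indices; each fold is used
# once for validation while the remaining folds form the training set.
for fold in kfold_indices(10, 5):
    print(fold)
```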

41. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train and register an Azure Machine Learning model. You plan to deploy the model to an online endpoint. You need to ensure that applications will be able to use the authentication method with a non-expiring artifact to access the model. Solution: Create a managed online endpoint and set the value of its auth_mode parameter to aml_token. Deploy the model to the online endpoint. Does the solution meet the goal?

  1. Yes

  2. No

42. You create an MLflow model. You must deploy the model to Azure Machine Learning for batch inference. You need to create the batch deployment. Which two components should you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

  1. Kubernetes online endpoint

  2. Compute target

  3. Environment

  4. Online endpoint

  5. Model files

43. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are analyzing a numerical dataset which contains missing values in several columns. You must clean the missing values using an appropriate operation without affecting the dimensionality of the feature set. You need to analyze a full dataset to include all values. Solution: Use the Last Observation Carried Forward (LOCF) method to impute the missing data points. Does the solution meet the goal?

  1. No

  2. Yes

44. Your team is building a data engineering and data science development environment. The environment must support the following requirements: ✑ support Python and Scala ✑ compose data storage, movement, and processing services into automated data pipelines ✑ the same tool should be used for the orchestration of both data engineering and data science ✑ support workload isolation and interactive workloads ✑ enable scaling across a cluster of machines You need to create the environment. What should you do?

  1. Build the environment in Apache Hive for HDInsight and use Azure Data Factory for orchestration

  2. Build the environment in Azure Databricks and use Azure Data Factory for orchestration

  3. Build the environment in Apache Spark for HDInsight and use Azure Container Instances for orchestration

  4. Build the environment in Azure Databricks and use Azure Container Instances for orchestration

45. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train a classification model by using a logistic regression algorithm. You must be able to explain the model's predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values. Solution: Create a PFIExplainer. Does the solution meet the goal?

  1. Yes

  2. No

46. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train and register an Azure Machine Learning model. You plan to deploy the model to an online endpoint. You need to ensure that applications will be able to use the authentication method with a non-expiring artifact to access the model. Solution: Create a Kubernetes online endpoint and set the value of its auth_mode parameter to aml_token. Deploy the model to the online endpoint. Does the solution meet the goal?

  1. Yes

  2. No

47. You are creating a new experiment in Azure Machine Learning Studio. You have a small dataset that has missing values in many columns. The data does not require the application of predictors for each column. You plan to use the Clean Missing Data. You need to select a data cleaning method. Which method should you use?

  1. Synthetic Minority Oversampling Technique (SMOTE)

  2. Replace using MICE

  3. Replace using Probabilistic PCA

  4. Normalization

48. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train and register an Azure Machine Learning model. You plan to deploy the model to an online endpoint. You need to ensure that applications will be able to use the authentication method with a non-expiring artifact to access the model. Solution: Create a managed online endpoint with the default authentication settings. Deploy the model to the online endpoint. Does the solution meet the goal?

  1. Yes

  2. No

49. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure Machine Learning workspace. You connect to a terminal session from the Notebooks page in Azure Machine Learning studio. You plan to add a new Jupyter kernel that will be accessible from the same terminal session. You need to perform the task that must be completed before you can add the new kernel. Solution: Create an environment. Does the solution meet the goal?

  1. Yes

  2. No

50. You plan to build a team data science environment. Data for training models in machine learning pipelines will be over 20 GB in size. You have the following requirements: ✑ Models must be built using Caffe2 or Chainer frameworks. ✑ Data scientists must be able to use a data science environment to build the machine learning pipelines and train models on their personal devices in both connected and disconnected network environments. Personal devices must support updating machine learning pipelines when connected to a network. You need to select a data science environment. Which environment should you use?

  1. Azure Machine Learning Service

  2. Azure Machine Learning Studio

  3. Azure Databricks

  4. Azure Kubernetes Service (AKS)

51. You need to implement a feature engineering strategy for the crowd sentiment local models. What should you do?

  1. Apply a Pearson correlation coefficient

  2. Apply a Spearman correlation coefficient

  3. Apply an analysis of variance (ANOVA)

  4. Apply a linear discriminant analysis

52. You use the Azure Machine Learning SDK to run a training experiment that trains a classification model and calculates its accuracy metric. The model will be retrained each month as new data is available. You must register the model for use in a batch inference pipeline. You need to register the model and ensure that the models created by subsequent retraining experiments are registered only if their accuracy is higher than the currently registered model. What are two possible ways to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

  1. Specify a tag named accuracy with the accuracy metric as a value when registering the model, and only register subsequent models if their accuracy is higher than the accuracy tag value of the currently registered model

  2. Register the model with the same name each time regardless of accuracy, and always use the latest version of the model in the batch inferencing pipeline

  3. Specify the model framework version when registering the model, and only register subsequent models if this value is higher

  4. Specify a property named accuracy with the accuracy metric as a value when registering the model, and only register subsequent models if their accuracy is higher than the accuracy property value of the currently registered model

  5. Specify a different name for the model each time you register it
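The accuracy-gating idea behind the tag/property options can be sketched in plain Python. This is an illustrative mock, not the Azure ML SDK: `registry` and `register_model` are stand-ins for the workspace model registry and registration call, where in the real SDK you would attach the metric via a `tags` or `properties` argument at registration time.

```python
# Mock of "register only if accuracy beats the currently registered model".

registry = {}  # stand-in for the workspace model registry

def register_model(name, accuracy):
    """Register the model only if it improves on the recorded accuracy."""
    current = registry.get(name)
    if current is not None and accuracy <= current["tags"]["accuracy"]:
        return False  # keep the existing, better model
    registry[name] = {"tags": {"accuracy": accuracy}}
    return True

print(register_model("classifier", 0.81))  # True: first registration
print(register_model("classifier", 0.78))  # False: worse, skipped
print(register_model("classifier", 0.86))  # True: better, replaces
```

Registering under the same name each time keeps a version history, so the batch inference pipeline can always resolve the latest (best) version.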

53. This question is part of a series that presents the same scenario; each question proposes a different recommendation. Determine whether the recommendation satisfies the requirements. You have been tasked with deploying a machine learning model that uses a PostgreSQL database and requires GPU processing to forecast prices. You are preparing to create a virtual machine that has the necessary tools built in. You need to use the correct virtual machine type. Recommendation: You use a Deep Learning Virtual Machine (DLVM) Windows edition. Will the requirements be satisfied?

  1. Yes

  2. No

54. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You manage an Azure Machine Learning workspace. The development environment for managing the workspace is configured to use Python SDK v2 in Azure Machine Learning Notebooks. A Synapse Spark Compute is currently attached and uses system-assigned identity. You need to use Python code to update the Synapse Spark Compute to use a user-assigned identity. Solution: Configure the IdentityConfiguration class with the appropriate identity type. Does the solution meet the goal?

  1. No

  2. Yes
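For context on what the proposed solution refers to, the following is an untested configuration sketch of the SDK v2 pattern (it requires the `azure-ai-ml` package and a live workspace; all subscription, resource group, and identity values are placeholders): a user-assigned identity is expressed through the `IdentityConfiguration` and `ManagedIdentityConfiguration` entities and applied to the attached Synapse Spark compute.

```python
# Configuration sketch only -- needs azure-ai-ml and real resource IDs.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    SynapseSparkCompute,
    IdentityConfiguration,
    ManagedIdentityConfiguration,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Describe the user-assigned managed identity to attach.
identity = IdentityConfiguration(
    type="user_assigned",
    user_assigned_identities=[
        ManagedIdentityConfiguration(
            resource_id="/subscriptions/<subscription-id>/resourceGroups/"
            "<resource-group>/providers/Microsoft.ManagedIdentity/"
            "userAssignedIdentities/<identity-name>"
        )
    ],
)

# Re-attach the Synapse Spark compute with the new identity.
synapse_compute = SynapseSparkCompute(
    name="synapse-spark-compute",
    resource_id="<synapse-spark-pool-resource-id>",
    identity=identity,
)
ml_client.compute.begin_create_or_update(synapse_compute).result()
```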


FAQs


1. What is the Microsoft Azure Data Scientist Associate DP-100 certification?

The DP-100 certification validates your ability to apply machine learning techniques and to build, train, and deploy AI models on Microsoft Azure using tools like Azure Machine Learning and Python.

2. How do I become an Azure Data Scientist Associate certified professional?

You must pass the DP-100: Designing and Implementing a Data Science Solution on Azure exam, which evaluates your skills in data preparation, model training, and deployment on Azure.

3. What are the prerequisites for the DP-100 certification exam?

There are no strict prerequisites, but having knowledge of Python, data science concepts, and Azure Machine Learning is highly recommended.

4. How much does the Microsoft Azure Data Scientist Associate DP-100 exam cost?

The exam costs approximately $165 USD, but prices may vary by country or region.

5. How many questions are in the DP-100 exam, and what is the duration?

The exam includes 40–60 multiple-choice questions, and you’ll have 120 minutes to complete it.

6. What topics are covered in the Azure Data Scientist Associate DP-100 certification exam?

It covers data preparation, model development, training, deployment, and performance monitoring using Azure ML tools.

7. How difficult is the Microsoft Azure Data Scientist Associate DP-100 exam?

The DP-100 is a moderate to advanced-level certification, requiring both theoretical understanding and hands-on machine learning experience.

8. How long does it take to prepare for the DP-100 certification exam?

On average, candidates spend 6–10 weeks preparing, depending on their familiarity with data science and Azure services.

9. What jobs can I get after earning the Azure Data Scientist Associate DP-100 certification?

You can work as a Data Scientist, Machine Learning Engineer, AI Specialist, or Cloud Data Analyst in organizations using Microsoft Azure.

10. What is the average salary of an Azure Data Scientist Associate certified professional?

Certified professionals typically earn between $110,000–$145,000 per year, depending on experience and job role.


 Copyright © 2011 - 2025  Ira Solutions -   All Rights Reserved

Disclaimer:

The content provided on this website is for educational and informational purposes only. We do not claim any affiliation with official certification bodies, including but not limited to Pega, Microsoft, AWS, IBM, SAP, Oracle, PMI, or others.

All practice questions, study materials, and dumps are intended to help learners understand exam patterns and enhance their preparation. We do not guarantee certification results and discourage the misuse of these resources for unethical purposes.
