Pass-Guaranteed Quiz: Verified DP-500 Free Exam Questions That Actually Appear, Updated for 2023 [Q64-Q80]



Free ultimate study guide for Azure Enterprise Data Analyst Associate DP-500 (updated; contains 115 questions)


Scope of the Microsoft DP-500 certification exam:

Topic / Coverage
Topic 1
  • Explore and visualize data by using the Azure Synapse SQL results pane
  • Deploy and manage datasets by using the XMLA endpoint
Topic 2
  • Explore data by using native visuals in Spark notebooks
  • Explore data by using Azure Synapse Analytics
Topic 3
  • Design and implement enterprise-scale row-level security and object-level security
  • Analyze data model efficiency by using VertiPaq Analyzer
Topic 4
  • Identify requirements for a solution, including features, performance, and licensing strategy
  • Recommend and configure an on-premises gateway in Power BI
Topic 5
  • Query advanced data sources, including JSON, Parquet, APIs, and Azure Machine Learning models
  • Connect to and query datasets by using the XMLA endpoint
Topic 6
  • Integrate an analytics platform into an existing IT infrastructure
  • Create and distribute paginated reports in Power BI Report Builder
Topic 7
  • Design and configure Power BI reports for accessibility
  • Implement performance improvements in Power Query and data sources
Topic 8
  • Perform impact analysis of downstream dependencies from dataflows and datasets
  • Manage Power BI assets by using Azure Purview
Topic 9
  • Create queries, functions, and parameters by using the Power Query Advanced Editor
  • Identify and implement performance improvements in queries and report visuals

 

Question 64
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution that might meet the stated goals. Some question sets might have more than one
correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions
will not appear in the review screen.
You have a Power BI dataset named Dataset1.
In Dataset1, you currently have 50 measures that use the same time intelligence logic.
You need to reduce the number of measures, while maintaining the current functionality.
Solution: From DAX Studio, you write a query that uses grouping sets.
Does this meet the goal?

  • A. No
  • B. Yes

Correct answer: A

 

Question 65
You are configuring an aggregation table as shown in the following exhibit.

The detail table is named FactSales and the aggregation table is named FactSales(Agg).
You need to aggregate SalesAmount for each store.
Which type of summarization should you use for SalesAmount and StoreKey? To answer, select the
appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Correct answer:

Explanation:

 

Question 66
You have the following code in an Azure Synapse notebook.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the code.
NOTE: Each correct selection is worth one point.

Correct answer:

Explanation:

Reference:
https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_stacked.html
https://matplotlib.org/stable/api/legend_api.html
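The references above point at matplotlib's stacked-bar and legend APIs, which are what a Synapse Spark notebook uses for native Python visuals. A minimal sketch of that pattern, with assumed sample data (the actual code in the missing exhibit is not reproduced here):

```python
# Hedged sketch: a stacked bar chart plus legend, the matplotlib pattern
# this question's references describe. Data values are illustrative only.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed in a notebook kernel
import matplotlib.pyplot as plt

categories = ["East", "West", "North"]  # hypothetical store regions
q1_sales = [10, 15, 12]
q2_sales = [8, 11, 9]

fig, ax = plt.subplots()
ax.bar(categories, q1_sales, label="Q1")
# bottom= stacks the second series on top of the first, producing a stacked bar
ax.bar(categories, q2_sales, bottom=q1_sales, label="Q2")
ax.legend(loc="upper right")  # legend entries come from the label= arguments

legend_labels = [t.get_text() for t in ax.get_legend().get_texts()]
```

The key detail the exam tends to probe is the `bottom=` argument: without it the second `ax.bar` call draws grouped-at-zero bars rather than a stacked chart.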

 

Question 67
You are using DAX Studio to analyze a slow-running report query. You need to identify inefficient join
operations in the query. What should you review?

  • A. the query statistics
  • B. the query plan
  • C. the server timings
  • D. the query history

Correct answer: B

 

Question 68
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business names, survey names, and participant counts. The database is configured to use the default collation.
The queries use OPENROWSET and infer the schema shown in the following table.

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.
Solution: You recommend using OPENROWSET WITH to explicitly specify the maximum length for businessName and surveyName.
Does this meet the goal?

  • A. No
  • B. Yes

Correct answer: A

Explanation:
The correct solution is instead to use OPENROWSET WITH to explicitly define the collation for businessName and surveyName as Latin1_General_100_BIN2_UTF8.
Query Parquet files using serverless SQL pool in Azure Synapse Analytics.
Important
Ensure you are using a UTF-8 database collation (for example Latin1_General_100_BIN2_UTF8) because string values in Parquet files are encoded using UTF-8 encoding. A mismatch between the text encoding in the Parquet file and the collation may cause unexpected conversion errors. You can easily change the default collation of the current database using the following T-SQL statement: ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8.
Note: If you use the Latin1_General_100_BIN2_UTF8 collation you will get an additional performance boost compared to the other collations. The Latin1_General_100_BIN2_UTF8 collation is compatible with parquet string sorting rules. The SQL pool is able to eliminate some parts of the parquet files that will not contain data needed in the queries (file/column-segment pruning). If you use other collations, all data from the parquet files will be loaded into Synapse SQL and the filtering is happening within the SQL process. The Latin1_General_100_BIN2_UTF8 collation has additional performance optimization that works only for parquet and CosmosDB. The downside is that you lose fine-grained comparison rules like case insensitivity.

 

Question 69
You have an Azure Data Lake Storage Gen2 container that stores more than 300,000 files representing hourly telemetry data. The data is organized in folders by the year, month, and day according to when the telemetry was captured.
You have the following query in Power Query Editor.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Correct answer:

Explanation:

Reference:
https://docs.microsoft.com/en-us/powerquery-m/table-selectrows
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-comparison-with-blob-storage
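The year/month/day folder layout matters because a `Table.SelectRows` filter on the folder path lets the engine skip whole partitions instead of enumerating all 300,000 files. A hypothetical Python sketch of that pruning idea (the paths and the helper are assumptions, not the M query from the missing exhibit):

```python
# Hypothetical file paths mimicking the year/month/day folder layout
# described in the question.
paths = [
    "telemetry/2023/01/15/00.json",
    "telemetry/2023/01/16/00.json",
    "telemetry/2023/02/01/00.json",
]

def keep(path: str, year: str, month: str) -> bool:
    # Equivalent in spirit to Table.SelectRows filtering on folder parts:
    # only paths under the requested partition survive, so files in other
    # year/month folders are never read.
    _, y, m, *_ = path.split("/")
    return y == year and m == month

january = [p for p in paths if keep(p, "2023", "01")]
```

The design point: pushing the date filter onto the folder structure prunes at the file-listing stage, which is much cheaper than reading every file and filtering rows afterward.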

 

Question 70
You need to create the customized Power BI usage reporting. The Usage Metrics Report dataset has already been created. The solution must minimize development and administrative effort.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Correct answer:

Explanation:

1 - From powerbi.com, create a new report.
2 - Add a report measure
3 - Add visuals to the report
4 - Publish the report to the Sales Analytics workspace

 

Question 71
You have a file named File1.txt that has the following characteristics:
* A header row
* Tab delimited values
* UNIX-style line endings
You need to read File1.txt by using an Azure Synapse Analytics serverless SQL pool.
Which query should you execute?
A)

B)

C)

D)

  • A. Option B
  • B. Option D
  • C. Option A
  • D. Option C

Correct answer: C

 

Question 72
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions
will not appear in the review screen.
You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files
by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business
names, survey names, and participant counts. The database is configured to use the default collation.
The queries use OPENROWSET and infer the schema shown in the following table.

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.
Solution: You recommend defining a data source and view for the Parquet files. You recommend updating the
query to use the view.
Does this meet the goal?

  • A. No
  • B. Yes

Correct answer: B

 

Question 73
You are using a Python notebook in an Apache Spark pool in Azure Synapse Analytics.
You need to present the data distribution statistics from a DataFrame in a tabular view.
Which method should you invoke on the DataFrame?

  • A. describe
  • B. explain
  • C. freqItems
  • D. sample

Correct answer: A

Explanation:
pandas.DataFrame.describe
Descriptive statistics include those that summarize the central tendency, dispersion and shape of a dataset's distribution, excluding NaN values.
Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided.
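A minimal sketch of `describe()` on assumed sample data. Spark's `DataFrame.describe()` produces the same kind of tabular summary; pandas is used here so the sketch runs without a cluster:

```python
# Hedged sketch: describe() returns distribution statistics (count, mean,
# std, min, quartiles, max) as a table. Data values are illustrative only.
import pandas as pd

df = pd.DataFrame({"participants": [10, 20, 30]})
stats = df.describe()

mean = stats.loc["mean", "participants"]    # 20.0
count = stats.loc["count", "participants"]  # 3.0
```

In a Synapse notebook the equivalent Spark call is `df.describe().show()`, which prints the same summary as a table; that tabular view is why `describe` is the answer here rather than `explain`, `freqItems`, or `sample`.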

 

Question 74
You have a Power BI tenant that contains 10 workspaces.
You need to create dataflows in three of the workspaces. The solution must ensure that data engineers can access the resulting data by using Azure Data Factory.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Create and save the dataflows to an Azure Data Lake Storage account.
  • B. Create and save the dataflows to the internal storage of Power BI
  • C. Associate the Power BI tenant to an Azure Data Lake Storage account.
  • D. Add the managed identity for Data Factory as a member of the workspaces.

Correct answer: A, C

Explanation:
Data used with Power BI is stored in internal storage provided by Power BI by default. With the integration of dataflows and Azure Data Lake Storage Gen 2 (ADLS Gen2), you can store your dataflows in your organization's Azure Data Lake Storage Gen2 account. This essentially allows you to "bring your own storage" to Power BI dataflows, and establish a connection at the tenant or workspace level.

 

Question 75
What should you configure in the deployment pipeline?

  • A. a backward deployment
  • B. a data source rule
  • C. a selective deployment
  • D. auto-binding

Correct answer: D

 

Question 76
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have the Power BI data model shown in the exhibit. (Click the Exhibit tab.)

Users indicate that when they build reports from the data model, the reports take a long time to load.
You need to recommend a solution to reduce the load times of the reports.
Solution: You recommend creating a perspective that contains the commonly used fields.
Does this meet the goal?

  • A. No
  • B. Yes

Correct answer: A

Explanation:
Instead, denormalize for performance.
Even though it might mean storing a bit of redundant data, schema denormalization can sometimes provide better query performance. The only question is whether the extra space used is worth the performance benefit.

 

Question 77
You are creating a Power BI single-page report.
Some users will navigate the report by using a keyboard, and some users will navigate the report by using a screen reader.
You need to ensure that the users can consume content on a report page in a logical order.
What should you configure on the report page?

  • A. the bookmark order
  • B. the X position
  • C. the tab order
  • D. the layer order

Correct answer: C

Explanation:
Tab order is the order in which users interact with the items on a page using the keyboard. Generally, we want tab order to be predictable and to closely match the visual order on the page (unless there is a good reason to deviate).
Note: If you are using the keyboard to navigate in a Power BI report, the order in which you arrive at visuals will not follow your vision unless you set the new tab order property. If you have low or no vision, this becomes an even bigger issue because you may not be able to see that you are navigating visuals out of visual order because the screen reader just reads whatever comes next.

 

Question 78
You have a Power BI workspace that contains one dataset and four reports that connect to the dataset. The dataset uses Import storage mode and contains the following data sources:
* A CSV file in an Azure Storage account
* An Azure Database for PostgreSQL database
You plan to use deployment pipelines to promote the content from development to test to production. There will be different data source locations for each stage. What should you include in the deployment pipeline to ensure that the appropriate data source locations are used during each stage?

  • A. selective deployment
  • B. auto-binding across pipelines
  • C. parameter rules
  • D. data source rules

Correct answer: C

Explanation:
Note: Create deployment rules
When working in a deployment pipeline, different stages may have different configurations. For example, each stage can have different databases or different query parameters. The development stage might query sample data from the database, while the test and production stages query the entire database.
When you deploy content between pipeline stages, configuring deployment rules enables you to allow changes to content while keeping some settings intact. For example, if you want a dataset in a production stage to point to a production database, you can define a rule for this. The rule is defined in the production stage, under the appropriate dataset. Once the rule is defined, content deployed from test to production will inherit the value defined in the deployment rule, and the rule will always apply as long as it is unchanged and valid.

 

Question 79
You have a Power BI workspace named Workspace1 that contains five dataflows.
You need to configure Workspace1 to store the dataflows in an Azure Data Lake Storage Gen2 account.
What should you do first?

  • A. Delete the dataflow queries.
  • B. From the Power BI Admin portal, enable tenant-level storage.
  • C. Disable load for all dataflow queries.
  • D. Change the Data source settings in the dataflow queries.

Correct answer: D

 

Question 80
......

Try the top-class DP-500 practice exam questions now: https://www.jpntest.com/shiken/DP-500-mondaishu

Contact us

We answer all inquiries within 12 hours.

Online support hours: (UTC+9) 9:00-24:00
Monday through Saturday

Support: Contact us now