[November 2023] A Reliable Way to Pass: Snowflake ARA-C01 Exam Questions and Study Guide [Q107-Q122]


Make passing easy with the ARA-C01 question set and the SnowPro Advanced Architect Certification training course!


To earn the Snowflake ARA-C01 certification, candidates must pass a rigorous exam covering a wide range of topics related to Snowflake architecture and features. The exam consists of 60 multiple-choice questions and is timed at 90 minutes. It is computer-based and can be taken at an authorized testing center or remotely from the candidate's home or office.


The SnowPro Advanced Architect Certification exam is designed to test a candidate's ability to design, implement, and manage secure, scalable, and reliable Snowflake solutions. It covers a broad range of topics, including Snowflake architecture, data modeling, security, performance optimization, scalability, and migration. Candidates who pass demonstrate that they can design and implement Snowflake solutions that meet an organization's complex data management needs.

 

Question #107
Which ALTER command below may affect the availability of a column with respect to Time Travel?

  • A. ALTER TABLE...SET DEFAULT
  • B. ALTER TABLE...DROP COLUMN
  • C. ALTER TABLE...SET DATA TYPE

Correct answer: C
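
As a quick refresher on the syntax involved, here is a minimal sketch of a column type change followed by a Time Travel query; the table and column names are hypothetical:

-- Change a column's data type (only widening changes such as increasing
-- NUMBER precision are permitted)
ALTER TABLE orders ALTER COLUMN amount SET DATA TYPE NUMBER(12,2);
-- Query the table as it existed 5 minutes ago
SELECT * FROM orders AT (OFFSET => -60*5);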


Question #108
Which of the below are REST APIs provided by Snowpipe?

  • A. insertFiles
  • B. loadData
  • C. insertReport

Correct answers: A, C


Question #109
How would you drop a clustering key?

  • A. ALTER TABLE <name> DROP CLUSTERING KEY
  • B. ALTER TABLE <name> DELETE CLUSTERING KEY
  • C. ALTER TABLE <name> REMOVE CLUSTERING KEY

Correct answer: A
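
A minimal sketch of the full round trip, using a hypothetical table name:

-- Define a clustering key on the table...
ALTER TABLE my_table CLUSTER BY (event_date, region);
-- ...and remove it (option A is the valid syntax)
ALTER TABLE my_table DROP CLUSTERING KEY;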


Question #110
How does a standard virtual warehouse policy work in Snowflake?

  • A. It starts only if the system estimates that there is a query load that will keep the cluster busy for at least 2 minutes.
  • B. It conserves credits by keeping running clusters fully loaded rather than starting additional clusters.
  • C. It starts only if the system estimates that there is a query load that will keep the cluster busy for at least 6 minutes.
  • D. It prevents or minimizes queuing by starting additional clusters instead of conserving credits.

Correct answer: D
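
Per the Snowflake documentation, the STANDARD policy prevents or minimizes queuing by favoring starting additional clusters over conserving credits (option D), while the ECONOMY policy keeps running clusters fully loaded to conserve credits. A minimal sketch of where the policy is set, with a hypothetical warehouse name:

CREATE WAREHOUSE IF NOT EXISTS mywh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD';  -- the default; 'ECONOMY' conserves credits instead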


Question #111
A Data Engineer is designing a near real-time ingestion pipeline for a retail company to ingest event logs into Snowflake to derive insights. A Snowflake Architect is asked to define security best practices to configure access control privileges for the data load for auto-ingest to Snowpipe.
What are the MINIMUM object privileges required for the Snowpipe user to execute Snowpipe?

  • A. CREATE on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table
  • B. USAGE on the named pipe, named stage, target database, and schema, and INSERT and SELECT on the target table
  • C. OWNERSHIP on the named pipe, USAGE on the named stage, target database, and schema, and INSERT and SELECT on the target table
  • D. OWNERSHIP on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

Correct answer: D
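
A minimal sketch of those grants, assuming hypothetical object and role names (READ applies to internal stages; an external stage needs only USAGE):

GRANT USAGE ON DATABASE ingest_db TO ROLE snowpipe_role;
GRANT USAGE ON SCHEMA ingest_db.raw TO ROLE snowpipe_role;
GRANT USAGE, READ ON STAGE ingest_db.raw.event_stage TO ROLE snowpipe_role;
GRANT INSERT, SELECT ON TABLE ingest_db.raw.events TO ROLE snowpipe_role;
GRANT OWNERSHIP ON PIPE ingest_db.raw.event_pipe TO ROLE snowpipe_role;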


Question #112
Which of the following query performance optimization methods does Snowflake support?

  • A. Caching techniques
  • B. B-tree type indexes
  • C. Retrieving results of previous query from cache

Correct answers: A, C
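
A minimal sketch of the result cache in action, with a hypothetical table name; the second identical query can be answered from the cache without using the warehouse:

ALTER SESSION SET USE_CACHED_RESULT = TRUE;  -- result cache reuse is on by default
SELECT COUNT(*) FROM sales;  -- computed on the warehouse
SELECT COUNT(*) FROM sales;  -- identical query: typically served from the result cache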


Question #113
Databases created from shares cannot be replicated.

  • A. FALSE
  • B. TRUE

Correct answer: B
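
A minimal sketch of what this means in practice, with hypothetical account and share names; the second statement is expected to fail:

CREATE DATABASE shared_db FROM SHARE provider_acct.sales_share;
ALTER DATABASE shared_db ENABLE REPLICATION TO ACCOUNTS myorg.account2;  -- fails: databases created from shares cannot be replicated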


Question #114
You have created a task as shown below:
CREATE TASK mytask1
WAREHOUSE = mywh
SCHEDULE = '5 minute'
WHEN
SYSTEM$STREAM_HAS_DATA('MYSTREAM')
AS
INSERT INTO mytable1(id,name) SELECT id, name FROM mystream WHERE METADATA$ACTION = 'INSERT';
Which statement below is true?

  • A. If SYSTEM$STREAM_HAS_DATA returns false, the task will go to suspended mode
  • B. If SYSTEM$STREAM_HAS_DATA returns false, the task will still run
  • C. If SYSTEM$STREAM_HAS_DATA returns false, the task will be skipped

Correct answer: C
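
To observe this behavior, inspect the task history; skipped runs appear with STATE = 'SKIPPED'. A minimal sketch:

SELECT name, state, scheduled_time
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(TASK_NAME => 'MYTASK1'))
ORDER BY scheduled_time DESC;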


Question #115
A company has several sites in different regions from which the company wants to ingest data.
Which of the following will enable this type of data ingestion?

  • A. The company must have a Snowflake account in each cloud region to be able to ingest data to that account.
  • B. The company should use a storage integration for the external stage.
  • C. The company should provision a reader account to each site and ingest the data through the reader accounts.
  • D. The company must replicate data between Snowflake accounts.

Correct answer: A


Question #116
A media company needs a data pipeline that will ingest customer review data into a Snowflake table and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set publicly available to advertising companies that use different cloud providers in different regions.
The data pipeline needs to run continuously and efficiently as new records arrive in the object storage, leveraging event notifications. Also, the operational complexity, the maintenance of the infrastructure (including platform upgrades and security), and the development effort should be minimal.
Which design will meet these requirements?

  • A. Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.
  • B. Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
  • C. Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
  • D. Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

Correct answer: C
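
A minimal sketch of the external function piece of option C, assuming a hypothetical API integration and a hypothetical API Gateway endpoint fronting Amazon Comprehend:

CREATE OR REPLACE EXTERNAL FUNCTION get_sentiment(review_text STRING)
  RETURNS VARIANT
  API_INTEGRATION = comprehend_api_int
  AS 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/sentiment';  -- hypothetical URL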


Question #117
Your business team runs a set of identical queries every day after the batch ETL run is complete. Which of the following actions would you recommend?

  • A. After the ETL run, execute the identical queries so that they remain in the result cache
  • B. After the ETL run, copy the tables to another schema for the business users to query
  • C. After the ETL run, resize the warehouse to a larger warehouse

Correct answer: A
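
One way to automate option A is a warm-up task chained after the final ETL task, so the identical business queries are served from the result cache later; all names here are hypothetical:

CREATE TASK warm_result_cache
  WAREHOUSE = mywh
  AFTER etl_final_task
AS
  SELECT region, SUM(amount) FROM daily_sales GROUP BY region;
ALTER TASK warm_result_cache RESUME;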


Question #118
How are changes in local time due to daylight saving time handled in Snowflake tasks? (Choose two.)

  • A. A frequent task execution schedule like minutes may not cause a problem, but will affect the task history.
  • B. Task schedules can be designed to follow specified or local time zones to accommodate the time changes.
  • C. A task will move to a suspended state during the daylight savings time change.
  • D. A task schedule will follow only the specified time and will fail to handle lost or duplicated hours.
  • E. A task scheduled in a UTC-based schedule will have no issues with the time changes.

Correct answers: A, B
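
A minimal sketch of option B: a cron schedule pinned to a named time zone follows that zone's daylight saving changes (task and warehouse names are hypothetical):

CREATE TASK tz_aware_task
  WAREHOUSE = mywh
  SCHEDULE = 'USING CRON 0 9 * * * America/Los_Angeles'
AS
  SELECT CURRENT_TIMESTAMP();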


Question #119
Which of the below objects cannot be replicated?

  • A. Resource Monitors
  • B. Users
  • C. Shares
  • D. Databases
  • E. Roles
  • F. Warehouses

Correct answers: A, B, C, E, F
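
Databases are the replicable object here; a minimal sketch with hypothetical names:

ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.secondary_acct;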


Question #120
An Architect has been asked to clone schema STAGING as it looked one week ago, Tuesday June 1st at 8:00 AM, to recover some objects.
The STAGING schema has 50 days of retention.
The Architect runs the following statement:
CREATE SCHEMA STAGING_CLONE CLONE STAGING AT (TIMESTAMP => '2021-06-01 08:00:00');
The Architect receives the following error:
Time travel data is not available for schema STAGING. The requested time is either beyond the allowed time travel period or before the object creation time.
The Architect then checks the schema history and sees the following:
CREATED_ON|NAME|DROPPED_ON
2021-06-02 23:00:00 | STAGING | NULL
2021-05-01 10:00:00 | STAGING | 2021-06-02 23:00:00
How can cloning the STAGING schema be achieved?

  • A. Undrop the STAGING schema and then rerun the CLONE statement.
  • B. Rename the STAGING schema and perform an UNDROP to retrieve the previous STAGING schema version, then run the CLONE statement.
  • C. Modify the statement: CREATE SCHEMA STAGING_CLONE CLONE STAGING at (timestamp => '2021-05-01 10:00:00');
  • D. Cloning cannot be accomplished because the STAGING schema version was not active during the proposed Time Travel time period.

Correct answer: B
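
A minimal sketch of option B; the name used when renaming the current schema aside is hypothetical:

ALTER SCHEMA STAGING RENAME TO STAGING_CURRENT;  -- move the current (2021-06-02) schema aside
UNDROP SCHEMA STAGING;                           -- restore the dropped 2021-05-01 version
CREATE SCHEMA STAGING_CLONE CLONE STAGING
  AT (TIMESTAMP => '2021-06-01 08:00:00'::TIMESTAMP_LTZ);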


Question #121
The following DDL command was used to create a task based on a stream:

Assuming MY_WH is set to AUTO_SUSPEND = 60 and is used exclusively for this task, which statement is true?

  • A. The warehouse MY_WH will never suspend.
  • B. The warehouse MY_WH will be made active every five minutes to check the stream.
  • C. The warehouse MY_WH will automatically resize to accommodate the size of the stream.
  • D. The warehouse MY_WH will only be active when there are results in the stream.

Correct answer: D


Question #122
......

Latest [November 2023] Effective study material to pass the ARA-C01 exam: https://www.jpntest.com/shiken/ARA-C01-mondaishu

Contact Us

We answer all inquiries within 12 hours.

Online support hours: (UTC+9) 9:00-24:00
Monday through Saturday

Support: Contact Now