[April 2023] Updated SnowPro Advanced Certification ARA-C01 Exam Practice Questions: Free Sample Set [Q81-Q103]



2023 Latest ARA-C01 Premium Study Material: Free Sample Questions from the Test PDF


The Snowflake ARA-C01 (SnowPro Advanced Architect Certification) exam is a highly regarded certification recognized worldwide by companies and organizations that use Snowflake. It is designed to test the skills and knowledge of individuals who want to become advanced architects in data warehousing and data analytics.


The Snowflake ARA-C01 exam covers a wide range of topics concerning the architecture, security, performance, and administration of the Snowflake cloud data platform. It requires deep knowledge of Snowflake's unique features and capabilities, as well as experience designing and implementing complex data solutions. The exam is aimed at professionals who already hold the SnowPro Core certification and have worked with Snowflake for several years.

 

Question # 81
At which object type level can the APPLY MASKING POLICY, APPLY ROW ACCESS POLICY and APPLY SESSION POLICY privileges be granted?

  • A. Table
  • B. Global
  • C. Schema
  • D. Database

Answer: B
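Per the Snowflake documentation, APPLY MASKING POLICY, APPLY ROW ACCESS POLICY, and APPLY SESSION POLICY are global privileges, so they are granted at the account level. A minimal sketch, in which security_admin is a hypothetical role name:

```sql
-- Global privileges are granted ON ACCOUNT (security_admin is a hypothetical role).
GRANT APPLY MASKING POLICY ON ACCOUNT TO ROLE security_admin;
GRANT APPLY ROW ACCESS POLICY ON ACCOUNT TO ROLE security_admin;
GRANT APPLY SESSION POLICY ON ACCOUNT TO ROLE security_admin;
```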


Question # 82
Which of the objects below cannot be replicated?

  • A. Resource Monitors
  • B. Users
  • C. Roles
  • D. Shares
  • E. Databases
  • F. Warehouses

Answer: A, B, C, D, F


Question # 83
You have created a table as shown below:
CREATE TABLE EMPLOYEE(EMPLOYEE_ID NUMBER, EMPLOYEE_NAME VARCHAR);
Which data type will Snowflake use internally for EMPLOYEE_ID?

  • A. INTEGER
  • B. FIXED
  • C. NUMBER

Answer: B
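Snowflake stores NUMBER (and its aliases such as INT and INTEGER) internally as the FIXED data type, which can be verified with SYSTEM$TYPEOF. A quick sketch:

```sql
CREATE OR REPLACE TABLE employee (employee_id NUMBER, employee_name VARCHAR);
INSERT INTO employee VALUES (1, 'Jane');

-- Shows the internal type, e.g. NUMBER(38,0)[SF_FIXED]
SELECT SYSTEM$TYPEOF(employee_id) FROM employee;
```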


Question # 84
Which of the following objects can be cloned in Snowflake?

  • A. Internal stages
  • B. Transient table
  • C. Permanent table
  • D. Temporary table
  • E. External tables

Answer: B, C, D
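Permanent, transient, and temporary tables all support zero-copy cloning, while external tables and internal (Snowflake) stages do not. A minimal sketch with a hypothetical table name:

```sql
-- Zero-copy clone: the clone shares micro-partitions with the source
-- until either table changes, so no data is physically copied.
CREATE OR REPLACE TABLE orders_clone CLONE orders;

-- By contrast, attempting to clone an external table or internal stage returns an error.
```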


Question # 85
A task and two streams are created with the steps below. When the SYSTEM$STREAM_HAS_DATA('rawstream1') condition returns FALSE, what will happen to the task?
-- Create a landing table to store raw JSON data.
-- Snowpipe could load data into this table.
create or replace table raw (var variant);
-- Create a stream to capture inserts to the landing table.
-- A task will consume a set of columns from this stream.
create or replace stream rawstream1 on table raw;
-- Create a second stream to capture inserts to the landing table.
-- A second task will consume another set of columns from this stream.
create or replace stream rawstream2 on table raw;
-- Create a table that stores the names of office visitors identified in the raw data.
create or replace table names (id int, first_name string, last_name string);
-- Create a table that stores the visitation dates of office visitors identified in the raw data.
create or replace table visits (id int, dt date);
-- Create a task that inserts new name records from the rawstream1 stream into the names table
-- every minute when the stream contains records.
-- Replace the 'etl_wh' warehouse with a warehouse that your role has USAGE privilege on.
create or replace task raw_to_names
  warehouse = etl_wh
  schedule = '1 minute'
  when system$stream_has_data('rawstream1')
as
merge into names n
  using (select var:id id, var:fname fname, var:lname lname from rawstream1) r1
  on n.id = to_number(r1.id)
  when matched then update set n.first_name = r1.fname, n.last_name = r1.lname
  when not matched then insert (id, first_name, last_name) values (r1.id, r1.fname, r1.lname);
-- Create another task that merges visitation records from the rawstream2 stream into the visits table
-- every minute when the stream contains records.
-- Records with new IDs are inserted into the visits table;
-- records with IDs that already exist in the visits table update the DT column.
-- Replace the 'etl_wh' warehouse with a warehouse that your role has USAGE privilege on.
create or replace task raw_to_visits
  warehouse = etl_wh
  schedule = '1 minute'
  when system$stream_has_data('rawstream2')
as
merge into visits v
  using (select var:id id, var:visit_dt visit_dt from rawstream2) r2
  on v.id = to_number(r2.id)
  when matched then update set v.dt = r2.visit_dt
  when not matched then insert (id, dt) values (r2.id, r2.visit_dt);
-- Resume both tasks.
alter task raw_to_names resume;
alter task raw_to_visits resume;
-- Insert a set of records into the landing table.
insert into raw
  select parse_json(column1)
  from values
    ('{"id": "123","fname": "Jane","lname": "Smith","visit_dt": "2019-09-17"}'),
    ('{"id": "456","fname": "Peter","lname": "Williams","visit_dt": "2019-09-17"}');
-- Query the change data capture records in the table streams.
select * from rawstream1;
select * from rawstream2;

  • A. The task run will be skipped
  • B. The task will return a warning message
  • C. The task will be executed but no rows will be merged

Answer: A
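When a task's WHEN condition evaluates to FALSE at the scheduled time, that run is skipped and no warehouse credits are consumed for it. This can be observed in the task history, where skipped runs are reported with a SKIPPED state:

```sql
-- Inspect recent runs of the raw_to_names task defined above.
SELECT name, state, scheduled_time
FROM TABLE(information_schema.task_history(task_name => 'RAW_TO_NAMES'))
ORDER BY scheduled_time DESC;
```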


Question # 86
Running EXPLAIN on a query does not require a running warehouse.

  • A. FALSE
  • B. TRUE

Answer: B
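EXPLAIN compiles the query and reports the plan from metadata alone, so it runs without an active warehouse. A sketch against a hypothetical inventory table:

```sql
-- Returns the logical plan (including partition pruning estimates)
-- without executing the query or starting a warehouse.
EXPLAIN SELECT * FROM inventory WHERE bibnumber = 2805127;
```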


Question # 87
The remote service behind an external function can be an AWS Lambda function.

  • A. FALSE
  • B. TRUE

Answer: B
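An external function calls a remote service (such as an AWS Lambda function) through a proxy service, configured via an API integration. A minimal sketch in which the role ARN, endpoint URLs, and object names are all hypothetical placeholders:

```sql
-- API integration pointing at an API Gateway proxy in front of the Lambda function.
CREATE OR REPLACE API INTEGRATION my_lambda_integration
  API_PROVIDER = aws_api_gateway
  API_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my_snowflake_role'
  API_ALLOWED_PREFIXES = ('https://abc123.execute-api.us-east-1.amazonaws.com/prod/')
  ENABLED = TRUE;

-- External function whose remote service is the Lambda behind the gateway.
CREATE OR REPLACE EXTERNAL FUNCTION remote_echo(input VARCHAR)
  RETURNS VARIANT
  API_INTEGRATION = my_lambda_integration
  AS 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/echo';
```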


Question # 88
A company has a Snowflake account named ACCOUNTA in the AWS us-east-1 region. The company stores its marketing data in a Snowflake database named MARKET_DB. One of the company's business partners has an account named PARTNERB in the Azure East US 2 region. For marketing purposes, the company has agreed to share the MARKET_DB database with the partner account.
Which of the following steps MUST be performed for the account PARTNERB to consume data from the MARKET_DB database?

  • A. Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA replicate the database MARKET_DB to AZABC123 and from this account set up the data sharing to the PARTNERB account.
  • B. From account ACCOUNTA create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then make this database the provider and share it with the PARTNERB account.
  • C. Create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then replicate this database to the partner's account PARTNERB.
  • D. Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA create a share of database MARKET_DB, create a new database out of this share locally in AWS us-east-1 region, and replicate this new database to AZABC123 account. Then set up data sharing to the PARTNERB account.

Answer: A


Question # 89
When using the Snowflake Connector for Kafka, what data formats are supported for the messages? (Choose two.)

  • A. JSON
  • B. Parquet
  • C. CSV
  • D. XML
  • E. Avro

Answer: A, E


Question # 90
By default, what is the maximum file size that can be unloaded to a single file in Snowflake?

  • A. 5 GB for Azure, AWS and GCP
  • B. 10 MB
  • C. 16 MB

Answer: C
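When unloading, MAX_FILE_SIZE defaults to 16 MB (16,777,216 bytes) and can be raised up to 5 GB on AWS, GCP, and Azure. A sketch with hypothetical stage and table names:

```sql
-- Raise the per-file limit from the 16 MB default to 128 MB.
COPY INTO @my_stage/unload/
FROM my_table
FILE_FORMAT = (TYPE = CSV)
MAX_FILE_SIZE = 134217728;
```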


Question # 91
A company's client application supports multiple authentication methods, and is using Okta.
What is the best practice recommendation for the order of priority when applications authenticate to Snowflake?

  • A. 1) External browser, SSO
    2) Key Pair Authentication, mostly used for development environment users
    3) Okta native authentication
    4) OAuth (either Snowflake OAuth or External OAuth)
    5) Password
  • B. 1) OAuth (either Snowflake OAuth or External OAuth)
    2) External browser
    3) Okta native authentication
    4) Key Pair Authentication, mostly used for service account users
    5) Password
  • C. 1) Okta native authentication
    2) Key Pair Authentication, mostly used for production environment users
    3) Password
    4) OAuth (either Snowflake OAuth or External OAuth)
    5) External browser, SSO
  • D. 1) Password
    2) Key Pair Authentication, mostly used for production environment users
    3) Okta native authentication
    4) OAuth (either Snowflake OAuth or External OAuth)
    5) External browser, SSO

Answer: B


Question # 92
Assigning ACCOUNTADMIN as the default role for account administrators is a best practice.

  • A. TRUE
  • B. FALSE

Answer: B


Question # 93
An Architect uses COPY INTO with the ON_ERROR=SKIP_FILE option to bulk load CSV files into a table called TABLEA, using its table stage. One file named file5.csv fails to load. The Architect fixes the file and re-loads it to the stage with the exact same file name it had previously.
Which commands should the Architect use to load only file5.csv file from the stage? (Choose two.)

  • A. COPY INTO tablea FROM @%tablea MERGE = TRUE;
  • B. COPY INTO tablea FROM @%tablea NEW_FILES_ONLY = TRUE;
  • C. COPY INTO tablea FROM @%tablea RETURN_FAILED_ONLY = TRUE;
  • D. COPY INTO tablea FROM @%tablea FORCE = TRUE;
  • E. COPY INTO tablea FROM @%tablea;
  • F. COPY INTO tablea FROM @%tablea FILES = ('file5.csv');

Answer: B, E


Question # 94
You ran the following query:
SELECT * FROM inventory WHERE BIBNUMBER = 2805127;
The query profile looks as shown below. If you would like to tune the query further, what is the best thing to do?

  • A. alter table inventory cluster by (BIBNUMBER);
  • B. Create an index on column BIBNUMBER
  • C. Divide the table into multiple smaller tables
  • D. Execute the below query to enable auto clustering

Answer: A


Question # 95
You have a table named customer_table. You want to create another table, customer_table_other, that is the same as customer_table with respect to both schema and data.
What is the best option?

  • A. CREATE TABLE customer_table_other CLONE customer_table
  • B. ALTER TABLE customer_table_other SWAP WITH customer_table
  • C. CREATE TABLE customer_table_other AS SELECT * FROM customer_table

Answer: A


Question # 96
What will happen if you try to ALTER a column (which has NULL values) to set it to NOT NULL?

  • A. Snowflake drops the rows and lets the change happen
  • B. An error is returned and no changes are applied to the column
  • C. Snowflake automatically assigns a default value and lets the change happen

Answer: B
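Snowflake validates the existing data when the constraint is added, so the statement fails if the column already contains NULLs. A sketch with hypothetical names:

```sql
-- Errors out if demo_col contains any NULL values; the column is left unchanged.
ALTER TABLE demo_table ALTER COLUMN demo_col SET NOT NULL;
```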


Question # 97
You are a Snowflake architect in an organization. The business team came to you to deploy a use case that requires loading some data that they can visualize through Tableau. New data arrives every day, and the old data is no longer required.
What type of table will you use in this case to optimize cost?

  • A. TEMPORARY
  • B. PERMANENT
  • C. TRANSIENT

Answer: C
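Transient tables skip the Fail-safe period and support at most one day of Time Travel, which lowers storage cost for data that is replaced daily and never needs recovery. A sketch with hypothetical names:

```sql
-- No Fail-safe; Time Travel disabled entirely to minimize storage cost.
CREATE TRANSIENT TABLE daily_feed (
    id NUMBER,
    payload VARIANT
)
DATA_RETENTION_TIME_IN_DAYS = 0;
```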


Question # 98
One of your joins is taking a lot of time. The query profile view looks like this.
What may be the issue?

  • A. This may be an "exploding join" issue. The query has provided a condition where records from one table match multiple records from another table resulting in a cartesian product
  • B. There is not enough memory to process the join query
  • C. Looks like tablescan is the most expensive operation in the profile.

Answer: A


Question # 99
Which command do you run to remove files from a stage?

  • A. REMOVE
  • B. PURGE
  • C. CLEAN
  • D. DELETE

Answer: A
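REMOVE deletes staged files by path and also accepts a PATTERN filter. A sketch (the named stage and file names are hypothetical):

```sql
-- Remove one file from a table stage.
REMOVE @%tablea/file5.csv;

-- Remove all CSV files from a named stage.
REMOVE @my_stage PATTERN = '.*\.csv';
```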


Question # 100
Data sharing is supported only between provider and consumer accounts in the same region.

  • A. TRUE
  • B. FALSE

Answer: B


Question # 101
A company wants to deploy its Snowflake accounts inside its corporate network with no visibility on the internet. The company is using a VPN infrastructure and Virtual Desktop Infrastructure (VDI) for its Snowflake users. The company also wants to re-use the login credentials set up for the VDI to eliminate redundancy when managing logins.
What Snowflake functionality should be used to meet these requirements? (Choose two.)

  • A. Set up SSO for federated authentication.
  • B. Use private connectivity from a cloud provider.
  • C. Set up replication to allow users to connect from outside the company VPN.
  • D. Use a proxy Snowflake account outside the VPN, enabling client redirect for user logins.
  • E. Provision a unique company Tri-Secret Secure key.

Answer: A, B


Question # 102
Materialized views based on external tables can improve query performance.

  • A. FALSE
  • B. TRUE

Answer: B
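Materialized views over external tables precompute results into Snowflake-managed storage, so queries can avoid rescanning the external files. A sketch assuming a hypothetical external table ext_sales whose VALUE column holds the raw records:

```sql
-- Query the MV instead of the external table for faster, pre-materialized access.
CREATE MATERIALIZED VIEW mv_ext_sales AS
SELECT value:region::STRING AS region,
       value:amount::NUMBER AS amount
FROM ext_sales;
```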


Question # 103
......


The SnowPro Advanced Architect certification is a valuable credential for data architects and engineers who want to demonstrate expertise in designing and implementing complex Snowflake solutions. The certification strengthens a professional's competitiveness by demonstrating skills and knowledge in data management, warehousing, and analytics, and it is recognized worldwide by companies that use Snowflake for data management.

 

Prepare with our SnowPro Advanced Certification exam package and pass ARA-C01 now: https://www.jpntest.com/shiken/ARA-C01-mondaishu
