[Q83-Q108] Real SPLK-2002 Exam Questions with Authentic Splunk PDF Dumps [Mar 2024]


The actual JPNTest SPLK-2002 PDF dumps guarantee a 100% pass rate


This exam measures a candidate's ability to design and implement complex Splunk Enterprise architectures, develop advanced Splunk Enterprise configurations, and integrate Splunk Enterprise with other systems. The certification is recognized as a valuable credential for IT professionals who work with Splunk Enterprise, demonstrating that the holder has the skills and knowledge required to design and implement Splunk Enterprise solutions that meet an organization's needs.


Question # 83
Splunk Enterprise platform instrumentation refers to data that the Splunk Enterprise deployment logs in the _introspection index. Which of the following logs are included in this index? (Select all that apply.)

  • A. metrics.log
  • B. resource_usage.log
  • C. disk_objects.log
  • D. audit.log

Correct Answer: B, C

Explanation:
Reference: https://docs.splunk.com/Documentation/Splunk/7.3.1/Troubleshooting/Abouttheplatforminstrumentationframework
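
For illustration, data in this index can be inspected with a simple search. This is a sketch assuming the documented sourcetype splunk_resource_usage for resource_usage.log and the Hostwide component; the field names follow the platform instrumentation docs:

    index=_introspection sourcetype=splunk_resource_usage component=Hostwide
    | timechart avg(data.cpu_system_pct) AS avg_cpu_system_pct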


Question # 84
How does the average run time of all searches relate to the available CPU cores on the indexers?

  • A. Average run time increases as the number of CPU cores on the indexers increases.
  • B. Average run time decreases as the number of CPU cores on the indexers decreases.
  • C. Average run time is independent of the number of CPU cores on the indexers.
  • D. Average run time increases as the number of CPU cores on the indexers decreases.

Correct Answer: D

Explanation:
The average run time of all searches increases as the number of CPU cores on the indexers decreases. CPU cores are the processing units that execute the instructions and calculations for a search, and the indexers are responsible for retrieving and filtering data from the indexes, so the cores available on the indexers directly affect search performance: the more cores the indexers have, the faster they can process data and return results; the fewer cores, the slower. Average search run time is therefore inversely related to the number of indexer CPU cores. It is not independent of core count, since cores are a key factor in search performance; it does not decrease as cores decrease, which would imply performance improves with fewer cores; and it does not increase as cores increase, which would imply performance worsens with more cores.


Question # 85
Which Splunk tool offers a health check for administrators to evaluate the health of their Splunk deployment?

  • A. btool
  • B. Monitoring Console
  • C. DiagGen
  • D. SPL Clinic

Correct Answer: B

Explanation:
The Monitoring Console is the Splunk tool that offers a health check for administrators to evaluate the health of their Splunk deployment. It provides dashboards and alerts that show the status and performance of various Splunk components, such as indexers, search heads, forwarders, license usage, and search activity, and it can run health checks on the deployment and identify issues or recommendations. btool is a command-line tool that shows the effective settings of the configuration files, but it does not offer a health check. DiagGen is a tool that generates diagnostic snapshots of the Splunk environment, but it does not offer a health check. SPL Clinic is a tool that analyzes and optimizes SPL queries, but it does not offer a health check. For more information, see About the Monitoring Console in the Splunk documentation.


Question # 86
A customer plans to ingest 600 GB of data per day into Splunk. They will have six concurrent users, and they also want high data availability and high search performance. The customer is concerned about cost and wants to spend the minimum amount on the hardware for Splunk. How many indexers are recommended for this deployment?

  • A. Two indexers not in a cluster, assuming users run many long searches.
  • B. Two indexers clustered, assuming a high volume of saved/scheduled searches.
  • C. Three indexers not in a cluster, assuming a long data retention period.
  • D. Two indexers clustered, assuming high availability is the greatest priority.

Correct Answer: D

Explanation:
Two indexers clustered is the recommended deployment for a customer who plans to ingest 600 GB of data per day into Splunk, has six concurrent users, and wants high data availability and high search performance.
This deployment provides enough indexing capacity and search concurrency for the customer's needs while ensuring data replication and searchability across the cluster, and it keeps hardware cost to a minimum by using only two indexers. Two indexers not in a cluster will not provide high data availability, as there is no data replication or failover. Three indexers not in a cluster will provide more indexing capacity and search concurrency, but at higher hardware cost and still without high availability. The customer's data retention period, number of long searches, and volume of saved/scheduled searches are not relevant for determining the number of indexers. For more information, see [Reference hardware] and [About indexer clusters and index replication] in the Splunk documentation.


Question # 87
A new Splunk customer is using syslog to collect data from their network devices on port 514. What is the best practice for ingesting this data into Splunk?

  • A. Use a Splunk forwarder to collect the input on port 514 and forward the data.
  • B. Configure syslog to send the data to multiple Splunk indexers.
  • C. Use a Splunk indexer to collect a network input on port 514 directly.
  • D. Configure syslog to write logs and use a Splunk forwarder to collect the logs.

Correct Answer: A
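
As a minimal sketch of option A, the forwarder would carry a network input like the following in inputs.conf (the sourcetype value is an assumption; ports below 1024 typically require elevated privileges):

    # listen for syslog traffic on UDP port 514
    [udp://514]
    sourcetype = syslog
    # record the sender's IP address as the host field
    connection_host = ip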


Question # 88
Before users can use a KV store, an admin must create a collection. Where is a collection defined?

  • A. kvcollections.conf
  • B. collection.conf
  • C. collections.conf
  • D. kvstore.conf

Correct Answer: C

Explanation:
A collection is defined in the collections.conf file, which specifies the name, schema, and permissions of the collection. The kvstore.conf file is used to configure KV store settings, such as the port, SSL, and replication factor. The other two files do not exist.
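
For example, a collection definition in an app's collections.conf might look like the following sketch (the collection and field names are hypothetical; the field.<name> type declarations are optional):

    # define a KV store collection named user_stats
    [user_stats]
    field.username = string
    field.login_count = number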


Question # 89
When troubleshooting a situation where some files within a directory are not being indexed, the ignored files are discovered to have long headers. What is the first thing that should be added to inputs.conf?

  • A. Increase the value of initCrcLength.
  • B. Add a crcSalt=<SOURCE> attribute.
  • C. Decrease the value of initCrcLength.
  • D. Add a crcSalt=<string> attribute.

Correct Answer: A

Explanation:
* inputs.conf is a configuration file that contains settings for various types of data inputs, such as files, directories, network ports, and scripts.
* initCrcLength is a setting that specifies the number of characters the input uses to calculate the CRC (cyclic redundancy check) of a file. The CRC is a value that identifies a file based on its content.
* crcSalt is another setting that adds a string to the CRC calculation to force the input to consume files that have matching CRCs. This can be useful when files have identical headers or when files are renamed or rolled over.
* When some files within a directory are not being indexed and the ignored files are discovered to have long headers, the first thing that should be added to inputs.conf is a larger initCrcLength value. By default, the input performs CRC checks against only the first 256 bytes of a file, so files with long, identical headers can produce matching CRCs and be skipped. Increasing initCrcLength makes the input use more characters from the file to calculate the CRC, which reduces the chance of CRC collisions and ensures that distinct files are indexed.
* Option A is the correct answer because it reflects the best practice for troubleshooting this situation. Option C is incorrect because decreasing the value of initCrcLength would make the CRC calculation less reliable and more prone to collisions. Option D is incorrect because adding a crcSalt with a static string would not help differentiate files with long identical headers, as they would still have matching CRCs. Option B is incorrect because adding crcSalt = <SOURCE> adds the full directory path to the CRC calculation, which does not help when the files are in the same directory.
References:
* inputs.conf - Splunk Documentation
* How the Splunk platform handles log file rotation
* Solved: Configure CRC salt - Splunk Community
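
A minimal inputs.conf sketch of this fix (the monitored path is hypothetical, and 1024 is an illustrative value above the 256-byte default):

    [monitor:///var/log/app/*.log]
    # hash more of each file's beginning so long identical headers
    # no longer produce matching CRCs
    initCrcLength = 1024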


Question # 90
Which of the following are client filters available in serverclass.conf? (Select all that apply.)

  • A. Splunk server role.
  • B. IP address.
  • C. Platform (machine type).
  • D. DNS name.

Correct Answer: B, D

Explanation:
Reference: https://docs.splunk.com/Documentation/Splunk/7.3.1/Updating/Filterclients#Define_filters_through_serverclass.conf
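
As a sketch, both filter types from the answer appear in serverclass.conf like this (the server class name and values are hypothetical; whitelist entries accept IP addresses, DNS names, or client names):

    [serverClass:web_servers]
    # match clients by IP address
    whitelist.0 = 10.1.2.*
    # match clients by DNS name
    whitelist.1 = *.web.example.com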


Question # 91
What is a Splunk Job? (Select all that apply.)

  • A. A search process kicked off via a report or an alert.
  • B. A child OS process manifested from the splunkd process.
  • C. Searches that are subjected to some usage quota.
  • D. A user-defined Splunk capability.

Correct Answer: D


Question # 92
Which Splunk tool offers a health check for administrators to evaluate the health of their Splunk deployment?

  • A. Monitoring Console
  • B. btool
  • C. DiagGen
  • D. SPL Clinic

Correct Answer: A

Explanation:
Reference: https://docs.splunk.com/Documentation/Splunk/7.3.1/DMC/DMCoverview


Question # 93
An index has large text log entries with many unique terms in the raw data. Other than the raw data, which index components will take the most space?

  • A. Index source metadata (sources.data files).
  • B. Index sourcetype metadata (SourceTypes.data files).
  • C. Index files (*.tsidx files).
  • D. Bloom filters (bloomfilter files).

Correct Answer: C

Explanation:
Index files (*.tsidx files) are the index components that store the inverted index of terms pointing into the raw data. Other than the raw data itself, they take the most space in an index, especially if the raw data has many unique terms, because each unique term adds entries to the inverted index. Bloom filters, source metadata, and sourcetype metadata are much smaller in comparison and do not grow with the number of unique terms in the raw data.
References:
* How the indexer stores indexes
* Splunk Enterprise Certified Architect Study Guide, page 17


Question # 94
The KV store forms its own cluster within an SHC. What is the maximum number of SHC members the KV store will form?

  • A. 25
  • B. Unlimited
  • C. 100
  • D. 50

Correct Answer: D

Explanation:
The KV store forms its own cluster within an SHC, and the maximum number of SHC members the KV store cluster will form is 50. The KV store cluster is the subset of SHC members responsible for replicating and storing the KV store data. It cannot have an unlimited number of members, and neither 25 nor 100 matches the documented maximum.


Question # 95
Indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. There is ample CPU and memory available on the indexers. Which of the following is most likely to improve indexing performance?

  • A. Increase the number of parallel ingestion pipelines in server.conf
  • B. Decrease the maximum concurrent scheduled searches in limits.conf
  • C. Increase the maximum number of hot buckets in indexes.conf
  • D. Decrease the maximum size of the search pipelines in limits.conf

Correct Answer: B


Question # 96
In which phase of the Splunk Enterprise data pipeline are indexed extraction configurations processed?

  • A. Input
  • B. Search
  • C. Parsing
  • D. Indexing

Correct Answer: C

Explanation:
Reference: https://docs.splunk.com/Documentation/Splunk/7.3.2/Admin/Configurationparametersandthedatapipeline
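
For illustration, an indexed extraction is enabled per sourcetype in props.conf, as in this sketch (the sourcetype and timestamp column name are hypothetical):

    [csv_upload]
    # extract fields at index time from structured data
    INDEXED_EXTRACTIONS = csv
    # use this column for the event timestamp
    TIMESTAMP_FIELDS = timestamp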


Question # 97
When configuring a Splunk indexer cluster, what are the default values for replication and search factor?
  • A. replication_factor = 3
    search_factor = 2
  • B. replication_factor = 3
    search_factor = 3
  • C. replication_factor = 2
    search_factor = 3
  • D. replication_factor = 2
    search_factor = 2

Correct Answer: A
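
These defaults correspond to the following server.conf stanza on the cluster manager, written out here only for illustration (they apply even if not set explicitly; the mode value is manager in current releases and master in older ones):

    [clustering]
    mode = manager
    # defaults: three copies of raw data, two searchable copies
    replication_factor = 3
    search_factor = 2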


Question # 98
In search head clustering, which of the following methods can you use to transfer captaincy to a different member? (Select all that apply.)

  • A. Use the Search Head Clustering settings menu from Splunk Web on any member.
  • B. Run the splunk transfer shcluster-captain command from the current captain.
  • C. Run the splunk transfer shcluster-captain command from the member you would like to become the captain.
  • D. Use the Monitoring Console.

Correct Answer: A, C

Explanation:
In search head clustering, there are two methods to transfer captaincy to a different member. One method is to use the Search Head Clustering settings menu from Splunk Web on any member, which allows the user to select a specific member to become the new captain or to let Splunk choose the best candidate. The other method is to run the splunk transfer shcluster-captain command from the member that should become the new captain; this requires knowing the name of the target member and having access to its CLI. Using the Monitoring Console is not a method to transfer captaincy, because the Monitoring Console does not have an option to change the captain. Running the splunk transfer shcluster-captain command from the current captain is not a method to transfer captaincy, because that command will fail with an error message.
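
The CLI method looks like the following when run on the member that should become captain; the management URI, which identifies that member, and the credentials are hypothetical:

    splunk transfer shcluster-captain -mgmt_uri https://sh2.example.com:8089 -auth admin:changeme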


Question # 99
Which log file would you search if you suspect there is a problem interpreting a regular expression in a monitor stanza?

  • A. tailing_processor.log
  • B. btool.log
  • C. metrics.log
  • D. splunkd.log

Correct Answer: A

Explanation:
The tailing_processor.log file is the best place to search if you suspect there is a problem interpreting a regular expression in a monitor stanza. This log contains information about how Splunk monitors files and directories, including any errors or warnings related to parsing the monitor stanza. The splunkd.log file contains general information about the Splunk daemon, but it may not have the specific details about the monitor stanza. The btool.log file contains information about the configuration files, but it does not log the runtime behavior of the monitor stanza. The metrics.log file contains performance metrics for Splunk, but it does not log event-breaking issues. For more information, see About Splunk Enterprise logging in the Splunk documentation.
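
Because these internal log files are also indexed into the _internal index, a quick check can be run from the search head. A sketch, assuming the component and field names used by splunkd's internal logging:

    index=_internal sourcetype=splunkd component=TailingProcessor (log_level=WARN OR log_level=ERROR)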


Question # 100
As a best practice, where should the internal licensing logs be stored?

  • A. License server.
  • B. Deployment layer.
  • C. Search head layer.
  • D. Indexing layer.

Correct Answer: C


Question # 101
Which of the following describe migration from single-site to multisite index replication?

  • A. Multisite total values should not exceed any single-site factors.
  • B. Single-site buckets instantly receive the multisite policies.
  • C. A master node is required at each site.
  • D. Multisite policies apply to new data only.

Correct Answer: A

Explanation:
Reference: https://docs.splunk.com/Documentation/Splunk/7.3.2/Indexer/Migratetomultisite


Question # 102
What is the algorithm used to determine captaincy in a Splunk search head cluster?

  • A. Round-robin distribution consensus.
  • B. Rapt distributed consensus.
  • C. Rift distributed consensus.
  • D. Raft distributed consensus.

Correct Answer: D

Explanation:
The algorithm used to determine captaincy in a Splunk search head cluster is Raft distributed consensus. Raft is a consensus algorithm used to elect a leader among a group of nodes in a distributed system. In a Splunk search head cluster, Raft is used to elect a captain among the cluster members. The captain is the cluster member responsible for coordinating search activities, replicating configurations and apps, and pushing knowledge bundles to the search peers. The captain is dynamically elected based on criteria such as CPU load, network latency, and search load, and captaincy can change over time depending on the availability and performance of the cluster members. Rapt, Rift, and round-robin are not valid algorithms for determining captaincy in a Splunk search head cluster.


Question # 103
How many cluster managers are required for a multisite indexer cluster?

  • A. Two for the entire cluster.
  • B. Two for each site.
  • C. One for the entire cluster.
  • D. One for each site.

Correct Answer: C

Explanation:
A multisite indexer cluster is a type of indexer cluster that spans multiple geographic locations or sites. A multisite indexer cluster requires only one cluster manager, also known as the master node, for the entire cluster. The cluster manager is responsible for coordinating the replication and search activities among the peer nodes across all sites. The cluster manager can reside in any site, but it must be accessible by all peer nodes and search heads in the cluster. Option C is the correct answer. Option A is incorrect because having two cluster managers for the entire cluster would introduce redundancy and complexity. Option D is incorrect because having one cluster manager for each site would create separate clusters, not a multisite cluster. Option B is incorrect because having two cluster managers for each site would be unnecessary and inefficient.
References:
* https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Multisiteoverview
* https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Clustermanageroverview
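
An illustrative sketch of the single cluster manager's server.conf for a two-site cluster (site names and factors are example values; the mode value is manager in current releases and master in older ones):

    [general]
    site = site1

    [clustering]
    mode = manager
    multisite = true
    available_sites = site1,site2
    # two copies at the originating site, three in total
    site_replication_factor = origin:2,total:3
    site_search_factor = origin:1,total:2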


Question # 104
When should a Universal Forwarder be used instead of a Heavy Forwarder?

  • A. When most of the data requires masking.
  • B. When data comes directly from a database server.
  • C. When there is a high-velocity data source.
  • D. When a modular input is needed.

Correct Answer: C

Explanation:
According to the Splunk blog, the Universal Forwarder is ideal for collecting data from high-velocity data sources, such as a syslog server, due to its smaller footprint and faster performance. The Universal Forwarder performs minimal processing and sends raw or unparsed data to the indexers, reducing the network traffic and the load on the forwarders. The other options are false because:
* When most of the data requires masking, a Heavy Forwarder is needed, as it can perform advanced filtering and data transformation before forwarding the data.
* When data comes directly from a database server, a Heavy Forwarder is needed, as it can run modular inputs such as DB Connect to collect data from various databases.
* When a modular input is needed, a Heavy Forwarder is needed, as the Universal Forwarder does not include a bundled version of Python, which is required for most modular inputs.


Question # 105
Which command is used for thawing the archive bucket?

  • A. Splunk collect
  • B. Splunk dbinspect
  • C. Splunk rebuild
  • D. Splunk convert

Correct Answer: C

Explanation:
The splunk rebuild command is used for thawing the archive bucket. Thawing is the process of restoring frozen data back to Splunk for searching. Frozen data is data that has been archived or deleted from Splunk after reaching the end of its retention period. To thaw a bucket, the user needs to copy the bucket from the archive location to the thaweddb directory under SPLUNK_HOME/var/lib/splunk and run the splunk rebuild command to rebuild the .tsidx files for the bucket. The splunk collect command is used for collecting diagnostic data from a Splunk instance. The splunk convert command is used for converting configuration files from one format to another. The splunk dbinspect command is used for inspecting the status and properties of the buckets in an index.
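
A sketch of the thawing procedure described above (the index, paths, and bucket name are hypothetical):

    # 1. copy the frozen bucket into the index's thaweddb directory
    cp -r /archive/defaultdb/db_1389230491_1389230488_5 \
        $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/
    # 2. rebuild the bucket's index files so it becomes searchable
    splunk rebuild $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1389230491_1389230488_5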


Question # 106
Which of the following commands is used to clear the KV store?

  • A. splunk reinitialize kvstore
  • B. splunk delete kvstore
  • C. splunk clean kvstore
  • D. splunk clear kvstore

Correct Answer: C

Explanation:
The splunk clean kvstore command is used to clear the KV store. This command will delete all the collections and documents in the KV store and reset it to an empty state. This command can be useful for troubleshooting KV store issues or resetting the KV store data. The splunk clear kvstore, splunk delete kvstore, and splunk reinitialize kvstore commands are not valid Splunk commands. For more information, see Use the CLI to manage the KV store in the Splunk documentation.
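
As a sketch of the procedure, the command is run with the instance stopped; the -local flag clears the KV store data on the local instance:

    splunk stop
    # delete all KV store collections and documents on this instance
    splunk clean kvstore -local
    splunk start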


Question # 107
Which Splunk Enterprise offering has its own license?

  • A. Splunk Forwarder Management
  • B. Splunk Universal Forwarder
  • C. Splunk Heavy Forwarder
  • D. Splunk Cloud Forwarder

Correct Answer: B

Explanation:
Reference: https://docs.splunk.com/Splexicon:Forwardinglicense


Question # 108
......

Download the latest SPLK-2002 with verified SPLK-2002 questions and answers: https://www.jpntest.com/shiken/SPLK-2002-mondaishu

Contact Us

We answer all inquiries within 12 hours.

Online support hours: (UTC+9) 9:00-24:00
Monday through Saturday

Support: Contact us now