Pass your next certification exam fast!
Everything you need to prepare for, study, and pass your certification exam with ease.
(A)ApplicationMaster
(B)ResourceManager
(C)DataNode
(D)NodeManager
(E)JobTracker
(F)TaskTracker
(G)NameNode
(A)Configure the ResourceManager hostname and enable node services on YARN by setting the following property in yarn-site.xml:<name>yarn.resourcemanager.hostname</name><value>your_resourceManager_hostname</value>
(B)Configure the NodeManager to enable MapReduce services on YARN by adding the following property in yarn-site.xml:<name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value>
(C)Configure the number of map tasks per job on YARN by setting the following property in mapred-site.xml:<name>mapreduce.job.maps</name><value>2</value>
(D)Configure a default scheduler to run on YARN by setting the following property in mapred-site.xml:<name>mapreduce.jobtracker.taskScheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
(E)Configure the NodeManager hostname and enable services on YARN by setting the following property in yarn-site.xml:<name>yarn.nodemanager.hostname</name><value>your_nodeManager_hostname</value>
(F)Configure MapReduce as a framework running on YARN by setting the following property in mapred-site.xml:<name>mapreduce.framework.name</name><value>yarn</value>
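Options (B) and (F) name the two properties that are generally required to run MapReduce jobs on YARN. A minimal sketch of the corresponding configuration fragments (shown for illustration only, without endorsing any particular answer key):

```xml
<!-- yarn-site.xml: enable the shuffle auxiliary service on each NodeManager -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<!-- mapred-site.xml: submit MapReduce jobs to the YARN framework -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```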
(A)The Mapper transfers the intermediate data immediately to the Reducers as it is generated by the Map task
(B)The Mapper stores the intermediate data on the underlying filesystem of the local disk, in the directories specified by yarn.nodemanager.local-dirs
(C)The Mapper stores the intermediate data in HDFS on the node where the Map tasks ran, in the HDFS /usercache/${user}/appcache/application_${appid} directory for the user who ran the job
(D)The Mapper stores the intermediate data on the node running the job's ApplicationMaster so that it is available to YARN's ShuffleService before the data is presented to the Reducer
(E)YARN holds the intermediate data in the NodeManager's memory (a container) until it is transferred to the Reducers
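The property referenced in option (B) lives in an ordinary Hadoop *-site.xml file, which is just a list of name/value pairs and can be parsed directly. A minimal Python sketch (the sample directory values are illustrative assumptions, not taken from a real cluster):

```python
import xml.etree.ElementTree as ET

# Illustrative yarn-site.xml fragment; the directory values are assumptions.
YARN_SITE = """
<configuration>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/1/yarn/local,/data/2/yarn/local</value>
  </property>
</configuration>
"""

def get_property(xml_text, name):
    """Return the <value> of the Hadoop property with the given <name>, or None."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Map tasks spill their intermediate output under these NodeManager local
# directories on the local filesystem, not into HDFS.
dirs = get_property(YARN_SITE, "yarn.nodemanager.local-dirs")
print(dirs.split(","))
```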
(A)Run a Secondary NameNode on a different master from the NameNode in order to provide automatic recovery from a NameNode failure
(B)Add another master node to increase the number of nodes running the JournalNode, which increases the number of machines available to HA to form a quorum
(C)Configure the cluster's disk drives with an appropriate fault-tolerant RAID level
(D)Run the ResourceManager on a different master from the NameNode in order to load-share HDFS metadata processing
(E)Set an HDFS replication factor that provides data redundancy, protecting against failure
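Options (B) and (E) both correspond to concrete hdfs-site.xml settings. A minimal sketch (the JournalNode hostnames and cluster name are assumptions for illustration):

```xml
<!-- hdfs-site.xml -->
<!-- Replication factor: each HDFS block is stored on this many DataNodes -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<!-- HA: NameNode edits are written to a quorum of JournalNodes (odd count) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
```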