[April 21, 2023] Accurate and Updated Questions for the CKAD Sample [Q13-Q29]


CKAD Exam Information and Free Practice Tests

Question # 13
Exhibit:

Context
You are tasked to create a ConfigMap and consume the ConfigMap in a pod using a volume mount.
Task
Please complete the following:
* Create a ConfigMap named another-config containing the key/value pair: key4/value3
* Start a pod named nginx-configmap containing a single container using the nginx image, and mount the key you just created into the pod under the directory /also/a/path

  • A. Solution:





  • B. Solution:




Correct answer: A
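A minimal sketch of one way to complete this task (the volume name config-vol and the container name are assumptions, since the task only names the pod):

kubectl create configmap another-config --from-literal=key4=value3

# pod manifest -- mounts every key of another-config under /also/a/path
apiVersion: v1
kind: Pod
metadata:
  name: nginx-configmap
spec:
  containers:
  - name: nginx-configmap   # assumed container name
    image: nginx
    volumeMounts:
    - name: config-vol      # hypothetical volume name
      mountPath: /also/a/path
  volumes:
  - name: config-vol
    configMap:
      name: another-config

kubectl exec nginx-configmap -- cat /also/a/path/key4   # should print value3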


Question # 14
Exhibit:

Context
A pod is running on the cluster but it is not responding.
Task
The desired behavior is to have Kubernetes restart the pod when an endpoint returns an HTTP 500 on the /healthz endpoint. The service, probe-pod, should never send traffic to the pod while it is failing. Please complete the following:
* The application has an endpoint, /started, that will indicate if it can accept traffic by returning an HTTP 200. If the endpoint returns an HTTP 500, the application has not yet finished initialization.
* The application has another endpoint, /healthz, that will indicate if the application is still working as expected by returning an HTTP 200. If the endpoint returns an HTTP 500, the application is no longer responsive.
* Configure the probe-pod pod provided to use these endpoints
* The probes should use port 8080

  • A. Solution:

    In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds field tells the kubelet that it should wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command cat /tmp/healthy in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
    When the container starts, it executes this command:
    /bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
    For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.
    Create the Pod:
    kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml
    Within 30 seconds, view the Pod events:
    kubectl describe pod liveness-exec
    The output indicates that no liveness probes have failed yet:
    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
    After 35 seconds, view the Pod events again:
    kubectl describe pod liveness-exec
    At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully
    2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
    Wait another 30 seconds, and verify that the container has been restarted:
    kubectl get pod liveness-exec
    The output shows that RESTARTS has been incremented:
    NAME READY STATUS RESTARTS AGE
    liveness-exec 1/1 Running 1 1m
  • B. Solution:

    In the configuration file, you can see that the Pod has a single Container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds field tells the kubelet that it should wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command cat /tmp/healthy in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
    When the container starts, it executes this command:
    /bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
    For the first 30 seconds of the container's life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.
    Create the Pod:
    kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml
    Within 30 seconds, view the Pod events:
    kubectl describe pod liveness-exec
    The output indicates that no liveness probes have failed yet:
    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
    After 35 seconds, view the Pod events again:
    kubectl describe pod liveness-exec
    At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
    2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
    Wait another 30 seconds, and verify that the container has been restarted:
    kubectl get pod liveness-exec
    The output shows that RESTARTS has been incremented:
    NAME READY STATUS RESTARTS AGE
    liveness-exec 1/1 Running 1 1m

Correct answer: B
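Note that the exhibit walkthrough above describes an exec-based liveness probe; the task itself calls for HTTP probes. A minimal sketch of the probe configuration for probe-pod, with the container name and image shown as placeholders (the real values come from the provided pod):

apiVersion: v1
kind: Pod
metadata:
  name: probe-pod
spec:
  containers:
  - name: probe-pod        # placeholder; keep the provided container name
    image: example/app     # placeholder; keep the provided image
    readinessProbe:        # keeps the service from sending traffic while failing
      httpGet:
        path: /started
        port: 8080
    livenessProbe:         # restarts the container when /healthz returns HTTP 500
      httpGet:
        path: /healthz
        port: 8080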


Question # 15

Task:
Create a Deployment named expose in the existing ckad00014 namespace running 6 replicas of a Pod. Specify a single container using the ifccncf/nginx:1.13.7 image. Add an environment variable named NGINX_PORT with the value 8001 to the container, then expose port 8001.

Correct answer:

Explanation:
See the solution below.
Solution:

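A minimal sketch of a matching Deployment (the selector label app=expose and the container name are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: expose
  namespace: ckad00014
spec:
  replicas: 6
  selector:
    matchLabels:
      app: expose          # assumed label
  template:
    metadata:
      labels:
        app: expose
    spec:
      containers:
      - name: nginx        # assumed container name
        image: ifccncf/nginx:1.13.7
        env:
        - name: NGINX_PORT
          value: "8001"
        ports:
        - containerPort: 8001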


Question # 16

Context
It is always useful to look at the resources your applications are consuming in a cluster.
Task
* From the pods running in namespace cpu-stress, write the name only of the pod that is consuming the most CPU to the file /opt/KDOBG030l/pod.txt, which has already been created.

Correct answer:

Explanation:
Solution:
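One way to do this from the shell (a sketch; reading the kubectl top output and writing the name by hand works just as well):

kubectl top pod -n cpu-stress --sort-by=cpu
# or capture the top consumer's name directly:
kubectl top pod -n cpu-stress --sort-by=cpu --no-headers | head -1 | awk '{print $1}' > /opt/KDOBG030l/pod.txt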


Question # 17
Context

Task:
Modify the existing Deployment named broker-deployment running in namespace quetzal so that its containers:
1) Run with user ID 30000, and
2) Privilege escalation is forbidden.
The broker-deployment's manifest file can be found at:

Correct answer:

Explanation:
Solution:
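A minimal sketch of the required securityContext, added to each container in the Deployment's pod template:

kubectl -n quetzal edit deployment broker-deployment
# under spec.template.spec.containers[*], add:
    securityContext:
      runAsUser: 30000
      allowPrivilegeEscalation: false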



Question # 18
Refer to Exhibit.

Context
Developers occasionally need to submit pods that run periodically.
Task
Follow the steps below to create a pod that will start at a predetermined time and which runs to completion only once each time it is started:
* Create a YAML-formatted Kubernetes manifest /opt/KDPD00301/periodic.yaml that runs the following shell command: date, in a single busybox container. The command should run every minute and must complete within 22 seconds or be terminated by Kubernetes. The CronJob name and container name should both be hello.
* Create the resource in the above manifest and verify that the job executes successfully at least once.

Correct answer:

Explanation:
Solution:
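A minimal sketch of a matching CronJob manifest:

# /opt/KDPD00301/periodic.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"            # run every minute
  jobTemplate:
    spec:
      activeDeadlineSeconds: 22      # terminated by Kubernetes if not done in 22s
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date"]
          restartPolicy: Never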



Question # 19

Context
Your application's namespace requires a specific service account to be used.
Task
Update the app-a deployment in the production namespace to run as the restrictedservice service account. The service account has already been created.

Correct answer:

Explanation:
See the solution below.
Solution:
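A short sketch using kubectl set, which patches spec.template.spec.serviceAccountName:

kubectl -n production set serviceaccount deployment app-a restrictedservice
# verify:
kubectl -n production get deployment app-a -o jsonpath='{.spec.template.spec.serviceAccountName}'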


Question # 20

Task:
A pod within the Deployment named buffalo-deployment in namespace gorilla is logging errors.
1) Look at the logs and identify error messages.
Find errors, including: User "system:serviceaccount:gorilla:default" cannot list resource "deployment" [...] in the namespace "gorilla"
2) Update the Deployment buffalo-deployment to resolve the errors in the logs of the Pod.
The buffalo-deployment's manifest can be found at ~/prompt/escargot/buffalo-deployment.yaml

Correct answer:

Explanation:
Solution:
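The quoted error means the pod runs under the namespace's default service account, which lacks RBAC permission to list deployments. A sketch of the usual fix, assuming a suitably privileged service account already exists (shown under the hypothetical name gorilla-sa):

kubectl -n gorilla logs deployment/buffalo-deployment      # 1) inspect the errors
# 2) in ~/prompt/escargot/buffalo-deployment.yaml, under spec.template.spec, set:
#      serviceAccountName: gorilla-sa                      # hypothetical name
kubectl apply -f ~/prompt/escargot/buffalo-deployment.yaml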


Question # 21
Exhibit:

Task
You are required to create a pod that requests a certain amount of CPU and memory, so it gets scheduled to a node that has those resources available.
* Create a pod named nginx-resources in the pod-resources namespace that requests a minimum of 200m CPU and 1Gi memory for its container
* The pod should use the nginx image
* The pod-resources namespace has already been created

  • A. Solution:



  • B. Solution:




Correct answer: B
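A minimal sketch of a matching pod manifest (the container name is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-resources
  namespace: pod-resources
spec:
  containers:
  - name: nginx            # assumed container name
    image: nginx
    resources:
      requests:
        cpu: 200m
        memory: 1Gi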


Question # 22
Exhibit:

Context
A user has reported that an application is unreachable due to a failing livenessProbe.
Task
Perform the following tasks:
* Find the broken pod and store its name and namespace to /opt/KDOB00401/broken.txt in the format:

The output file has already been created
* Store the associated error events to a file /opt/KDOB00401/error.txt. The output file has already been created. You will need to use the -o wide output specifier with your command.
* Fix the issue.

  • A. Solution:
    Create the Pod:
    kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
    Within 30 seconds, view the Pod events:
    kubectl describe pod liveness-exec
    The output indicates that no liveness probes have failed yet:
    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
    kubectl describe pod liveness-exec
    At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
    2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
    Wait another 30 seconds, and verify that the Container has been restarted:
    kubectl get pod liveness-exec
    The output shows that RESTARTS has been incremented:
    NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 1m
  • B. Solution:
    Create the Pod:
    kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
    Within 30 seconds, view the Pod events:
    kubectl describe pod liveness-exec
    The output indicates that no liveness probes have failed yet:
    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
    23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
    After 35 seconds, view the Pod events again:
    kubectl describe pod liveness-exec
    At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
    FirstSeen LastSeen Count From SubobjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
    36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
    2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
    Wait another 30 seconds, and verify that the Container has been restarted:
    kubectl get pod liveness-exec
    The output shows that RESTARTS has been incremented:
    NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 1m

Correct answer: B
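A sketch of the workflow; the exact format for broken.txt was shown in the original exhibit, so the echo line only illustrates the idea, and <pod-name>/<namespace> are placeholders:

kubectl get pods --all-namespaces                          # look for CrashLoopBackOff / restarting pods
echo "<pod-name> <namespace>" > /opt/KDOB00401/broken.txt  # use the format from the exhibit
kubectl -n <namespace> get events -o wide | grep <pod-name> > /opt/KDOB00401/error.txt
# probe fields on a running pod are immutable, so recreate it with a corrected probe:
kubectl -n <namespace> get pod <pod-name> -o yaml > fixed.yaml
# fix the livenessProbe (e.g. wrong port or path) in fixed.yaml, then:
kubectl -n <namespace> delete pod <pod-name>
kubectl apply -f fixed.yaml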


Question # 23

Context
Developers occasionally need to submit pods that run periodically.
Task
Follow the steps below to create a pod that will start at a predetermined time and which runs to completion only once each time it is started:
* Create a YAML-formatted Kubernetes manifest /opt/KDPD00301/periodic.yaml that runs the following shell command: date, in a single busybox container. The command should run every minute and must complete within 22 seconds or be terminated by Kubernetes. The CronJob name and container name should both be hello.
* Create the resource in the above manifest and verify that the job executes successfully at least once.

Correct answer:

Explanation:
Solution:
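The CronJob manifest sketched under Question # 18 applies here unchanged; to create the resource and verify at least one successful run:

kubectl apply -f /opt/KDPD00301/periodic.yaml
kubectl get cronjob hello
kubectl get jobs --watch       # wait for a hello-* job to show 1/1 completions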



Question # 24

Context
You are tasked to create a secret and consume the secret in a pod using environment variables, as follows:
Task
* Create a secret named another-secret with the key/value pair: key1/value4
* Start an nginx pod named nginx-secret using the container image nginx, and add an environment variable exposing the value of the secret key key1, using COOL_VARIABLE as the name for the environment variable inside the pod.

Correct answer:

Explanation:
Solution:
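A minimal sketch of one way to complete this task (the container name is an assumption):

kubectl create secret generic another-secret --from-literal=key1=value4

apiVersion: v1
kind: Pod
metadata:
  name: nginx-secret
spec:
  containers:
  - name: nginx-secret     # assumed container name
    image: nginx
    env:
    - name: COOL_VARIABLE
      valueFrom:
        secretKeyRef:
          name: another-secret
          key: key1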




Question # 25

Task:
Update the Pod ckad00018-newpod in the ckad00018 namespace to use a NetworkPolicy allowing the Pod to send and receive traffic only to and from the pods web and db.

Correct answer:

Explanation:
See the solution below.
Solution:
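A sketch of a matching NetworkPolicy; the run: labels are assumptions, since the real labels on ckad00018-newpod, web, and db are only visible in the live cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: newpod-policy          # hypothetical name
  namespace: ckad00018
spec:
  podSelector:
    matchLabels:
      run: ckad00018-newpod    # assumed label
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: web             # assumed label
    - podSelector:
        matchLabels:
          run: db              # assumed label
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: web
    - podSelector:
        matchLabels:
          run: db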


Question # 26
Context

Task
You are required to create a pod that requests a certain amount of CPU and memory, so it gets scheduled to a node that has those resources available.
* Create a pod named nginx-resources in the pod-resources namespace that requests a minimum of 200m CPU and 1Gi memory for its container
* The pod should use the nginx image
* The pod-resources namespace has already been created

Correct answer:

Explanation:
Solution:
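This task repeats Question # 21, so the same pod manifest applies; to create and verify it:

kubectl apply -f nginx-resources.yaml      # the manifest sketched under Question # 21
kubectl -n pod-resources get pod nginx-resources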





Question # 27
Context
Anytime a team needs to run a container on Kubernetes they will need to define a pod within which to run the container.
Task
Please complete the following:
* Create a YAML-formatted pod manifest /opt/KDPD00101/podl.yml to create a pod named app1 that runs a container named app1cont using the image ifccncf/arg-output with these command line arguments: -lines 56 -F
* Create the pod with the kubectl command using the YAML file created in the previous step
* When the pod is running, display summary data about the pod in JSON format using the kubectl command and redirect the output to a file named /opt/KDPD00101/out1.json
* All of the files you need to work with have been created, empty, for your convenience

Correct answer:

Explanation:
Solution:
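A minimal sketch of the manifest and commands:

# /opt/KDPD00101/podl.yml
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
  - name: app1cont
    image: ifccncf/arg-output
    args: ["-lines", "56", "-F"]

kubectl create -f /opt/KDPD00101/podl.yml
kubectl get pod app1 -o json > /opt/KDPD00101/out1.json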






Question # 28
Exhibit:

Context
You have been tasked with scaling an existing deployment for availability, and creating a service to expose the deployment within your infrastructure.
Task
Start with the deployment named kdsn00101-deployment, which has already been deployed to the namespace kdsn00101. Edit it to:
* Add the func=webFrontEnd key/value label to the pod template metadata to identify the pod for the service definition
* Have 4 replicas
Next, create and deploy in namespace kdsn00101 a service that accomplishes the following:
* Exposes the service on TCP port 8080
* Is mapped to the pods defined by the specification of kdsn00101-deployment
* Is of type NodePort
* Has a name of cherry

  • A. Solution:



  • B. Solution:


Correct answer: A
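A sketch of the two steps; the service's targetPort is an assumption, since the container port of the existing deployment is only visible in the live cluster:

kubectl -n kdsn00101 edit deployment kdsn00101-deployment
# in the editor: set spec.replicas: 4 and add func: webFrontEnd
# under spec.template.metadata.labels

apiVersion: v1
kind: Service
metadata:
  name: cherry
  namespace: kdsn00101
spec:
  type: NodePort
  selector:
    func: webFrontEnd
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080       # assumption; match the container's actual port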


Question # 29
......

Pass with the Linux Foundation CKAD premium trial set, test engine, and PDF with free question sets: https://www.jpntest.com/shiken/CKAD-mondaishu
