CKA Free Practice Questions: Linux Foundation Certified Kubernetes Administrator (CKA) Program

Score: 4%

Context
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
* Deployment
* StatefulSet
* DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
Correct Answer:
Solution:
The task should be completed on the k8s cluster (1 master node, 2 worker nodes). Connect to it with the following command:
[student@node-1] > ssh k8s
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create serviceaccount cicd-token --namespace=app-team1
kubectl create rolebinding deployment-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace=app-team1
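To confirm the binding works, you can impersonate the ServiceAccount with kubectl auth can-i (an optional verification step, not required by the task):
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1
# Expected output: yes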
Quick Reference: Documentation for ConfigMaps, Deployments, Namespaces
You must connect to the correct host. Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000048b
Task
An NGINX Deployment named nginx-static is running in the nginx-static namespace. It is configured using a ConfigMap named nginx-config.
First, update the nginx-config ConfigMap to also allow TLSv1.2 connections.
You may re-create, restart, or scale resources as necessary.
You can use the following command to test the changes:
[candidate@cka000048b] $ curl --tls-max 1.2 https://web.k8s.local
Correct Answer:
Task Summary
* SSH into cka000048b
* Update the nginx-config ConfigMap in the nginx-static namespace to allow TLSv1.2
* Ensure the nginx-static Deployment picks up the new config
* Verify the change using the provided curl command
Step-by-Step Instructions
Step 1: SSH into the correct host
ssh cka000048b
Step 2: Get the ConfigMap
kubectl get configmap nginx-config -n nginx-static -o yaml > nginx-config.yaml
Open the file for editing:
nano nginx-config.yaml
Look for the TLS configuration in the data field. You are likely to find something like:
ssl_protocols TLSv1.3;
Modify it to include TLSv1.2 as well:
ssl_protocols TLSv1.2 TLSv1.3;
Save and exit the file.
Now update the ConfigMap:
kubectl apply -f nginx-config.yaml
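Alternatively, you can make the same change in place without exporting a file; kubectl edit opens the live ConfigMap in your default editor:
kubectl edit configmap nginx-config -n nginx-static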
Step 3: Restart the NGINX pods to pick up the new ConfigMap
Pods will not reload a ConfigMap automatically unless it's mounted in a way that supports dynamic reload and the app is watching for it (NGINX typically doesn't by default).
The safest way is to restart the pods:
Option 1: Roll the deployment
kubectl rollout restart deployment nginx-static -n nginx-static
Option 2: Delete pods to force recreation
kubectl delete pod -n nginx-static -l app=nginx-static
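With either option, you can wait for the replacement Pods to become ready before testing (standard kubectl, using the Deployment name given in the task):
kubectl rollout status deployment nginx-static -n nginx-static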
Step 4: Verify using curl
Use the provided curl command to confirm that TLS 1.2 is accepted:
curl --tls-max 1.2 https://web.k8s.local
A successful response means the TLS configuration is correct.
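If you want a second opinion beyond curl, an openssl handshake test can confirm the protocol directly (host and port taken from the task's URL; this check is optional):
openssl s_client -connect web.k8s.local:443 -tls1_2 </dev/null 2>/dev/null | grep -i protocol
# A successful handshake reporting TLSv1.2 confirms the change.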
Final Command Summary
ssh cka000048b
kubectl get configmap nginx-config -n nginx-static -o yaml > nginx-config.yaml
nano nginx-config.yaml # Modify to include "ssl_protocols TLSv1.2 TLSv1.3;"
kubectl apply -f nginx-config.yaml
kubectl rollout restart deployment nginx-static -n nginx-static
# or
kubectl delete pod -n nginx-static -l app=nginx-static
curl --tls-max 1.2 https://web.k8s.local
You must connect to the correct host.
Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000037
Context
A legacy app needs to be integrated into the Kubernetes built-in logging architecture (i.e. kubectl logs). Adding a streaming co-located container is a good and common way to accomplish this requirement.
Task
Update the existing Deployment synergy-leverager, adding a co-located container named sidecar using the busybox:stable image to the existing Pod. The new co-located container has to run the following command:
/bin/sh -c "tail -n+1 -f /var/log/synergy-leverager.log"
Use a Volume mounted at /var/log to make the log file synergy-leverager.log available to the co-located container.
Do not modify the specification of the existing container other than adding the required volume mount.
Failure to do so may result in a reduced score.
Correct Answer:
Task Summary
* SSH into the correct node: cka000037
* Modify existing deployment synergy-leverager
* Add a sidecar container:
* Name: sidecar
* Image: busybox:stable
* Command:
/bin/sh -c "tail -n+1 -f /var/log/synergy-leverager.log"
* Use a shared volume mounted at /var/log
* Don't touch existing container config except adding volume mount
Step-by-Step Solution
Step 1: SSH into the correct node
ssh cka000037
# Skipping this will result in a zero score.
Step 2: Edit the deployment
kubectl edit deployment synergy-leverager
This opens the deployment YAML in your default editor (vi or similar).
Step 3: Modify the spec as follows
Inside spec.template.spec, make these three changes:
A. Define a shared volume
Add a volumes: section at the same level as containers:
volumes:
- name: log-volume
  emptyDir: {}
B. Add a volume mount to the existing container
Locate the existing container under containers: and add this:
volumeMounts:
- name: log-volume
  mountPath: /var/log
Do not change any other configuration for this container.
C. Add the sidecar container
Still inside containers:, add the new container definition after the first one:
- name: sidecar
  image: busybox:stable
  command:
  - /bin/sh
  - -c
  - "tail -n+1 -f /var/log/synergy-leverager.log"
  volumeMounts:
  - name: log-volume
    mountPath: /var/log
The resulting Pod template spec should look like this:
spec:
  containers:
  - name: main-container
    image: your-existing-image
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  - name: sidecar
    image: busybox:stable
    command:
    - /bin/sh
    - -c
    - "tail -n+1 -f /var/log/synergy-leverager.log"
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  volumes:
  - name: log-volume
    emptyDir: {}
Step 4: Save and exit
If using vi or vim, type:
:wq
Step 5: Verify
Check the updated pods:
kubectl get pods -l app=synergy-leverager
Pick a pod name and describe it:
kubectl describe pod <pod-name>
Confirm:
* 2 containers running (main-container + sidecar)
* Volume mounted at /var/log
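You can also confirm the sidecar is actually streaming the log file (substitute a real Pod name from the previous step; <pod-name> is a placeholder):
kubectl logs <pod-name> -c sidecar
# Lines from /var/log/synergy-leverager.log should appear.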
Final Command Summary
ssh cka000037
kubectl edit deployment synergy-leverager
# Modify as explained above
kubectl get pods -l app=synergy-leverager
kubectl describe pod <pod-name>
List "nginx-dev" and "nginx-prod" pod and delete those pods
Correct Answer:
kubectl get pods -o wide
kubectl delete po nginx-dev
kubectl delete po nginx-prod

Task
Create a new Ingress resource as follows:
* Name: echo
* Namespace: sound-repeater
* Exposing Service echoserver-service on http://example.org/echo using Service port 8080
The availability of Service echoserver-service can be checked using the following command, which should return 200:
[candidate@cka000024] $ curl -o /dev/null -s -w "%{http_code}\n" http://example.org/echo
Correct Answer:
Task Summary
Create an Ingress named echo in the sound-repeater namespace that:
* Routes requests to /echo on host example.org
* Forwards traffic to service echoserver-service
* Uses service port 8080
* Verification should return HTTP 200 using curl
Step-by-Step Answer
Step 1: SSH into the correct node
ssh cka000024
# Skipping this will result in a zero score.
Step 2: Verify the namespace and service
Ensure the sound-repeater namespace and echoserver-service exist:
kubectl get svc -n sound-repeater
Look for:
echoserver-service ClusterIP ... 8080/TCP
Step 3: Create the Ingress manifest
Create a YAML file: echo-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: sound-repeater
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.org
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echoserver-service
            port:
              number: 8080
The rewrite-target annotation rewrites /echo to / before the request reaches the Service; it is optional if the backend serves on any path.
Step 4: Apply the Ingress resource
kubectl apply -f echo-ingress.yaml
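Before testing, you can confirm the Ingress was created and has been assigned an address (the ADDRESS column may take a short while to populate):
kubectl get ingress echo -n sound-repeater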
Step 5: Test with curl as instructed
Use the exact verification command:
curl -o /dev/null -s -w "%{http_code}\n" http://example.org/echo
# You should see:
200
Final Answer Summary
ssh cka000024
kubectl get svc -n sound-repeater
# Create the Ingress YAML
cat <<EOF > echo-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: sound-repeater
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.org
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echoserver-service
            port:
              number: 8080
EOF
kubectl apply -f echo-ingress.yaml
curl -o /dev/null -s -w "%{http_code}\n" http://example.org/echo
You must connect to the correct host.
Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000047
Task
A MariaDB Deployment in the mariadb namespace has been deleted by mistake. Your task is to restore the Deployment ensuring data persistence. Follow these steps:
Create a PersistentVolumeClaim (PVC) named mariadb in the mariadb namespace with the following specifications:
* Access mode: ReadWriteOnce
* Storage: 250Mi
You must use the existing retained PersistentVolume (PV).
Failure to do so will result in a reduced score.
There is only one existing PersistentVolume .
Edit the MariaDB Deployment file located at ~/mariadb-deployment.yaml to use the PVC you created in the previous step.
Apply the updated Deployment file to the cluster.
Ensure the MariaDB Deployment is running and stable.
Correct Answer:
Task Overview
You're restoring a MariaDB deployment in the mariadb namespace with persistent data.
Tasks:
* SSH into cka000047
* Create a PVC named mariadb:
* Namespace: mariadb
* Access mode: ReadWriteOnce
* Storage: 250Mi
* Use the existing retained PV (there's only one)
* Edit ~/mariadb-deployment.yaml to use the PVC
* Apply the deployment
* Verify MariaDB is running and stable
Step-by-Step Solution
Step 1: SSH into the correct host
ssh cka000047
# Required: skipping this step results in a zero score.
Step 2: Inspect the existing PersistentVolume
kubectl get pv
# Identify the only existing PV, e.g.:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS
mariadb-pv 250Mi RWO Retain Available <none> manual
Ensure the status is Available, and it is not already bound to a claim.
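If you prefer to read the PV name programmatically instead of from the table (a convenience using standard jsonpath output; assumes exactly one PV, as the task states):
kubectl get pv -o jsonpath='{.items[0].metadata.name}'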
Step 3: Create the PVC to bind the retained PV
Create a file mariadb-pvc.yaml:
cat <<EOF > mariadb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb
  namespace: mariadb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
  volumeName: mariadb-pv # Match the PV name from step 2 exactly
EOF
Apply the PVC:
kubectl apply -f mariadb-pvc.yaml
# This binds the PVC to the retained PV.
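You can verify the binding before moving on; the PVC should report a Bound status against the retained PV:
kubectl get pvc mariadb -n mariadb
# STATUS should show Bound.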
Step 4: Edit the MariaDB Deployment YAML
Open the file:
nano ~/mariadb-deployment.yaml
Look under the spec.template.spec.containers[].volumeMounts and spec.template.spec.volumes sections and update them as follows:
Add this under the container:
volumeMounts:
- name: mariadb-storage
  mountPath: /var/lib/mysql
And under the pod spec:
volumes:
- name: mariadb-storage
  persistentVolumeClaim:
    claimName: mariadb
# These lines mount the PVC at the MariaDB data directory.
Step 5: Apply the updated Deployment
kubectl apply -f ~/mariadb-deployment.yaml
Step 6: Verify the Deployment is running and stable
kubectl get pods -n mariadb
kubectl describe pod -n mariadb <mariadb-pod-name>
# Ensure the pod is in Running state and volume is mounted.
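You can also wait on the rollout explicitly. The Deployment name below is an assumption; use whatever name is defined in ~/mariadb-deployment.yaml (typically mariadb):
kubectl rollout status deployment mariadb -n mariadb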
Final Command Summary
ssh cka000047
kubectl get pv # Find the retained PV
# Create PVC
cat <<EOF > mariadb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb
  namespace: mariadb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
  volumeName: mariadb-pv
EOF
kubectl apply -f mariadb-pvc.yaml
# Edit Deployment
nano ~/mariadb-deployment.yaml
# Add volumeMount and volume to use the PVC as described
kubectl apply -f ~/mariadb-deployment.yaml
kubectl get pods -n mariadb
Scale the deployment webserver to 6 pods.
Correct Answer:
kubectl scale deployment webserver --replicas=6
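To verify the result (optional), check the replica count:
kubectl get deployment webserver
# READY should show 6/6.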
