305-300 Japanese PDF Question Set: Questions Recently Updated on May 04, 2024 [Q32-Q52]

Valid 305-300 Japanese Exam Questions: 305-300 Japanese Question Set PDF

Question # 32
Which of the following statements about the CPUs of QEMU virtual machines are true? (Choose two.)

  • A. Each QEMU virtual machine can have only one CPU with one core.
  • B. The CPU architecture of a QEMU virtual machine is independent of the host system's architecture.
  • C. QEMU virtual machines support multiple virtual CPUs in order to run SMP systems.
  • D. One dedicated physical CPU core must be reserved for each QEMU virtual machine.
  • E. QEMU uses the concept of virtual CPUs to map virtual machines to physical CPUs.

Correct answer: B, C

Explanation:
The CPU architecture of a QEMU virtual machine is independent of the host system's architecture. QEMU can emulate many CPU architectures, including x86, ARM, Alpha, and SPARC, regardless of the host system's architecture. This allows QEMU to run guest operating systems that are not compatible with the host system's hardware, so option B is correct. QEMU virtual machines also support multiple virtual CPUs in order to run SMP systems: QEMU uses virtual CPUs (vCPUs), each of which is a thread scheduled onto a physical CPU core, and the user can specify the number of vCPUs and the CPU model for each virtual machine. QEMU can run SMP systems with multiple vCPUs as well as single-processor systems with one vCPU, so option C is also correct. The other options do not correctly describe the CPUs of a QEMU virtual machine. Option A is wrong because a QEMU virtual machine can have more than one CPU with more than one core. Option D is wrong because QEMU does not require a dedicated physical CPU core for each virtual machine; physical cores are shared among multiple virtual machines, depending on the load and the scheduling policy. Option E is not among the correct answers because vCPUs are ordinary host threads scheduled by the kernel, not a fixed mapping of virtual machines to physical CPUs. References:
* QEMU vs VirtualBox: What's the difference? - LinuxConfig.org
* QEMU / KVM CPU model configuration - QEMU documentation
* Introduction - QEMU documentation
* Qemu/KVM Virtual Machines - Proxmox Virtual Environment
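
As an illustration of both correct statements (a sketch, not part of the original question; the disk image names are placeholders): the first command starts a guest with four vCPUs on an x86 host, the second emulates a foreign ARM CPU regardless of the host's architecture:

qemu-system-x86_64 -enable-kvm -smp 4 -m 2048 -hda disk.img
qemu-system-aarch64 -machine virt -cpu cortex-a57 -smp 2 -m 1024 -hda arm-disk.img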


Question # 33
The command virsh vol-list vms returns the following error:
error: failed to get pool 'vms'
error: Storage pool not found: no storage pool with matching name 'vms'
Given that the directory /vms exists, which of the following commands resolves this issue?

  • A. libvirt-poolctl new --name=/vms --type=dir --path=/vms
  • B. touch /vms/.libvirtpool
  • C. qemu-img pool vms:/vms
  • D. virsh pool-create-as vms dir --target /vms
  • E. dd if=/dev/zero of=/vms bs=1 count=0 flags=name:vms

Correct answer: D

Explanation:
The command virsh pool-create-as vms dir --target /vms creates and starts a transient storage pool named vms of type dir with the target directory /vms. This command resolves the storage pool not found error, as it makes the existing directory /vms visible to libvirt as a storage pool. The other commands are invalid because:
* dd if=/dev/zero of=/vms bs=1 count=0 flags=name:vms is not a valid command syntax. The dd command does not take a flags argument, and its output file /vms would be a regular file, not a directory.
* libvirt-poolctl new --name=/vms --type=dir --path=/vms is not a valid command name. There is no such command as libvirt-poolctl in the libvirt package.
* qemu-img pool vms:/vms is not a valid command syntax. The qemu-img command does not have a pool subcommand, and the vms:/vms argument is not a valid image specification.
* touch /vms/.libvirtpool is not a valid command to create a storage pool. The touch command only creates an empty file, and the .libvirtpool file is not recognized by libvirt as a storage pool configuration file.
References:
* virsh - difference between pool-define-as and pool-create-as - Stack Overflow
* 12.3.3. Creating a Directory-based Storage Pool with virsh - Red Hat Customer Portal
* dd(1) - Linux manual page - man7.org
* libvirt - Linux Man Pages
* qemu-img(1) - Linux manual page - man7.org
* touch(1) - Linux manual page - man7.org
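
A minimal sketch of the fix, together with a persistent alternative (pool-define-as survives reboots; both assume the directory /vms already exists):

virsh pool-create-as vms dir --target /vms   # transient pool, active immediately
virsh pool-define-as vms dir --target /vms   # persistent definition instead
virsh pool-start vms
virsh pool-autostart vms
virsh vol-list vms                           # now succeeds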


質問 # 34
commandvagrant init の目的は何ですか?

  • A. 実行ボックスでプロビジョニング ツールを実行します。
  • B. Vagrant ボックスをダウンロードします。
  • C. Vagrant ボックスを開始します。
  • D. Vagrant 設定ファイルを作成します。
  • E. Vagrant を Linux ホストにインストールします。

正解:D

解説:
The command vagrant init is used to initialize the current directory as a Vagrant environment by creating an initial Vagrantfile if one does not already exist. The Vagrantfile contains the configuration settings for the Vagrant box, such as the box name, box URL, network settings, synced folders, provisioners, etc. The command vagrant init does not execute any provisioning tool, start any box, or download any box; those actions are performed by other Vagrant commands, such as vagrant provision, vagrant up, and vagrant box add, while Vagrant itself is installed through the operating system's package manager. References:
* vagrant init - Command-Line Interface | Vagrant | HashiCorp Developer
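
A typical first-use workflow (the box name debian/bookworm64 is an example from Vagrant Cloud):

vagrant init debian/bookworm64   # writes a Vagrantfile into the current directory
vagrant up                       # downloads the box if necessary and starts it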


質問 # 35
Intel Extended Page Table (EPT) や AMD Rapid Virtualization Indexing (RVI) など、ネストされたページ テーブル拡張機能をサポートする CPU によって仮想化が容易になるハードウェア コンポーネントはどれですか?

  • A. メモリ
  • B. ホスト バス アダプタ
  • C. ハードディスク
  • D. IO キャッシュ
  • E. ネットワーク インターフェイス

正解:A

解説:
Nested page table extensions, such as Intel Extended Page Table (EPT) or AMD Rapid Virtualization Indexing (RVI), are hardware features that facilitate the virtualization of memory. They allow the CPU to perform the translation of guest virtual addresses to host physical addresses in a single step, without the need for software-managed shadow page tables. This reduces the overhead and complexity of memory management for virtual machines, and improves their performance and isolation. Nested page table extensions do not directly affect the virtualization of other hardware components, such as network interfaces, host bus adapters, hard disks, or IO cache.
References:
* Second Level Address Translation - Wikipedia
* c - What is use of extended page table? - Stack Overflow
* Hypervisor From Scratch - Part 4: Address Translation Using Extended ...
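
To check whether a Linux/KVM host actually uses these extensions, one can query the CPU flags and the KVM module parameters (a sketch; the paths assume the kvm_intel or kvm_amd module is loaded):

grep -m1 -o -E 'ept|npt' /proc/cpuinfo     # ept on Intel CPUs, npt on AMD CPUs
cat /sys/module/kvm_intel/parameters/ept   # Y when EPT is in use (Intel hosts)
cat /sys/module/kvm_amd/parameters/npt     # 1 or Y when NPT/RVI is in use (AMD hosts)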


Question # 36
Fill in the blank:
Which LXC command lists containers sorted by their CPU, block I/O, or memory consumption? (Specify only the command without any path or parameters.)

Correct answer:

Explanation:
lxc-top
lxc-top displays a continuously updated overview of the containers running on the host together with their resource usage, and its sort order can be switched between CPU, block I/O, and memory consumption. References:
* lxc-top(1) - LXC manual page
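
A usage sketch (the -s option and its column letters are taken from the lxc-top documentation: c = CPU, b = block I/O, m = memory):

lxc-top -s m   # refresh continuously, sorted by memory use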


Question # 37
What is the purpose of a .dockerignore file?

  • A. It exists in the root file system of containers whose volumes and ports provided by Docker should be ignored.
  • B. It specifies which parts of a Dockerfile are ignored when building a Docker image.
  • C. It lists files existing within a Docker image which should be excluded when building a derived image.
  • D. It has to be placed in the top-level directory of volumes that Docker should not automatically attach to containers.
  • E. It specifies files which Docker does not submit to the Docker daemon when building a Docker image.

Correct answer: E

Explanation:
The purpose of a .dockerignore file is to specify files that Docker does not submit to the Docker daemon when building a Docker image. A .dockerignore file is a text file that contains a list of files or directories that should be excluded from the build context, which is the set of files and folders that are available for use in a Dockerfile. By using a .dockerignore file, you can avoid sending files or directories that are large, contain sensitive information, or are irrelevant to the Docker image to the daemon, which can improve the efficiency and security of the build process. The other options are incorrect because they do not describe the function of a .dockerignore file. Option C is wrong because a .dockerignore file does not affect the files existing in a Docker image, but only the files sent to the daemon during the build. Option A is wrong because a .dockerignore file does not exist in the root file system of containers, but in the same directory as the Dockerfile. Option D is wrong because a .dockerignore file does not affect the volumes that Docker attaches to a container, but only the files included in the build context. Option B is wrong because a .dockerignore file does not affect which parts of a Dockerfile are executed, but only the files available for use in a Dockerfile. References:
* What are .dockerignore files, and why you should use them?
* Dockerfile reference | Docker Docs
* How to use .dockerignore and its importance - Shisho Cloud
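
An illustrative .dockerignore, placed next to the Dockerfile (the patterns are examples, one per line, matched against the build context):

.git
node_modules
*.log
secrets.env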


質問 # 38
オプションdom0_memを使用してXen Domain-0に割り当てられるメモリ量を制限するには、このオプションをどこで指定する必要がありますか?

  • A. Domain-0 カーネルの構築時の .config ファイル内。
  • B. Xen のビルド時の Makefile 内。
  • C. Xen のグローバル構成ファイルのいずれか。
  • D. Xen の起動時の構成ファイル /etc/xen/Domain-0.cfg 内。
  • E. Xen の起動時のブートローダー構成内。

正解:E
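
For illustration, on distributions that boot Xen through GRUB 2 the option is typically added to the hypervisor command line in /etc/default/grub (a sketch; GRUB_CMDLINE_XEN_DEFAULT is the variable used by Debian-style Xen packages), after which update-grub regenerates the boot loader configuration:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4096M,max:4096M"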


Question # 39
What happens when the following command is executed twice in succession?
docker run -tid -v data:/data debian bash

  • A. The container spawned by the second invocation can only read the contents of /data/, not change them.
  • B. Both containers share the contents of the data volume, have full permission to change them, and see each other's changes.
  • C. Each container is equipped with its own independent data volume, available at /data/ in the respective container.
  • D. The original contents of the container image's data are available in both containers, but changes remain local to each container.
  • E. The second command invocation fails with an error stating that the volume data is already associated with a running container.

Correct answer: B

Explanation:
The command docker run -tid -v data:/data debian bash creates and runs a new container from the debian image, with an interactive terminal and in detached mode, and mounts a named volume data at /data in the container. If the volume data does not exist, it is created automatically. If the command is executed twice in succession, two containers are created and run, each with its own terminal and process ID, but they share the same volume data. This means that both containers can access, modify, and see the contents of the data volume, and any changes made by one container are reflected in the other container. Therefore, statement B is true and the correct answer. Statements A, C, D, and E are false, as they do not describe the behavior of the command or the volume correctly. References:
* docker run | Docker Docs
* Docker run reference | Docker Docs - Docker Documentation
* Use volumes | Docker Documentation
* How to Use Docker Run Command with Examples - phoenixNAP
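
A quick demonstration of the shared volume (a sketch; the container IDs printed by docker run are captured for the exec calls):

c1=$(docker run -tid -v data:/data debian bash)
c2=$(docker run -tid -v data:/data debian bash)
docker exec "$c1" sh -c 'echo hello > /data/f'
docker exec "$c2" cat /data/f   # prints: hello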


質問 # 40
仮想マシン ストレージのコンテキストにおけるスパース イメージに関して正しいのは次のどれですか?
(2つお選びください。)

  • A. スパースイメージは、公称サイズとは異なる量のスペースを消費する可能性があります。
  • B. スパース イメージは、ブロックの最初の使用時にバックエンド ストレージを割り当てます。
  • C. スパース イメージは、イメージ内のファイルが削除されると自動的に縮小されます。
  • D. スパース イメージは準仮想化と組み合わせてのみ使用できます。
  • E. スパース イメージは、最大容量を超えそうになると自動的にサイズ変更されます。

正解:A、B


Question # 41
A new Docker network was created using the following command:
docker network create --driver bridge isolated_nw
Which parameter must be added to docker create in order to attach a container to this network?

  • A. --attach=isolated_nw
  • B. --alias=isolated_nw
  • C. --ethernet=isolated_nw
  • D. --eth0=isolated_nw
  • E. --network=isolated_nw

Correct answer: E

Explanation:
To attach a container to a network when creating it, the --network flag must be used with the name of the network as the argument. The --network flag specifies the network mode for the container. By default, the network mode is bridge, which means the container is connected to the default bridge network. However, if a custom network is created, such as isolated_nw in this case, the container must be explicitly attached to it using the --network flag. For example, to create a container named web1 and attach it to the isolated_nw network, the command would be:
docker create --name web1 --network isolated_nw nginx
The other options are not valid parameters for docker create. The --eth0, --ethernet, and --attach flags do not exist. The --alias flag is used to specify an additional network alias for the container on a user-defined network, but it does not attach the container to the network. References:
* docker network create | Docker Documentation
* docker create | Docker Documentation
* Networking overview | Docker Docs


Question # 42
Which command within virsh lists the virtual machines running on the current host?

  • A. view
  • B. show
  • C. list-all
  • D. list
  • E. list-vm

Correct answer: D

Explanation:
The command virsh list is used to list all running domains (VMs) on the current host. The command virsh list --all lists both active and inactive domains, which is useful if you want to see every VM configured on the target hypervisor for use in subsequent commands. The other options are not valid virsh commands. References:
* 8 Linux virsh subcommands for managing VMs on the command line | Enable Sysadmin
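
Example invocations (a sketch; the output columns are Id, Name, and State):

virsh list          # running domains only
virsh list --all    # running and shut-off domains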


質問 # 43
コンテナに関連付けられていないすべてのボリュームを削除するコマンドは次のどれですか?

  • A. Docker ボリュームのバキューム
  • B. docker volume orphan -d
  • C. Docker ボリューム プルーン
  • D. Docker ボリュームのガベージコレクト
  • E. Docker ボリュームのクリーンアップ

正解:C

解説:
The command that deletes all volumes which are not associated with a container is docker volume prune. This command removes all unused local volumes, which are those that are not referenced by any containers. By default, it only removes anonymous volumes, which are those that are not given a specific name when they are created. To remove both unused anonymous and named volumes, the --all or -a flag can be added to the command. The command will prompt for confirmation before deleting the volumes, unless the --force or -f flag is used to bypass the prompt. The command will also show the total reclaimed space after deleting the volumes.
The other commands listed in the question are not valid or do not have the same functionality as docker volume prune. They are either made up, misspelled, or have a different purpose. These commands are:
* docker volume cleanup: This command does not exist in Docker. There is no cleanup subcommand for docker volume.
* docker volume orphan -d: This command does not exist in Docker. There is no orphan subcommand for docker volume, and the -d flag is not a valid option for any docker volume command.
* docker volume vacuum: This command does not exist in Docker. There is no vacuum subcommand for docker volume.
* docker volume garbage-collect: This command does not exist in Docker. There is no garbage-collect subcommand for docker volume.
References:
* docker volume prune | Docker Docs
* How to Remove all Docker Volumes - YallaLabs.
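
Typical usage (a sketch; -f skips the confirmation prompt, and on newer Docker releases -a extends pruning to unused named volumes as well):

docker volume prune
docker volume prune -af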


質問 # 44
libvirt ドメイン定義の <domain> 要素の type 属性で有効な値は次のうちどれですか? (2つお選びください。)

  • A. プロシージャ
  • B. ネームスペース
  • C. cgroup
  • D. kvm
  • E. Ixc

正解:D、E

解説:
The type attribute of a <domain> element in a libvirt domain definition specifies the hypervisor used for running the domain. The allowed values are driver specific, but include "xen", "kvm", "hvf" (since 8.1.0 and QEMU 2.12), "qemu" and "lxc". Therefore, the valid values among the options are D. kvm and E. lxc. KVM stands for Kernel-based Virtual Machine, which is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). LXC stands for Linux Containers, which is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host. The other options are not valid values for the type attribute, as they are either not hypervisors or not supported by libvirt. References:
http://libvirt.org/formatdomain.html
https://libvirt.org/formatcaps.html
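
A minimal illustration of the attribute in a domain definition (the domain name and sizes are examples; the remaining elements are omitted):

<domain type='kvm'>
  <name>example-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>2</vcpu>
  <!-- os, devices, and other elements omitted -->
</domain>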


質問 # 45
LXC と Docker がコンテナーを作成するために使用するメカニズムは次のどれですか? (3つお選びください。)

  • A. ファイル システムのアクセス許可
  • B. POSIXACL
  • C. カーネル名前空間
  • D. コントロールグループ
  • E. Linux の機能

正解:C、D、E

解説:
LXC and Docker are both container technologies that use Linux kernel features to create isolated environments for running applications. The main mechanisms that they use are:
* Linux Capabilities: These are a set of privileges that can be assigned to processes to limit their access to certain system resources or operations. For example, a process with the CAP_NET_ADMIN capability can perform network administration tasks, such as creating or deleting network interfaces. Linux capabilities allow containers to run with reduced privileges, enhancing their security and isolation.
* Kernel Namespaces: These are a way of creating separate views of the system resources for different processes. For example, a process in a mount namespace can have a different file system layout than the host or other namespaces. Kernel namespaces allow containers to have their own network interfaces, process IDs, user IDs, and other resources, without interfering with the host or other containers.
* Control Groups: These are a way of grouping processes and applying resource limits and accounting to them. For example, a control group can limit the amount of CPU, memory, disk I/O, or network bandwidth that a process or a group of processes can use. Control groups allow containers to have a fair share of the system resources and prevent them from exhausting the host resources.
POSIX ACLs and file system permissions are not mechanisms used by LXC and Docker to create containers. They are methods of controlling the access to files and directories on a file system, which can be applied to any process, not just containers.
References:
* LXC vs Docker: Which Container Platform Is Right for You?
* LXC vs Docker: Why Docker is Better in 2023 | UpGuard
* What is the Difference Between LXC, LXD and Docker Containers
* lxc - Which container implementation docker is using - Unix & Linux Stack Exchange
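
One way to observe these mechanisms on a running Docker container (the container name web1 is an example):

pid=$(docker inspect --format '{{.State.Pid}}' web1)
ls /proc/$pid/ns        # the container's kernel namespaces (mnt, net, pid, ...)
cat /proc/$pid/cgroup   # the control groups the container belongs to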


質問 # 46
仮想マシン ストレージのコンテキストにおけるスパース イメージに関して正しいのは次のどれですか?
(2つお選びください。)

  • A. スパース イメージは、ブロックの最初の使用時にバックエンド ストレージを割り当てます。
  • B. スパース イメージは、イメージ内のファイルが削除されると自動的に縮小されます。
  • C. スパース イメージは準仮想化と組み合わせてのみ使用できます。
  • D. スパース イメージは、最大容量を超えそうになると自動的にサイズ変更されます。
  • E. まばらなイメージは、公称サイズとは異なる量のスペースを消費する可能性があります。

正解:A、E

解説:
Sparse images are a type of virtual disk image that grows in size as data is written to it, but does not shrink when data is deleted from it. Sparse images may consume an amount of space different from their nominal size, which is the maximum size that the image can grow to. For example, a sparse image with a nominal size of 100 GB may only take up 20 GB of physical storage if only 20 GB of data is written to it. Sparse images allocate backend storage at the first usage of a block, which means that physical storage is only used when the virtual machine actually writes data to a block. This can save storage space and improve performance, as the image does not need to be pre-allocated or zeroed out.
Sparse images are not automatically shrunk when files within the image are deleted, because the virtual machine does not inform the host system about the freed blocks. To reclaim the unused space, a special tool such as virt-sparsify or qemu-img must be used to compact the image. Sparse images can be used with both full virtualization and paravirtualization, as the type of virtualization does not affect the format of the disk image. Sparse images are not automatically resized when their maximum capacity is about to be exceeded, because this would require changing the partition table and the filesystem of the image, which is not a trivial task. To resize a sparse image, a tool such as virt-resize or qemu-img must be used to increase the nominal size and the filesystem size of the image. References: virt-sparsify(1), qemu-img(1), virt-resize(1).


質問 # 47
LXC ではどのような仮想化が実装されていますか?

  • A. システムコンテナ
  • B. ハードウェアコンテナ
  • C. 準仮想化
  • D. アプリケーションコンテナ
  • E. CPU エミュレーション

正解:A

解説:
LXC implements system containers, which are a type of operating-system-level virtualization. System containers allow running multiple isolated Linux systems on a single Linux control host, using a single Linux kernel. System containers share the same kernel with the host and each other, but have their own file system, libraries, and processes. System containers are different from application containers, which are designed to run a single application or service in an isolated environment. Application containers are usually smaller and more portable than system containers, but also more dependent on the host kernel and libraries. Hardware containers, CPU emulation, and paravirtualization are not related to LXC, as they are different kinds of virtualization methods that involve hardware abstraction, instruction translation, or modification of the guest operating system. References:
* 1: LXC - Wikipedia
* 2: Linux Virtualization : Linux Containers (lxc) - GeeksforGeeks
* 3: Features - Proxmox Virtual Environment
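
For illustration, creating and entering a minimal system container with the download template (the distribution, release, and architecture values are examples):

lxc-create -n c1 -t download -- -d debian -r bookworm -a amd64
lxc-start -n c1
lxc-attach -n c1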


質問 # 48
Packer Inspection サブコマンドの目的は何ですか?

  • A. Packer イメージの実行中のインスタンス内でコマンドを実行します。
  • B. Packer イメージのビルド プロセス中に作成されたアーティファクトをリストします。
  • C. 既存の Packer イメージからファイルを取得します。
  • D. Packer テンプレートに含まれる構成の概要を表示します。
  • E. Packer イメージの使用統計を表示します。

正解:D

解説:
* The purpose of the packer inspect subcommand is to display an overview of the configuration contained in a Packer template. A Packer template is a file that defines the various components a Packer build requires, such as variables, sources, provisioners, and post-processors. The packer inspect subcommand can help you quickly learn about a template without having to dive into the HCL (HashiCorp Configuration Language) itself. The subcommand will tell you things like what variables a template accepts, the sources it defines, the provisioners it defines and the order they'll run, and more.
* The other options are not correct because:
* C) Retrieving files from an existing Packer image is not the purpose of packer inspect, and Packer offers no subcommand for it; Packer builds images, it does not extract files from them.
* A) Executing commands within a running instance is not the purpose of packer inspect; commands are executed inside the temporary build instance by provisioners during packer build.
* B) Listing the artifacts created during the build process is not the purpose of packer inspect; packer build itself reports the artifacts it produces when a build finishes.
* E) Showing usage statistics of a Packer image is not the purpose of packer inspect, and Packer does not track such statistics; packer inspect only summarizes the template's configuration.
References: packer inspect - Commands | Packer | HashiCorp Developer;
Commands | Packer | HashiCorp Developer
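
Example invocation (the template file name is a placeholder):

packer inspect template.pkr.hcl

The output summarizes the template's variables, sources, provisioners, and post-processors.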


質問 # 49
KVM ドメインの libvirt によって制限できるリソースは次のうちどれですか? (2つお選びください。)

  • A. 使用可能なファイルの数
  • B. ドメイン内で許可されるファイル システム
  • C. 実行中のプロセスの数
  • D. CPU 石灰の量
  • E. 使用可能なメモリのサイズ

正解:D、E

解説:
Libvirt is a toolkit that provides a common API for managing different virtualization technologies, such as KVM, Xen, LXC, and others. Libvirt allows users to configure and control various aspects of a virtual machine (also called a domain), such as its CPU, memory, disk, network, and other resources. Among the resources that can be limited by libvirt for a KVM domain are:
* Amount of CPU time: Libvirt allows users to specify the number of virtual CPUs (vCPUs) that a domain can use, as well as the CPU mode, model, topology, and tuning parameters. Users can also set the CPU shares, quota, and period to control the relative or absolute amount of CPU time that a domain can consume. Additionally, users can pin vCPUs to physical CPUs or NUMA nodes to improve performance and isolation. These settings can be configured in the domain XML file under the <cpu> and <cputune> elements.
* Size of available memory: Libvirt allows users to specify the amount of memory that a domain can use, as well as the memory backing, tuning, and NUMA node parameters. Users can also set the memory hard and soft limits, swap hard limit, and minimum guarantee to control the memory allocation and reclaim policies for a domain. These settings can be configured in the domain XML file under the <memory>, <memoryBacking>, and <memtune> elements.
The other resources listed in the question are not directly limited by libvirt for a KVM domain. File systems allowed in the domain are determined by the disk and filesystem devices that are attached to the domain, which can be configured in the domain XML file under the <disk> and <filesystem> elements. The number of running processes and the number of available files are determined by the operating system and the file system of the domain, which are not controlled by libvirt.
References:
* libvirt: Domain XML format
* CPU Allocation
* Memory Allocation
* Hard drives, floppy disks, CDROMs
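
An illustrative fragment of a domain XML combining both kinds of limits (all values are examples; the element names follow the libvirt domain XML format):

<vcpu placement='static'>2</vcpu>
<cputune>
  <shares>1024</shares>
  <quota>50000</quota>
  <period>100000</period>
</cputune>
<memory unit='MiB'>2048</memory>
<memtune>
  <hard_limit unit='MiB'>2304</hard_limit>
</memtune>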


質問 # 50
ユーザーデータから直接cloud-init処理できるデータの種類は次のうちどれですか? (3つお選びください。)

  • A. 実行する Base64 エンコードされたバイナリ ファイル
  • B. 起動元の ISO イメージ
  • C. YAML でのクラウド構成宣言
  • D. 実行するシェル スクリプト
  • E. インポートする URL のリスト

正解:C、D、E

解説:
Cloud-init is a tool that allows users to customize the configuration and behavior of cloud instances during the boot process. Cloud-init can process different kinds of data that are passed to the instance via user-data, which is a mechanism provided by various cloud providers to inject data into the instance. Among the kinds of data that cloud-init can process directly from user-data are:
* Shell scripts to execute: Cloud-init can execute user-data that is formatted as a shell script, starting with the #!/bin/sh or #!/bin/bash shebang. The script can contain any commands that are valid in the shell environment of the instance. The script is executed as the root user during the boot process.
* Lists of URLs to import: Cloud-init can import user-data that is formatted as a list of URLs, separated by newlines. The URLs can point to any valid data source that cloud-init supports, such as shell scripts, cloud-config files, or include files. The URLs are fetched and processed by cloud-init in the order they appear in the list.
* cloud-config declarations in YAML: Cloud-init can process user-data that is formatted as a cloud-config file, which is a YAML document that contains declarations for various cloud-init modules. The cloud-config file can specify various aspects of the instance configuration, such as hostname, users, packages, commands, services, and more. The cloud-config file must start with the #cloud-config header.
The other kinds of data listed in the question are not directly processed by cloud-init from user-data. They are either not supported, not recommended, or require additional steps to be processed. These kinds of data are:
* ISO images to boot from: Cloud-init does not support booting from ISO images that are passed as user-data. ISO images are typically used to install an operating system on a physical or virtual machine, not to customize an existing cloud instance. To boot from an ISO image, the user would need to attach it as a secondary disk to the instance and configure the boot order accordingly.
* Base64-encoded binary files to execute: Cloud-init does not recommend passing binary files as user-data, as they may not be compatible with the instance's architecture or operating system. Base64-encoding does not change this fact, as it only converts the binary data into ASCII characters. To execute a binary file, the user would need to decode it and make it executable on the instance.
References:
* User-Data Formats - cloud-init 22.1 documentation
* User-Data Scripts
* Include File
* Cloud Config
* How to Boot From ISO Image File Directly in Windows
* How to run a binary file as a command in the terminal?.
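
A minimal cloud-config user-data example (the #cloud-config header line is mandatory; the hostname, package, and command values are illustrative):

#cloud-config
hostname: demo-vm
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx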


質問 # 51
Cloud-init の目的は何ですか?

  • A. クラウド内のロード バランサーや仮想ファイアウォールなどのインフラストラクチャ サービスの構成を標準化します。
  • B. 関連する複数の LaaS インスタンスの作成と開始を調整します。
  • C. 特定のインスタンスの構成に合わせて、laaS インスタンスの汎用イメージを準備します。
  • D. LaaS インスタンスをクラウド内の特定のコンピューティング ノードに割り当てます。
  • E. systemd や SysV init などの一般的な Linux inic システムを置き換えます。

正解:C

解説:
Cloud-init is a tool that processes configurations and runs through five stages during the initial boot of Linux VMs in a cloud. It allows users to customize a Linux VM as it boots for the first time, by applying user data to the instance. User data can include scripts, commands, packages, files, users, groups, SSH keys, and more.
Cloud-init can also interact with various cloud platforms and services, such as Azure, AWS, OpenStack, and others. The purpose of cloud-init is to prepare the generic image of an IaaS instance to fit a specific instance's configuration, such as hostname, network, security, and application settings. References:
* Cloud-init - The standard for customising cloud instances
* Understanding cloud-init - Azure Virtual Machines
* Tutorial - Customize a Linux VM with cloud-init in Azure - Azure Virtual Machines


Question # 52
......

The 305-300 Japanese question set contains 62 practice questions to make your pass certain: https://www.jpntest.com/shiken/305-300J-mondaishu

Contact Us

We answer all inquiries within 12 hours.

Online support hours: (UTC+9) 9:00-24:00
Monday through Saturday

Support: Contact Now