xen-create-image command, which largely automates the task. The only mandatory parameter is --hostname, giving a name to the domU; other options are important, but they can be stored in the /etc/xen-tools/xen-tools.conf configuration file, and their absence from the command line doesn't trigger an error. It is therefore important to either check the contents of this file before creating images, or to use extra parameters in the xen-create-image invocation. Noteworthy parameters include the following:
--memory, to specify the amount of RAM dedicated to the newly created system;
--size and --swap, to define the size of the “virtual disks” available to the domU;
--debootstrap-cmd, to specify which debootstrap command is used. The default is debootstrap if debootstrap and cdebootstrap are installed. In that case, the --dist option will also most often be used (with a distribution name such as bullseye).
--dhcp states that the domU's network configuration should be obtained by DHCP while --ip allows defining a static IP address.
--dir option, is to create one file on the dom0 for each device the domU should be provided. For systems using LVM, the alternative is to use the --lvm option, followed by the name of a volume group; xen-create-image will then create a new logical volume inside that group, and this logical volume will be made available to the domU as a hard disk drive.
# xen-create-image --hostname testxen --dhcp --dir /srv/testxen --size=2G --dist=bullseye --role=udev

General Information
--------------------
Hostname       :  testxen
Distribution   :  bullseye
Mirror         :  http://deb.debian.org/debian
Partitions     :  swap            512M  (swap)
                  /               2G    (ext4)
Image type     :  sparse
Memory size    :  256M
Bootloader     :  pygrub

[...]
Logfile produced at:
         /var/log/xen-tools/testxen.log

Installation Summary
---------------------
Hostname        :  testxen
Distribution    :  bullseye
MAC Address     :  00:16:3E:C2:07:EE
IP Address(es)  :  dynamic
SSH Fingerprint :  SHA256:K+0QjpGzZOacLZ3jX4gBwp0mCESt5ceN5HCJZSKWS1A (DSA)
SSH Fingerprint :  SHA256:9PnovvGRuTw6dUcEVzzPKTITO0+3Ki1Gs7wu4ke+4co (ECDSA)
SSH Fingerprint :  SHA256:X5z84raKBajUkWBQA6MVuanV1OcV2YIeD0NoCLLo90k (ED25519)
SSH Fingerprint :  SHA256:VXu6l4tsrCoRsXOqAwvgt57sMRj2qArEbOzHeydvV34 (RSA)
Root Password   :  FS7CUxsY3xkusv7EkbT9yae
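As an alternative to the --dir layout used above, the LVM-backed storage could be requested along these lines; this is only a sketch, assuming a volume group named vg0 already exists on the dom0:

# xen-create-image --hostname testxen --dhcp --lvm=vg0 --size=2G --dist=bullseye --role=udev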
vif*, veth*, peth* and xenbr0. The Xen hypervisor arranges them in whichever layout has been defined, under the control of the user-space tools. Since the NAT and routing models are only adapted to particular cases, we will only address the bridging model.
xend daemon is configured to integrate virtual network interfaces into any pre-existing network bridge (with xenbr0 taking precedence if several such bridges exist). We must therefore set up a bridge in /etc/network/interfaces (which requires installing the bridge-utils package, which is why the xen-utils package recommends it) to replace the existing eth0 entry (be careful to use the correct network device name):
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
    bridge_maxwait 0
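Once the file has been edited, the bridge still needs to be brought up; the exact steps depend on the local setup, but a minimal sketch, assuming ifupdown manages the interfaces, is the following (restarting the networking service with systemctl restart networking is an alternative):

# ifup xenbr0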
xl command. This command allows different manipulations on the domains, including listing them and starting/stopping them. You might need to increase the default memory by editing the memory variable in the configuration file (in this case, /etc/xen/testxen.cfg). Here we have set it to 1024 (megabytes).
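In the file generated by xen-create-image, this is a simple assignment; an illustrative excerpt (only this line is shown):

# Excerpt from /etc/xen/testxen.cfg
memory = '1024'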
# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  3918     2     r-----      35.1
# xl create /etc/xen/testxen.cfg
Parsing config from /etc/xen/testxen.cfg
# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2757     2     r-----      45.2
testxen                                      3  1024     1     r-----       1.3
testxen domU uses real memory taken from the RAM that would otherwise be available to the dom0, not simulated memory. Care should therefore be taken, when building a server meant to host Xen instances, to provision the physical RAM accordingly.
hvc0 console, with the xl console command:
# xl console testxen
[...]
Debian GNU/Linux 11 testxen hvc0

testxen login: 
xl pause and xl unpause commands. Note that even though a paused domU does not use any processor power, its allocated memory is still in use. It may be interesting to consider the xl save and xl restore commands: saving a domU frees the resources that were previously used by this domU, including RAM. When restored (or unpaused, for that matter), a domU doesn't even notice anything beyond the passage of time. If a domU was running when the dom0 is shut down, the packaged scripts automatically save the domU, and restore it on the next boot. This will of course involve the standard inconvenience incurred when hibernating a laptop computer, for instance; in particular, if the domU is suspended for too long, network connections may expire. Note also that Xen is so far incompatible with a large part of ACPI power management, which precludes suspending the host (dom0) system.
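For instance, saving and later restoring the testxen domU could look like this; the location of the state file is arbitrary and only shown as an example:

# xl save testxen /var/lib/xen/testxen.state
# xl restore /var/lib/xen/testxen.state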
shutdown command) or from the dom0, with xl shutdown or xl reboot.
xl subcommands expect one or more arguments, often a domU name. These arguments are well described in the xl(1) manual page.
init process, and the resulting set looks very much like a virtual machine. The official name for such a setup is a “container” (hence the LXC moniker: LinuX Containers), but a rather important difference with “real” virtual machines such as provided by Xen or KVM is that there is no second kernel; the container uses the very same kernel as the host system. This has both pros and cons: advantages include excellent performance due to the total lack of overhead, and the fact that the kernel has a global vision of all the processes running on the system, so the scheduling can be more efficient than it would be if two independent kernels were to schedule different task sets. Chief among the inconveniences is the impossibility to run a different kernel in a container (whether a different Linux version or a different operating system altogether).
/sys/fs/cgroup. Since Debian 8 switched to systemd, which also relies on control groups, this is now done automatically at boot time without further configuration.
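On a given host this can be double-checked by asking what is mounted there; a minimal check, assuming the findmnt tool from util-linux:

$ findmnt /sys/fs/cgroup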
/etc/network/interfaces, moving the configuration for the physical interface (for instance, eth0 or enp1s0) to a bridge interface (usually br0), and configuring the link between them. For instance, if the network interface configuration file initially contains entries such as the following:
auto eth0
iface eth0 inet dhcp
auto br0
iface br0 inet dhcp
    bridge-ports eth0

This bridge then carries the traffic of the physical eth0 as well as the interfaces defined for the containers.
/etc/network/interfaces file then becomes:
# Interface eth0 is unchanged
auto eth0
iface eth0 inet dhcp

# Virtual interface
auto tap0
iface tap0 inet manual
    vde2-switch -t tap0

# Bridge for containers
auto br0
iface br0 inet static
    bridge-ports tap0
    address 10.0.0.1
    netmask 255.255.255.0
br0 interface.
# lxc-create -n testlxc -t debian
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-stable-amd64 ...
Downloading debian minimal ...
I: Retrieving Release
I: Retrieving Release.gpg
[...]
Download complete.
Copying rootfs to /var/lib/lxc/testlxc/rootfs...
[...]
# 
/var/cache/lxc, then moved to its destination directory. This allows creating identical containers much more quickly, since only copying is then required.
--arch option to specify the architecture of the system to be installed and a --release option if you want to install something other than the current stable release of Debian. You can also set the MIRROR environment variable to point to a local Debian mirror.
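For instance, a possible invocation combining these settings might look as follows; this is only a sketch, the release, architecture and mirror values being mere examples:

# MIRROR=http://deb.debian.org/debian lxc-create -n testlxc -t debian -- --release bullseye --arch amd64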
lxcbr0, which by default is used by all newly created containers via /etc/lxc/default.conf and the lxc-net service:
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxcbr0 bridge on the host. You will find these settings in the created container's configuration (/var/lib/lxc/testlxc/config), where the device's MAC address will also be specified in lxc.net.0.hwaddr. Should this last entry be missing or disabled, a random MAC address will be generated.
lxc.uts.name = testlxc
lxc-start --name=testlxc.
lxc-attach -n testlxc passwd if we want. We can log in with:
# lxc-console -n testlxc
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Debian GNU/Linux 11 testlxc tty1

testlxc login: root
Password: 
Linux testlxc 5.10.0-11-amd64 #1 SMP Debian 5.10.92-1 (2022-01-18) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Mar  9 01:45:21 UTC 2022 on console
root@testlxc:~# ps auxwf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.2  18964 11464 ?        Ss   01:36   0:00 /sbin/init
root        45  0.0  0.2  31940 10396 ?        Ss   01:37   0:00 /lib/systemd/systemd-journald
root        71  0.0  0.1  99800  5724 ?        Ssl  01:37   0:00 /sbin/dhclient -4 -v -i -pf /run/dhclient.eth0.pid [..]
root        97  0.0  0.1  13276  6980 ?        Ss   01:37   0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root       160  0.0  0.0   6276  3928 pts/0    Ss   01:46   0:00 /bin/login -p --
root       169  0.0  0.0   7100  3824 pts/0    S    01:51   0:00  \_ -bash
root       172  0.0  0.0   9672  3348 pts/0    R+   01:51   0:00      \_ ps auxwf
root       164  0.0  0.0   5416  2128 pts/1    Ss+  01:49   0:00 /sbin/agetty -o -p -- \u --noclear [...]
root@testlxc:~# 
/var/lib/lxc/testlxc/rootfs). We can exit the console with Control+a q.
lxc-start using the --daemon option by default. We can interrupt the container with a command such as lxc-stop --name=testlxc.
lxc-autostart which starts containers whose lxc.start.auto option is set to 1). Finer-grained control of the startup order is possible with lxc.start.order and lxc.group: by default, the initialization script first starts containers which are part of the onboot group and then the containers which are not part of any group. In both cases, the order within a group is defined by the lxc.start.order option.
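For instance, the relevant settings in a container's configuration file could look like the following sketch (the group name and order value are only examples):

lxc.start.auto = 1
lxc.group = onboot
lxc.start.order = 10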
qemu-* commands: it is still about KVM.
/proc/cpuinfo.
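A quick way to check for hardware virtualization support is to look for the vmx (Intel) or svm (AMD) CPU flags; a non-empty output from the following sketch means the processor advertises them:

$ grep -E 'vmx|svm' /proc/cpuinfo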
virt-manager is a graphical interface that uses libvirt to create and manage virtual machines.
apt-get install libvirt-clients libvirt-daemon-system qemu-kvm virtinst virt-manager virt-viewer. libvirt-daemon-system provides the libvirtd daemon, which allows (potentially remote) management of the virtual machines running on the host, and starts the required VMs when the host boots. libvirt-clients provides the virsh command-line tool, which allows controlling the libvirtd-managed machines.
virt-install, which allows creating virtual machines from the command line. Finally, virt-viewer allows accessing a VM's graphical console.
eth0 physical interface and a br0 bridge, and that the former is connected to the latter.
libvirtd where to store the disk images, unless the default location (/var/lib/libvirt/images/) is fine.
# mkdir /srv/kvm
# virsh pool-create-as srv-kvm dir --target /srv/kvm
Pool srv-kvm created

# 
virt-install's most important options. This command registers the virtual machine and its parameters in libvirtd, then starts it so that its installation can proceed.
# virt-install --connect qemu:///system \
               --virt-type kvm \
               --name testkvm \
               --memory 2048 \
               --disk /srv/kvm/testkvm.qcow,format=qcow2,size=10 \
               --cdrom /srv/isos/debian-11.2.0-amd64-netinst.iso \
               --network bridge=virbr0 \
               --graphics vnc \
               --os-type linux \
               --os-variant debiantesting

Starting install...
Allocating 'testkvm.qcow'
The --connect option specifies the “hypervisor” to use. Its form is that of an URL containing a virtualization system (xen://, qemu://, lxc://, openvz://, vbox://, and so on) and the machine that should host the VM (this can be left empty in the case of the local host). In addition to that, and in the QEMU/KVM case, each user can manage virtual machines working with restricted permissions, and the URL path allows differentiating “system” machines (/system) from others (/session).
Since KVM is managed the same way as QEMU, the --virt-type kvm option allows specifying the use of KVM even though the URL looks like QEMU.
The --name option defines a (unique) name for the virtual machine.
The --memory option allows specifying the amount of RAM (in MB) to allocate for the virtual machine.
The --disk option specifies the location of the image file that is to represent our virtual machine's hard disk; that file is created, unless it already exists, with a size (in GB) specified by the size parameter. The format parameter allows choosing among several ways of storing the image file. The default format (qcow2) allows starting with a small file that only grows when the virtual machine starts actually using space.
The --cdrom option is used to indicate where to find the optical disk to use for installation. The path can be either a local path for an ISO file, an URL where the file can be obtained, or the device file of a physical CD-ROM drive (i.e. /dev/cdrom).
The --network option specifies how the virtual network card integrates in the host's network configuration. The default behavior (which we explicitly forced in our example) is to integrate it into any pre-existing network bridge. If no such bridge exists, the virtual machine will only reach the physical network through NAT, so it gets an address in a private subnet range (192.168.122.0/24).
The default network configuration, which contains the definition for a virbr0 bridge interface, can be edited using virsh net-edit default and started via virsh net-start default if not already done automatically during system start.
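For instance, a minimal sketch to (re)start that default network and have it started automatically at boot:

# virsh net-start default
# virsh net-autostart default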
--graphics vnc states that the graphical console should be made available using VNC. The default behavior for the associated VNC server is to only listen on the local interface; if the VNC client is to be run on a different host, establishing the connection will require setting up an SSH tunnel (see Section 9.2.1.4, “Creating Encrypted Tunnels with Port Forwarding”). Alternatively, --graphics vnc,listen=0.0.0.0 can be used so that the VNC server is accessible from all interfaces; note that if you do that, you really should design your firewall accordingly.
The --os-type and --os-variant options allow optimizing a few parameters of the virtual machine, based on some of the known features of the operating system mentioned there.
The full list of OS types can be shown using the osinfo-query os command from the libosinfo-bin package.
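For instance, to narrow that list down to the Debian-related entries, a simple filter can be applied (a sketch, assuming libosinfo-bin is installed):

$ osinfo-query os | grep -i debian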
virt-viewer can be run from any graphical environment to open the graphical console (note that the root password of the remote host is asked twice because the operation requires 2 SSH connections):
$ virt-viewer --connect qemu+ssh://root@server/system testkvm
root@server's password: 
root@server's password: 
libvirtd for the list of the virtual machines it manages:
# virsh -c qemu:///system list --all
 Id   Name        State
----------------------------------
 8    testkvm     shut off
# virsh -c qemu:///system start testkvm
Domain testkvm started
vncviewer):
# virsh -c qemu:///system vncdisplay testkvm
127.0.0.1:0
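That display can then be passed to the VNC client; for instance, a sketch assuming vncviewer is run on the same host:

$ vncviewer 127.0.0.1:0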
virsh subcommands include:
reboot to restart a virtual machine;
shutdown to trigger a clean shutdown;
destroy, to stop it brutally;
suspend to pause it;
resume to unpause it;
autostart to enable (or disable, with the --disable option) starting the virtual machine automatically when the host starts;
undefine to remove all traces of the virtual machine from libvirtd.
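For instance, a couple of these subcommands combined with the connection URL used above (a sketch reusing the testkvm domain from our example):

# virsh -c qemu:///system autostart testkvm
# virsh -c qemu:///system shutdown testkvm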