
We use KVM to create virtual machines on our servers; Red Hat, too, has finally switched from Xen to KVM as its virtualization system.

When choosing a tariff, the customer also chooses a virtualization method for the server. We offer a choice between operating-system-level virtualization (OpenVZ) and hardware virtualization (KVM).

It is not possible to change the virtualization type after the server is launched, because the servers run on different hardware platforms. You would have to order a new server, transfer the project, and cancel the old server.

Comparison of virtualization types

OpenVZ vs KVM:

  • Operating system: OpenVZ - an OS from the offered list (Debian, CentOS, Ubuntu); KVM - Linux, Windows, FreeBSD, or installation of your own distribution.
  • Resources: OpenVZ - hard disk, memory and processor can be changed without rebooting; KVM - memory and processor change after a reboot, the hard disk only after contacting support (on ready-made tariffs the memory cannot be changed).
  • Tariff plan: OpenVZ - can be changed without a reboot; KVM - a tariff change makes the server unavailable for 1-2 hours.
  • Limits: OpenVZ - soft limits, the actual server performance can deviate up or down; KVM - hard limits, each server receives exactly the declared resources.
  • Workloads: OpenVZ - high-load projects are restricted: Java applications, bulk mailings and proxied traffic are forbidden, TUN/TAP is disabled; KVM - any projects can be run (except distributed computing systems).
  • Console access: OpenVZ - a VNC connection to a graphical interface is not possible; KVM - if the server is unavailable via SSH for some reason, or you need the graphical interface, you can access the server via VNC.

You can open the ISPmanager control panel:

  • from your personal account: section Products - Virtual servers - select a server and click the "Go" button at the top;
  • via the link in the Instructions: Personal Account - Products - Virtual Servers - select the server and open "Instructions" at the top.

OpenVZ virtualization

OpenVZ is operating-system-level virtualization. The technology is based on the Linux kernel and allows you to create and run, on one physical server, isolated copies of a selected operating system (Debian, CentOS, Ubuntu). Installing a different OS is not possible, because all virtual servers share a common Linux kernel.
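Because the kernel is shared, this is easy to observe from inside an OpenVZ container; a small illustration (run as root, output will differ on your host):

uname -r
cat /proc/user_beancounters

* the first command prints the host's kernel version (there is no separate kernel per container); the second shows the container's resource limits and current usage, a file that exists only under OpenVZ.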

The technology stands out for its ease of server management: in the personal account the user can independently* increase resources (memory, processor, hard disk) or switch to another tariff with the same virtualization. Changes are applied automatically, without rebooting the server.

On servers with OpenVZ virtualization it is prohibited to run:

  • services for organizing proxying of any type of traffic
  • streaming services
  • game servers
  • systems or elements of distributed computing systems (for example, bitcoin mining)
  • bulk e-mail services, even if they are used for legitimate purposes
  • Java applications
  • other resource-intensive applications

Such projects create uneven load on the parent server and can interfere with neighboring virtual machines.

* for previous tariff generations (VDS-2015, VDS-Summer, VDS-2016), changing the tariff from the personal account is no longer available. Independent changes of the tariff plan are possible only on the current OpenVZ virtualization tariffs. If quick management of server resources is important to you, switch to an up-to-date tariff plan. If the cost of the new tariff is higher than the cost of the current one, the tariff change is free; in other cases it is performed within the established limits. The tariff is changed without restarting the server.

KVM virtualization

KVM (Kernel-based Virtual Machine) is a hardware virtualization technology that makes it possible to create a full virtual analogue of a physical server on the host machine. KVM creates a virtual server that is completely isolated from its "neighbors", with its own OS kernel, which the user can configure and modify to suit their own needs without restrictions. Each such server gets its own area of RAM and its own disk space, which increases the overall reliability of the server.

You can install any operating system from the offered list (Debian, CentOS, Ubuntu, FreeBSD, Windows Server) or your own distribution (in the VMmanager panel, in the ISO images section, click the Create button and add your ISO image).

A tariff plan change is possible only upward and only within the basic tariff line (Start, Acceleration, Takeoff, Flyoff). If your project outgrows its tariff, send a support request from your Personal Account and the administrators will change the tariff to the required one free of charge. Moving to a lower tariff is only possible by moving to a new server: order a new server and transfer the data yourself, or technical support specialists will help with the transfer for 1 call under the administration package or for 250 rubles.

Remember that on the VDS-Fast and Furious and VDS-Atlant tariffs you can change resources instead of changing the tariff: the number of available processor cores and the amount of RAM can be changed independently in the control panel, and the size of the hard disk after contacting support (within the administration package or for 250 rubles).

Given the features and benefits that KVM virtualization provides, its tariffs are more expensive than comparable tariffs with OpenVZ virtualization.

On servers with KVM virtualization, the Subscriber is prohibited from placing systems or elements of distributed computing systems (for example, bitcoin mining).

Changing virtualization on the server

It is impossible to change virtualization from OpenVZ to KVM and vice versa within one server.

1. Order a second server with the required virtualization in the BILLmanager panel, section Virtual Servers → Order

2. Transfer data to it.

3. After migration and verification, you can delete the old server (Virtual Servers → Delete).


Checking Hypervisor Support

We check that the server supports virtualization technologies:

cat /proc/cpuinfo | egrep "(vmx|svm)"

In response, you should receive something like:

flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon ... vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb tpr_shadow vnmi flexpriority ept vpid dtherm ida arat

Otherwise, go into the BIOS, find the option that enables virtualization technology (it has different names, for example Intel Virtualization Technology or simply Virtualization) and enable it by setting it to Enabled.

You can also check compatibility with the command:

kvm-ok

* if the command returns the error "kvm-ok: command not found", install the appropriate package: apt-get install cpu-checker.

If we see:

INFO: /dev/kvm exists
KVM acceleration can be used

then virtualization is supported by the hardware.

Server preparation

For our convenience, we will create a directory in which we will store data for KVM:

mkdir -p /kvm/{vhdd,iso}

* two directories will be created: /kvm/vhdd (for virtual hard disks) and /kvm/iso (for ISO images).

Let's set the time:

\cp /usr/share/zoneinfo/Europe/Moscow /etc/localtime

* this command sets the time zone to Moscow time.

ntpdate ru.pool.ntp.org

* we perform synchronization with the time server.
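On systemd-based releases you can achieve the same result with timedatectl; an optional alternative (not needed if the commands above were used):

timedatectl set-timezone Europe/Moscow
timedatectl set-ntp true

* the first command sets the time zone, the second enables continuous time synchronization via systemd-timesyncd instead of a one-off ntpdate run.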

Installation and launch

Install KVM and the necessary management utilities.

a) Ubuntu up to version 18.10

apt-get install qemu-kvm libvirt-bin virtinst libosinfo-bin

b) Ubuntu after 18.10:

apt-get install qemu-kvm libvirt-daemon-system libvirt-clients virtinst libosinfo-bin

* where qemu-kvm is the hypervisor; libvirt-bin (libvirt-daemon-system and libvirt-clients on newer releases) is the hypervisor management library; virtinst is a utility for managing virtual machines; libosinfo-bin is a utility for viewing the list of operating systems that can be used as guests.

Let's configure the automatic start of the service:

systemctl enable libvirtd

Let's start libvirtd:

systemctl start libvirtd
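To make sure the hypervisor is up, you can check the service status and ask libvirt for the (still empty) list of machines; a quick sanity check:

systemctl status libvirtd
virsh list --all

* the second command should print an empty table; a "failed to connect to the hypervisor" error would mean libvirtd is not running.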

Network configuration

Virtual machines can work behind NAT (which is the KVM server) or receive IP addresses from the local network - for this, you need to configure a network bridge. We'll set up the latter.

When using a remote connection, check the settings carefully. In case of an error, the connection will be terminated.

Install bridge-utils:

apt-get install bridge-utils

a) network setup in older versions of Ubuntu (/etc/network/interfaces).

Open the configuration file for configuring network interfaces:

vi /etc/network/interfaces

And we will bring it to the form:

#iface eth0 inet static
# address 192.168.1.24
# netmask 255.255.255.0
# gateway 192.168.1.1
# dns-nameservers 192.168.1.1 192.168.1.2

auto br0
iface br0 inet static
address 192.168.1.24
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 192.168.1.1 192.168.1.2
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

* where everything commented out is the old configuration of my network; br0 is the name of the bridge interface being created; eth0 is the existing network interface through which the bridge will work.

Restart the network service:

systemctl restart networking
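After the restart it is worth confirming that the bridge came up and took over the address; a quick check using the bridge-utils package installed above:

brctl show br0
ip addr show br0

* brctl should list eth0 as a member interface of br0, and ip addr should show 192.168.1.24 assigned to br0 rather than to eth0.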

b) network configuration in newer versions of Ubuntu (netplan).

vi /etc/netplan/01-netcfg.yaml

* depending on the system version, the yaml configuration file may have a different name.

We bring it to the form:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      wakeonlan: true
  bridges:
    br0:
      macaddress: 2c:6d:45:c3:55:a7
      interfaces:
        - eth0
      addresses:
        - 192.168.1.24/24
      gateway4: 192.168.1.1
      mtu: 1500
      nameservers:
        addresses:
          - 192.168.1.1
          - 192.168.1.2
      parameters:
        stp: true
        forward-delay: 4
      dhcp4: false
      dhcp6: false

* in this example we create a virtual bridge interface br0; as a physical interface we use eth0.

Apply the network settings:

netplan apply

We set up redirection of network traffic (so that virtual machines with a NAT network interface can access the Internet):

vi /etc/sysctl.d/99-sysctl.conf

Add the line:

net.ipv4.ip_forward = 1

Apply the settings:

sysctl -p /etc/sysctl.d/99-sysctl.conf
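You can verify that forwarding is actually enabled by reading the value back:

sysctl net.ipv4.ip_forward

* the command should print net.ipv4.ip_forward = 1.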

Creating a virtual machine

To create the first virtual machine, enter the following command:

virt-install -n VM1 \
--autostart \
--noautoconsole \
--network=bridge:br0 \
--ram 2048 --arch=x86_64 \
--vcpus=2 --cpu host --check-cpu \
--disk path=/kvm/vhdd/VM1-disk1.img,size=16 \
--cdrom /kvm/iso/ubuntu-18.04.3-server-amd64.iso \
--graphics vnc,listen=0.0.0.0,password=vnc_password \
--os-type linux --os-variant=ubuntu18.04 --boot cdrom,hd,menu=on

  • VM1 - the name of the machine to be created;
  • autostart - allow the virtual machine to automatically start along with the KVM server;
  • noautoconsole - does not connect to the virtual machine console;
  • network - network type. In this example, we are creating a virtual machine with a network bridge interface. To create an internal interface with a NAT type, enter --network=default,model=virtio;
  • ram - the amount of RAM;
  • vcpus - the number of virtual processors;
  • disk - virtual disk: path - path to disk; size - its volume;
  • cdrom - virtual drive with system image;
  • graphics - parameters for connecting to the virtual machine using the graphical console (in this example, we use vnc); listen - on what address vnc accepts requests (in our example, at all); password - password for connecting using vnc;
  • os-variant - guest operating system (we received the entire list with the command osinfo-query os, in this example we install Ubuntu 18.04).
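Once virt-install returns, you can make sure the machine was created and is running; a quick check:

virsh list --all
virsh dominfo VM1

* the first command lists all virtual machines and their state; the second shows the parameters (memory, vCPUs, autostart) of VM1.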

Connecting to a virtual machine

On the computer from which we plan to work with virtual machines, download a VNC client, for example, TightVNC and install it.

On the server, enter:

virsh vncdisplay VM1

the command will show which port VNC uses for machine VM1. In my case it was:

:1

* :1 means that you need to add 1 to 5900, i.e. 5900 + 1 = 5901.

We launch the TightVNC Viewer we installed and enter the connection data (the server address and port 5901):

Click Connect. When prompted for a password, enter the one specified when creating the VM (vnc_password). We will be connected to the virtual machine with a remote console.

If we do not remember the password, open the virtual machine configuration with the command:

virsh edit VM1

And find the line describing the VNC graphics device:

<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='12345678'>

* in this example, the password 12345678 is used to access the virtual machine.

Managing a virtual machine from the command line

Examples of commands that can be useful when working with virtual machines.

1. Get a list of created machines:

virsh list --all

2. Turn on the virtual machine:

virsh start VMname

* where VMname is the name of the created machine.

3. Turn off the virtual machine:

virsh shutdown VMname

ubuntu-vm-builder is a package developed by Canonical to simplify the creation of new virtual machines.

To install it, enter:

apt-get install ubuntu-vm-builder
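A minimal usage sketch (the suite name, destination directory and memory size below are only illustrative assumptions - adjust them to your environment):

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 \
--libvirt qemu:///system --dest /kvm/vhdd/vm2 --mem 1024 -o

* vmbuilder bootstraps a fresh Ubuntu guest of the given suite, registers it with libvirt (--libvirt qemu:///system) and writes its disk into the directory given by --dest; -o overwrites a previous build in the same destination.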


Recently, an interesting report was released by Principled Technologies, which specializes in, among other things, all kinds of testing of hardware and software environments. The document "" explains that the ESXi hypervisor can run more virtual machines on the same hardware than the RHEV KVM hypervisor.

It is clear that the study is biased (at least if you look at the title), but since there are not so many such documents, we decided to pay attention to it.

For testing, they used a Lenovo x3650 M5 rack server running Microsoft SQL Server 2016 in virtual machines under an OLTP load. OPM (orders per minute), a quantitative measure of executed transactions, was used as the main performance indicator.

If the Memory Overcommit techniques are not used, the OPM result for a host running 15 virtual machines is approximately the same on both hypervisors:

But when the number of virtual machines increases, vSphere performs much better:

The crosses mark the machines that simply did not start on RHV, the product console gave the following error:

Despite enabling memory optimization techniques in Red Hat Virtualization Manager (RHV-M), such as memory ballooning and kernel same-page merging (KSM), the sixteenth virtual machine still refused to start on KVM:

Well, on vSphere they continued to increase the number of VMs until they ran into a lack of resources:

It turned out that with overcommit techniques enabled, vSphere was able to run 24 virtual machines, while RHV managed only 15. From this they concluded that 1.6 times more virtual machines can be run on VMware vSphere:

This is hardly an objective test, but it is apparent that, in this case, ESXi handles memory and other VM resource optimizations better than KVM.


Tags: VMware, Red Hat, Performance, RHV, vSphere, ESXi, KVM
Tags: KVM, oVirt, Open Source, Update

Recall that RHEV is based on the Kernel-based Virtual Machine (KVM) hypervisor and supports the OpenStack open cloud architecture. Let's see what's new in the updated RHEV version 3.4.

Infrastructure

  • SNMP configuration service to support third-party monitoring systems.
  • Saving the settings of the cloud installation of RHEV for the possibility of its recovery in case of failure or for replication in other clouds.
  • RHEV authentication services have been rewritten and improved.
  • The ability to hot add a processor to the VM (Hot Plug CPU). This requires support from the OS.
  • Non-root users now have access to logs.
  • New installer based on TUI (textual user interface).
  • IPv6 support.
  • Possibility of choosing a connection to the VM console in Native Client or noVNC mode.
  • Possibility to change some settings of a running virtual machine.
  • Full support for RHEL 7 as a guest OS.
  • Ability to enable / disable KSM (Kernel Samepage Merging) at the cluster level.
  • Ability to reboot VM from RHEVM or with a console command.

Networking

  • Tighter integration with the OpenStack infrastructure:
    • Security and scalability improvements for networks deployed with Neutron.
    • Supports Open vSwitch technology (scalable virtual switch) and SDN capabilities.
  • Network Labels - labels that can be used when referring to devices.
  • Correct virtual network adapter (vNIC) numbering order.
  • Support for iproute2.
  • A single point to configure the network settings for multiple hosts on a specified network.

Storage capabilities

  • Mixed storage domains - the ability to simultaneously use disk devices from iSCSI, FCP, NFS, Posix and Gluster storage to organize storage of virtual machines.
  • Multiple Storage Domains - the ability to distribute disks of one virtual machine across multiple storages within the data center.
  • Ability to specify disks that will participate in creating snapshots, as well as those that will not.
  • The mechanism for restoring a VM from a backup has been improved - now it is possible to specify a snapshot of the state to which you want to rollback.
  • Asynchronous management of Gluster storage tasks.
  • Read-Only Disk for Engine - This feature enables the Red Hat Enterprise Virtualization Manager management tool to use read-only disks.
  • Multipathing access for iSCSI storage.

Virtualization tools

  • Guest OS agents (ovirt-guest-agent) for OpenSUSE and Ubuntu.
  • SPICE Proxy - the ability to use proxy servers to allow users to access their VMs (if, for example, they are outside the infrastructure network).
  • SSO (Single Sign-On) Method Control - the ability to switch between different pass-through authentication mechanisms. So far, there are only two options: guest agent SSO and no SSO.
  • Support for multiple versions of the same virtual machine template.

Scheduler and Service Level Enhancements

  • Improvements to the virtual machine scheduler.
  • Affinity / Anti-Affinity groups (rules for the existence of virtual machines on hosts - place machines together or separately).
  • Power-Off Capacity is a power policy that allows you to shut down a host and prepare its virtual machines for migration to another location.
  • Even Virtual Machine Distribution - the ability to distribute virtual machines to hosts based on the number of VMs.
  • High-Availability Virtual Machine Reservation - a mechanism that allows you to guarantee the recovery of virtual machines in the event of a failure of one or more host servers. It works on the basis of calculating the available capacity of the computing resources of the cluster hosts.

Improvements to the interface

  • Bug fixes related to the fact that the interface did not always react to events taking place in the infrastructure.
  • Support for low screen resolutions (when some elements of the control console were not visible at low resolutions).

You can download Red Hat Enterprise Virtualization 3.4 from this link. Documentation is available.


Tags: Red Hat, RHEV, Update, Linux, KVM

The new version of the RHEL OS has many new interesting features, among which many relate to virtualization technologies. Some of the major new features in RHEL 7:

  • Built-in support for packaged Docker applications.
  • Kernel patching utility Technology Preview - patching the kernel without rebooting the OS.
  • Direct and indirect integration with Microsoft Active Directory, described in more detail.
  • XFS is now the default file system for boot, root and user data partitions.
    • For XFS, the maximum file system size has been increased from 100 TB to 500 TB.
    • For ext4, this size has been increased from 16 TB to 50 TB.
  • Improved OS installation process (new wizard).
  • Ability to manage Linux servers using Open Linux Management Infrastructure (OpenLMI).
  • NFS and GFS2 file system improvements.
  • New capabilities of KVM virtualization technology.
  • Ability to run RHEL 7 as a guest OS.
  • Improvements to NetworkManager and a new command line utility for performing NM-CLI network tasks.
  • Supports Ethernet network connections at speeds up to 40 Gbps.
  • Supports WiGig wireless technology (IEEE 802.11ad) (at speeds up to 7 Gbps).
  • New Team Driver mechanism that virtually combines network devices and ports into a single interface at the L2 level.
  • New dynamic service FirewallD, which is a flexible firewall that takes precedence over iptables and supports multiple network trust zones.
  • GNOME 3 in classic desktop mode.

For more information on the new features in RHEL 7, see Red Hat.

In terms of virtualization, Red Hat Enterprise Linux 7 introduces the following major innovations:

  • Technological preview of virtio-blk-data-plane feature, which allows executing QEMU I / O commands in a separate optimized thread.
  • There is a technological preview of PCI Bridge technology, which allows more than 32 PCI devices to be supported in QEMU.
  • QEMU Sandboxing - improved isolation between RHEL 7 host guest OSs.
  • Support for "hot" adding virtual processors to machines (vCPU Hot Add).
  • Multiple Queue NICs - each vCPU gets its own transmit and receive queues, so network processing does not have to be funneled through other vCPUs (Linux guest OS only).
  • Hot Migration Page Delta Compression technology allows the KVM hypervisor to migrate faster.
  • KVM adds support for Microsoft paravirtualized (Hyper-V enlightenment) features, for example the Memory Management Unit (MMU) and the virtual interrupt controller. This allows Windows guests to run faster (these features are disabled by default).
  • Supports EOI Acceleration technology based on Intel and AMD Advanced Programmable Interrupt Controller (APIC) interface.
  • Technological preview of USB 3.0 support in KVM guest operating systems.
  • Supports Windows 8, Windows 8.1, Windows Server 2012 and Windows Server 2012 R2 guest operating systems on a KVM hypervisor.
  • I / O Throttling functions for guest OSs on QEMU.
  • Support for Ballooning and transparent huge pages technologies.
  • The new virtio-rng device is available as a random number generator for guest operating systems.
  • Support for hot migration of guest operating systems from a Red Hat Enterprise Linux 6.5 host to a Red Hat Enterprise Linux 7 host.
  • Supports assigning NVIDIA GRID and Quadro devices as a second device in addition to emulated VGA.
  • Para-Virtualized Ticketlocks technology that improves performance when there are more virtual vCPUs than physical ones on the host.
  • Improved error handling for PCIe devices.
  • New Virtual Function I / O (VFIO) driver improves security.
  • Supports Intel VT-d Large Pages Technology when using the VFIO driver.
  • Improvements in giving accurate time to virtual machines on KVM.
  • Support for images of the QCOW2 version 3 format.
  • Improved Live Migration statistics - total time, expected downtime and bandwidth.
  • A dedicated thread for Live Migration, which allows hot migrations to proceed without impacting guest OS performance.
  • Emulation of AMD Opteron G5 processors.
  • Support for new Intel processor instructions for KVM guest operating systems.
  • Supports read-only VPC and VHDX virtual disk formats.
  • New features of the libguestfs utility for working with virtual disks of machines.
  • New Windows Hardware Quality Labs (WHQL) drivers for Windows guest operating systems.
  • Integration with VMware vSphere: Open VM Tools, 3D graphics drivers for OpenGL and X11, and improved communication mechanism between the guest OS and the ESXi hypervisor.

Release Notes for the new OS version are available at this link. You can also read more about the virtualization functions in the new RHEL 7 release (including in Russian). The sources for the Red Hat Enterprise Linux 7 rpm packages are now only available through the Git repository.


Tags: Linux, QEMU, KVM, Update, RHEL, Red Hat

Ravello has found an interesting way to leverage nested virtualization in its Cloud Application Hypervisor product, which allows it to universally deploy VMs across different virtualization platforms in the public clouds of different service providers.

The main component of this system is the HVX technology - Ravello's own hypervisor (based on Xen), which runs as part of a Linux OS and executes nested virtual machines without modifying them, using binary translation techniques. These machines can then be hosted in Amazon EC2, HP Cloud, Rackspace, and even private clouds managed by VMware vCloud Director (support for the latter is expected soon).

The Ravello product is a SaaS service, and such nested "matryoshka" machines can simply be uploaded to any of the supported hosting sites, regardless of the hypervisor it uses. A virtual network between the machines is created as an L2 overlay on top of the hoster's existing L3 infrastructure, using a GRE-like protocol (but based on UDP):

The very mechanics of the proposed Cloud Application Hypervisor service are as follows:

  • The user uploads virtual machines to the cloud (machines created on ESXi / KVM / Xen platforms are supported).
  • Describes multi-machine applications using a special GUI or API.
  • Publishes its VMs to one or more supported clouds.
  • The resulting configuration is saved as a snapshot in the Ravello cloud (so it can later be restored or re-deployed); this storage can be built on Amazon S3, CloudFiles, or on Ravello's own block storage or NFS volumes.
  • After that, each user can get a multi-machine configuration of their application on demand.

The obvious question that comes up first is what about performance? Well, first of all, the Cloud Application Hypervisor is designed for development and test teams for which performance is not a critical factor.

And secondly, performance tests of such nested configurations show quite acceptable results:

For those interested in HVX technology, there is a good overview video in Russian:


Tags: Ravello, Nested Virtualization, Cloud, HVX, VMware, ESXi, KVM, Xen, VMachines, Amazon, Rackspace

The new version of the open virtualization platform RHEV 3.0 is based on the Red Hat Enterprise Linux 6 distribution and, traditionally, the KVM hypervisor.

New features of Red Hat Enterprise Virtualization 3.0:

  • The Red Hat Enterprise Virtualization Manager management tool is now Java-based, running on the JBoss platform (previously .NET was used, and, accordingly, was tied to Windows, now you can use Linux for the management server).
  • A self-service portal for users to self-deploy virtual machines, create templates, and administer their own environments.
  • New RESTful API allowing access to all solution components from third-party applications.
  • An advanced administration mechanism that provides the ability to granularly assign permissions, delegate authority based on user roles, and hierarchical privilege management.
  • Supports local server disks as storage for virtual machines (but Live Migration is not supported for them).
  • An integrated reporting engine that analyzes historical performance data and predicts virtual infrastructure development.
  • Optimized for WAN connections, including dynamic compression technologies and automatic adjustment of desktop effects and color depth. In addition, the new version of SPICE has enhanced support for Linux guest desktops.
  • Updated KVM hypervisor based on the latest Red Hat Enterprise Linux 6.1 released in May 2011.
  • Supports up to 160 logical CPUs and 2 TB of memory for host servers, 64 vCPUs and 512 GB of memory for virtual machines.
  • New possibilities for administration of large installations of RHEV 3.0.
  • Support for large memory pages (Transparent Huge Pages, 2 MB instead of 4 KB) in guest OSs, which improves performance by reducing the number of page lookups.
  • Optimization of the vhost-net component. Now the KVM networking stack has been moved from user mode to kernel mode, which significantly increases performance and reduces network latency.
  • Using the functions of the sVirt library, which provides hypervisor security.
  • A paravirtualized x2apic controller has appeared, which reduces APIC access overhead in the VM (especially effective for interrupt-intensive workloads).
  • Async-IO technology to optimize I / O and improve performance.

You can download the final release of Red Hat Enterprise Virtualization 3.0 using this link.

And, finally, a short video review of Red Hat Enterprise Virtualization Manager 3.0 (RHEV-M):


Tags: Red Hat, Enterprise, Update, KVM, Linux

Well done NetApp! Roman, we are waiting for translation into Russian)


Tags: Red Hat, KVM, NetApp, Storage, NFS

ConVirt 2.0 Open Source allows you to manage the Xen and KVM hypervisors included in free and commercial Linux distributions, deploy virtual servers from templates, monitor performance, automate administrator tasks, and configure all aspects of the virtual infrastructure. ConVirt 2.0 supports hot migration of virtual machines, thin virtual disks (growing as they fill up with data), control of virtual machine resources (including running ones), extensive monitoring functions and means of intelligent placement of virtual machines on host servers (manual load balancing).

ConVirt 2.0 still exists only in the Open Source edition, but the developers promise to soon release a ConVirt 2.0 Enterprise edition, which will differ from the free edition in the following features:

Features of ConVirt 2.0 Open Source and ConVirt 2.0 Enterprise, grouped by category:

Architecture
Multi-platform Support
Agent-less Architecture
Universal Web Access
Datacenter-wide Console

Administration
Start, Stop, Pause, Resume
Maintanence Mode
Snapshot
Change Resource Allocation on a Running VM

Monitoring
Real-time Data
Historical Information
Server Pools
Storage pools
Alerts and Notifications

Provisioning
Templates-based Provisioning
Template Library
Integrated Virtual Appliance Catalogs
Thin Provisioning
Scheduled Provisioning

Automation
Intelligent Virtual Machine Placement
Live migration
Host Private Networking
SAN, NAS Storage Support

Advanced Automation
High Availability
Backup and Recovery
VLAN Setup
Storage automation
Dynamic Resource Allocation
Power Saving Mode

Security
SSH Access
Multi-user Administration
Auditing
Fine Grained Access Control

Integration
Open Repository
Command Line Interface
Programmatic API

Tags: Xen, KVM, Convirt, Citrix, Red Hat, Free, Open Source,

Convirture, the company that in 2007 released the XenMan GUI for managing the Xen hypervisor, recently released the free Convirture ConVirt 1.0 - the renamed successor to XenMan.

With ConVirt, you can manage Xen and KVM hypervisors using the following features:

  • Management of multiple host servers.
  • Snapshots.
  • Live migration of virtual machines between hosts.
  • VM backup.
  • The simplest monitoring of hosts and virtual machines.
  • Support for virtual modules (Virtual Appliances).

You can download Convirture ConVirt 1.0 from this link:

Convirture ConVirt 1.0
Tags: Xen, KVM

On Ubuntu, it is recommended to use the KVM hypervisor (virtual machine manager) and libvirt as the management tooling. libvirt includes a set of software APIs and user-facing virtual machine (VM) management applications: virt-manager (graphical interface, GUI) and virsh (command line, CLI). Alternative managers are convirt (GUI) and convirt2 (web interface).

Currently, only the KVM hypervisor is officially supported on Ubuntu. This hypervisor is part of the Linux kernel code. Unlike Xen, KVM does not support paravirtualization, which means that to use it your CPU must support the VT technologies. You can check whether your processor supports this technology by running the following command in a terminal (kvm-ok comes from the cpu-checker package):

kvm-ok

If you receive a message like this as a result:

INFO: /dev/kvm exists
KVM acceleration can be used

then KVM will work without problems.

If you receive a message at the output:

Your CPU does not support KVM extensions
KVM acceleration can NOT be used

then you can still use the virtual machine, but it will be much slower.

Also note that only a 64-bit host system can:

  • install 64-bit systems as guests;
  • allocate more than 2 GB of RAM to guest systems.

Installation

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

This is an installation on a server without X, i.e. without a graphical interface. You can install the graphical manager with the command:

sudo apt-get install virt-manager

After that, the menu item "Virtual Machine Manager" will appear and, with a high degree of probability, everything will work. If any problems do arise, then you will need to read the instructions in the English-language wiki.
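virt-manager can also manage a remote KVM host over SSH; a small illustration (the user and host names are placeholders):

virt-manager -c qemu+ssh://user@kvm-host/system

* the -c (--connect) option tells virt-manager which libvirt URI to open; the same URI form also works with virsh.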

Creating a guest system

The procedure for creating a guest system using the graphical interface is quite simple.

But the text mode can be described.

qcow2

When creating a system using the graphical interface, it is proposed to either select an existing image file or block device as a hard disk, or create a new file with raw (RAW) data. However, this is far from the only available file format. Of all the disk types listed in man qemu-img, qcow2 is the most flexible and modern. It supports snapshots, encryption and compression. It must be created before creating a new guest system.

qemu-img create -o preallocation=metadata -f qcow2 qcow2.img 20G

According to the same man qemu-img, preallocation of metadata (-o preallocation = metadata) makes the disk a bit larger initially, but provides better performance when the image needs to grow. In fact, in this case, this option avoids a nasty bug. The created image initially takes up less than a megabyte of space and grows to the specified size as needed. The guest system should immediately see this final specified size, however, during the installation phase, it can see the actual size of the file. Naturally, it will refuse to install on a 200 KB hard drive. The bug is not specific to Ubuntu, it appears in RHEL, at least.
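You can see the difference between the virtual size and the space actually occupied at any moment:

qemu-img info qcow2.img

* the output shows the virtual size (20G here) and the actual "disk size" of the image file, which grows as the guest writes data.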

In addition to the type of image, you can later choose how to attach it - IDE, SCSI or Virtio Disk. The performance of the disk subsystem will depend on this choice. There is no single correct answer; choose based on the task that will be assigned to the guest system. If the guest system is created "just to have a look", any method will do. In general, I/O is usually the bottleneck of a virtual machine, so when creating a highly loaded system this question should be taken as seriously as possible.

I am writing this post to demonstrate a step-by-step installation and configuration of a KVM-based virtual machine in Linux. I have already written about virtualization earlier, where I used another wonderful tool.

Now I am faced with renting a good server with a large amount of RAM and a large hard disk. But I don't want to run projects directly on the host machine, so I will separate them into small dedicated virtual servers running Linux or into Docker containers (I will cover those in another article).

All modern cloud hosting services work on the same principle, i.e. a hoster on good hardware raises a bunch of virtual servers, which we used to call VPS / VDS, and distributes them to users, or automates this process (hello, DigitalOcean).

KVM (Kernel-based Virtual Machine) is virtualization software for Linux that uses the hardware virtualization extensions of x86-compatible processors (Intel VT / AMD SVM).

Installing KVM

I will perform all the steps for creating a virtual machine on Ubuntu 16.04.1 LTS. To check whether your processor supports hardware virtualization based on Intel VT / AMD SVM, run:

grep -E "(vmx|svm)" /proc/cpuinfo

If the terminal is not empty, then everything is in order and KVM can be installed. Ubuntu only officially supports the KVM hypervisor (included in the Linux kernel) and advises using the libvirt library as a management tool, which we will do next.

You can also check support for hardware virtualization in Ubuntu with the command:

kvm-ok

If successful, you will see something like this:

INFO: /dev/kvm exists
KVM acceleration can be used

Install packages for working with KVM:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

If you have access to the graphical shell of the system, then you can install the libvirt GUI manager:

sudo apt-get install virt-manager

Using virt-manager is quite simple (no more complicated than VirtualBox), so this article will focus on the console version of installing and configuring a virtual server.

Installing and configuring a virtual server

In the console version of installation, configuration and system management, the virsh utility (add-on to the libvirt library) is an indispensable tool. It has a large number of options and parameters, a detailed description can be obtained as follows:

man virsh

or call the standard "help":

virsh help

I always adhere to the following rules when working with virtual servers:

  1. I store ISO images of operating systems in the /var/lib/libvirt/boot directory
  2. I store virtual machine images in the /var/lib/libvirt/images directory
  3. I explicitly assign each new virtual machine its own static IP address via the hypervisor's DHCP server.

Let's start installing the first virtual machine (64-bit server Ubuntu 16.04 LTS):

cd /var/lib/libvirt/boot
sudo wget http://releases.ubuntu.com/16.04/ubuntu-16.04.1-server-amd64.iso

After downloading the image, launch the installation:

sudo virt-install \
--virt-type=kvm \
--name ubuntu1604 \
--ram 1024 \
--vcpus=1 \
--os-variant=ubuntu16.04 \
--hvm \
--cdrom=/var/lib/libvirt/boot/ubuntu-16.04.1-server-amd64.iso \
--network network=default,model=virtio \
--graphics vnc \
--disk path=/var/lib/libvirt/images/ubuntu1604.img,size=20,bus=virtio

Translating all these parameters into "human language", we are creating a virtual machine with Ubuntu 16.04, 1024 MB of RAM, 1 processor, a standard network card (the virtual machine will access the Internet through NAT), and a 20 GB HDD.

It is worth paying attention to the parameter --os-variant, it tells the hypervisor for which OS the settings should be adapted.
A list of available OS options can be obtained by running the command:

osinfo-query os

If there is no such utility in your system, then install:

sudo apt-get install libosinfo-bin

After starting the installation, the following message will appear in the console:

Domain installation still in progress. You can reconnect to the console to complete the installation process.

This is a normal situation, we will continue the installation via VNC.
We check which port VNC was brought up on for our virtual machine (in a neighboring terminal, for example):

virsh dumpxml ubuntu1604
...

Port 5900, at local address 127.0.0.1. To connect to VNC, you need to use Port Forwarding over ssh. Before doing this, make sure tcp forwarding is enabled on the ssh daemon. To do this, go to the sshd settings:

cat /etc/ssh/sshd_config | grep AllowTcpForwarding

If nothing was found or you see:

AllowTcpForwarding no

Then we change the setting in the config to

AllowTcpForwarding yes

and restart sshd.

Port forwarding setup

We execute the command on the local machine:

ssh -fN -l login -L 127.0.0.1:5900:localhost:5900 server_ip

Here we have configured ssh port forwarding from local port 5900 to server port 5900. Now you can connect to VNC using any VNC client. I prefer UltraVNC for its simplicity and convenience.

After a successful connection, the screen will display a standard welcome screen for starting the installation of Ubuntu:

After the installation is complete and the usual reboot, the login window will appear. After logging in, we determine the IP address of the newly-made virtual machine, in order to make it static later:

ifconfig

We note it down and go back to the host machine. Now we extract the MAC address of the virtual machine's network card:

virsh dumpxml ubuntu1604 | grep "mac address"

We remember our mac address:

Editing the network settings of the hypervisor:

sudo virsh net-edit default

We find the <dhcp> section and add an entry that binds the VM's MAC address to the desired IP address. You should end up with something like the sketch below.
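An illustrative sketch of that host entry (the MAC address is a placeholder - substitute the one printed by the grep command above; the IP is the address we want to pin):

<host mac='52:54:00:xx:xx:xx' name='ubuntu1604' ip='192.168.122.131'/>

* the line goes inside the <dhcp> element of the default network definition; libvirt's DHCP server will then always hand this IP to that MAC.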

For the settings to take effect, you must restart the hypervisor DHCP server:

sudo virsh net-destroy default
sudo virsh net-start default
sudo service libvirt-bin restart

After that, we reboot the virtual machine, now it will always have the IP address assigned to it - 192.168.122.131.

There are other ways to assign a static IP to the virtual machine, for example by editing the network settings directly inside the guest system (a sketch of that follows below) - use whichever you prefer. I have just shown the option that I prefer to use myself.
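For completeness, here is a sketch of the in-guest variant for an Ubuntu 16.04 guest using /etc/network/interfaces (the interface name ens3 is an assumption - check the real name with ifconfig inside the guest):

auto ens3
iface ens3 inet static
address 192.168.122.131
netmask 255.255.255.0
gateway 192.168.122.1
dns-nameservers 192.168.122.1

* 192.168.122.1 is the gateway of libvirt's default NAT network; after saving the file, restart networking inside the guest for the address to take effect.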

To connect to the terminal of the virtual machine, run:

ssh 192.168.122.131

The machine is ready for use.

Virsh: command list

To see the running virtual hosts (all available ones can be obtained by adding --all):

sudo virsh list

You can reboot the host:

sudo virsh reboot $VM_NAME

Suspend (pause) the virtual machine:

sudo virsh suspend $VM_NAME

Force power off the virtual machine (hard halt):

sudo virsh destroy $VM_NAME

Start the virtual machine:

sudo virsh start $VM_NAME

Shut down the virtual machine gracefully:

sudo virsh shutdown $VM_NAME

Add to autostart:

sudo virsh autostart $VM_NAME

Very often you need to clone a system in order to use it later as a base for other virtual operating systems; the virt-clone utility is used for this.

virt-clone --help

It clones an existing virtual machine and changes the host-sensitive data, for example, mac address. Passwords, files and other user-specific information in the clone remains the same. If the IP address was manually registered on the cloned virtual machine, then there may be problems with SSH access to the clone due to a conflict (2 hosts with the same IP).
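A typical invocation looks like this (the clone name is arbitrary; the source VM should be shut down first):

sudo virsh shutdown ubuntu1604
sudo virt-clone --original ubuntu1604 --name ubuntu1604-clone --auto-clone

* --auto-clone generates names and paths for the cloned disks automatically; afterwards the clone can be started with virsh start ubuntu1604-clone.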

In addition to installing a virtual machine via VNC, it is also possible with X11Forwarding via the virt-manager utility. On Windows, for example, you can use Xming and PuTTY for this.
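A minimal sketch of that approach from a Linux workstation (the user and host names are placeholders; on Windows, run Xming and enable X11 forwarding in PuTTY instead):

ssh -X user@kvm-host virt-manager

* the -X option enables X11 forwarding, so the virt-manager window is displayed locally while the program runs on the KVM host.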