Release notes
Forklift 2.6
You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:
- VMware vSphere
- oVirt
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote KubeVirt clusters
The release notes describe technical changes, new features and enhancements, known issues, and resolved issues.
Technical changes
This release has the following technical changes:
In earlier releases of Forklift, users had to specify a fingerprint when creating a vSphere provider. This required users to retrieve the fingerprint from the server that vCenter runs on. Forklift no longer requires this fingerprint as an input, but rather computes it from the specified certificate in the case of a secure connection or automatically retrieves it from the server that runs vCenter/ESXi in the case of an insecure connection.
The user interface console has improved the process of creating a migration plan. The new migration plan dialog enables faster creation of migration plans. It includes only the minimal settings that are required, while you can configure advanced settings separately. The new dialog also provides defaults for network and storage mappings, where applicable. The new dialog can also be invoked from the Provider > Virtual Machines tab, after selecting the virtual machines to migrate. It also better aligns with the user experience in the OCP console.
Virtual machine preferences have replaced OpenShift templates. Forklift currently falls back to using OpenShift templates when a relevant preference is not available.
Custom mappings of guest operating system type to virtual machine preference can be configured by using config maps, either to use custom virtual machine preferences or to support additional guest operating system types.
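Such a mapping might look like the following sketch. The config map name, namespace, and key/value format are illustrative assumptions rather than the exact schema that Forklift expects:

```yaml
# Hypothetical sketch only: the resource name, namespace, and key/value
# format are assumptions, not the documented Forklift schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: os-to-preference-mapping   # assumed name
  namespace: konveyor-forklift     # assumed namespace
data:
  # guest OS type reported by the source provider -> preference name
  centos7_64Guest: centos.7
  rocky9_64Guest: rhel.9           # example of supporting an extra OS type
```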
Migration from OVA has moved from being a Technology Preview to a fully supported feature.
Forklift creates the VM with its desired Running state on the target provider, instead of creating the VM and then running it as an additional operation. (MTV-794)
must-gather logs can now be loaded only by using the CLI
The Forklift web console can no longer download logs. With this update, you must download must-gather logs by using CLI commands. For more information, see Must Gather Operator.
Forklift no longer runs pvc-init pods during cold migration from a vSphere provider to the OpenShift cluster that Forklift is deployed on. However, in other flows where data volumes are used, they are set with the cdi.kubevirt.io/storage.bind.immediate.requested annotation, and CDI runs first-consume pods for storage classes with volume binding mode WaitForFirstConsumer.
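For illustration, a data volume carrying this annotation might look like the following sketch; the name, size, and storage class are assumptions:

```yaml
# Hypothetical DataVolume with the CDI annotation described above.
# Name, size, and storage class are illustrative assumptions.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: migrated-vm-disk-0
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
  source:
    blank: {}
  storage:
    resources:
      requests:
        storage: 30Gi
    storageClassName: my-wffc-storage-class  # a WaitForFirstConsumer class
```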
New features and enhancements
This section provides features and enhancements introduced in Forklift 2.6.
New features and enhancements 2.6.3
You can now perform cold migrations from a vSphere provider of VMs whose virtual disks are encrypted by Linux Unified Key Setup (LUKS). (MTV-831)
You can now specify the primary disk when you migrate VMs from vSphere with more than one bootable disk. This avoids Forklift automatically attempting to convert the first bootable disk that it detects while it examines all the disks of a virtual machine. This feature is needed because the first bootable disk is not necessarily the disk that the VM is expected to boot from in KubeVirt. (MTV-1079)
You can now remotely access the UI of a remote cluster when you create a source provider. For example, if the provider is a remote oVirt cluster, Forklift adds a link to the remote oVirt web console when you define the provider. This feature makes it easier for you to manage and debug a migration from remote clusters. (MTV-1054)
New features and enhancements 2.6.0
You can now specify a CA certificate that can be used to authenticate the server that runs vCenter or ESXi, depending on the specified SDK endpoint of the vSphere provider. (MTV-530)
You can now specify a CA certificate that can be used to authenticate the API server of a remote OpenShift cluster. (MTV-728)
Forklift enables the configuration of vSphere providers with the SDK of ESXi. You need to select ESXi as the Endpoint type of the vSphere provider and specify the URL of the SDK of the ESXi server. (MTV-514)
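Configured through the API rather than the console, an ESXi-backed provider might look like the following sketch; the resource names and the settings key are assumptions based on the common Provider shape, so verify them against your Forklift version:

```yaml
# Hypothetical Provider that targets the SDK of an ESXi host directly.
# Resource names and the settings key are assumptions.
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: esxi-host                       # assumed name
  namespace: konveyor-forklift          # assumed namespace
spec:
  type: vsphere
  url: https://esxi.example.com/sdk     # URL of the ESXi SDK
  secret:
    name: esxi-host-credentials         # assumed credentials secret
    namespace: konveyor-forklift
  settings:
    sdkEndpoint: esxi                   # select ESXi instead of vCenter
```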
Forklift supports the migration of VMs that were created from images in OpenStack. (MTV-644)
Forklift supports migrations of VMs that are set with Fibre Channel (FC) LUNs from oVirt. As with other LUN disks, you need to ensure the OpenShift nodes have access to the FC LUNs. During the migrations, the FC LUNs are detached from the source VMs in oVirt and attached to the migrated VMs in OpenShift. (MTV-659)
Forklift sets the CPU type of migrated VMs in OpenShift with their custom CPU type in oVirt. In addition, a new option was added to migration plans that are set with oVirt as a source provider to preserve the original CPU types of source VMs. When this option is selected, Forklift identifies the CPU type based on the cluster configuration and sets this CPU type for migrated VMs whose source VMs are not set with a custom CPU type. (MTV-547)
Red Hat Enterprise Linux (RHEL) 9 does not support RHEL 6 as a guest operating system. Therefore, RHEL 6 is not supported in OpenShift Virtualization. With this update, a validation of the RHEL 6 guest operating system was added to OpenShift Virtualization. (MTV-413)
The ability to retrieve CA certificates, which was available in previous versions, has been restored. The vSphere Verify certificate option is in the add-provider dialog. This option was removed in the transition to the OKD console and has now been added back to the console. This functionality is also available for oVirt, OpenStack, and OpenShift providers. (MTV-737)
Forklift validates the availability of a VDDK image that is specified for a vSphere provider on the target OpenShift cluster as part of the validation of a migration plan. Forklift also checks whether the libvixDiskLib.so symbolic link (symlink) exists within the image. If the validation fails, the migration plan cannot be started. (MTV-618)
Forklift presents a warning when attempting to migrate a VM that is set with a TPM device from oVirt or vSphere. The migrated VM in OpenShift would be set with a TPM device but without the content of the TPM device on the source environment. (MTV-378)
With this update, you can edit plans that have failed to migrate any VMs. Some plans fail or are canceled because of incorrect network and storage mappings. You can now edit these plans until they succeed. (MTV-779)
The validation service includes default validation rules for virtual machines from the Open Virtual Appliance (OVA). (MTV-669)
Resolved issues
This release has the following resolved issues:
Resolved issues 2.6.7
In earlier releases of Forklift, there was an issue with the incorrect handling of single and double quotes in interface configuration (ifcfg) files, which control the software interfaces for individual network devices. This issue has been resolved in Forklift 2.6.7, in order to cover additional IP configurations on Red Hat Enterprise Linux, CentOS, Rocky Linux and similar distributions. (MTV-1439)
In earlier releases of Forklift, there was an issue with the preservation of netplan-based network configurations. This issue has been resolved in Forklift 2.6.7, so that static IP configurations are preserved if netplan (netplan.io) is used, by using the netplan configuration files to generate udev rules for known mac-address and ifname tuples. (MTV-1440)
In earlier releases of Forklift, there was an issue with the accidental leakage of error messages into udev .rules files. This issue has been resolved in Forklift 2.6.7, with a static IP persistence script added to the udev rule file. (MTV-1441)
Resolved issues 2.6.6
In earlier releases of Forklift, there was a runtime error of invalid memory address or nil pointer dereference, caused by an attempt to access the value that a nil pointer points to. This issue has been resolved in Forklift 2.6.6. (MTV-1353)
In earlier releases of Forklift, the scheduler could place all migration pods on a single node. When this happened, the node ran out of resources. This issue has been resolved in Forklift 2.6.6. (MTV-1354)
In earlier releases of Forklift, a vulnerability was found in the Forklift Controller: there was no verification against the authorization header, except to ensure that it used bearer authentication. Without an authorization header and a bearer token, a 401 error occurred, but the presence of any token value produced a 200 response with the requested information. This issue has been resolved in Forklift 2.6.6.
For more details, see CVE-2024-8509.
Resolved issues 2.6.5
In earlier releases of Forklift, during the migration of Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 virtual machines (VMs) from VMware to OKD (OCP), the names of the network interfaces were modified, and the static IP configuration for the VM was no longer functional. This issue has been resolved for static IPs in Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 in Forklift 2.6.5. (MTV-595)
Resolved issues 2.6.4
Windows (Windows 2022) VMs configured with multiple disks, which were Online before the migration, were Offline after a successful migration from oVirt or VMware to OKD using Forklift; only the C:\ primary disk was Online. This issue has been resolved for basic disks in Forklift 2.6.4. (MTV-1299)
For details of the known issue of dynamic disks being Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd, see MTV-1344.
In earlier releases of Forklift, while migrating a Windows 2022 Server with a static IP address assigned, and selecting the Preserve static IPs option, after a successful Windows migration, while the node started and the IP address was preserved, the subnet mask, gateway, and DNS servers were not preserved. This resulted in an incomplete migration, and the customer was forced to log in locally from the console to fully configure the network. This issue has been resolved in Forklift 2.6.4. (MTV-1286)
qemu-guest-agent not installed at first boot in Windows Server 2022
After a successful Windows Server 2022 guest migration using Forklift 2.6.1, the qemu-guest-agent was not completely installed: the Windows scheduled task was created, but it was set to run 4 hours in the future instead of the intended 2 minutes. (MTV-1325)
Resolved issues 2.6.3
golang: net: malformed DNS message can cause infinite loop
In earlier releases of Forklift, a flaw was discovered in the stdlib package of the Go programming language, which impacts previous versions of Forklift. This vulnerability primarily threatens web-facing applications and services that rely on Go for DNS queries. This issue has been resolved in Forklift 2.6.3.
For more details, see CVE-2024-24788.
virt-v2v copies disks sequentially (vSphere only)
In earlier releases of Forklift, there was a problem with the way Forklift interpreted the controller_max_vm_inflight setting for vSphere to schedule migrations. This issue has been resolved in Forklift 2.6.3. (MTV-1191)
In earlier versions of Forklift, cold migrations from a vSphere provider with an ESXi SDK endpoint failed if any network was used except for the default network for disk transfers. This issue has been resolved in Forklift 2.6.3. (MTV-1180)
In earlier versions of Forklift, warm migrations over an ESXi network from a vSphere provider with a vCenter SDK endpoint were stuck in the DiskTransfer state because Forklift was unable to locate image snapshots. This issue has been resolved in Forklift 2.6.3. (MTV-1161)
In earlier versions of Forklift, after cold migrations, there were leftover PVCs that had a status of Lost instead of being deleted, even after the migration plan that created them was archived and deleted. Investigation showed that this was because importer pods were retained after copying by default, rather than only in specific cases. This issue has been resolved in Forklift 2.6.3. (MTV-1095)
In earlier versions of Forklift, some VMs that were imported from vSphere were not mapped to a template in OKD, while other VMs with the same guest operating system were mapped to the corresponding template. Investigations indicated that this was because vSphere stopped reporting the operating system after not receiving updates from VMware Tools for some time. This issue has been resolved in Forklift 2.6.3 by taking the value of the operating system from the output of the inspection that virt-v2v performs on the disks. (MTV-1046)
Resolved issues 2.6.2
net/http, x/net/http2: unlimited number of CONTINUATION frames can cause a denial-of-service (DoS) attack
A flaw was discovered in the implementation of the HTTP/2 protocol in the Go programming language, which impacts previous versions of Forklift. There were insufficient limitations on the number of CONTINUATION frames sent within a single stream. An attacker could potentially exploit this to cause a denial-of-service (DoS) attack. This flaw has been resolved in Forklift 2.6.2.
For more details, see CVE-2023-45288.
mtv-api-container: Golang html/template: errors returned from MarshalJSON methods may break template escaping
A flaw was found in the html/template Golang standard library package, which impacts previous versions of Forklift. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the html/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in Forklift 2.6.2.
For more details, see CVE-2024-24785.
mtv-validation-container: Golang net/mail: comments in display names are incorrectly handled
A flaw was found in the net/mail Golang standard library package, which impacts previous versions of Forklift. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in Forklift 2.6.2.
For more details, see CVE-2024-24784.
mtv-api-container: Golang crypto/x509: Verify panics on certificates with an unknown public key algorithm
A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of Forklift. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in Forklift 2.6.2.
For more details, see CVE-2024-24783.
mtv-api-container: Golang net/http memory exhaustion in Request.ParseMultipartForm
A flaw was found in the net/http Golang standard library package, which impacts previous versions of Forklift. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in Forklift 2.6.2.
For more details, see CVE-2023-45290.
In earlier releases of Forklift, migration of VMs failed because the migration was stuck in the AllocateDisks phase. As a result, the migration did not progress, and PVCs were not bound. The root cause was that ImageConversion did not run when the target storage was set to wait-for-first-consumer. This problem was resolved in Forklift 2.6.2. (MTV-1126)
In earlier releases of Forklift, forklift-controller panicked when a user attempted to import VMs that had direct LUNs. The problem was resolved in Forklift 2.6.2. (MTV-1134)
Resolved issues 2.6.1
In Forklift 2.6.0, there was a problem in copying VMs with multiple disks from VMware vSphere and from OVA files. The migrations appeared to succeed, but all the disks were transferred to the same PV in the target environment, while the other PVs remained empty. In some cases, bootable disks were overridden, so the VM could not boot. In other cases, data from the other disks was missing. The problem was resolved in Forklift 2.6.1. (MTV-1067)
In Forklift 2.6.0, migrations from one OKD cluster to another failed when the time to transfer the disks of a VM exceeded the time to live (TTL) of the Export API in OpenShift, which was set to 2 hours by default. The problem was resolved in Forklift 2.6.1 by setting the default TTL of the Export API to 12 hours, which greatly reduces the possibility of an expiration of the Export API. Additionally, you can increase or decrease the TTL setting as needed. (MTV-1052)
In earlier releases of Forklift, if a VM was configured with a disk that was on a datastore that was no longer available in vSphere at the time a migration was attempted, the forklift-controller crashed, rendering Forklift unusable. In Forklift 2.6.1, Forklift presents a critical validation for VMs with such disks, informing users of the problem, and the forklift-controller no longer crashes, although it cannot transfer the disk. (MTV-1029)
Resolved issues 2.6.0
In earlier releases of Forklift, the PV was not removed when the OVA provider was deleted. This has been resolved in Forklift 2.6.0, and the PV is automatically deleted when the OVA provider is deleted. (MTV-848)
In earlier releases of Forklift, when migrating a VM that has a snapshot from VMware, the VM that was created in OpenShift Virtualization contained the data in the snapshot but not the latest data of the VM. This has been resolved in Forklift 2.6.0. (MTV-447)
In earlier releases of Forklift, when you canceled and deleted a failed migration plan after a PVC was created and the populate pods were spawned, the populate pods and PVC were not deleted. You had to delete the pods and PVC manually. This issue has been resolved in Forklift 2.6.0. (MTV-678)
In earlier releases of Forklift, when migrating from OKD to OKD, the version of the source provider cluster had to be OKD version 4.13 or later. This issue has been resolved in Forklift 2.6.0, with validation being shown when migrating from versions of OpenShift before 4.13. (MTV-734)
In earlier releases of Forklift, multiple disks from different storage domains were always mapped to a single storage class, regardless of the storage mapping that was configured. This issue has been resolved in Forklift 2.6.0. (MTV-1008)
In earlier releases of Forklift, a VM that was migrated from an OVA that did not include the firmware type in its OVF configuration was set with UEFI. This was incorrect for VMs that were configured with BIOS. This issue has been resolved in Forklift 2.6.0, as Forklift now consumes the firmware that is detected by virt-v2v during the conversion of the disks. (MTV-759)
In earlier releases of Forklift, when configuring a transfer network for vSphere hosts, the console plugin created the Host CR before creating its secret. The secret should be specified first in order to validate it before the Host CR is posted. This issue has been resolved in Forklift 2.6.0. (MTV-868)
In earlier releases of Forklift, when adding an OVA provider, the error message ConnectionTestFailed instantly appeared, although the provider had been created successfully. This issue has been resolved in Forklift 2.6.0. (MTV-671)
In earlier releases of Forklift, the ConnectionTestSucceeded condition was set to True even when the URL was different from the API endpoint of the RHV Manager. This issue has been resolved in Forklift 2.6.0. (MTV-740)
In earlier releases of Forklift, migrating a VM that was placed in a data center stored directly under /vcenter in vSphere succeeded. However, the migration failed when the data center was stored inside a folder. This issue was resolved in Forklift 2.6.0. (MTV-796)
The OVA inventory watcher detects file changes, including deleted files. Updates from the ova-provider-server pod are now sent every five minutes to the forklift-controller pod, which updates the inventory. (MTV-733)
In earlier releases of Forklift, the error logs lacked clear information to identify the reason for a failure to create a PV on a destination storage class that does not have a configured storage profile. This issue was resolved in Forklift 2.6.0. (MTV-928)
In earlier releases of Forklift, an earlier failed migration could leave an outdated ovirtvolumepopulator. When starting a new plan for the same VM to the same project, the CreateDataVolumes phase did not create populator PVCs when transitioning to CopyDisks, causing the plan to stay in the CopyDisks phase indefinitely. This issue was resolved in Forklift 2.6.0. (MTV-929)
For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.
Known issues
This release has the following known issues:
Warm migration and remote migration flows are impacted by multiple bugs. It is strongly recommended to fall back to cold migration until this issue is resolved. (MTV-1366)
When migrating VMs that run older Linux distributions, such as CentOS 7.0 and 7.1, from VMware to OKD, the names of the network interfaces change, and the static IP configuration for the VM no longer functions. This issue is caused by RHEL 7.0 and 7.1 still requiring virtio-transitional. Workaround: Manually update the guest to RHEL 7.2, or update the VM specification post-migration to use virtio-transitional. (MTV-1382)
The dynamic disks are Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd. (MTV-1344)
The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)
vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OKD cluster.
Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)
When migrating a VM with multiple disks to more than one storage class of type hostPath, it might happen that the VM cannot be scheduled. Workaround: Use shared storage on the target OKD cluster.
Warm migrations and migrations to remote OKD clusters from vSphere do not support the same guest operating systems that are supported in cold migrations and migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.
When migrating VMs that are installed with RHEL 9 as a guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in OpenShift Virtualization. (MTV-491)
When migrating a virtual machine (VM) with NVMe disks from vSphere, the migration process fails, and the Web Console shows that the Convert image to kubevirt stage is running but did not finish successfully. (MTV-963)
Migrating an image-based VM without the virtual_size field can fail on a block mode storage class. (MTV-946)
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
Migrating VMs with independent persistent disks from VMware to OCP-V fails. (MTV-993)
When vSphere does not receive updates about the guest operating system from the VMware tools, it considers the information about the guest operating system to be outdated and ceases to report it. When this occurs, Forklift is unaware of the guest operating system of the VM and is unable to associate it with the appropriate virtual machine preference or OpenShift template. (MTV-1046)
The migration process fails when migrating an image-based VM from OpenStack to the default project. (MTV-964)
For a complete list of all known issues in this release, see the list of Known Issues in Jira.
Forklift 2.5
You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:
- VMware vSphere
- oVirt
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote KubeVirt clusters
The release notes describe technical changes, new features and enhancements, and known issues for Forklift.
Technical changes
This release has the following technical changes:
In this version of Forklift, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.
Forklift enables migrations from vSphere source providers by not enforcing Enterprise Master Secret (EMS). This enables migrating from all vSphere versions that Forklift supports, including migrations that do not meet 2023 FIPS requirements.
The user interface for creating and updating providers now aligns with the look and feel of the OKD web console and displays up-to-date data.
The old UI of Forklift 2.3 can no longer be enabled by setting feature_ui: true in the ForkliftController CR.
Forklift 2.5.6 can be deployed on OpenShift 4.15 clusters.
New features and enhancements
This release has the following features and improvements:
In Forklift 2.3, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)
Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.
Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
In Forklift 2.3, you can now use a Red Hat KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)
During the migration from oVirt (oVirt), direct Logical Units (LUNs) are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not work yet for Fibre Channel. (MTV-329)
In addition to standard password authentication, Forklift supports the following authentication methods: Token authentication and Application credential authentication. (MTV-539)
The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)
You can now create the VMware vSphere source provider without specifying a VMware Virtual Disk Development Kit (VDDK) init image. However, it is strongly recommended that you create a VDDK init image to accelerate migrations.
In Forklift 2.5.3, deployment on OpenShift Kubernetes Engine (OKE) has been enabled. For more information, see About OpenShift Kubernetes Engine. (MTV-803)
In Forklift 2.5.4, migration of VMs to destination storage classes that have encrypted RADOS Block Devices (RBD) volumes is now supported.
To make use of this new feature, set the value of the parameter controller_block_overhead to 1Gi, following the procedure in Configuring the MTV Operator. (MTV-851)
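Applied to the ForkliftController custom resource, the change might look like the following sketch; the resource name and namespace are assumptions, and the authoritative steps are in Configuring the MTV Operator:

```yaml
# Hypothetical excerpt of a ForkliftController CR with the block
# overhead parameter set. Metadata values are assumptions.
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: konveyor-forklift
spec:
  controller_block_overhead: 1Gi
```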
Known issues
This release has the following known issues:
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)
vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key may be missing on the target OKD cluster.
Warm migration from oVirt fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456)
When migrating a VM with multiple disks to more than one storage class of type hostPath, it might happen that the VM cannot be scheduled. Workaround: Use shared storage on the target OKD cluster.
Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.
When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in OpenShift Virtualization. (MTV-491)
When adding an OVA provider, the error message ConnectionTestFailed can appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, this means that the OVA server pod creation has failed. (MTV-671)
ovirtvolumepopulator from failed migration causes plan to stay indefinitely in CopyDisks phase
An outdated ovirtvolumepopulator in the namespace, left over from an earlier failed migration, stops a new plan of the same VM when it transitions to the CopyDisks phase. The plan remains in that phase indefinitely. (MTV-929)
The migration fails to build the Persistent Volume Claim (PVC) if the destination storage class does not have a configured storage profile. The forklift-controller raises an error message without a clear reason for failing to create a PVC. (MTV-928)
For a complete list of all known issues in this release, see the list of Known Issues in Jira.
Resolved issues
This release has the following resolved issues:
Versions of the package jsrsasign before 11.0.0, used in earlier releases of Forklift, are vulnerable to Observable Discrepancy in the RSA PKCS1.5 or RSA-OAEP decryption process. This discrepancy means an attacker could decrypt ciphertexts by exploiting this vulnerability. However, exploiting it requires the attacker to have access to a large number of ciphertexts encrypted with the same key. This issue has been resolved in Forklift 2.5.5 by upgrading the package jsrsasign to version 11.0.0.
For more information, see CVE-2024-21484.
A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of Forklift, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.
This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-44487 (Rapid Reset Attack) and CVE-2023-39325 (Rapid Reset Attack).
A flaw was found in the Gin-Gonic Gin Web Framework, used by Forklift. The filename parameter of the Context.FileAttachment function was not properly sanitized, which could allow a remote attacker to bypass security restrictions caused by improper input validation. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.
This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.
For more information, see CVE-2023-29401 (Gin-Gonic Gin Web Framework) and CVE-2023-26125.
A flaw was found in the GraphQL package, in versions from 16.3.0 and before 16.8.1. This flaw means Forklift versions before Forklift 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)
This issue has been resolved in Forklift 2.5.2. It is advised to update to this version of Forklift or later.
For more information, see CVE-2023-26144.
A flaw was found in the otelhttp handler
of OpenTelemetry-Go. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to a memory leak caused by http.user_agent
and http.method
having unbound cardinality, which could allow a remote, unauthenticated attacker to exhaust the server’s memory by sending many malicious requests, affecting the availability. (MTV-795)
This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.
For more information, see CVE-2023-45142.
A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to QUIC connections not setting an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. With the fix, connections now consistently reject messages larger than 65KiB in size. (MTV-708)
This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.
For more information, see CVE-2023-39322.
A flaw was found in Golang. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable to processing an incomplete post-handshake message for a QUIC connection, which causes a panic. (MTV-693)
This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.
For more information, see CVE-2023-39321.
A flaw was found in the Golang html/template
package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable, as the html/template
package did not properly handle occurrences of <script
, <!--
, and </script
within JavaScript literals in <script>
contexts. This flaw could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped, which could be leveraged to perform an XSS
attack. (MTV-693)
This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.
For more information, see CVE-2023-39319.
A flaw was found in the Golang html/template
package used in Forklift. This flaw means Forklift versions before Forklift 2.5.3 are vulnerable as the html/template
package did not properly handle HTML-like "<!--" and "-->" comment tokens, nor hashbang "#!"
comment tokens. This flaw could cause the template parser to improperly interpret the contents of <script>
contexts, causing actions to be improperly escaped, which could be leveraged to perform an XSS
attack. (MTV-693)
This issue has been resolved in Forklift 2.5.3. It is advised to update to this version of Forklift or later.
For more information, see CVE-2023-39318.
In earlier releases of Forklift 2.3, the log files downloaded from the UI could contain logs that are related to an earlier migration plan. (MTV-783)
This issue has been resolved in Forklift 2.5.3.
In earlier releases of Forklift 2.3, the size of disks that are extended in RHV was not adequately monitored. This resulted in the inability to migrate virtual machines with extended disks from a RHV provider. (MTV-830)
This issue has been resolved in Forklift 2.5.3.
In earlier releases of Forklift 2.3, the filesystem overhead for new persistent volumes was hard-coded to 10%. The overhead was insufficient for certain filesystem types, resulting in failures during cold-migrations from oVirt and OSP to the cluster where Forklift is deployed. In other filesystem types, the hard-coded overhead was too high, resulting in excessive storage consumption.
In Forklift 2.5.3, the filesystem overhead is no longer hard-coded and can be configured. If your migration allocates persistent volumes without CDI, you can adjust the filesystem overhead by adding the following setting and value to the spec portion of the forklift-controller CR:
spec:
  controller_filesystem_overhead: <percentage> (1)
1 | The percentage of overhead. If this setting is not added, the default value of 10% is used. This setting is valid only if the storageclass is filesystem . (MTV-699) |
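In context, the setting might look as follows in the ForkliftController CR (a sketch; the namespace and the value of 5 are illustrative assumptions):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv        # example namespace
spec:
  # Filesystem overhead for new persistent volumes, as a percentage.
  # If omitted, the default of 10 is used; valid only when the
  # storage class is of type filesystem.
  controller_filesystem_overhead: 5
```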
In earlier releases of Forklift, the create and update provider forms could have presented stale data.
This issue is resolved in Forklift 2.3: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)
In earlier releases of Forklift, the Migration Controller service did not automatically delete snapshots that were created during a migration of source virtual machines in OpenStack.
This issue is resolved in Forklift 2.3: all the snapshots created during the migration are removed after the migration has been completed. (MTV-620)
In earlier releases of Forklift, the Migration Controller
service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.
This issue is resolved in Forklift 2.3: the snapshots generated during migration are removed after a successful migration, while the original snapshots are retained. (MTV-349)
In earlier releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine
rejected the snapshot creation or disk transfer operation.
This issue is resolved in Forklift 2.3: the cutover operation is triggered, but it is not performed at that time because the VM is locked. Once the precopy operation completes, the cutover operation is performed. (MTV-686)
In earlier releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because it could not trigger the snapshot creation.
This issue is resolved in Forklift 2.3: warm migration does not fail when an operation that locks the VM is performed in oVirt; the migration starts when the VM is unlocked. (MTV-687)
In earlier releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.
This issue is resolved in Forklift 2.3: PVCs and PVs are deleted when the migrated VM is deleted. (MTV-492)
In earlier releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.
This issue is resolved in Forklift 2.3: PVCs are deleted when the migration plan is archived and deleted. (MTV-493)
In earlier releases of Forklift, VMs with multiple disks that were migrated might not have been able to boot on the target OKD cluster.
This issue is resolved in Forklift 2.3: VMs with multiple disks that are migrated can boot on the target OKD cluster. (MTV-433)
In Forklift releases 2.4.0-2.5.3, cold migrations from vSphere to the local cluster on which Forklift was deployed did not take a specified transfer network into account. This issue is resolved in Forklift 2.5.4. (MTV-846)
In Forklift 2.5.6, the virt-v2v arguments include --root first
, which mitigates an issue with multi-boot VMs where the pod fails. This is a fix for a regression that was introduced in Forklift 2.4, in which the --root argument was dropped. (MTV-987)
In earlier releases of Forklift 2.3, populator pods were always restarted on failure. This made it difficult to gather the logs from the failed pods. In Forklift 2.5.3, the number of restarts of populator pods is limited to three. After the third and final restart, the populator pod remains in the failed state, so its logs can be gathered easily by must-gather, and the forklift-controller recognizes that this step has failed. (MTV-818)
A vulnerability found in the Node.js Package Manager (npm) IP package can allow an attacker to obtain sensitive information and gain access to normally inaccessible resources. (MTV-941)
This issue has been resolved in Forklift 2.5.6.
For more information, see CVE-2023-42282.
A flaw was found in the versions of the Golang net/http/internal
package that were used in earlier releases of Forklift. This flaw could allow a malicious user to send an HTTP request and cause the receiver to read more bytes from the network than are in the body (up to 1 GiB), causing the receiver to fail reading the response, possibly leading to a Denial of Service (DoS). This issue has been resolved in Forklift 2.5.6.
For more information, see CVE-2023-39326.
For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.
Upgrade notes
It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.3.
When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of deployment forklift-controller
is immutable. Workaround: Remove the custom resource forklift-controller
of type ForkliftController
from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin
pod runs to load the upgraded Forklift web console. (MTV-518)
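The workaround might be carried out as follows (a sketch; the CR name and namespace are assumptions — use the values from your installation, and recreate the CR from your original manifest rather than from a dump that contains a resourceVersion):

```shell
# Remove the ForkliftController CR from the installed namespace.
oc delete forkliftcontroller forklift-controller -n openshift-mtv

# Recreate it from the original manifest.
oc create -f forklift-controller.yaml

# Refresh the OKD console once this pod is running.
oc get pods -n openshift-mtv | grep forklift-console-plugin
```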
Forklift 2.4
Migrate virtual machines (VMs) from VMware vSphere or oVirt or OpenStack to KubeVirt with Forklift.
The release notes describe technical changes, new features and enhancements, and known issues.
Technical changes
This release has the following technical changes:
Disk images are no longer converted using virt-v2v when migrating from oVirt. This change speeds up migrations and also enables the migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)
Disk transfers use ovirt-imageio
client (ovirt-img) instead of the Containerized Data Importer (CDI) when migrating from RHV to the local OpenShift Container Platform cluster, accelerating the migration.
When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.
The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.
You must update the StorageProfile
resource with accessModes
and volumeMode
for non-provisioner storage classes such as NFS.
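For example, for an NFS-backed storage class, the update might be sketched as follows (the storage class name is an example; claimPropertySets is assumed to be the relevant CDI StorageProfile field):

```shell
# Set accessModes and volumeMode on the StorageProfile that matches
# the non-provisioner storage class (here named "nfs").
oc patch storageprofile nfs --type=merge -p \
  '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteMany"], "volumeMode": "Filesystem"}]}}'
```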
Previous versions of Forklift supported only using VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:
-
If you are migrating to OCP 4.12 or earlier, use VDDK version 7.
-
If you are migrating to OCP 4.13 or later, use VDDK version 8.
New features and enhancements
This release has the following features and improvements:
Forklift now supports migrations with OpenStack as a source provider. This feature is provided as a Technology Preview and only supports cold migrations.
The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration
to the navigation bar. It is implemented in version 2.4, disabling the old UI. You can enable the old UI by setting feature_ui: true
in the ForkliftController CR. (MTV-427)
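For example, the relevant fragment of the ForkliftController CR might look like this (a sketch showing only the field in question):

```yaml
# ForkliftController CR fragment: re-enable the old Forklift UI.
spec:
  feature_ui: true
```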
A Skip certificate validation option was added to the VMware and oVirt providers. If selected, the provider’s certificate is not validated, and the UI does not ask you to specify a CA certificate.
Only the third-party certificate needs to be specified when defining an oVirt provider that is set with such a certificate, instead of the Manager CA certificate.
Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)
Known issues
This release has the following known issues:
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)
If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)
vSphere only: Migrations from oVirt and OpenStack don’t fail, but the encryption key may be missing on the target OCP cluster.
The Migration Controller service does not automatically delete snapshots that are created during the migration for source virtual machines in OpenStack. Workaround: The snapshots can be removed manually on OpenStack.
The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Snapshots can be removed from oVirt instead. (MTV-349)
When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.
Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)
When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.
When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)
When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)
A migrated VM with multiple disks might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)
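In KubeVirt, the boot order can be set per disk in the VirtualMachine CR; a minimal sketch (disk names are illustrative):

```yaml
# VirtualMachine CR fragment: boot from the disk that holds the OS first.
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              bootOrder: 1      # bootable disk
              disk:
                bus: virtio
            - name: datadisk
              bootOrder: 2
              disk:
                bus: virtio
```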
Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.
When migrating VMs that are installed with RHEL 9 as the guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)
When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of deployment forklift-controller
is immutable. Workaround: remove the custom resource forklift-controller
of type ForkliftController
from the installed namespace, and recreate it. The user needs to refresh the OCP Console once the forklift-console-plugin
pod runs to load the upgraded Forklift web console. (MTV-518)
Resolved issues
This release has the following resolved issues:
A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.
This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.
For more information, see CVE-2023-44487 (Rapid Reset Attack) and CVE-2023-39325 (Rapid Reset Attack).
The automatic renaming of VMs during migration to fit RFC 1123 has been improved. This feature, introduced in version 2.3.4, is enhanced to cover more special cases. (MTV-212)
If a user specifies an incorrect password for an oVirt provider, the user is no longer locked out of oVirt. An error is returned when the oVirt Manager is accessible while the provider is being added. If the oVirt Manager is inaccessible, the provider is added, but no further attempt is made after failing due to incorrect credentials. (MTV-324)
Previously, the cluster-admin
role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)
Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)
The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)
Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.
The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)
Forklift 2.3
You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.
The release notes describe technical changes, new features and enhancements, and known issues.
Technical changes
This release has the following technical changes:
In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider
CR for VMware migrations.
You must update the StorageProfile
resource with accessModes
and volumeMode
for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.
New features and enhancements
This release has the following features and improvements:
You can use warm migration to migrate VMs from both VMware and oVirt.
VMware users do not have to have full cluster-admin
privileges to perform a VM migration. The minimal sufficient set of user privileges has been established and documented.
Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.
Known issues
This release has the following known issues:
When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)
The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)
If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)
The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
Deleting a migration plan does not remove temporary resources such as importer
pods, conversion
pods, config maps, secrets, failed VMs and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.
The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)
If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)
The problem occurs for both vSphere and oVirt migrations.
Possible workaround: Delete the browser cache or restart the browser. (BZ#2143191)
Forklift 2.2
You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.
The release notes describe technical changes, new features and enhancements, and known issues.
Technical changes
This release has the following technical changes:
You can set the time interval between snapshots taken during the precopy stage of warm migration.
New features and enhancements
This release has the following features and improvements:
You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory
service and written in Rego, the Open Policy Agent native query language.
You can download logs for a migration plan or a migrated VM by using the Forklift web console.
You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run as a new migration plan.
You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.
Known issues
This release has the following known issues:
Certain Validation
service issues, which are marked as Critical
and display the assessment text, The VM will not be migrated
, do not block migration. (BZ#2025977)
The following Validation
service assessments do not block migration:
Assessment | Result |
---|---|
The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported). | The migrated VM will have a virtio disk if the source interface is not recognized. |
The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported). | The migrated VM will have a virtio NIC if the source interface is not recognized. |
The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization. | The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly. |
One or more of the VM’s disks has an illegal or locked status condition. | The migration will proceed but the disk transfer is likely to fail. |
The VM has a disk with a storage type other than | The migration will proceed but the disk transfer is likely to fail. |
The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization. | The migration will proceed but the disk transfer is likely to fail. |
The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization. | The migrated VM will not have USB devices. |
The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization. | The migrated VM will not have a watchdog device. |
The VM’s status is not | The migration will proceed but it might hang if the VM cannot be powered off. |
The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
If a resource does not exist, for example, if the virt-launcher
pod does not exist because the migrated VM is powered off, its log is unavailable.
The following error appears in the missing resource’s current.log
file when it is downloaded from the web console or created with the must-gather
tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
(BZ#2023260)
Retaining the importer
pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)
As a temporary workaround, the importer
pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer
pod log is not retained after warm migration is complete. You can only view the importer
pod log by using the oc logs -f <cdi-importer_pod>
command during the precopy stage.
This issue only affects the importer
pod log and warm migration. Cold migration and the virt-v2v
logs are not affected.
Deleting a migration plan does not remove temporary resources such as importer
pods, conversion
pods, config maps, secrets, failed VMs and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.
The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)
If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console, and the migration plan cannot be edited or duplicated. (BZ#1986020)
If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)
If you delete a target VirtualMachine CR during the Convert image to kubevirt step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)
Forklift 2.1
You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.
The release notes describe new features and enhancements, known issues, and technical changes.
Technical changes
The VMware Virtual Disk Development Kit (VDDK) SDK image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.
New features and enhancements
This release adds the following features and improvements.
You can perform a cold migration of VMs from oVirt.
You can create migration hooks to run Ansible playbooks or custom code before or after migration.
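For example, a post-migration hook playbook that installs the QEMU guest agent (a workaround mentioned in the known issues) might be sketched as follows; the use of the generic package module assumes the guest has a supported package manager:

```yaml
# Example Ansible playbook for a post-migration hook:
# install and start the QEMU guest agent on the migrated VM.
- hosts: all
  become: true
  tasks:
    - name: Install the QEMU guest agent
      ansible.builtin.package:
        name: qemu-guest-agent
        state: present
    - name: Enable and start the agent
      ansible.builtin.service:
        name: qemu-guest-agent
        state: started
        enabled: true
```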
You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.
You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.
Known issues
The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)
The cause of this problem might be one of the following conditions:
-
The storage class does not exist on the target cluster.
-
The VDDK image has not been added to the
HyperConverged
custom resource. -
The VM does not have a disk.
-
The VM disk is locked.
-
The VM time zone is not set to UTC.
-
The VM is configured for a USB device.
To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.
To determine the cause:
-
Click Workloads → Virtualization in the OKD web console.
-
Click the Virtual Machines tab.
-
Select a virtual machine to open the Virtual Machine Overview screen.
-
Click Status to view the status of the virtual machine.
The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time
after first assessing the potential impact on the workload. (BZ#1993259)
If an oVirt resource UUID is used in a Host
, NetworkMap
, StorageMap
, or Plan
custom resource (CR), a "Provider not found" error is displayed.
You must use the resource name. (BZ#1994037)
If an oVirt resource name is used in a NetworkMap
, StorageMap
, or Plan
custom resource (CR) and if the same resource name exists in another data center, the Plan
CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.
In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)
Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)
Forklift 2.0
You can migrate virtual machines (VMs) from VMware vSphere with Forklift.
The release notes describe new features and enhancements, known issues, and technical changes.
New features and enhancements
This release adds the following features and improvements.
Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.
You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.
You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.
The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.
The validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/. |
Known issues
This section describes known issues and mitigations.
The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
If the network map remains in a NotReady
state and the NetworkMap
manifest displays a Destination network not found
error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)
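A minimal network attachment definition might look like this sketch (the name, namespace, and bridge configuration are example values; the name must match the destination network referenced by the network map):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-destination-network
  namespace: my-namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "my-destination-network",
      "type": "bridge",
      "bridge": "br1"
    }
```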
Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)
You can do one of the following to mitigate this issue:
-
Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.
-
Increase the snapshot interval in the
vm-import-controller-config
config map to 720 minutes:

$ kubectl patch configmap/vm-import-controller-config \
    -n openshift-cnv \
    -p '{"data": {"warmImport.intervalMinutes": "720"}}'