Installing and using Forklift 2.8
- About Forklift
- Forklift cold migration and warm migration introduction
- Prerequisites
- Installing and configuring the Forklift Operator
- Migrating virtual machines by using the OKD web console
- Migrating virtual machines from VMware vSphere
- Migrating virtual machines from oVirt
- Migrating virtual machines from OpenStack
- Migrating virtual machines from OVA
- Migrating virtual machines from KubeVirt
- Migrating virtual machines from the command line
- Permissions needed by non-administrators to work with migration plan components
- Migrating virtual machines
- Migrating from a VMware vSphere source provider
- Migrating from an oVirt source provider
- Migrating from an OpenStack source provider
- Migrating from an Open Virtual Appliance (OVA) source provider
- Migrating from a Red Hat KubeVirt source provider
- Advanced migration options
- Upgrading Forklift
- Uninstalling Forklift
- Forklift performance recommendations
- Ensure fast storage and network speeds
- Ensure fast datastore read speeds to ensure efficient and performant migrations
- Endpoint types
- Set ESXi hosts BIOS profile and ESXi Host Power Management for High Performance
- Avoid additional network load on VMware networks
- Control maximum concurrent disk migrations per ESXi host
- Migrations are completed faster when migrating multiple VMs concurrently
- Migrations complete faster using multiple hosts
- Multiple migration plans compared to a single large migration plan
- Maximum values tested for cold migrations
- Warm migration recommendations
- Maximum values tested for warm migrations
- Recommendations for migrating VMs with large disks
- Increasing asynchronous I/O (AIO) sizes and buffer counts for NBD transport mode
- Troubleshooting
- Telemetry
- Additional information
About Forklift
You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:
-
VMware vSphere
-
oVirt
-
OpenStack
-
Open Virtual Appliances (OVAs) that were created by VMware vSphere
-
Remote KubeVirt clusters
Forklift cold migration and warm migration introduction
Cold migration is when a powered-off virtual machine (VM) is migrated to a separate host. The VM is powered off, and there is no need for common shared storage.
Warm migration is when a powered-on VM is migrated to a separate host. The state of the source VM is cloned to the destination host.
-
Create an initial snapshot of running VM disks.
-
Copy the first snapshot to the target. This is a full-disk transfer that copies the largest amount of data, so it takes the most time to complete.
-
Copy deltas, that is, only the data that has changed since the last snapshot was taken. This takes less time to complete.
-
Create a new snapshot.
-
Copy the delta between the previous snapshot and the new snapshot.
-
Schedule the next snapshot. The interval is configurable and defaults to one hour after the last snapshot finished.
-
An arbitrary number of deltas can be copied.
-
Scheduled time to finalize warm migration
-
Shut down the source VM.
-
Copy the final snapshot delta to the target.
-
Continue in the same way as a cold migration:
-
Guest conversion
-
Starting target VM (optional)
-
Migration speed comparison
-
The observed speeds for the warm migration single disk transfer and disk conversion are approximately the same as for the cold migration.
-
The benefit of warm migration is that the transfer of the snapshot is happening in the background while the VM is powered on.
-
By default, a snapshot is taken every 60 minutes. If a VM changes substantially between snapshots, more data needs to be transferred than in a cold migration, where the VM is powered off.
-
The cutover time, meaning the shutdown of the VM and the last snapshot transfer, depends on how much the VM has changed since the last snapshot.
About cold and warm migration
Forklift supports cold migration from:
-
VMware vSphere
-
oVirt
-
OpenStack
-
Remote KubeVirt clusters
Forklift supports warm migration from VMware vSphere and from oVirt.
Cold migration
Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.
VMware only: In cold migrations, in situations in which a package manager cannot be used during the migration, Forklift does not install the guest agent on the migrated VMs. In that case, use your preferred automated or manual procedure to install the agent.
Warm migration
Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.
Then the VMs are shut down and the remaining data is copied during the cutover stage.
The VMs are not shut down during the precopy stage.
The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.
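For example, a minimal sketch of shortening the interval, assuming the relevant ForkliftController CR parameter is controller_precopy_interval and is expressed in minutes (this parameter name is an assumption; see Configuring the Forklift Operator for the settings you can change):
spec:
  controller_precopy_interval: 30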
You must enable CBT for each source VM and each VM disk. A VM can support up to 28 CBT snapshots. If the source VM already has too many CBT snapshots, the migration might fail.
The precopy stage runs until the cutover stage is started manually or is scheduled to start.
The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.
You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.
Advantages and disadvantages of cold and warm migrations
The table that follows offers a more detailed description of the advantages and disadvantages of cold migration and warm migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the OKD platform on which you installed Forklift:
 | Cold migration | Warm migration |
---|---|---|
Duration | Correlates to the amount of data on the disks. Each block is copied once. | Correlates to the amount of data on the disks and VM utilization. Blocks may be copied multiple times. |
Fail fast | Convert and then transfer. Each VM is converted to be compatible with OKD and, if the conversion is successful, the VM is transferred. If a VM cannot be converted, the migration fails immediately. | Transfer and then convert. For each VM, Forklift creates a snapshot and transfers it to OKD. When you start the cutover, Forklift creates the last snapshot, transfers it, and then converts the VM. |
Tools | virt-v2v | Containerized Data Importer (CDI), a persistent storage management add-on, and virt-v2v |
Data transferred | Approximate sum of all disks | Approximate sum of all disks and VM utilization |
VM downtime | High: The VMs are shut down, and the disks are transferred. | Low: Disks are transferred in the background. The VMs are shut down during the cutover stage, and the remaining data is migrated. Data stored in RAM is not migrated. |
Parallelism | Disks are transferred sequentially for each VM. For remote migration, disks are transferred in parallel. [1] | Disks are transferred in parallel by different pods. |
Connection use | Keeps the connection to the Source only during the disk transfer. | Keeps the connection to the Source during the disk transfer, but the connection is released between snapshots. |
Tools | Forklift only. | Forklift and CDI from KubeVirt. |
The preceding table describes the situation for VMs that are running because the main benefit of warm migration is the reduced downtime, and there is no reason to initiate warm migration for VMs that are down. However, performing warm migration for VMs that are down is not the same as cold migration, even when Forklift uses the same tools.
When importing from VMware, there are additional factors that impact the migration speed, such as limits related to ESXi, vSphere, or VDDK.
Conclusions
Based on the preceding information, we can draw the following conclusions about cold migration vs. warm migration:
-
The shortest downtime of VMs can be achieved by using warm migration.
-
The shortest duration for VMs with a large amount of data on a single disk can be achieved by using cold migration.
-
The shortest duration for VMs with a large amount of data that is spread evenly across multiple disks can be achieved by using warm migration.
Prerequisites
Review the following prerequisites to ensure that your environment is prepared for migration.
Software requirements
Forklift has software requirements for all providers as well as specific software requirements per provider.
Software requirements for all providers
You must install compatible versions of OKD and KubeVirt.
Storage support and default modes
Forklift uses the following default volume and access modes for supported storage.
Provisioner | Volume mode | Access mode |
---|---|---|
kubernetes.io/aws-ebs | Block | ReadWriteOnce |
kubernetes.io/azure-disk | Block | ReadWriteOnce |
kubernetes.io/azure-file | Filesystem | ReadWriteMany |
kubernetes.io/cinder | Block | ReadWriteOnce |
kubernetes.io/gce-pd | Block | ReadWriteOnce |
kubernetes.io/hostpath-provisioner | Filesystem | ReadWriteOnce |
manila.csi.openstack.org | Filesystem | ReadWriteMany |
openshift-storage.cephfs.csi.ceph.com | Filesystem | ReadWriteMany |
openshift-storage.rbd.csi.ceph.com | Block | ReadWriteOnce |
kubernetes.io/rbd | Block | ReadWriteOnce |
kubernetes.io/vsphere-volume | Block | ReadWriteOnce |
If the KubeVirt storage does not support dynamic provisioning, you must apply the appropriate settings in the storage profile.
See Enabling a statically-provisioned storage class for details on editing the storage profile.
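Because the exact settings depend on your storage, the following is only a hedged sketch of editing a storage profile to set the volume and access modes explicitly; the CDI StorageProfile fields and the values shown are assumptions for illustration, not taken from this document:
$ oc edit storageprofile <storage_class_name>
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <storage_class_name>
spec:
  claimPropertySets:
    - accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem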
If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that is assumed by CDI does not completely include the space reserved for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.
When you migrate from OpenStack, or when you run a cold migration from oVirt to the OKD cluster that Forklift is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead. If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer will fail due to lack of space. In that case, increase the file system overhead. In some cases, however, you might want to decrease the file system overhead to reduce storage consumption. You can change the file system overhead by changing the corresponding parameter in the ForkliftController custom resource (CR), as shown in the sketch that follows.
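A minimal sketch of adjusting the overhead in the ForkliftController CR, assuming the parameter is named controller_filesystem_overhead and takes a percentage (both the name and the value are assumptions for illustration):
spec:
  controller_filesystem_overhead: 15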
Network prerequisites
The following prerequisites apply to all migrations:
-
IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.
-
The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.
-
If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.
Ports
The firewalls must enable traffic over the following ports:
Port | Protocol | Source | Destination | Purpose |
---|---|---|---|---|
443 | TCP | OpenShift nodes | VMware vCenter | VMware provider inventory; disk transfer authentication |
443 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer authentication |
902 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer data copy |
Port | Protocol | Source | Destination | Purpose |
---|---|---|---|---|
443 | TCP | OpenShift nodes | oVirt Engine | oVirt provider inventory; disk transfer authentication |
443 | TCP | OpenShift nodes | oVirt hosts | Disk transfer authentication |
54322 | TCP | OpenShift nodes | oVirt hosts | Disk transfer data copy |
Source virtual machine prerequisites
The following prerequisites apply to all migrations:
-
ISO images and CD-ROMs are unmounted.
-
Each NIC contains an IPv4 address, an IPv6 address, or both.
-
The operating system of each VM is certified and supported as a guest operating system for conversions.
You can check that the operating system is supported by referring to the table in Converting virtual machines from other hypervisors to KVM with virt-v2v. See the columns of the table that refer to RHEL 8 hosts and RHEL 9 hosts. |
-
VMs that you want to migrate with MTV 2.6.z run on RHEL 8.
-
VMs that you want to migrate with MTV 2.7.z run on RHEL 9.
-
The name of a VM must not contain a period (.). Forklift changes any period in a VM name to a dash (-).
-
The name of a VM must not be the same as any other VM in the KubeVirt environment.
Forklift has limited support for the migration of dual-boot operating system VMs.
In the case of a dual-boot operating system VM, Forklift will try to convert the first boot disk it finds. Alternatively the root device can be specified in the Forklift UI.
Forklift automatically assigns a new name to a VM that does not comply with the rules.
Forklift makes the following changes when it automatically generates a new VM name:
-
Excluded characters are removed.
-
Uppercase letters are switched to lowercase letters.
-
Any underscore (_) is changed to a dash (-).
This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules.
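The renaming rules for periods, underscores, and uppercase letters can be illustrated with a small shell sketch (this is not Forklift code, and the VM name is hypothetical):
$ echo "Web_Server.Prod01" | tr '[:upper:]' '[:lower:]' | tr '_.' '--'
web-server-prod01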
-
Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider.
Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)
Microsoft Windows virtual machines (VMs) that use the Measured Boot feature cannot be migrated because Measured Boot is a mechanism to prevent any kind of device change, by checking each start-up component, including the firmware, all the way to the boot driver.
The alternative to migration is to re-create the Windows VM directly on KubeVirt.
oVirt prerequisites
The following prerequisites apply to oVirt migrations:
-
To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; however, any other administrator or superuser permissions will also work.
You must keep the |
-
To migrate virtual machines:
-
You must have one of the following:
-
oVirt admin permissions. These permissions allow you to migrate any virtual machine in the system.
-
DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.
-
-
You must use a compatible version of oVirt.
-
You must have the Engine CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Engine Apache CA certificate.
You can obtain the Engine CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.
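You can also fetch the certificate from the command line, for example with curl; the output file name here is arbitrary:
$ curl -k -o ovirt-engine-ca.pem 'https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'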
-
If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.
OpenStack prerequisites
The following prerequisites apply to OpenStack migrations:
-
You must use a compatible version of OpenStack.
Additional authentication methods for migrations with OpenStack source providers
Forklift versions 2.6 and later support the following authentication methods for migrations with OpenStack source providers in addition to the standard username and password credential set:
-
Token authentication
-
Application credential authentication
You can use these methods to migrate virtual machines with OpenStack source providers using the command-line interface (CLI) the same way you migrate other virtual machines, except for how you prepare the Secret manifest.
Using token authentication with an OpenStack source provider
You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.
Forklift supports both of the following types of token authentication:
-
Token with user ID
-
Token with user name
For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.
You have an OpenStack account.
-
In the dashboard of the OpenStack web console, click Project > API Access.
-
Expand Download OpenStack RC file and click OpenStack RC file.
The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:
OS_AUTH_URL
OS_PROJECT_ID
OS_PROJECT_NAME
OS_DOMAIN_NAME
OS_USERNAME
-
To get the data needed for token authentication, run the following command:
$ openstack token issue
The output, referred to here as
<openstack_token_output>
, includes thetoken
,userID
, andprojectID
that you need for authentication using a token with user ID. -
Create a
Secret
manifest similar to the following:-
For authentication using a token with user ID:
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-tokenid
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: token
  token: <token_from_openstack_token_output>
  projectID: <projectID_from_openstack_token_output>
  userID: <userID_from_openstack_token_output>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
-
For authentication using a token with user name:
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-tokenname
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: token
  token: <token_from_openstack_token_output>
  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
  username: <OS_USERNAME_from_openstack_rc_file>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
-
Using application credential authentication with an OpenStack source provider
You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.
Forklift supports both of the following types of application credential authentication:
-
Application credential ID
-
Application credential name
For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.
You have an OpenStack account.
-
In the dashboard of the OpenStack web console, click Project > API Access.
-
Expand Download OpenStack RC file and click OpenStack RC file.
The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:
OS_AUTH_URL
OS_PROJECT_ID
OS_PROJECT_NAME
OS_DOMAIN_NAME
OS_USERNAME
-
To get the data needed for application credential authentication, run the following command:
$ openstack application credential create --role member --role reader --secret redhat forklift
The output, referred to here as <openstack_credential_output>, includes:
-
The id and secret that you need for authentication using an application credential ID
-
The name and secret that you need for authentication using an application credential name
-
-
Create a Secret manifest similar to the following:
-
For authentication using the application credential ID:
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-appid
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: applicationcredential
  applicationCredentialID: <id_from_openstack_credential_output>
  applicationCredentialSecret: <secret_from_openstack_credential_output>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
-
For authentication using the application credential name:
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-appname
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: applicationcredential
  applicationCredentialName: <name_from_openstack_credential_output>
  applicationCredentialSecret: <secret_from_openstack_credential_output>
  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
  username: <OS_USERNAME_from_openstack_rc_file>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
-
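Whichever Secret you create, you then reference it from an OpenStack Provider manifest. The following is a hedged sketch only; the spec field names (type, url, secret) are assumptions about the forklift.konveyor.io/v1beta1 Provider API and are not taken from this document:
cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: openstack-provider
  namespace: openshift-mtv
spec:
  type: openstack
  url: <OS_AUTH_URL_from_openstack_rc_file>
  secret:
    name: openstack-secret-tokenid
    namespace: openshift-mtv
EOF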
VMware prerequisites
It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.
Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN. |
The following prerequisites apply to VMware migrations:
-
You must use a compatible version of VMware vSphere.
-
You must be logged in as a user with at least the minimal set of VMware privileges.
-
To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.
-
The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.
-
If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
-
If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.
-
It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.
In case of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, migration will fail.
Neither Forklift nor OpenShift Virtualization supports conversion of Btrfs for migrating VMs from VMware.
VMware privileges
The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.
Privilege | Description | ||
---|---|---|---|
|
|||
|
Allows powering off a powered-on virtual machine. This operation powers down the guest operating system. |
||
|
Allows powering on a powered-off virtual machine and resuming a suspended virtual machine. |
||
|
Allows managing a virtual machine by the VMware VIX API. |
||
|
|||
|
Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting. |
||
|
Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
||
|
Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting. |
||
|
Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
||
|
Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
||
|
Allows cloning of a template. |
||
|
Allows cloning of an existing virtual machine and allocation of resources. |
||
|
Allows creation of a new template from a virtual machine. |
||
|
Allows customization of a virtual machine’s guest operating system without moving the virtual machine. |
||
|
Allows deployment of a virtual machine from a template. |
||
|
Allows marking an existing powered-off virtual machine as a template. |
||
|
Allows marking an existing template as a virtual machine. |
||
|
Allows creation, modification, or deletion of customization specifications. |
||
|
Allows promote operations on a virtual machine’s disks. |
||
|
Allows reading a customization specification. |
||
|
|||
|
Allows creation of a snapshot from the virtual machine’s current state. |
||
|
Allows removal of a snapshot from the snapshot history. |
||
|
|||
|
Allows exploring the contents of a datastore. |
||
|
Allows performing low-level file operations - read, write, delete, and rename - in a datastore. |
||
|
|||
|
Allows verification of the validity of a session. |
||
|
|||
|
Allows decryption of an encrypted virtual machine. |
||
|
Allows access to encrypted resources. |
Create a role in VMware with the privileges described in the preceding table and then apply this role to the Inventory section, as described in Creating a VMware role to grant MTV privileges.
Creating a VMware role to grant MTV privileges
You can create a role in VMware to grant privileges for Forklift and then grant those privileges to users with that role.
The procedure that follows explains how to do this in general. For detailed instructions, see VMware documentation.
-
In the vCenter Server UI, create a role that includes the set of privileges described in the table in VMware prerequisites.
-
In the vSphere inventory UI, grant privileges for users with this role to the appropriate vSphere logical objects at one of the following levels:
-
At the user or group level: Assign privileges to the appropriate logical objects in the data center and use the Propagate to child objects option.
-
At the object level: Apply the same role individually to all the relevant vSphere logical objects involved in the migration, for example, hosts, vSphere clusters, data centers, or networks.
-
Creating a VDDK image
It is strongly recommended that you use Forklift with the VMware Virtual Disk Development Kit (VDDK) SDK when transferring virtual disks from VMware vSphere.
Creating a VDDK image, although optional, is highly recommended. Using Forklift without VDDK is not recommended and could result in significantly lower migration speeds. |
To make use of this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.
The VDDK package contains symbolic links, therefore, the procedure of creating a VDDK image must be performed on a file system that preserves symbolic links (symlinks).
Storing the VDDK image in a public registry might violate the VMware license terms. |
-
podman installed.
-
You are working on a file system that preserves symbolic links (symlinks).
-
If you are using an external registry, KubeVirt must be able to access it.
-
Create and navigate to a temporary directory:
$ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
-
In a browser, navigate to the VMware VDDK version 8 download page.
-
Select version 8.0.1 and click Download.
In order to migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page. |
-
Save the VDDK archive file in the temporary directory.
-
Extract the VDDK archive:
$ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
-
Create a Dockerfile:
$ cat > Dockerfile <<EOF
FROM registry.access.redhat.com/ubi8/ubi-minimal
USER 1001
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
EOF
-
Build the VDDK image:
$ podman build . -t <registry_route_or_server_path>/vddk:<tag>
-
Push the VDDK image to the registry:
$ podman push <registry_route_or_server_path>/vddk:<tag>
-
Ensure that the image is accessible to your KubeVirt environment.
Increasing the NFC service memory of an ESXi host
If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.
-
Log in to the ESXi host as root.
-
Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:
...
<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <maxMemory>1000000000</maxMemory>
  <maxStreamMemory>10485760</maxStreamMemory>
</nfcsvc>
...
-
Restart hostd:
# /etc/init.d/hostd restart
You do not need to reboot the host.
VDDK validator containers need requests and limits
If you have the cluster or project resource quotas set, you must ensure that you have a sufficient quota for the Forklift pods to perform the migration.
You can see the defaults, which you can override in the ForkliftController custom resource (CR), listed as follows. If necessary, you can adjust these defaults; a sketch of such an override follows the list below.
These settings depend heavily on your environment. If many migrations run at once and the quotas are not high enough for them, the migrations can fail. This is also related to the MAX_VM_INFLIGHT
setting, which determines how many VMs or disks are migrated at once.
-
This affects both cold and warm migrations:
For cold migration, it is likely to be more resource intensive as it performs the disk copy. For warm migration, you could potentially reduce the requests.
-
virt_v2v_container_limits_cpu:
4000m
-
virt_v2v_container_limits_memory:
8Gi
-
virt_v2v_container_requests_cpu:
1000m
-
virt_v2v_container_requests_memory:
1Gi
Cold and warm migration using virt-v2v can be resource-intensive. For more details, see Compute power and RAM.
-
-
This affects any migrations with hooks:
-
hooks_container_limits_cpu:
1000m
-
hooks_container_limits_memory:
1Gi
-
hooks_container_requests_cpu:
100m
-
hooks_container_requests_memory:
150Mi
-
-
This affects any OVA migrations:
-
ova_container_limits_cpu:
1000m
-
ova_container_limits_memory:
1Gi
-
ova_container_requests_cpu:
100m
-
ova_container_requests_memory:
150Mi
-
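For example, a minimal sketch of overriding the virt-v2v container defaults in the ForkliftController CR, using the key names listed above (the values are illustrative, not recommendations):
spec:
  virt_v2v_container_limits_cpu: 4000m
  virt_v2v_container_limits_memory: 8Gi
  virt_v2v_container_requests_cpu: 2000m
  virt_v2v_container_requests_memory: 2Gi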
Open Virtual Appliance (OVA) prerequisites
The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:
-
All OVA files are created by VMware vSphere.
Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere. |
-
The OVA files are in one or more folders under an NFS shared directory in one of the following structures:
-
In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.
The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.
When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.
For example, if the NFS share is /nfs, then:
The folder /nfs is scanned.
The folder /nfs/subfolder1 is scanned.
But /nfs/subfolder1/subfolder2 is not scanned.
-
In extracted OVF packages.
When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.
For example, if the NFS share is /nfs, then:
The OVF file /nfs/vm.ovf is scanned.
The OVF file /nfs/subfolder1/vm.ovf is scanned.
The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
But the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.
-
Software compatibility guidelines
You must install compatible software versions.
Forklift | OKD | KubeVirt | VMware vSphere | oVirt | OpenStack |
---|---|---|---|---|---|
2.8 | 4.18, 4.17, 4.16 | 4.18, 4.17, 4.16 | 6.5 or later | 4.4 SP1 or later | 16.1 or later |
Migration from oVirt 4.3
Forklift was tested only with oVirt 4.4 SP1. Migration from oVirt 4.3 has not been tested with Forklift 2.8. While not supported, basic migrations from oVirt 4.3 are expected to work; migrations from oVirt 4.3.11 were tested with Forklift 2.3 and may work in practice in many environments using Forklift 2.8. Nevertheless, it is recommended that you upgrade oVirt Manager (RHVM) to the supported version listed above before migrating to KubeVirt.
OpenShift Operator Life Cycles
For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.
Installing and configuring the Forklift Operator
You can install the Forklift Operator by using the OKD web console or the command-line interface (CLI).
In Forklift version 2.4 and later, the Forklift Operator includes the Forklift plugin for the OKD web console.
After you install the Forklift Operator by using either the OKD web console or the CLI, you can configure the Operator.
Installing the Forklift Operator by using the OKD web console
You can install the Forklift Operator by using the OKD web console.
-
OKD 4.18 or later installed.
-
KubeVirt Operator installed on an OpenShift migration target cluster.
-
You must be logged in as a user with
cluster-admin
permissions.
-
In the OKD web console, click Operators → OperatorHub.
-
Use the Filter by keyword field to search for forklift-operator.
The Forklift Operator is a Community Operator. Red Hat does not support Community Operators.
-
Click Migration Toolkit for Virtualization Operator and then click Install.
-
Click Create ForkliftController when the button becomes active.
-
Click Create.
Your ForkliftController appears in the list that is displayed.
-
Click Workloads → Pods to verify that the Forklift pods are running.
-
Click Operators → Installed Operators to verify that Migration Toolkit for Virtualization Operator appears in the konveyor-forklift project with the status Succeeded.
When the plugin is ready you will be prompted to reload the page. The Migration menu item is automatically added to the navigation bar, displayed on the left of the OKD web console.
Installing the Forklift Operator by using the command-line interface
You can install the Forklift Operator by using the command-line interface (CLI).
-
OKD 4.18 or later installed.
-
KubeVirt Operator installed on an OpenShift migration target cluster.
-
You must be logged in as a user with
cluster-admin
permissions.
-
Create the konveyor-forklift project:
$ cat << EOF | kubectl apply -f - apiVersion: project.openshift.io/v1 kind: Project metadata: name: konveyor-forklift EOF
-
Create an
OperatorGroup
CR calledmigration
:$ cat << EOF | kubectl apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: migration namespace: konveyor-forklift spec: targetNamespaces: - konveyor-forklift EOF
-
Create a
Subscription
CR for the Operator:$ cat << EOF | kubectl apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: forklift-operator namespace: konveyor-forklift spec: channel: development installPlanApproval: Automatic name: forklift-operator source: community-operators sourceNamespace: openshift-marketplace startingCSV: "konveyor-forklift-operator.2.8.3" EOF
-
Create a
ForkliftController
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: ForkliftController metadata: name: forklift-controller namespace: konveyor-forklift spec: olm_managed: true EOF
-
Verify that the Forklift pods are running:
$ kubectl get pods -n konveyor-forklift
Example outputNAME READY STATUS RESTARTS AGE forklift-api-bb45b8db4-cpzlg 1/1 Running 0 6m34s forklift-controller-7649db6845-zd25p 2/2 Running 0 6m38s forklift-must-gather-api-78fb4bcdf6-h2r4m 1/1 Running 0 6m28s forklift-operator-59c87cfbdc-pmkfc 1/1 Running 0 28m forklift-ui-plugin-5c5564f6d6-zpd85 1/1 Running 0 6m24s forklift-validation-7d84c74c6f-fj9xg 1/1 Running 0 6m30s forklift-volume-populator-controller-85d5cb64b6-mrlmc 1/1 Running 0 6m36s
Configuring the Forklift Operator
You can configure all of the following settings of the Forklift Operator by modifying the ForkliftController
CR, or in the Settings section of the Overview page, unless otherwise indicated.
-
Maximum number of virtual machines (VMs) or disks per plan that Forklift can migrate simultaneously.
-
How long
must gather
reports are retained before being automatically deleted. -
CPU limit allocated to the main controller container.
-
Memory limit allocated to the main controller container.
-
Interval at which a new snapshot is requested before initiating a warm migration.
-
Frequency with which the system checks the status of snapshot creation or removal during a warm migration.
-
Percentage of space in persistent volumes allocated as file system overhead when the
storageclass
isfilesystem
(ForkliftController
CR only). -
Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any
storageclass
that is block-based (ForkliftController
CR only). -
Configuration map of operating systems to preferences for vSphere source providers (
ForkliftController
CR only). -
Configuration map of operating systems to preferences for oVirt (oVirt) source providers (
ForkliftController
CR only). -
Whether to retain importer pods so that the Containerized Data Importer (CDI) does not delete them during migration (
ForkliftController
CR only).
The procedure for configuring these settings using the user interface is presented in Configuring MTV settings. The procedure for configuring these settings by modifying the ForkliftController
CR is presented following.
-
Change a parameter’s value in the
spec
section of theForkliftController
CR by adding the parameter and value as follows:spec: parameter: value (1)
1 Parameters that you can configure using the CLI are shown in the table that follows, along with a description of each parameter and its default value.
Parameter | Description | Default value |
---|---|---|
|
Varies with provider as follows:
|
|
|
The duration in hours for retaining |
|
|
The CPU limit allocated to the main controller container. |
|
|
The memory limit allocated to the main controller container. |
|
|
The interval in minutes at which a new snapshot is requested before initiating a warm migration. |
|
|
The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration. |
|
|
Percentage of space in persistent volumes allocated as file system overhead when the
|
|
|
Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any
|
|
|
Configuration map for vSphere source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed. To see the list of preferences in your KubeVirt environment, open the OpenShift web console and click Virtualization > Preferences. Add values to the configuration map when this parameter has the default value,
|
|
|
Configuration map for oVirt source providers. This configuration map maps the operating system of the incoming VM to a KubeVirt preference name. This configuration map needs to be in the namespace where the Forklift Operator is deployed. To see the list of preferences in your KubeVirt environment, open the OpenShift web console and click Virtualization → Preferences. You can add values to the configuration map when this parameter has the default value,
|
|
|
Whether to retain importer pods so that the Containerized Data Importer (CDI) does not delete them during migration.
|
|
Configuring the controller_max_vm_inflight parameter
The value of the controller_max_vm_inflight parameter, which is shown in the UI as Max concurrent virtual machine migrations, varies by the source provider of the migration, as follows; a sketch of setting the parameter follows this list.
-
For all migrations except OVA or VMware migrations, the parameter specifies the maximum number of disks that Forklift can transfer simultaneously. In these migrations, Forklift migrates the disks in parallel. This means that if the combined number of disks that you want to migrate is greater than the value of the setting, additional disks must wait until the queue is free, without regard for whether a VM has finished migrating.
For example, if the value of the parameter is 15, and VM A has 5 disks, VM B has 5 disks, and VM C has 6 disks, all the disks except for the 16th disk start migrating at the same time. Once any of them has migrated, the 16th disk can be migrated, even though not all the disks on VM A and the disks on VM B have finished migrating.
-
For OVA migrations, the parameter specifies the maximum number of VMs that Forklift can migrate simultaneously, meaning that all additional disks must wait until at least one VM has been completely migrated.
For example, if the value of the parameter is 2, and VM A has 5 disks, VM B has 5 disks, and VM C has 6 disks, all the disks on VM C must wait to migrate until either all the disks on VM A or on VM B finish migrating.
-
For VMware migrations, the parameter has the following meanings:
-
Cold migration:
-
To local KubeVirt: VMs for each ESXi host that can migrate simultaneously.
-
To remote KubeVirt: Disks for each ESXi host that can migrate simultaneously.
-
-
Warm migration: Disks for each ESXi host that can migrate simultaneously.
-
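For example, a minimal sketch of raising this value in the ForkliftController CR (the default shown in the UI is 20; the value here is illustrative):
spec:
  controller_max_vm_inflight: 30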
Migrating virtual machines by using the OKD web console
Use the Forklift user interface to migrate virtual machines (VMs). It is located in the Virtualization section of the OKD web console.
The MTV user interface
The Forklift user interface is integrated into the OKD web console.
In the left-hand panel, you can choose a page related to a component of the migration progress, for example, Providers for virtualization, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.
-
If you are an administrator, you can see all projects.
-
If you are a non-administrator, you can see only the projects that you have permissions to work with.
The MTV Overview page
The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.
If you have Administrator privileges, you can access the Overview page by clicking Migration → Overview in the OKD web console.
The Overview page has 3 tabs:
-
Overview
-
YAML
-
Metrics
Overview tab
The Overview tab lets you see:
-
Operator: The namespace on which the Forklift Operator is deployed and the status of the Operator
-
Pods: The name, status, and creation time of each pod that was deployed by the Forklift Operator
-
Conditions: Status of the Forklift Operator:
-
Failure: Last failure.
False
indicates no failure since deployment. -
Running: Whether the Operator is currently running and waiting for the next reconciliation.
-
Successful: Last successful reconciliation.
-
YAML tab
The YAML tab shows the ForkliftController custom resource that defines the operation of the Forklift Operator. You can modify the custom resource from this tab.
Metrics tab
The Metrics tab lets you see:
-
Migrations: The number of migrations performed using Forklift:
-
Total
-
Running
-
Failed
-
Succeeded
-
Canceled
-
-
Virtual Machine Migrations: The number of VMs migrated using Forklift:
-
Total
-
Running
-
Failed
-
Succeeded
-
Canceled
-
Since a single migration might involve many virtual machines, the number of migrations performed using Forklift might vary significantly from the number of virtual machines that have been migrated using Forklift. |
-
Chart showing the number of running, failed, and succeeded migrations performed using Forklift for each of the last 7 days
-
Chart showing the number of running, failed, and succeeded virtual machine migrations performed using Forklift for each of the last 7 days
Configuring MTV settings
If you have Administrator privileges, you can access the Overview page and change the following settings in it:
Setting | Description | Default value |
---|---|---|
Max concurrent virtual machine migrations | Varies with provider. See Configuring the controller_max_vm_inflight parameter. | 20 |
Must gather cleanup after (hours) | The duration for retaining must gather reports before they are automatically deleted. | Disabled |
Controller main container CPU limit | The CPU limit allocated to the main controller container. | 500 m |
Controller main container Memory limit | The memory limit allocated to the main controller container. | 800 Mi |
Precopy interval (minutes) | The interval at which a new snapshot is requested before initiating a warm migration. | 60 |
Snapshot polling interval (seconds) | The frequency with which the system checks the status of snapshot creation or removal during a warm migration. | 10 |
-
In the OKD web console, click Migration > Overview. The Settings list is on the right side of the page.
-
In the Settings list, click the Edit icon of the setting you want to change.
-
Choose a setting from the list.
-
Click Save.
Migrating virtual machines using the MTV user interface
Use the Forklift user interface to migrate VMs from the following providers:
-
VMware vSphere
-
oVirt
-
OpenStack
-
Open Virtual Appliances (OVAs) that were created by VMware vSphere
-
KubeVirt clusters
For all migrations, you specify the source provider, the destination provider, and the migration plan. The specific procedures vary per provider.
You must ensure that all prerequisites are met. VMware only: You must have the minimal set of VMware privileges. VMware only: Creating a VMware Virtual Disk Development Kit (VDDK) image will increase migration speed. |
Migrating virtual machines from VMware vSphere
Adding a VMware vSphere source provider
You can migrate VMware vSphere VMs from VMware vCenter or from a VMware ESX/ESXi server. In Forklift versions 2.6 and later, you can migrate directly from an ESX/ESXi server, without going through vCenter, by specifying the SDK endpoint of that ESX/ESXi server.
EMS enforcement is disabled for migrations with VMware vSphere source providers in order to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements. Therefore, users should consider whether migrations from vSphere source providers risk their compliance with FIPS. Supported versions of vSphere are specified in Software compatibility guidelines. |
Anti-virus software can cause migrations to fail. It is strongly recommended to remove such software from source VMs before you start a migration. |
Forklift does not support migrating VMware Non-Volatile Memory Express (NVMe) disks. |
If you set a maximum transmission unit (MTU) value other than the default in your migration network, you must also set the same value in the OKD transfer network that you use. For more information about the OKD transfer network, see Creating a migration plan.
-
It is strongly recommended to create a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters. A VDDK image accelerates migration and reduces the risk of a plan failing. If you are not using VDDK and a plan fails, then please retry with VDDK installed. For more information, see Creating a VDDK image.
Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN. |
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click vSphere.
-
Specify the following fields:
Provider details
-
Provider resource name: Name of the source provider.
-
Endpoint type: Select the vSphere provider endpoint type. Options: vCenter or ESXi. You can migrate virtual machines from vCenter, an ESX/ESXi server that is not managed by vCenter, or from an ESX/ESXi server that is managed by vCenter but does not go through vCenter.
-
URL: URL of the SDK endpoint of the vCenter on which the source VM is mounted. Ensure that the URL includes the
sdk
path, usually/sdk
. For example,https://vCenter-host-example.com/sdk
. If a certificate for FQDN is specified, the value of this field needs to match the FQDN in the certificate. -
VDDK init image:
VDDKInitImage
path. It is strongly recommended to create a VDDK init image to accelerate migrations. For more information, see Creating a VDDK image.
Provider credentials
-
Username: vCenter user or ESXi user. For example,
user@vsphere.local
. -
Password: vCenter user password or ESXi user password.
-
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation : Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
It might take a few minutes for the provider to have the status
Ready
. -
Optional: Add access to the UI of the provider:
-
On the Providers page, click the provider.
The Provider details page opens.
-
Click the Edit icon under External UI web link.
-
Enter the link and click Save.
If you do not enter a link, Forklift attempts to calculate the correct link.
-
If Forklift succeeds, the hyperlink of the field points to the calculated link.
-
If Forklift does not succeed, the field remains empty.
-
-
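As an alternative to the preceding web-console procedure, a vSphere source provider can also be described as Secret and Provider manifests and applied from the CLI. The following is only a hedged sketch: the credential keys (user, password, insecureSkipVerify) and the settings.vddkInitImage field are assumptions about the forklift.konveyor.io/v1beta1 API, not taken from this document, and the names and URL are placeholders:
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-credentials
  namespace: openshift-mtv
  labels:
    createdForProviderType: vsphere
type: Opaque
stringData:
  user: user@vsphere.local
  password: <password>
  insecureSkipVerify: "true"
  url: https://vCenter-host-example.com/sdk
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vCenter-host-example.com/sdk
  secret:
    name: vsphere-credentials
    namespace: openshift-mtv
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
EOF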
Selecting a migration network for a VMware source provider
You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.
Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.
You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere. |
If you set a maximum transmission unit (MTU) value other than the default in your migration network, you must also set the same value in the OKD transfer network that you use. For more information about the OKD transfer network, see Creating a migration plan.
-
The migration network must have sufficient throughput, minimum speed of 10 Gbps, for disk transfer.
-
The migration network must be accessible to the KubeVirt nodes through the default gateway.
The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.
-
The migration network should have jumbo frames enabled.
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click the host number in the Hosts column beside a provider to view a list of hosts.
-
Select one or more hosts and click Select migration network.
-
Specify the following fields:
-
Network: Network name
-
ESXi host admin username: For example,
root
-
ESXi host admin password: Password
-
-
Click Save.
-
Verify that the status of each host is Ready.
If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.
Adding a KubeVirt destination provider
You can use a Red Hat KubeVirt provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.
You can also add another KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the cluster where you installed Forklift.
You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.
-
You must have a KubeVirt service account token with
cluster-admin
privileges.
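A hedged sketch of creating such a token with a dedicated service account (the names, namespace, and token duration are illustrative):
$ oc create serviceaccount forklift-sa -n konveyor-forklift
$ oc adm policy add-cluster-role-to-user cluster-admin -z forklift-sa -n konveyor-forklift
$ oc create token forklift-sa -n konveyor-forklift --duration=24h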
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click KubeVirt.
-
Specify the following fields:
-
Provider resource name: Name of the source provider
-
URL: URL of the endpoint of the API server
-
Service account bearer token: Token for a service account with
cluster-admin
privilegesIf both URL and Service account bearer token are left blank, the local OKD cluster is used.
-
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation : Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Selecting a migration network for a KubeVirt provider
You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan. |
-
In the OKD web console, click Migration > Providers for virtualization.
-
Click the KubeVirt provider whose migration network you want to change.
When the Providers detail page opens:
-
Click the Networks tab.
-
Click Set default transfer network.
-
Select a default transfer network from the list and click Save.
-
Configure a gateway in the network used for Forklift migrations by completing the following steps:
-
In the OKD web console, click Networking > NetworkAttachmentDefinitions.
-
Select the appropriate default transfer network NAD.
-
Click the YAML tab.
-
Add
forklift.konveyor.io/route
to the metadata:annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> (1)
1 The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
-
Click Save.
-
Creating a migration plan
Use the OKD web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. This prevents concurrent disk access to the storage the guest points to. |
A plan cannot contain more than 500 VMs or 500 disks. |
-
In the OKD web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
-
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
-
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
-
Enter the Plan name.
-
To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
-
To add either a Network map or a Storage map, click the + sign and add a mapping.
-
Click Create migration plan.
Forklift validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.
-
Check the following items in the Settings section of the page:
-
Migration type: Type of migration. By default, Forklift sets the migration type to
cold
.-
For a warm migration, do the following:
-
Click the Edit icon.
-
Toggle the Whether this is a warm migration switch.
-
Click Save.
-
-
-
Transfer Network: The network used to transfer the VMs to KubeVirt. This is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace.
-
To edit the transfer network, do the following:
-
Click the Edit icon.
-
Select a different transfer network from the list.
-
Click Save.
-
-
Optional: To configure an OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.
-
Optional: To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
-
-
Target namespace: Destination namespace for all the migrated VMs. By default, the destination namespace is the current or active namespace.
-
To edit the namespace, do the following:
-
Click the Edit icon.
-
Select a different target namespace from the list in the window that opens.
-
Click Save.
-
-
-
Preserve static IPs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP during migration.
-
To preserve static IPs, do the following:
-
Click the Edit icon.
-
Toggle the Whether to preserve the static IPs switch.
-
Click Save.
Forklift then issues a warning message about any VMs whose vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere. This causes the vNIC properties to be reported to Forklift.
-
-
-
Disk decryption passphrases: For disks encrypted using Linux Unified Key Setup (LUKS).
-
To enter a list of decryption passphrases for LUKS-encrypted devices, do the following:
-
Click the Edit icon.
-
Enter the passphrases.
-
Click Save.
You do not need to enter the passphrases in a specific order. For each LUKS-encrypted device, Forklift tries each passphrase until one unlocks the device.
-
-
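If you are unsure which devices are LUKS-encrypted, or whether a passphrase is correct, you can check from inside the source guest before migration. The following is a minimal sketch; the device name is an example.
# Show the LUKS header of a candidate device.
cryptsetup luksDump /dev/sdb1
# Verify a passphrase without mapping the device; the command prompts for the passphrase.
cryptsetup open --test-passphrase /dev/sdb1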
-
Root device: Applies to multi-boot VM migrations only. By default, Forklift uses the first bootable device detected as the root device.
-
To specify a different root device, do the following:
-
Click the Edit icon next to Root device.
-
Either select a device from the list or enter a device in the text box.
-
Click Save.
Forklift uses the following format for the disk location:
/dev/sd<disk_identifier><disk_partition>
For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format is /dev/sdb2.
After you enter the boot device, click Save.
If the conversion fails because the boot device provided is incorrect, you can find the correct information by checking the conversion pod logs. A short sketch of identifying the root device from inside the guest follows this item.
-
-
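The following is a minimal sketch of one way to identify the root device from inside the source guest, so that you can enter the correct value here.
# List block devices with their mount points.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# Print the device that backs the root file system, for example /dev/sdb2.
findmnt -n -o SOURCE /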
-
Shared disks: Applies to cold migrations only. Shared disks are disks that are attached to multiple VMs and that use the multi-writer option. These characteristics make shared disks difficult to migrate. By default, Forklift does not migrate shared disks.
Migrating shared disks might slow down the migration process.
-
To migrate shared disks in the migration plan, do the following:
-
Click the Edit icon.
-
Toggle the Migrate shared disks switch.
-
Click Save.
-
-
-
Optional: PVC name template: Specifies a template for the name of the persistent volume claim (PVC) for the VMs in your plan.
The template follows the Go template syntax and has access to the following variables:
-
.VmName
: Name of the VM -
.PlanName
: Name of the migration plan -
.DiskIndex
: Initial volume index of the disk -
.RootDiskIndex
: Index of the root disk
Examples
-
"{{.VmName}}-disk-{{.DiskIndex}}"
-
"{{if eq .DiskIndex .RootDiskIndex}}root{{else}}data{{end}}-{{.DiskIndex}}"
Variable names cannot exceed 63 characters.
-
To specify a PVC name template for all the VMs in your plan, do the following:
-
Click the Edit icon.
-
Click Enter custom naming template.
-
Enter the template according to the instructions.
-
Click Save.
-
-
To specify a PVC name template only for specific VMs, do the following:
-
Click the Virtual Machines tab.
-
Select the desired VMs.
-
Click the Options menu
of the VM.
-
Select Edit PVC name template.
-
Enter the template according to the instructions.
-
Click Save.
Changes you make in the Virtual Machines tab override any changes in the Plan details page.
-
-
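After the migration completes, you can confirm that the naming template was applied by listing the PVCs in the target namespace. A minimal sketch follows; the template, VM name, and namespace are examples.
# With the template "{{.VmName}}-disk-{{.DiskIndex}}" and a VM named web-01 that has
# two disks, you would expect PVC names such as web-01-disk-0 and web-01-disk-1.
oc get pvc -n <target-namespace>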
-
Optional: Volume name template: Specifies a template for the volume interface name for the VMs in your plan.
The template follows the Go template syntax and has access to the following variables:
-
.PVCName
: Name of the PVC mounted to the VM using this volume -
.VolumeIndex
: Sequential index of the volume interface (0-based)
Examples
-
"disk-{{.VolumeIndex}}"
-
"pvc-{{.PVCName}}"
Variable names cannot exceed 63 characters.
-
To specify a volume name template for all the VMs in your plan, do the following:
-
Click the Edit icon.
-
Click Enter custom naming template.
-
Enter the template according to the instructions.
-
Click Save.
-
-
To specify a different volume name template only for specific VMs, do the following:
-
Click the Virtual Machines tab.
-
Select the desired VMs.
-
Click the Options menu
of the VM.
-
Select Edit Volume name template.
-
Enter the template according to the instructions.
-
Click Save.
Changes you make in the Virtual Machines tab override any changes in the Plan details page.
-
-
-
Optional: Network name template: Specifies a template for the network interface name for the VMs in your plan.
The template follows the Go template syntax and has access to the following variables:
-
.NetworkName:
If the target network ismultus
, add the name of the Multus Network Attachment Definition. Otherwise, leave this variable empty. -
.NetworkNamespace
: If the target network ismultus
, add the namespace where the Multus Network Attachment Definition is located. -
.NetworkType
: Network type. Options:multus
orpod
. -
.NetworkIndex
: Sequential index of the network interface (0-based).
Examples
-
"net-{{.NetworkIndex}}"
-
"{{if eq .NetworkType "pod"}}pod{{else}}multus-{{.NetworkIndex}}{{end}}"
Variable names cannot exceed 63 characters.
-
To specify a network name template for all the VMs in your plan, do the following:
-
Click the Edit icon.
-
Click Enter custom naming template.
-
Enter the template according to the instructions.
-
Click Save.
-
-
To specify a different network name template only for specific VMs, do the following:
-
Click the Virtual Machines tab.
-
Select the desired VMs.
-
Click the Options menu
of the VM.
-
Select Edit Network name template.
-
Enter the template according to the instructions.
-
Click Save.
Changes you make in the Virtual Machines tab override any changes in the Plan details page.
-
-
-
-
If your plan is valid, you can do one of the following:
-
Run the plan now by clicking Start migration.
-
Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
-
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the names of the network interfaces change, and the static IP configuration for the VM no longer works.
Running a migration plan
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration > Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Warm migration only:
-
The precopy stage starts.
-
Click Cutover to complete the migration.
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual Machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2v
is not involved, so no pod is required. -
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
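You can also collect the same logs with the oc client, which is useful for archiving them or when the console is not available. The following is a minimal sketch; the namespace and pod name are placeholders.
# List the pods created for the plan in its target namespace and locate the conversion pod.
oc get pods -n <target-namespace>
# Stream the logs of the conversion pod while the migration runs, or view them afterwards.
oc logs -n <target-namespace> <conversion-pod-name> -f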
Migration plan options
On the Plans for virtualization page of the OKD web console, you can click the Options menu beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Active only if relevant.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
-
Set cutover: Set the date and time for a cutover.
-
Remove cutover: Cancel a scheduled cutover. Active only if relevant.
-
-
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive Plan is irreversible. However, you can duplicate an archived plan.
-
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.
The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Plans for virtualization.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Migrating virtual machines from oVirt
Adding an oVirt source provider
You can add an oVirt source provider by using the OKD web console.
-
Engine CA certificate, unless it was replaced by a third-party certificate, in which case specify the Engine Apache CA certificate.
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click Red Hat Virtualization.
-
Specify the following fields:
-
Provider resource name: Name of the source provider.
-
URL: URL of the API endpoint of the oVirt Manager (RHVM) on which the source VM is mounted. Ensure that the URL includes the path leading to the RHVM API server, usually
/ovirt-engine/api
. For example,https://rhv-host-example.com/ovirt-engine/api
. -
Username: Username.
-
Password: Password.
-
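Before you continue, you can confirm that the URL, username, and password are correct by querying the oVirt Manager API directly. The following is a minimal sketch; the host name matches the example above, and the -k option skips certificate validation for this test only.
# Query the API endpoint with the same credentials you entered above.
curl -k -u 'admin@internal:<password>' https://rhv-host-example.com/ovirt-engine/api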
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation: Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
-
Optional: Add access to the UI of the provider:
-
On the Providers page, click the provider.
The Provider details page opens.
-
Click the Edit icon under External UI web link.
-
Enter the link and click Save.
If you do not enter a link, Forklift attempts to calculate the correct link.
-
If Forklift succeeds, the hyperlink of the field points to the calculated link.
-
If Forklift does not succeed, the field remains empty.
-
-
Adding a KubeVirt destination provider
You can use a Red Hat KubeVirt provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.
You can also add another KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the cluster where you installed Forklift.
You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.
-
You must have a KubeVirt service account token with
cluster-admin
privileges.
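The following is a minimal sketch of one way to create such a token with the oc client on the target cluster. The service account name, namespace, and token duration are examples.
# Create a service account and grant it cluster-admin privileges.
oc create serviceaccount forklift-remote -n default
oc adm policy add-cluster-role-to-user cluster-admin -z forklift-remote -n default
# Request a bearer token to enter in the Service account bearer token field.
oc create token forklift-remote -n default --duration=24h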
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click KubeVirt.
-
Specify the following fields:
-
Provider resource name: Name of the source provider
-
URL: URL of the endpoint of the API server
-
Service account bearer token: Token for a service account with
cluster-admin
privileges.
If both URL and Service account bearer token are left blank, the local OKD cluster is used.
-
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation: Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Selecting a migration network for a KubeVirt provider
You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
If you do not select a migration network, the default migration network is the pod
network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
-
In the OKD web console, click Migration > Providers for virtualization.
-
Click the KubeVirt provider whose migration network you want to change.
When the Providers detail page opens:
-
Click the Networks tab.
-
Click Set default transfer network.
-
Select a default transfer network from the list and click Save.
-
Configure a gateway in the network used for Forklift migrations by completing the following steps:
-
In the OKD web console, click Networking > NetworkAttachmentDefinitions.
-
Select the appropriate default transfer network NAD.
-
Click the YAML tab.
-
Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> (1)
1 The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
-
Click Save.
-
Creating a migration plan
Use the OKD web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. This prevents concurrent disk access to the storage the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
-
In the OKD web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
-
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
-
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
-
Enter the Plan name.
-
To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
-
To add either a Network map or a Storage map, click the + sign and add a mapping.
-
Click Create migration plan.
Forklift validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.
-
Check the following items in the Settings section of the page:
-
Migration type: Type of migration. By default, Forklift sets the migration type to
cold
.-
For a warm migration, do the following:
-
Click the Edit icon.
-
Toggle the Whether this is a warm migration switch.
-
Click Save.
-
-
-
Transfer Network: The network used to transfer the VMs to KubeVirt. This is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace.
-
To edit the transfer network, do the following:
-
Click the Edit icon.
-
Select a different transfer network from the list.
-
Click Save.
-
-
Optional: To configure an OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.
-
Optional: To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
-
-
Target namespace: Destination namespace for all the migrated VMs. By default, the destination namespace is the current or active namespace.
-
To edit the namespace, do the following:
-
Click the Edit icon.
-
Select a different target namespace from the list in the window that opens.
-
Click Save.
-
-
-
Preserving the CPU model of VMs that are migrated from oVirt: Generally, the CPU model (type) for oVirt VMs is set at the cluster level. However, the CPU model can be set at the VM level, which is called a custom CPU model.
By default, Forklift sets the CPU model on the destination cluster as follows: Forklift preserves custom CPU settings for VMs that have them. For VMs without custom CPU settings, Forklift does not set the CPU model. Instead, the CPU model is later set by KubeVirt.
-
To preserve the cluster-level CPU model of your oVirt VMs, do the following:
-
Click the Edit icon.
-
Toggle the Whether to preserve the CPU model switch.
-
Click Save.
-
-
-
-
If your plan is valid, you can do one of the following:
-
Run the plan now by clicking Start migration.
-
Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
-
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
Running a migration plan
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration > Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Warm migration only:
-
The precopy stage starts.
-
Click Cutover to complete the migration.
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual Machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2v
is not involved, so no pod is required. -
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
Migration plan options
On the Plans for virtualization page of the OKD web console, you can click the Options menu beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Active only if relevant.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
-
Set cutover: Set the date and time for a cutover.
-
Remove cutover: Cancel a scheduled cutover. Active only if relevant.
-
-
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive Plan is irreversible. However, you can duplicate an archived plan.
-
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.
The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Plans for virtualization.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Migrating virtual machines from OpenStack
Adding an OpenStack source provider
You can add an OpenStack source provider by using the OKD web console.
When you migrate an image-based VM from an OpenStack provider, a snapshot is created for the image that is attached to the source VM and the data from the snapshot is copied over to the target VM. This means that the target VM will have the same state as that of the source VM at the time the snapshot was created.
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click OpenStack.
-
Specify the following fields:
-
Provider resource name: Name of the source provider.
-
URL: URL of the OpenStack Identity (Keystone) endpoint. For example,
http://controller:5000/v3
. -
Authentication type: Choose one of the following methods of authentication and supply the information related to your choice. For example, if you choose Application credential ID as the authentication type, the Application credential ID and the Application credential secret fields become active, and you need to supply the ID and the secret.
-
Application credential ID
-
Application credential ID: OpenStack application credential ID
-
Application credential secret: OpenStack application credential secret
-
-
Application credential name
-
Application credential name: OpenStack application credential name
-
Application credential secret: OpenStack application credential secret
-
Username: OpenStack username
-
Domain: OpenStack domain name
-
-
Token with user ID
-
Token: OpenStack token
-
User ID: OpenStack user ID
-
Project ID: OpenStack project ID
-
-
Token with user Name
-
Token: OpenStack token
-
Username: OpenStack username
-
Project: OpenStack project
-
Domain name: OpenStack domain name
-
-
Password
-
Username: OpenStack username
-
Password: OpenStack password
-
Project: OpenStack project
-
Domain: OpenStack domain name
-
-
-
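If you plan to use one of the application credential authentication types, you can create the credential with the OpenStack CLI. The following is a minimal sketch; it assumes the python-openstackclient is installed and a cloud is configured, and the credential name is an example.
# Create an application credential; the output includes the ID and the secret to enter
# in the Application credential ID and Application credential secret fields.
openstack application credential create forklift-migration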
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation: Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
-
Optional: Add access to the UI of the provider:
-
On the Providers page, click the provider.
The Provider details page opens.
-
Click the Edit icon under External UI web link.
-
Enter the link and click Save.
If you do not enter a link, Forklift attempts to calculate the correct link.
-
If Forklift succeeds, the hyperlink of the field points to the calculated link.
-
If Forklift does not succeed, the field remains empty.
-
-
Adding a KubeVirt destination provider
You can use a Red Hat KubeVirt provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.
You can also add another KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the cluster where you installed Forklift.
You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.
-
You must have a KubeVirt service account token with
cluster-admin
privileges.
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click KubeVirt.
-
Specify the following fields:
-
Provider resource name: Name of the source provider
-
URL: URL of the endpoint of the API server
-
Service account bearer token: Token for a service account with
cluster-admin
privileges.
If both URL and Service account bearer token are left blank, the local OKD cluster is used.
-
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation: Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Selecting a migration network for a KubeVirt provider
You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
If you do not select a migration network, the default migration network is the pod
network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
-
In the OKD web console, click Migration > Providers for virtualization.
-
Click the KubeVirt provider whose migration network you want to change.
When the Providers detail page opens:
-
Click the Networks tab.
-
Click Set default transfer network.
-
Select a default transfer network from the list and click Save.
-
Configure a gateway in the network used for Forklift migrations by completing the following steps:
-
In the OKD web console, click Networking > NetworkAttachmentDefinitions.
-
Select the appropriate default transfer network NAD.
-
Click the YAML tab.
-
Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> (1)
1 The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
-
Click Save.
-
Creating a migration plan
Use the OKD web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. This prevents concurrent disk access to the storage the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
-
In the OKD web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
-
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
-
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
-
Enter the Plan name.
-
To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
-
To add either a Network map or a Storage map, click the + sign and add a mapping.
-
Click Create migration plan.
Forklift validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.
-
Check the following items in the Settings section of the page:
-
Transfer Network: The network used to transfer the VMs to KubeVirt. This is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace.
-
To edit the transfer network, do the following:
-
Click the Edit icon.
-
Select a different transfer network from the list.
-
Click Save.
-
-
Optional: To configure an OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.
-
Optional: To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
-
-
Target namespace: Destination namespace for all the migrated VMs. By default, the destination namespace is the current or active namespace.
-
To edit the namespace, do the following:
-
Click the Edit icon.
-
Select a different target namespace from the list in the window that opens.
-
Click Save.
-
-
-
-
If your plan is valid, you can do one of the following:
-
Run the plan now by clicking Start migration.
-
Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
-
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
Running a migration plan
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration > Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual Machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2v
is not involved, so no pod is required. -
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
Migration plan options
On the Plans for virtualization page of the OKD web console, you can click the Options menu beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Active only if relevant.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
-
Set cutover: Set the date and time for a cutover.
-
Remove cutover: Cancel a scheduled cutover. Active only if relevant.
-
-
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive Plan is irreversible. However, you can duplicate an archived plan.
-
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.
The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Plans for virtualization.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Migrating virtual machines from OVA
Adding an Open Virtual Appliance (OVA) source provider
You can add Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the OKD web console.
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click Open Virtual Appliance (OVA).
-
Specify the following fields:
-
Provider resource name: Name of the source provider
-
URL: URL of the NFS file share that serves the OVA
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
An error message might appear that states that an error has occurred. You can ignore this message.
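To confirm that the NFS file share serving the OVA is reachable from your network, you can check it from a Linux host. The following is a minimal sketch; the server name and export path are examples.
# List the exports offered by the NFS server.
showmount -e nfs-server.example.com
# Mount the export temporarily and confirm that the OVA files are visible.
sudo mount -t nfs nfs-server.example.com:/ova /mnt && ls /mnt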
Adding a KubeVirt destination provider
You can use a Red Hat KubeVirt provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.
You can also add another KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the cluster where you installed Forklift.
You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.
-
You must have a KubeVirt service account token with
cluster-admin
privileges.
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click KubeVirt.
-
Specify the following fields:
-
Provider resource name: Name of the source provider
-
URL: URL of the endpoint of the API server
-
Service account bearer token: Token for a service account with
cluster-admin
privileges.
If both URL and Service account bearer token are left blank, the local OKD cluster is used.
-
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation: Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Selecting a migration network for a KubeVirt provider
You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
If you do not select a migration network, the default migration network is the pod
network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
-
In the OKD web console, click Migration > Providers for virtualization.
-
Click the KubeVirt provider whose migration network you want to change.
When the Providers detail page opens:
-
Click the Networks tab.
-
Click Set default transfer network.
-
Select a default transfer network from the list and click Save.
-
Configure a gateway in the network used for Forklift migrations by completing the following steps:
-
In the OKD web console, click Networking > NetworkAttachmentDefinitions.
-
Select the appropriate default transfer network NAD.
-
Click the YAML tab.
-
Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> (1)
1 The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
-
Click Save.
-
Creating a migration plan
Use the OKD web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. This prevents concurrent disk access to the storage the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
-
In the OKD web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
-
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
-
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
-
Enter the Plan name.
-
To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
-
To add either a Network map or a Storage map, click the + sign and add a mapping.
-
Click Create migration plan.
Forklift validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.
-
Check the following items in the Settings section of the page:
-
Transfer Network: The network used to transfer the VMs to KubeVirt. This is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace.
-
To edit the transfer network, do the following:
-
Click the Edit icon.
-
Select a different transfer network from the list.
-
Click Save.
-
-
Optional: To configure an OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.
-
Optional: To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
-
-
Target namespace: Destination namespace for all the migrated VMs. By default, the destination namespace is the current or active namespace.
-
To edit the namespace, do the following:
-
Click the Edit icon.
-
Select a different target namespace from the list in the window that opens.
-
Click Save.
-
-
-
-
If your plan is valid, you can do one of the following:
-
Run the plan now by clicking Start migration.
-
Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
-
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
Running a migration plan
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration > Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual Machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2v
is not involved, so no pod is required. -
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
Migration plan options
On the Plans for virtualization page of the OKD web console, you can click the Options menu beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Active only if relevant.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
-
Set cutover: Set the date and time for a cutover.
-
Remove cutover: Cancel a scheduled cutover. Active only if relevant.
-
-
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive Plan is irreversible. However, you can duplicate an archived plan.
-
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.
The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Plans for virtualization.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Migrating virtual machines from KubeVirt
Adding a Red Hat KubeVirt source provider
You can use a Red Hat KubeVirt provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.
You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.
The OKD cluster version of the source provider must be 4.13 or later. |
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click KubeVirt.
-
Specify the following fields:
-
Provider resource name: Name of the source provider
-
URL: URL of the endpoint of the API server
-
Service account bearer token: Token for a service account with
cluster-admin
privileges. If both URL and Service account bearer token are left blank, the local OKD cluster is used.
-
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation: Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
-
Optional: Add access to the UI of the provider:
-
On the Providers page, click the provider.
The Provider details page opens.
-
Click the Edit icon under External UI web link.
-
Enter the link and click Save.
If you do not enter a link, Forklift attempts to calculate the correct link.
-
If Forklift succeeds, the hyperlink of the field points to the calculated link.
-
If Forklift does not succeed, the field remains empty.
-
-
Adding a KubeVirt destination provider
You can use a Red Hat KubeVirt provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.
You can also add another KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the cluster where you installed Forklift.
You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.
-
You must have a KubeVirt service account token with
cluster-admin
privileges.
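For reference, one way to create such a token is sketched below; the service account name migration-sa and the namespace konveyor-forklift are placeholder assumptions, and oc create token issues a short-lived token, so a longer-lived token secret might be preferable in production:
$ oc create serviceaccount migration-sa -n konveyor-forklift
$ oc adm policy add-cluster-role-to-user cluster-admin -z migration-sa -n konveyor-forklift
$ oc create token migration-sa -n konveyor-forklift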
-
In the OKD web console, click Migration → Providers for virtualization.
-
Click Create Provider.
-
Click KubeVirt.
-
Specify the following fields:
-
Provider resource name: Name of the source provider
-
URL: URL of the endpoint of the API server
-
Service account bearer token: Token for a service account with
cluster-admin
privileges. If both URL and Service account bearer token are left blank, the local OKD cluster is used.
-
-
Choose one of the following options for validating CA certificates:
-
Use a custom CA certificate: Migrate after validating a custom CA certificate.
-
Use the system CA certificate: Migrate after validating the system CA certificate.
-
Skip certificate validation: Migrate without validating a CA certificate.
-
To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
-
To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
-
To skip certificate validation, toggle the Skip certificate validation switch to the right.
-
-
-
Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.
-
Click Fetch certificate from URL. The Verify certificate window opens.
-
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
-
-
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Selecting a migration network for a KubeVirt provider
You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
If you do not select a migration network, the default migration network is the pod
network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan. |
-
In the OKD web console, click Migration > Providers for virtualization.
-
Click the KubeVirt provider whose migration network you want to change.
When the Provider details page opens:
-
Click the Networks tab.
-
Click Set default transfer network.
-
Select a default transfer network from the list and click Save.
-
Configure a gateway in the network used for Forklift migrations by completing the following steps:
-
In the OKD web console, click Networking > NetworkAttachmentDefinitions.
-
Select the appropriate default transfer network NAD.
-
Click the YAML tab.
-
Add
forklift.konveyor.io/route
to the metadata:annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> (1)
1 The NetworkAttachmentDefinition
parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway. -
Click Save.
-
Creating a migration plan
Use the OKD web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. This prevents concurrent disk access to the storage the guest points to. |
A plan cannot contain more than 500 VMs or 500 disks. |
-
In the OKD web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
-
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
-
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
-
Enter the Plan name.
-
To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
-
To add either a Network map or a Storage map, click the + sign and add a mapping.
-
Click Create migration plan.
Forklift validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, Forklift validates the plan again.
-
Check the following items in the Settings section of the page:
-
Transfer Network: The network used to transfer the VMs to KubeVirt. This is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace.
-
To edit the transfer network, do the following:
-
Click the Edit icon.
-
Select a different transfer network from the list.
-
Click Save.
-
-
Optional: To configure an OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.
-
Optional: To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
-
-
Target namespace: Destination namespace for all the migrated VMs. By default, the destination namespace is the current or active namespace.
-
To edit the namespace, do the following:
-
Click the Edit icon.
-
Select a different target namespace from the list in the window that opens.
-
Click Save.
-
-
-
-
If your plan is valid, you can do one of the following:
-
Run the plan now by clicking Start migration.
-
Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
-
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail. |
Running a migration plan
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration > Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
+
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual Machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2v
is not involved, so no pod is required. -
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
Migration plan options
On the Plans for virtualization page of the OKD web console, you can click the Options menu beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Active only if relevant.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
-
Set cutover: Set the date and time for a cutover.
-
Remove cutover: Cancel a scheduled cutover. Active only if relevant.
-
-
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive Plan is irreversible. However, you can duplicate an archived plan.
-
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.
The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Plans for virtualization.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Migrating virtual machines from the command line
You can migrate virtual machines to KubeVirt from the command line.
You must ensure that all prerequisites are met. |
Permissions needed by non-administrators to work with migration plan components
If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).
By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.
For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:
Role | Description |
---|---|
plans.forklift.konveyor.io-v1beta1-view | Can view migration plans but not create, delete, or modify them |
plans.forklift.konveyor.io-v1beta1-edit | Can create, delete, or modify (all parts of plan.spec) individual migration plans |
plans.forklift.konveyor.io-v1beta1-admin | All edit privileges, plus administrative actions on migration plans |
Note that pre-defined cluster roles include a resource (for example, plans
), an API group (for example, forklift.konveyor.io-v1beta1
) and an action (for example, view
, edit
).
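For illustration, a minimal sketch of granting one of these pre-defined cluster roles to a non-administrator in a single namespace by using a RoleBinding; the role name follows the resource, API group, and action pattern described above, and the binding name, user, and namespace are assumptions:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: migration-plan-viewer
  namespace: <namespace>
subjects:
- kind: User
  name: <non_admin_user>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: plans.forklift.konveyor.io-v1beta1-view
  apiGroup: rbac.authorization.k8s.io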
As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:
-
Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
-
Attach providers created by administrators to storage maps, network maps, and migration plans
-
Not be able to create providers or to change system settings
Actions | API group | Resource |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Empty string |
|
Non-administrators need to have the |
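As a sketch of how such a per-namespace permission set might be expressed, the following namespaced Role grants a subset of the capabilities listed above; the resource names and verbs are assumptions based on the forklift.konveyor.io custom resources used elsewhere in this document, not a definitive list:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: forklift-plan-editor
  namespace: <namespace>
rules:
- apiGroups: ["forklift.konveyor.io"]
  resources: ["plans", "migrations", "networkmaps", "storagemaps", "hooks"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["forklift.konveyor.io"]
  resources: ["providers"]
  verbs: ["get", "list", "watch"]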
Migrating virtual machines
You migrate virtual machines (VMs) using the command-line interface (CLI) by creating Forklift custom resources (CRs). The CRs and the migration procedure vary by source provider.
You must specify a name for cluster-scoped CRs. You must specify both a name and a namespace for namespace-scoped CRs. To migrate to or from an OKD cluster that is different from the one the migration plan is defined on, you must have a KubeVirt service account token with cluster-admin privileges. |
Migrating from a VMware vSphere source provider
You can migrate from a VMware vSphere source provider by using the command-line interface (CLI).
Anti-virus software can cause migrations to fail. It is strongly recommended to remove such software from source VMs before you start a migration. |
Forklift does not support migrating VMware Non-Volatile Memory Express (NVMe) disks. |
To migrate virtual machines (VMs) that have shared disks, see Migrating virtual machines with shared disks. |
-
Create a
Secret
manifest for the source provider credentials:$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: (1) - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: vsphere createdForResourceType: providers type: Opaque stringData: user: <user> (2) password: <password> (3) insecureSkipVerify: <"true"/"false"> (4) cacert: | (5) <ca_certificate> url: <api_end_point> (6) EOF
1 The ownerReferences
section is optional.2 Specify the vCenter user or the ESX/ESXi user. 3 Specify the password of the vCenter user or the ESX/ESXi user. 4 Specify "true"
to skip certificate verification, and specify"false"
to verify the certificate. Defaults to"false"
if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.5 When this field is not set and skip certificate verification is disabled, Forklift attempts to use the system CA. 6 Specify the API endpoint URL of the vCenter or the ESX/ESXi, for example, https://<vCenter_host>/sdk
.
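If you need a value for the cacert field and do not have the certificate at hand, one possible way to inspect the certificate chain that the endpoint presents is with openssl; this is a sketch, and you should verify the output before trusting it:
$ openssl s_client -connect <vCenter_host>:443 -showcerts </dev/null 2>/dev/null > vcenter-chain.pem
Copy the issuing CA certificate block (-----BEGIN CERTIFICATE----- through -----END CERTIFICATE-----) from the saved output into the cacert field.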
-
Create a
Provider
manifest for the source provider:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: vsphere url: <api_end_point> (1) settings: vddkInitImage: <VDDK_image> (2) sdkEndpoint: vcenter (3) secret: name: <secret> (4) namespace: <namespace> EOF
1 Specify the URL of the API endpoint, for example, https://<vCenter_host>/sdk
.2 Optional, but it is strongly recommended to create a VDDK image to accelerate migrations. Follow OpenShift documentation to specify the VDDK image you created. 3 Options: vcenter
oresxi
.4 Specify the name of the provider Secret
CR.
-
Create a
Host
manifest:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Host metadata: name: <vmware_host> namespace: <namespace> spec: provider: namespace: <namespace> name: <source_provider> (1) id: <source_host_mor> (2) ipAddress: <source_network_ip> (3) EOF
1 Specify the name of the VMware vSphere Provider
CR.2 Specify the Managed Object Reference (moRef) of the VMware vSphere host. To retrieve the moRef, see Retrieving a VMware vSphere moRef. 3 Specify the IP address of the VMware vSphere migration network.
-
Create a
NetworkMap
manifest to map the source and destination networks:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod (1) source: (2) id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> (3) namespace: <network_attachment_definition_namespace> (4) type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are pod
,multus
, andignored
. Useignored
to avoid attaching VMs to this network for this migration.2 You can use either the id
or thename
parameter to specify the source network. Forid
, specify the VMware vSphere network Managed Object Reference (moRef). To retrieve the moRef, see Retrieving a VMware vSphere moRef.3 Specify a network attachment definition for each additional KubeVirt network. 4 Required only when type
ismultus
. Specify the namespace of the KubeVirt network attachment definition.
-
Create a
StorageMap
manifest to map source and destination storage:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> (1) source: id: <source_datastore> (2) provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are ReadWriteOnce
andReadWriteMany
.2 Specify the VMware vSphere datastore moRef. For example, f2737930-b567-451a-9ceb-2887f6207009
. To retrieve the moRef, see Retrieving a VMware vSphere moRef. -
Optional: Create a
Hook
manifest to run custom code on a VM during the phase specified in thePlan
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/kubev2v/hook-runner serviceAccount:<service account> (1) playbook: | LS0tCi0gbm... (2) EOF
1 Optional: OKD service account. Use the serviceAccount
parameter to modify any cluster resources.2 Base64-encoded Ansible Playbook. If you specify a playbook, the image
must include anansible-runner
.You can use the default
hook-runner
image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
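For reference, one way to produce the base64-encoded playbook value on a Linux host with GNU coreutils (-w0 disables line wrapping; the file name playbook.yml is a placeholder):
$ base64 -w0 playbook.yml
Paste the resulting single-line string as the value of the playbook field.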
-
Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <name_of_transfer_network>
  namespace: <namespace>
  annotations:
    forklift.konveyor.io/route: <IP_address>
-
Create a
Plan
manifest for the migration:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> (1) namespace: <namespace> spec: warm: false (2) provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: (3) network: (4) name: <network_map> (5) namespace: <namespace> storage: (6) name: <storage_map> (7) namespace: <namespace> preserveStaticIPs: (8) networkNameTemplate: <network_interface_template> (9) pvcNameTemplate: <pvc_name_template> (10) pvcNameTemplateUseGenerateName: true (11) targetNamespace: <target_namespace> volumeNameTemplate: <volume_name_template> (12) vms: (13) - id: <source_vm1> (14) - name: <source_vm2> networkNameTemplate: <network_interface_template_for_this_vm> (15) pvcNameTemplate: <pvc_name_template_for_this_vm> (16) volumeNameTemplate: <volume_name_template_for_this_vm> (17) targetName: <target_name> (18) hooks: (19) - hook: namespace: <namespace> name: <hook> (20) step: <step> (21) EOF
1 Specify the name of the Plan
CR.2 Specify whether the migration is warm - true
- or cold -false
. If you specify a warm migration without specifying a value for thecutover
parameter in theMigration
manifest, only the precopy stage will run.3 Specify only one network map and one storage map per plan. 4 Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 5 Specify the name of the NetworkMap
CR.6 Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 7 Specify the name of the StorageMap
CR.8 By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP address linked to the interface name in the guest VM lose their IP address.
To avoid this, setpreserveStaticIPs
totrue
. Forklift issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to Forklift.9 Optional. Specify a template for the network interface name for the VMs in your plan. The template follows the Go template syntax and has access to the following variables: -
.NetworkName:
If the target network ismultus
, add the name of the Multus Network Attachment Definition. Otherwise, leave this variable empty. -
.NetworkNamespace
: If the target network ismultus
, add the namespace where the Multus Network Attachment Definition is located. -
.NetworkType
: Specifies the network type. Options:multus
orpod
. -
.NetworkIndex
: Sequential index of the network interface (0-based).Examples
-
"net-{{.NetworkIndex}}"
-
{{if eq .NetworkType "pod"}}pod{{else}}multus-{{.NetworkIndex}}{{end}}"
Variable names cannot exceed 63 characters. This rule applies to a network name template, a PVC name template, a VM name template, and a volume name template.
10 Optional. Specify a template for the persistent volume claim (PVC) name for a plan. The template follows the Go template syntax and has access to the following variables: -
.VmName
: Name of the VM. -
.PlanName
: Name of the migration plan. -
.DiskIndex
: Initial volume index of the disk. -
.RootDiskIndex
: Index of the root disk. -
.Shared
: Options:true
, for a shared volume,false
, for a non-shared volume.Examples
-
"{{.VmName}}-disk-{{.DiskIndex}}"
-
"{{if eq .DiskIndex .RootDiskIndex}}root{{else}}data{{end}}-{{.DiskIndex}}"
-
"{{if .Shared}}shared-{{end}}{{.VmName}}-{{.DiskIndex}}"
11 Optional: -
When set to
true
, Forklift adds one or more randomly generated alphanumeric characters to the name of the PVC in order to ensure all PVCs have unique names. -
When set to
false
, if you specify apvcNameTemplate
, Forklift does not add such charchters to the name of the PVC.If you set
pvcNameTemplateUseGenerateName
tofalse
, the generated PVC name might not be unique and might cause conflicts.
12 Optional: Specify a template for the volume interface name for the VMs in your plan. The template follows the Go template syntax and has access to the following variables: -
.PVCName
: Name of the PVC mounted to the VM using this volume. -
.VolumeIndex
: Sequential index of the volume interface (0-based).Examples
-
"disk-{{.VolumeIndex}}"
-
"pvc-{{.PVCName}}"
13 You can use either the id
or thename
parameter to specify the source VMs.14 Specify the VMware vSphere VM moRef. To retrieve the moRef, see Retrieving a VMware vSphere moRef. 15 Optional: Specify a network interface name for the specific VM. Overrides the value set in spec:networkNameTemplate
. Variables and examples as in callout 9.16 Optional: Specify a PVC name for the specific VM. Overrides the value set in spec:pvcNameTemplate
. Variables and examples as in callout 10.17 Optional: Specify a volume name for the specific VM. Overrides the value set in spec:volumeNameTemplate
. Variables and examples as in callout 12.18 Optional: Forklift automatically generates a name for the target VM. You can override this name by using this parameter and entering a new name. The name you enter must be unique and it must be a valid Kubernetes subdomain. Otherwise, the migration fails automatically. 19 Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step. 20 Specify the name of the Hook
CR.21 Allowed values are PreHook
, before the migration plan starts, orPostHook
, after the migration is complete.When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the name of the network interfaces changes and the static IP configuration for the VM no longer works.
-
-
Create a
Migration
manifest to run thePlan
CR:
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF
If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00
.
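If you start a warm migration without a cutover value, one way to set or change it later is to patch the Migration CR; this is a sketch, with the CR name, namespace, and time as placeholders:
$ kubectl patch migration <name_of_migration_cr> -n <namespace> --type merge -p '{"spec":{"cutover":"2024-04-04T01:23:45.678+09:00"}}'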
In Forklift, you need to add permissions at the data center level, including the storage, networks, switches, and so on, that are used by the VM. You must then propagate the permissions to the child elements. If you do not want to add this level of permissions, you must manually add the permissions to each required object on the VM host. |
Retrieving a VMware vSphere moRef
When you migrate VMs with a VMware vSphere source provider using Forklift from the command line, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.
You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.
-
Retrieve the routes for the project:
$ oc get route -n openshift-mtv
-
Retrieve the
Inventory
service route:$ kubectl get route <inventory_service> -n konveyor-forklift
-
Retrieve the access token:
$ TOKEN=$(oc whoami -t)
-
Retrieve the moRef of a VMware vSphere provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere -k
-
Retrieve the datastores of a VMware vSphere source provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/ -k
Example output[ { "id": "datastore-11", "parent": { "kind": "Folder", "id": "group-s5" }, "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC", "revision": 46, "name": "v2v_general_porpuse_ISCSI_DC", "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11" }, { "id": "datastore-730", "parent": { "kind": "Folder", "id": "group-s5" }, "path": "/Datacenter/datastore/f01-h27-640-SSD_2", "revision": 46, "name": "f01-h27-640-SSD_2", "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730" }, ...
In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC
is datastore-11
and the moRef of the datastore f01-h27-640-SSD_2
is datastore-730
.
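Following the same pattern, you can list other inventory collections and read the id field of the entity you need; the networks and vms paths below are assumed to mirror the datastores example:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider id>/networks/ -k
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider id>/vms/ -k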
Migrating virtual machines with shared disks
You can migrate VMware virtual machines (VMs) with shared disks by using Forklift. This functionality is available only for cold migrations and is not available for shared boot disks.
Shared disks are disks that are attached to more than one VM and that use the multi-writer option. As a result of these characteristics, shared disks are difficult to migrate.
In certain situations, applications in VMs require shared disks. Databases and clustered file systems are the primary use cases for shared disks.
Forklift version 2.7.11 or later includes a parameter named migrateSharedDisks
in Plan
custom resources (CRs) that instructs Forklift to either migrate shared disks or to skip them during migration, as follows:
-
If set to
true
, Forklift migrates the shared disks. Forklift uses the regular cold migration flow usingvirt-v2v
and labeling the shared persistent volume claims (PVCs). -
If set to
false
, Forklift skips the shared disks. Forklift uses the KubeVirt Containerized-Data-Importer (CDI) for disk transfer.
After the disk transfer, Forklift automatically attempts to locate the already shared PVCs and the already migrated shared disks and attach them to the VMs.
By default, migrateSharedDisks
is set to true
.
To successfully migrate VMs with shared disks, create two Plan
CRs as follows:
-
In the first, set
migrateSharedDisks
totrue
.Forklift migrates the following:
-
All shared disks.
-
For each shared disk, one of the VMs that is attached to it. If possible, choose VMs so that the plan does not contain any shared disks that are connected to more than one VM. See the following figures for further guidance.
-
All unshared disks attached to the VMs you choose for this plan.
-
-
In the second, set
migrateSharedDisks
tofalse
.Forklift migrates the following:
-
All other VMs.
-
The unshared disks of the VMs in the second
Plan
CR.
-
When Forklift migrates a VM that has a shared disk attached to it, it does not check whether it has already migrated that shared disk. Therefore, it is important to allocate the VMs in each of the two plans so that each shared disk is migrated once and only once.
To understand how to assign VMs and shared disks to each of the Plan
CRs, consider the two figures that follow. In both, migrateSharedDisks
is set to true
for plan1
and set to false
for plan2
.
In the first figure, the VMs and shared disks are assigned correctly:

plan1
migrates VMs 2 and 4, shared disks 1, 2, and 3, and the non-shared disks of VMs 2 and 4. VMs 2 and 4 are included in this plan, because they connect to all the shared disks once each.
plan2
migrates VMs 1 and 3 and their non-shared disks. plan2
does not migrate the shared disks connected to VMs 1 and 3 because migrateSharedDisks
is set to false
.
Forklift migrates each VM and its disks as follows:
-
From
plan1
:-
VM 3, shared disks 1 and 2, and the non-shared disks attached to VM 3.
-
VM 4, shared disk 3, and the non-shared disks attached to VM 4.
-
-
From
plan2
:-
VM 1 and the non-shared disks attached to it.
-
VM 2 and the non-shared disks attached to it.
-
The result is that VMs 2 and 4, all the shared disks, and all the non-shared disks are migrated, but only once. Forklift is able to reattach all VMs to their disks, including the shared disks.
In the second figure, the VMs and shared disks are not assigned correctly:

In this case, Forklift migrates each VM and its disks as follows:
-
From
plan1
:-
VM 2, shared disks 1 and 2, and the non-shared disks attached to VM 2.
-
VM 3, shared disks 2 and 3, and the non-shared disks attached to VM 3.
-
-
From
plan2
:-
VM 1 and the non-shared disks attached to it.
-
VM 4 and the non-shared disks attached to it.
-
This migration "succeeds", but it results in a problem: Shared disk 2 is migrated twice by the first Plan
CR. You can resolve this problem by using one of the two workarounds that are discussed in the Known issues section, which follows the procedure.
-
In Forklift, create a migration plan for the shared disks, the minimum number of VMs connected to them, and the unshared disk of those VMs.
-
On the VMware cluster, power off all VMs attached to the shared disks.
-
In the OKD web console, click Migration > Plans for virtualization.
-
Select the desired plan.
The Plan details page opens.
-
Click the YAML tab of the plan.
-
Verify that
migrateSharedDisks
is set totrue
.Example Plan CR withmigrateSharedDisks
set to trueapiVersion: forklift.konveyor.io/v1beta1 kind: Plan name: transfer-shared-disks namespace: openshift-mtv spec: map: network: apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap name: vsphere-7gxbs namespace: openshift-mtv uid: a3c83db3-1cf7-446a-b996-84c618946362 storage: apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap name: vsphere-mqp7b namespace: openshift-mtv uid: 20b43d4f-ded4-4798-b836-7c0330d552a0 migrateSharedDisks: true provider: destination: apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: host namespace: openshift-mtv uid: abf4509f-1d5f-4ff6-b1f2-18206136922a source: apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: vsphere namespace: openshift-mtv uid: be4dc7ab-fedd-460a-acae-a850f6b9543f targetNamespace: openshift-mtv vms: - id: vm-69 name: vm-1-with-shared-disks
-
Start the migration of the first plan and wait for it to finish.
-
Create a second
Plan
CR to migrate all the other VMs and their unshared disks to the same target namespace as the first. -
In the Plans for virtualization page of the OKD web console, select the new plan.
The Plan details page opens.
-
Click the YAML tab of the plan.
-
Set
migrateSharedDisks
tofalse
.Example Plan CR withmigrateSharedDisks
set to falseapiVersion: forklift.konveyor.io/v1beta1 kind: Plan name: skip-shared-disks namespace: openshift-mtv spec: map: network: apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap name: vsphere-7gxbs namespace: openshift-mtv uid: a3c83db3-1cf7-446a-b996-84c618946362 storage: apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap name: vsphere-mqp7b namespace: openshift-mtv uid: 20b43d4f-ded4-4798-b836-7c0330d552a0 migrateSharedDisks: false provider: destination: apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: host namespace: openshift-mtv uid: abf4509f-1d5f-4ff6-b1f2-18206136922a source: apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: vsphere namespace: openshift-mtv uid: be4dc7ab-fedd-460a-acae-a850f6b9543f targetNamespace: openshift-mtv vms: - id: vm-71 name: vm-2-with-shared-disks
-
Start the migration of the second plan and wait for it to finish.
-
Verify that all shared disks are attached to the same VMs as they were before migration and that none are duplicated. In case of problems, see the discussion of known issues that follows.
Known issues
Cyclic shared disk dependencies
Problem: VMs with cyclic shared disk dependencies cannot be migrated successfully.
Explanation: When migrateSharedDisks
is set to true
, Forklift migrates each VM in the plan, one by one, and any shared disks attached to it, without determining if a shared disk was already migrated.
In the case of 2 VMs sharing one disk, there is no problem. Forklift transfers the shared disk and attaches the 2 VMs to the shared disk after the migration.
However, if there is a cyclic dependency of shared disks between 3 or more VMs, Forklift either duplicates or omits one of the shared disks. The figure that follows illustrates the simplest version of this problem.

In this case, the VMs and shared disks cannot be migrated in the same Plan
CR. Although this problem could be solved using migrateSharedDisks
and 2 Plan
CRs, it illustrates the basic issue that must be avoided in migrating VMs with shared disks.
Workarounds
As discussed previously, it is important to try to create 2 Plan
CRs in which each shared disk is migrated once. However, if your migration does result in a shared disk either duplicated or not being transferred, you can use one of the following workarounds:
-
Duplicate one of the shared disks
-
"Remove" one of the shared disks
In the figure that follows, VMs 2 and 3 are migrated with the shared disks in the first plan, and VM 1 is migrated in the second plan. Doing this breaks the cyclic dependencies, but this workaround has a drawback: It results in shared disk 3 being duplicated. The solution is to remove the duplicated PV and migrate VM 1 again.

Advantage:
The source VMs are not affected.
Disadvantage:
One shared disk gets transferred twice, so you need to manually delete the duplicate disk and reconnect VM 3 to shared disk 3 in OKD after the migration.
The figure that follows shows a different solution: Remove the link to one of the shared disks from one source VM. Doing this breaks the cyclic dependencies. Note that in the current VMware UI, removing the link is referred to as "removing" the disk.

In this case, VM 2 and 3 are migrated with the shared disks in the first plan, but the link between VM 3 and shared disk 3 is removed. As before, VM 1 is migrated in the second plan.
Doing this breaks the cyclic dependencies, but this workaround has a drawback: VM 3 is disconnected from shared disk 3 and remains disconnected after the migration. The solution is to manually reattach shared disk 3 to VM 3 after the migration finishes.
Advantage:
No disks are duplicated.
Disadvantage:
You need to modify VM 3 by removing its link to shared disk 3 before the migration, and you need to manually reconnect VM 3 to shared disk 3 in OKD after the migration.
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
-
Delete the
Migration
CR:$ kubectl delete migration <migration> -n <namespace> (1)
1 Specify the name of the Migration
CR.
-
Add the specific VMs to the
spec.cancel
block of theMigration
manifest:
Example YAML for canceling the migrations of two VMs
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
...
spec:
  cancel:
  - id: vm-102 (1)
  - id: vm-203
  - name: rhel8-vm
EOF
1 You can specify a VM by using the id
key or thename
key.The value of the
id
key is the managed object reference, for a VMware VM, or the VM UUID, for an oVirt VM.
Retrieve the
Migration
CR to monitor the progress of the remaining VMs:
$ kubectl get migration/<migration> -n <namespace> -o yaml
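If you only need a quick view of the migration's conditions rather than the full CR, a sketch using jsonpath (assuming the status block of the CR is populated; the exact condition types depend on the migration state):
$ kubectl get migration/<migration> -n <namespace> -o jsonpath='{.status.conditions[*].type}'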
Migrating from an oVirt source provider
You can migrate from an oVirt source provider by using the command-line interface (CLI).
If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.
|
-
Create a
Secret
manifest for the source provider credentials:$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: (1) - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: ovirt createdForResourceType: providers type: Opaque stringData: user: <user> (2) password: <password> (3) insecureSkipVerify: <"true"/"false"> (4) cacert: | (5) <ca_certificate> url: <api_end_point> (6) EOF
1 The ownerReferences
section is optional.2 Specify the oVirt Engine user. 3 Specify the user password. 4 Specify "true"
to skip certificate verification, and specify"false"
to verify the certificate. Defaults to"false"
if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.5 Enter the Engine CA certificate, unless it was replaced by a third-party certificate, in which case, enter the Engine Apache CA certificate. You can retrieve the Engine CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA. 6 Specify the API endpoint URL, for example, https://<engine_host>/ovirt-engine/api
.
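For example, a sketch of downloading that certificate with curl; quote the URL so the shell does not interpret the & characters, and use -k only for this initial download if the Engine certificate is not yet trusted on the machine you run it from:
$ curl -k -o ovirt-ca.pem 'https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'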
-
Create a
Provider
manifest for the source provider:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: ovirt url: <api_end_point> (1) secret: name: <secret> (2) namespace: <namespace> EOF
1 Specify the URL of the API endpoint, for example, https://<engine_host>/ovirt-engine/api
.2 Specify the name of provider Secret
CR.
-
Create a
NetworkMap
manifest to map the source and destination networks:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod (1) source: (2) id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> (3) namespace: <network_attachment_definition_namespace> (4) type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are pod
andmultus
.2 You can use either the id
or thename
parameter to specify the source network. Forid
, specify the oVirt network Universal Unique ID (UUID).3 Specify a network attachment definition for each additional KubeVirt network. 4 Required only when type
ismultus
. Specify the namespace of the KubeVirt network attachment definition.
-
Create a
StorageMap
manifest to map source and destination storage:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> (1) source: id: <source_storage_domain> (2) provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are ReadWriteOnce
andReadWriteMany
.2 Specify the oVirt storage domain UUID. For example, f2737930-b567-451a-9ceb-2887f6207009
. -
Optional: Create a
Hook
manifest to run custom code on a VM during the phase specified in thePlan
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/kubev2v/hook-runner serviceAccount:<service account> (1) playbook: | LS0tCi0gbm... (2) EOF
1 Optional: OKD service account. Use the serviceAccount
parameter to modify any cluster resources.2 Base64-encoded Ansible Playbook. If you specify a playbook, the image
must include anansible-runner
.You can use the default
hook-runner
image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
-
Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <name_of_transfer_network>
  namespace: <namespace>
  annotations:
    forklift.konveyor.io/route: <IP_address>
-
Create a
Plan
manifest for the migration:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> (1) namespace: <namespace> preserveClusterCpuModel: true (2) spec: warm: false (3) provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: (4) network: (5) name: <network_map> (6) namespace: <namespace> storage: (7) name: <storage_map> (8) namespace: <namespace> targetNamespace: <target_namespace> vms: (9) - id: <source_vm1> (10) - name: <source_vm2> hooks: (11) - hook: namespace: <namespace> name: <hook> (12) step: <step> (13) EOF
1 Specify the name of the Plan
CR.2 See note below. 3 Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover
parameter in theMigration
manifest, only the precopy stage will run.4 Specify only one network map and one storage map per plan. 5 Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 6 Specify the name of the NetworkMap
CR.7 Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 8 Specify the name of the StorageMap
CR.9 You can use either the id
or thename
parameter to specify the source VMs.10 Specify the oVirt VM UUID. 11 Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step. 12 Specify the name of the Hook
CR.13 Allowed values are PreHook
, before the migration plan starts, orPostHook
, after the migration is complete.-
If the migrated machine is set with a custom CPU model, it will be set with that CPU model in the destination cluster, regardless of the setting of
preserveClusterCpuModel
. -
If the migrated machine is not set with a custom CPU model:
-
If
preserveClusterCpuModel
is set to true, Forklift checks the CPU model of the VM when it runs in oVirt, based on the cluster’s configuration, and then sets the migrated VM with that CPU model.
If
preserveClusterCpuModel
is set to false, Forklift does not set a CPU type and the VM is set with the default CPU model of the destination cluster.
-
-
-
Create a
Migration
manifest to run thePlan
CR:
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF
If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00
.
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
-
Delete the
Migration
CR:$ kubectl delete migration <migration> -n <namespace> (1)
1 Specify the name of the Migration
CR.
-
Add the specific VMs to the
spec.cancel
block of theMigration
manifest:
Example YAML for canceling the migrations of two VMs
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
...
spec:
  cancel:
  - id: vm-102 (1)
  - id: vm-203
  - name: rhel8-vm
EOF
1 You can specify a VM by using the id
key or thename
key.The value of the
id
key is the managed object reference, for a VMware VM, or the VM UUID, for an oVirt VM.
Retrieve the
Migration
CR to monitor the progress of the remaining VMs:
$ kubectl get migration/<migration> -n <namespace> -o yaml
Migrating from an OpenStack source provider
You can migrate from an OpenStack source provider by using the command-line interface (CLI).
-
Create a
Secret
manifest for the source provider credentials:$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: (1) - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: openstack createdForResourceType: providers type: Opaque stringData: user: <user> (2) password: <password> (3) insecureSkipVerify: <"true"/"false"> (4) domainName: <domain_name> projectName: <project_name> regionName: <region_name> cacert: | (5) <ca_certificate> url: <api_end_point> (6) EOF
1 The ownerReferences
section is optional.2 Specify the OpenStack user. 3 Specify the user OpenStack password. 4 Specify "true"
to skip certificate verification, and specify"false"
to verify the certificate. Defaults to"false"
if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.5 When this field is not set and skip certificate verification is disabled, Forklift attempts to use the system CA. 6 Specify the API endpoint URL, for example, https://<identity_service>/v3
.
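If your credentials come from a standard OpenStack RC file, the Secret fields roughly correspond to the usual environment variables; this mapping is an assumption about a typical Keystone v3 setup rather than part of Forklift itself, and the domain value may be the user or project domain depending on your deployment:
$ source openstackrc
$ echo "$OS_USERNAME $OS_PROJECT_NAME $OS_USER_DOMAIN_NAME $OS_REGION_NAME $OS_AUTH_URL"
Use OS_USERNAME for user, OS_PASSWORD for password, OS_PROJECT_NAME for projectName, OS_USER_DOMAIN_NAME (or the project domain) for domainName, OS_REGION_NAME for regionName, and OS_AUTH_URL for url.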
-
Create a
Provider
manifest for the source provider:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: openstack url: <api_end_point> (1) secret: name: <secret> (2) namespace: <namespace> EOF
1 Specify the URL of the API endpoint. 2 Specify the name of provider Secret
CR.
-
Create a
NetworkMap
manifest to map the source and destination networks:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod (1) source:(2) id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> (3) namespace: <network_attachment_definition_namespace> (4) type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are pod
andmultus
.2 You can use either the id
or thename
parameter to specify the source network. Forid
, specify the OpenStack network UUID.3 Specify a network attachment definition for each additional KubeVirt network. 4 Required only when type
ismultus
. Specify the namespace of the KubeVirt network attachment definition.
-
Create a
StorageMap
manifest to map source and destination storage:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> (1) source: id: <source_volume_type> (2) provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are ReadWriteOnce
andReadWriteMany
.2 Specify the OpenStack volume_type
UUID. For example,f2737930-b567-451a-9ceb-2887f6207009
. -
Optional: Create a
Hook
manifest to run custom code on a VM during the phase specified in thePlan
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/kubev2v/hook-runner serviceAccount:<service account> (1) playbook: | LS0tCi0gbm... (2) EOF
1 Optional: OKD service account. Use the serviceAccount
parameter to modify any cluster resources.2 Base64-encoded Ansible Playbook. If you specify a playbook, the image
must include anansible-runner
.You can use the default
hook-runner
image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
-
Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit> apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name_of_transfer_network> namespace: <namespace> annotations: forklift.konveyor.io/route: <IP_address>
-
Create a
Plan
manifest for the migration:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> (1) namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: (2) network: (3) name: <network_map> (4) namespace: <namespace> storage: (5) name: <storage_map> (6) namespace: <namespace> targetNamespace: <target_namespace> vms: (7) - id: <source_vm1> (8) - name: <source_vm2> hooks: (9) - hook: namespace: <namespace> name: <hook> (10) step: <step> (11) EOF
1 Specify the name of the Plan
CR.2 Specify only one network map and one storage map per plan. 3 Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 4 Specify the name of the NetworkMap
CR.5 Specify a storage mapping, even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 6 Specify the name of the StorageMap
CR.7 You can use either the id
or thename
parameter to specify the source VMs.8 Specify the OpenStack VM UUID. 9 Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step. 10 Specify the name of the Hook
CR.11 Allowed values are PreHook
, before the migration plan starts, orPostHook
, after the migration is complete. -
Create a
Migration
manifest to run thePlan
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF
If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00
.
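If you need to set or change the cutover time after the Migration CR has been created, you can update the spec.cutover field. The following command is a minimal sketch with placeholder names and an illustrative timestamp; it assumes the migration is still in progress:
$ kubectl patch migration/<name_of_migration_cr> -n <namespace> -p '{"spec":{"cutover":"2024-04-04T01:23:45.678+09:00"}}' --type=merge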
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
-
Delete the
Migration
CR:$ kubectl delete migration <migration> -n <namespace> (1)
1 Specify the name of the Migration
CR.
-
Add the specific VMs to the
spec.cancel
block of theMigration
manifest:Example YAML for canceling the migrations of two VMs$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <migration> namespace: <namespace> ... spec: cancel: - id: vm-102 (1) - id: vm-203 - name: rhel8-vm EOF
1 You can specify a VM by using the id
key or the name
key. The value of the
id
key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM. -
Retrieve the
Migration
CR to monitor the progress of the remaining VMs:$ kubectl get migration/<migration> -n <namespace> -o yaml
Migrating from an Open Virtual Appliance (OVA) source provider
You can migrate from Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the command-line interface (CLI).
-
Create a
Secret
manifest for the source provider credentials:$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: (1) - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: ova createdForResourceType: providers type: Opaque stringData: url: <nfs_server:/nfs_path> (2) EOF
1 The ownerReferences
section is optional.2 where: nfs_server
is an IP or hostname of the server where the share was created andnfs_path
is the path on the server where the OVA files are stored.
-
Create a
Provider
manifest for the source provider:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: ova url: <nfs_server:/nfs_path> (1) secret: name: <secret> (2) namespace: <namespace> EOF
1 where: nfs_server
is the IP address or host name of the server where the share was created, and nfs_path
is the path on the server where the OVA files are stored. 2 Specify the name of the provider Secret
CR.
-
Create a
NetworkMap
manifest to map the source and destination networks:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod (1) source: id: <source_network_id> (2) - destination: name: <network_attachment_definition> (3) namespace: <network_attachment_definition_namespace> (4) type: multus source: id: <source_network_id> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are pod
andmultus
.2 Specify the OVA network Universal Unique ID (UUID). 3 Specify a network attachment definition for each additional KubeVirt network. 4 Required only when type
ismultus
. Specify the namespace of the KubeVirt network attachment definition.
-
Create a
StorageMap
manifest to map source and destination storage:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> (1) source: name: Dummy storage for source provider <provider_name> (2) provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are ReadWriteOnce
andReadWriteMany
.2 For OVA, the StorageMap
can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing <provider_name> with the actual name of the provider. -
Optional: Create a
Hook
manifest to run custom code on a VM during the phase specified in thePlan
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/kubev2v/hook-runner serviceAccount:<service account> (1) playbook: | LS0tCi0gbm... (2) EOF
1 Optional: OKD service account. Use the serviceAccount
parameter to modify any cluster resources.2 Base64-encoded Ansible Playbook. If you specify a playbook, the image
must include anansible-runner
.You can use the default
hook-runner
image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
-
Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit> apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name_of_transfer_network> namespace: <namespace> annotations: forklift.konveyor.io/route: <IP_address>
-
Create a
Plan
manifest for the migration:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> (1) namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: (2) network: (3) name: <network_map> (4) namespace: <namespace> storage: (5) name: <storage_map> (6) namespace: <namespace> targetNamespace: <target_namespace> vms: (7) - id: <source_vm1> (8) - name: <source_vm2> hooks: (9) - hook: namespace: <namespace> name: <hook> (10) step: <step> (11) EOF
1 Specify the name of the Plan
CR.2 Specify only one network map and one storage map per plan. 3 Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 4 Specify the name of the NetworkMap
CR.5 Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 6 Specify the name of the StorageMap
CR.7 You can use either the id
or thename
parameter to specify the source VMs.8 Specify the OVA VM UUID. 9 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step. 10 Specify the name of the Hook
CR.11 Allowed values are PreHook
, before the migration plan starts, orPostHook
, after the migration is complete. -
Create a
Migration
manifest to run thePlan
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF
If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00
.
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
-
Delete the
Migration
CR:$ kubectl delete migration <migration> -n <namespace> (1)
1 Specify the name of the Migration
CR.
-
Add the specific VMs to the
spec.cancel
block of theMigration
manifest:Example YAML for canceling the migrations of two VMs$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <migration> namespace: <namespace> ... spec: cancel: - id: vm-102 (1) - id: vm-203 - name: rhel8-vm EOF
1 You can specify a VM by using the id
key or the name
key. The value of the
id
key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM. -
Retrieve the
Migration
CR to monitor the progress of the remaining VMs:$ kubectl get migration/<migration> -n <namespace> -o yaml
Migrating from a Red Hat KubeVirt source provider
You can use a Red Hat KubeVirt provider as either a source provider or as a destination provider. You can migrate from a KubeVirt source provider by using the command-line interface (CLI).
-
Create a
Secret
manifest for the source provider credentials:$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: (1) - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: openshift createdForResourceType: providers type: Opaque stringData: token: <token> (2) password: <password> (3) insecureSkipVerify: <"true"/"false"> (4) cacert: | (5) <ca_certificate> url: <api_end_point> (6) EOF
1 The ownerReferences
section is optional.2 Specify a token for a service account with cluster-admin
privileges. If bothtoken
andurl
are left blank, the local OKD cluster is used.3 Specify the user password. 4 Specify "true"
to skip certificate verification, and specify"false"
to verify the certificate. Defaults to"false"
if not specified. If you skip certificate verification, the migration is insecure and the certificate is not required. An insecure migration means that the transferred data is sent over an insecure connection, and potentially sensitive data could be exposed. 5 When this field is not set and skipping certificate verification is disabled, Forklift attempts to use the system CA. 6 Specify the URL of the endpoint of the API server.
-
Create a
Provider
manifest for the source provider:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: openshift url: <api_end_point> (1) secret: name: <secret> (2) namespace: <namespace> EOF
1 Specify the URL of the endpoint of the API server. 2 Specify the name of the provider Secret
CR.
-
Create a
NetworkMap
manifest to map the source and destination networks:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod (1) source: name: <network_name> type: pod - destination: name: <network_attachment_definition> (2) namespace: <network_attachment_definition_namespace> (3) type: multus source: name: <network_attachment_definition> namespace: <network_attachment_definition_namespace> type: multus provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are pod
andmultus
.2 Specify a network attachment definition for each additional KubeVirt network. Specify the namespace
either by using thenamespace property
or with a name built as follows:<network_namespace>/<network_name>
.3 Required only when type
ismultus
. Specify the namespace of the KubeVirt network attachment definition.
-
Create a
StorageMap
manifest to map source and destination storage:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> (1) source: name: <storage_class> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF
1 Allowed values are ReadWriteOnce
andReadWriteMany
. -
Optional: Create a
Hook
manifest to run custom code on a VM during the phase specified in thePlan
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/kubev2v/hook-runner serviceAccount:<service account> (1) playbook: | LS0tCi0gbm... (2) EOF
1 Optional: OKD service account. Use the serviceAccount
parameter to modify any cluster resources.2 Base64-encoded Ansible Playbook. If you specify a playbook, the image
must include anansible-runner
.You can use the default
hook-runner
image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
-
Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit> apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name_of_transfer_network> namespace: <namespace> annotations: forklift.konveyor.io/route: <IP_address>
-
Create a
Plan
manifest for the migration:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> (1) namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: (2) network: (3) name: <network_map> (4) namespace: <namespace> storage: (5) name: <storage_map> (6) namespace: <namespace> targetNamespace: <target_namespace> vms: - name: <source_vm> namespace: <namespace> hooks: (7) - hook: namespace: <namespace> name: <hook> (8) step: <step> (9) EOF
1 Specify the name of the Plan
CR.2 Specify only one network map and one storage map per plan. 3 Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 4 Specify the name of the NetworkMap
CR.5 Specify a storage mapping, even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 6 Specify the name of the StorageMap
CR.7 Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step. 8 Specify the name of the Hook
CR.9 Allowed values are PreHook
, before the migration plan starts, orPostHook
, after the migration is complete. -
Create a
Migration
manifest to run thePlan
CR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF
If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00
.
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
-
Delete the
Migration
CR:$ kubectl delete migration <migration> -n <namespace> (1)
1 Specify the name of the Migration
CR.
-
Add the specific VMs to the
spec.cancel
block of theMigration
manifest:Example YAML for canceling the migrations of two VMs$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <migration> namespace: <namespace> ... spec: cancel: - id: vm-102 (1) - id: vm-203 - name: rhel8-vm EOF
1 You can specify a VM by using the id
key or the name
key. The value of the
id
key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM. -
Retrieve the
Migration
CR to monitor the progress of the remaining VMs:$ kubectl get migration/<migration> -n <namespace> -o yaml
Advanced migration options
Changing precopy intervals for warm migration
You can change the snapshot interval by patching the ForkliftController
custom resource (CR).
-
Patch the
ForkliftController
CR:$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
1 Specify the precopy interval in minutes. The default value is 60
.You do not need to restart the
forklift-controller
pod.
Creating custom rules for the Validation service
The Validation
service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation
service generates a list of concerns for each VM, which are stored in the Provider Inventory
service as VM attributes. The web console displays the concerns for each VM in the provider inventory.
You can create custom rules to extend the default ruleset of the Validation
service. For example, you can create a rule that checks whether a VM has multiple disks.
About Rego files
Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego
files in the /usr/share/opa/policies/io/konveyor/forklift/<provider>
directory of the Validation
pod.
Each validation rule is defined in a separate .rego
file and tests for a specific condition. If the condition evaluates as true
, the rule adds a {“category”, “label”, “assessment”}
hash to the concerns
. The concerns
content is added to the concerns
key in the inventory record of the VM. The web console displays the content of the concerns
key for each VM in the provider inventory.
The following .rego
file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:
package io.konveyor.forklift.vmware (1)
has_drs_enabled {
input.host.cluster.drsEnabled (2)
}
concerns[flag] {
has_drs_enabled
flag := {
"category": "Information",
"label": "VM running in a DRS-enabled cluster",
"assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
}
}
1 Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.
2 Query parameters are based on the input key of the Validation service JSON.
Checking the default validation rules
Before you create a custom rule, you must check the default rules of the Validation
service to ensure that you do not create a rule that redefines an existing default value.
Example: If a default rule contains the line default valid_input = false
and you create a custom rule that contains the line default valid_input = true
, the Validation
service will not start.
-
Connect to the terminal of the
Validation
pod:$ kubectl rsh <validation_pod>
-
Go to the OPA policies directory for your provider:
$ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
1 Specify vmware
orovirt
. -
Search for the default policies:
$ grep -R "default" *
Creating a validation rule
You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation
service.
Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory
service.
For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"]
.
The Provider Inventory
service simplifies this configuration and returns a testable attribute with a list value:
"numaNodeAffinity": [
"0",
"1"
],
You create a Rego query, based on this attribute, and add it to the forklift-validation-config
config map:
`count(input.numaNodeAffinity) != 0`
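For example, a complete rule built around this query could look like the following sketch. It follows the same structure as the DRS example shown earlier; the rule name, label, and assessment text are illustrative only:
package io.konveyor.forklift.vmware

has_numa_node_affinity {
    count(input.numaNodeAffinity) != 0
}

concerns[flag] {
    has_numa_node_affinity
    flag := {
        "category": "Information",
        "label": "NUMA node affinity detected",
        "assessment": "NUMA node affinity is configured on this VM."
    }
}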
-
Create a config map CR according to the following example:
$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: ConfigMap metadata: name: <forklift-validation-config> namespace: konveyor-forklift data: vmware_multiple_disks.rego: |- package <provider_package> (1) has_multiple_disks { (2) count(input.disks) > 1 } concerns[flag] { has_multiple_disks (3) flag := { "category": "<Information>", (4) "label": "Multiple disks detected", "assessment": "Multiple disks detected on this VM." } } EOF
1 Specify the provider package name. Allowed values are io.konveyor.forklift.vmware
for VMware andio.konveyor.forklift.ovirt
for oVirt.2 Specify the concerns
name and Rego query.3 Specify the concerns
name andflag
parameter values.4 Allowed values are Critical
,Warning
, andInformation
. -
Stop the
Validation
pod by scaling theforklift-controller
deployment to0
:$ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
-
Start the
Validation
pod by scaling theforklift-controller
deployment to1
:$ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
-
Check the
Validation
pod log to verify that the pod started:$ kubectl logs -f <validation_pod>
If the custom rule conflicts with a default rule, the
Validation
pod will not start. -
Remove the source provider:
$ kubectl delete provider <provider> -n konveyor-forklift
-
Add the source provider to apply the new rule:
$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <provider> namespace: konveyor-forklift spec: type: <provider_type> (1) url: <api_end_point> (2) secret: name: <secret> (3) namespace: konveyor-forklift EOF
1 Allowed values are ovirt
,vsphere
, andopenstack
.2 Specify the API end point URL, for example, https://<vCenter_host>/sdk
for vSphere,https://<engine_host>/ovirt-engine/api
for oVirt, orhttps://<identity_service>/v3
for OpenStack.3 Specify the name of the provider Secret
CR.
You must update the rules version after creating a custom rule so that the Inventory
service detects the changes and validates the VMs.
Updating the inventory rules version
You must update the inventory rules version each time you update the rules so that the Provider Inventory
service detects the changes and triggers the Validation
service.
The rules version is recorded in a rules_version.rego
file for each provider.
-
Retrieve the current rules version:
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
Example output{ "result": { "rules_version": 5 } }
-
Connect to the terminal of the
Validation
pod:$ kubectl rsh <validation_pod>
-
Update the rules version in the
/usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego
file. -
Log out of the
Validation
pod terminal. -
Verify the updated rules version:
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
Example output{ "result": { "rules_version": 6 } }
Retrieving the Inventory service JSON
You retrieve the Inventory
service JSON by sending an Inventory
service query to a virtual machine (VM). The output contains an "input"
key, which contains the inventory attributes that are queried by the Validation
service rules.
You can create a validation rule based on any attribute in the "input"
key, for example, input.snapshot.kind
.
-
Retrieve the routes for the project:
$ oc get route -n openshift-mtv
-
Retrieve the
Inventory
service route:$ kubectl get route <inventory_service> -n konveyor-forklift
-
Retrieve the access token:
$ TOKEN=$(oc whoami -t)
-
Trigger an HTTP GET request (for example, using Curl):
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
-
Retrieve the
UUID
of a provider:$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -k (1)
1 Allowed values for the provider are vsphere
,ovirt
, andopenstack
. -
Retrieve the VMs of a provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
-
Retrieve the details of a VM:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
Example output{ "input": { "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431", "id": "vm-431", "parent": { "kind": "Folder", "id": "group-v22" }, "revision": 1, "name": "iscsi-target", "revisionValidated": 1, "isTemplate": false, "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-33" } ], "disks": [ { "key": 2000, "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk", "datastore": { "kind": "Datastore", "id": "datastore-63" }, "capacity": 17179869184, "shared": false, "rdm": false }, { "key": 2001, "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk", "datastore": { "kind": "Datastore", "id": "datastore-63" }, "capacity": 10737418240, "shared": false, "rdm": false } ], "concerns": [], "policyVersion": 5, "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49", "firmware": "bios", "powerState": "poweredOn", "connectionState": "connected", "snapshot": { "kind": "VirtualMachineSnapshot", "id": "snapshot-3034" }, "changeTrackingEnabled": false, "cpuAffinity": [ 0, 2 ], "cpuHotAddEnabled": true, "cpuHotRemoveEnabled": false, "memoryHotAddEnabled": false, "faultToleranceEnabled": false, "cpuCount": 2, "coresPerSocket": 1, "memoryMB": 2048, "guestName": "Red Hat Enterprise Linux 7 (64-bit)", "balloonedMemory": 0, "ipAddress": "10.19.2.96", "storageUsed": 30436770129, "numaNodeAffinity": [ "0", "1" ], "devices": [ { "kind": "RealUSBController" } ], "host": { "id": "host-29", "parent": { "kind": "Cluster", "id": "domain-c26" }, "revision": 1, "name": "IP address or host name of the vCenter host or oVirt Engine host", "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29", "status": "green", "inMaintenance": false, "managementServerIp": "10.19.2.96", "thumbprint": <thumbprint>, "timezone": "UTC", "cpuSockets": 2, "cpuCores": 16, "productName": "VMware ESXi", "productVersion": "6.5.0", "networking": { "pNICs": [ { "key": "key-vim.host.PhysicalNic-vmnic0", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic1", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic2", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic3", "linkSpeed": 10000 } ], "vNICs": [ { "key": "key-vim.host.VirtualNic-vmk2", "portGroup": "VM_Migration", "dPortGroup": "", "ipAddress": "192.168.79.13", "subnetMask": "255.255.255.0", "mtu": 9000 }, { "key": "key-vim.host.VirtualNic-vmk0", "portGroup": "Management Network", "dPortGroup": "", "ipAddress": "10.19.2.13", "subnetMask": "255.255.255.128", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk1", "portGroup": "Storage Network", "dPortGroup": "", "ipAddress": "172.31.2.13", "subnetMask": "255.255.0.0", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk3", "portGroup": "", "dPortGroup": "dvportgroup-48", "ipAddress": "192.168.61.13", "subnetMask": "255.255.255.0", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk4", "portGroup": "VM_DHCP_Network", "dPortGroup": "", "ipAddress": "10.19.2.231", "subnetMask": "255.255.255.128", "mtu": 1500 } ], "portGroups": [ { "key": "key-vim.host.PortGroup-VM Network", "name": "VM Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0" }, { "key": "key-vim.host.PortGroup-Management Network", "name": "Management Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0" }, { "key": "key-vim.host.PortGroup-VM_10G_Network", "name": "VM_10G_Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-VM_Storage", "name": "VM_Storage", "vSwitch": 
"key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-VM_DHCP_Network", "name": "VM_DHCP_Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-Storage Network", "name": "Storage Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-VM_Isolated_67", "name": "VM_Isolated_67", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2" }, { "key": "key-vim.host.PortGroup-VM_Migration", "name": "VM_Migration", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2" } ], "switches": [ { "key": "key-vim.host.VirtualSwitch-vSwitch0", "name": "vSwitch0", "portGroups": [ "key-vim.host.PortGroup-VM Network", "key-vim.host.PortGroup-Management Network" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic4" ] }, { "key": "key-vim.host.VirtualSwitch-vSwitch1", "name": "vSwitch1", "portGroups": [ "key-vim.host.PortGroup-VM_10G_Network", "key-vim.host.PortGroup-VM_Storage", "key-vim.host.PortGroup-VM_DHCP_Network", "key-vim.host.PortGroup-Storage Network" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic2", "key-vim.host.PhysicalNic-vmnic0" ] }, { "key": "key-vim.host.VirtualSwitch-vSwitch2", "name": "vSwitch2", "portGroups": [ "key-vim.host.PortGroup-VM_Isolated_67", "key-vim.host.PortGroup-VM_Migration" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic3", "key-vim.host.PhysicalNic-vmnic1" ] } ] }, "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-34" }, { "kind": "Network", "id": "network-57" }, { "kind": "Network", "id": "network-33" }, { "kind": "Network", "id": "dvportgroup-47" } ], "datastores": [ { "kind": "Datastore", "id": "datastore-35" }, { "kind": "Datastore", "id": "datastore-63" } ], "vms": null, "networkAdapters": [], "cluster": { "id": "domain-c26", "parent": { "kind": "Folder", "id": "group-h23" }, "revision": 1, "name": "mycluster", "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26", "folder": "group-h23", "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-34" }, { "kind": "Network", "id": "network-57" }, { "kind": "Network", "id": "network-33" }, { "kind": "Network", "id": "dvportgroup-47" } ], "datastores": [ { "kind": "Datastore", "id": "datastore-35" }, { "kind": "Datastore", "id": "datastore-63" } ], "hosts": [ { "kind": "Host", "id": "host-44" }, { "kind": "Host", "id": "host-29" } ], "dasEnabled": false, "dasVms": [], "drsEnabled": true, "drsBehavior": "fullyAutomated", "drsVms": [], "datacenter": null } } } }
Adding hooks to a Forklift migration plan
You can add hooks to a Forklift migration plan to perform automated operations on a VM, either before or after you migrate it.
About hooks for Forklift migration plans
You can add hooks to Forklift migration plans using either the Forklift CLI or the Forklift user interface, which is located in the OKD web console.
-
Pre-migration hooks are hooks that perform operations on a VM that is located on a provider. This prepares the VM for migration.
-
Post-migration hooks are hooks that perform operations on a VM that has migrated to KubeVirt.
Default hook image
The default hook image for a Forklift hook is quay.io/kubev2v/hook-runner
. The image is based on the Ansible Runner image with the addition of python-openshift
to provide Ansible Kubernetes resources and a recent oc
binary.
Hook execution
An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap
. The hook container is run as a job on the desired cluster in the openshift-mtv
namespace using the ServiceAccount
you choose.
When you add a hook, you must specify the namespace where the Hook
CR is located, the name of the hook, and whether the hook is a pre-migration hook or a post-migration hook.
In order for a hook to run on a VM, the VM must be started and available using SSH.
The illustration that follows shows the general process of using a migration hook. For specific procedures, see Adding a migration hook to a migration plan using the OKD web console and Adding a migration hook to a migration plan using the CLI.

Process:
-
Input your Ansible hook and credentials.
-
Input an Ansible hook image to the Forklift controller using either the UI or the CLI.
-
In the UI, specify the
ansible-runner
and enter theplaybook.yml
that contains the hook. -
In the CLI, input the hook image, which specifies the playbook that runs the hook.
-
-
If you need additional data to run the playbook inside the pod, such as SSH data, create a Secret that contains credentials for the VM. The Secret is not mounted to the pod, but is called by the playbook.
This Secret is not the same as the
Secret
CR that contains the credentials of your source provider.
-
-
The Forklift controller creates the
ConfigMap
, which contains:-
workload.yml
, which contains information about the VMs. -
playbook.yml
, the raw string playbook you want to execute. -
plan.yml
, which is thePlan
CR.The
ConfigMap
contains the name of the VM and instructs the playbook what to do.
-
-
The Forklift controller creates a job that starts the user specified image.
-
Mounts the
ConfigMap
to the container.The Ansible hook imports the Secret that the user previously entered.
-
-
The job runs a pre-migration hook or a post-migration hook as follows:
-
For a pre-migration hook, the job logs into the VMs on the source provider using SSH and runs the hook.
-
For a post-migration hook, the job logs into the VMs on KubeVirt using SSH and runs the hook.
-
Adding a migration hook to a migration plan using the OKD web console
You can add a migration hook to an existing migration plan using the OKD web console. Note that you need to run one command in the Forklift CLI.
For example, you can create a hook to install the cloud-init
service on a VM and write a file before migration.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan.
-
Migration plan
-
Migration hook file, whose contents you copy and paste into the web console
-
File containing the
Secret
for the source provider -
An OKD service account that is called by the hook and has at least write access to the namespace you are working in
-
SSH access to the VMs that you want to migrate, with the public key installed on the VMs
-
VMs running on Microsoft Server only: Remote Execution enabled
For instructions for creating a service account, see Understanding and creating service accounts.
-
In the OKD web console, click Migration > Plans for virtualization and then click the migration plan you want to add the hook to.
-
Click Hooks.
-
For a pre-migration hook, perform the following steps:
-
In the Pre migration hook section, toggle the Enable hook switch to Enable pre migration hook.
-
Enter the Hook runner image. If you are specifying the
spec.playbook
, you need to use an image that has anansible-runner
. -
Paste your hook as a YAML file in the Ansible playbook text box.
-
-
For a post-migration hook, perform the following steps:
-
In the Post migration hook section, toggle the Enable hook switch to Enable post migration hook.
-
Enter the Hook runner image. If you are specifying the
spec.playbook
, you need to use an image that has anansible-runner
. -
Paste your hook as a YAML file in the Ansible playbook text box.
-
-
At the top of the tab, click Update hooks.
-
In a terminal, enter the following command to associate each hook with your OKD service account:
$ oc -n openshift-mtv patch hook <name_of_hook> \ -p '{"spec":{"serviceAccount":"<service_account>"}}' --type merge
The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file.
- name: Main
hosts: localhost
vars_files:
- plan.yml
- workload.yml
tasks:
- k8s_info:
api_version: v1
kind: Secret
name: privkey
namespace: openshift-mtv
register: ssh_credentials
- name: Ensure SSH directory exists
file:
path: ~/.ssh
state: directory
mode: 0750
- name: Create SSH key
copy:
dest: ~/.ssh/id_rsa
content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
mode: 0600
- add_host:
name: "{{ vm.ipaddress }}" # ALT "{{ vm.guestnetworks[2].ip }}"
ansible_user: root
groups: vms
- hosts: vms
vars_files:
- plan.yml
- workload.yml
tasks:
- name: Stop MariaDB
service:
name: mariadb
state: stopped
- name: Create Test File
copy:
dest: /premigration.txt
content: "Migration from {{ provider.source.name }}
of {{ vm.vm1.vm0.id }} has finished\n"
mode: 0644
Adding a migration hook to a migration plan using the CLI
You can use a Hook
CR to add a pre-migration hook or a post-migration hook to an existing migration plan using the Forklift CLI.
For example, you can create a Hook
CR to install the cloud-init
service on a VM and write a file before migration.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan. Each hook needs its own Hook CR.
You can retrieve additional information stored in a secret or in a ConfigMap by using a k8s_info task in the playbook.
-
Migration plan
-
Migration hook image or the playbook containing the hook
-
File containing the Secret for the source provider
-
An OKD service account that is called by the hook and has at least write access to the namespace you are working in
-
SSH access to the VMs that you want to migrate, with the public key installed on the VMs
-
VMs running on Microsoft Server only: Remote Execution enabled
For instructions for creating a service account, see Understanding and creating service accounts.
-
If needed, create a Secret with an SSH private key for the VM.
-
Choose an existing key or generate a key pair.
-
Install the public key on the VM.
-
Encode the private key in the Secret to base64.
apiVersion: v1 data: key: VGhpcyB3YXMgZ2Vu... kind: Secret metadata: name: ssh-credentials namespace: openshift-mtv type: Opaque
-
-
Encode your playbook by concatenating a file and piping it for Base64 encoding, for example:
$ cat playbook.yml | base64 -w0
-
Create a Hook CR:
$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/kubev2v/hook-runner serviceAccount:<service account> (1) playbook: | LS0tCi0gbm... (2) EOF
1 (Optional) OKD service account. The serviceAccount
must be provided if you want to manipulate any resources of the cluster.2 Base64-encoded Ansible Playbook. If you specify a playbook, the image
must include anansible-runner
.You can use the default
hook-runner
image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.To decode an attached playbook, retrieve the resource with custom output and pipe it to base64. For example:
$ oc get -n konveyor-forklift hook playbook -o \ go-template='{{ .spec.playbook }}' | base64 -d
-
In the
Plan
CR of the migration, for each VM, add the following section to the end of the CR:vms: - id: <vm_id> hooks: - hook: namespace: <namespace> name: <name_of_hook> step: <type_of_hook> (1)
1 Options are PreHook
, to run the hook before the migration, andPostHook
, to run the hook after the migration.
In order for a PreHook to run on a VM, the VM must be started and available via SSH.
The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file.
- name: Main
hosts: localhost
vars_files:
- plan.yml
- workload.yml
tasks:
- k8s_info:
api_version: v1
kind: Secret
name: privkey
namespace: openshift-mtv
register: ssh_credentials
- name: Ensure SSH directory exists
file:
path: ~/.ssh
state: directory
mode: 0750
- name: Create SSH key
copy:
dest: ~/.ssh/id_rsa
content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
mode: 0600
- add_host:
name: "{{ vm.ipaddress }}" # ALT "{{ vm.guestnetworks[2].ip }}"
ansible_user: root
groups: vms
- hosts: vms
vars_files:
- plan.yml
- workload.yml
tasks:
- name: Stop MariaDB
service:
name: mariadb
state: stopped
- name: Create Test File
copy:
dest: /premigration.txt
content: "Migration from {{ provider.source.name }}
of {{ vm.vm1.vm0.id }} has finished\n"
mode: 0644
Upgrading Forklift
You can upgrade the Forklift Operator by using the OKD web console to install the new version.
-
In the OKD web console, click Operators → Installed Operators → Migration Toolkit for Virtualization Operator → Subscription.
-
Change the update channel to the correct release.
See Changing update channel in the OKD documentation.
-
Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the
CatalogSource
pod:-
Note the catalog source, for example,
redhat-operators
. -
From the command line, retrieve the catalog source pod:
$ kubectl get pod -n openshift-marketplace | grep <catalog_source>
-
Delete the pod:
$ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
Upgrade status changes from Up to date to Upgrade available.
If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.
-
-
If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.
See Manually approving a pending upgrade in the OKD documentation.
-
If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK
init
image. Otherwise, the update will change the state of any VMware providers to Critical.
For more information, see Adding a VMware vSphere source provider.
If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the
AccessModes
andVolumeMode
parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.
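The following command is a minimal sketch of such an edit, assuming the NFS storage class, and therefore the storage profile, is named nfs; check the actual name with kubectl get storageprofile and adjust the values to match your storage configuration:
$ kubectl patch storageprofile nfs --type=merge -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteMany"], "volumeMode": "Filesystem"}]}}'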
Uninstalling Forklift
You can uninstall Forklift by using the OKD web console or the command-line interface (CLI).
Uninstalling Forklift by using the OKD web console
You can uninstall Forklift by using the OKD web console.
-
You must be logged in as a user with
cluster-admin
privileges.
-
In the OKD web console, click Operators > Installed Operators.
-
Click Forklift Operator.
The Operator Details page opens in the Details tab.
-
Click the ForkliftController tab.
-
Click Actions and select Delete ForkLiftController.
A confirmation window opens.
-
Click Delete.
The controller is removed.
-
Open the Details tab.
The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it.
-
On the upper-right side of the page, click Actions and select Uninstall Operator.
A confirmation window opens, displaying any operand instances.
-
To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.
If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.
-
Click Uninstall.
The Installed Operators page opens, and the Forklift Operator is removed from the list of installed Operators.
-
Click Home > Overview.
-
In the Status section of the page, click Dynamic Plugins.
The Dynamic Plugins popup opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.
-
Click forklift-console-plugin.
The ConsolePlugin details page opens in the Details tab.
-
On the upper right-hand side of the page, click Actions and select Delete ConsolePlugin from the list.
A confirmation window opens.
-
Click Delete.
The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, refresh the Overview page.
Uninstalling Forklift from the command line
You can uninstall Forklift from the command line.
This action does not remove resources managed by the Forklift Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove these after uninstalling the Forklift Operator, you might need to manually delete the Forklift Operator CRDs. |
-
You must be logged in as a user with
cluster-admin
privileges.
-
Delete the
forklift
controller by running the following command:$ oc delete ForkliftController --all -n openshift-mtv
-
Delete the subscription to the Forklift Operator by running the following command:
$ oc get subscription -o name | grep 'mtv-operator' | xargs oc delete
-
Delete the
clusterserviceversion
for the Forklift Operator by running the following command:$ oc get clusterserviceversion -o name | grep 'mtv-operator' | xargs oc delete
-
Delete the plugin console CR by running the following command:
$ oc delete ConsolePlugin forklift-console-plugin
-
Optional: Delete the custom resource definitions (CRDs) by running the following command:
$ kubectl get crd -o name | grep 'forklift.konveyor.io' | xargs kubectl delete
-
Optional: Perform cleanup by deleting the Forklift project by running the following command:
$ oc delete project openshift-mtv
Forklift performance recommendations
The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Forklift, based on findings observed through testing.
The data provided here was collected from testing in Red Hat Labs and is provided for reference only.
Overall, these numbers should be considered best-case scenarios.
The observed performance of migration can differ from these results and depends on several factors.
Ensure fast storage and network speeds
Ensure fast storage and network speeds, both for VMware and OKD (OCP) environments.
-
To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast: ensure a 10 GbE network connection and avoid network bottlenecks.
-
Extend the VMware network to the OCP Workers Interface network environment.
-
Ensure that the VMware network offers high throughput (10 Gigabit Ethernet) so that reception rates align with the read rate of the ESXi datastore.
-
Be aware that the migration process uses significant bandwidth on the migration network. If other services use that network, the migration might affect those services, and those services might reduce the migration rate.
-
For example, 200 to 325 MiB/s was the average network transfer rate from the
vmnic
for each ESXi host associated with transferring data to the OCP interface.
-
Ensure fast datastore read speeds to ensure efficient and performant migrations
Datastore read rates impact the total transfer time, so it is essential to ensure fast reads from the ESXi datastore to the ESXi host.
For example, 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.
Endpoint types
Forklift 2.6 allows for the following vSphere provider options:
-
ESXi endpoint (inventory and disk transfers from ESXi), introduced in Forklift 2.6
-
vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter)
-
vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).
When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.
As of vSphere 7.0, ESXi hosts can label which network to use for NBD transport. This is accomplished by tagging the desired virtual network interface card (NIC) with the vSphereBackupNFC tag. For more details, see (Forklift-1230).
You can use the following ESXi command, which designates interface vmk2
for NBD backup:
esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2
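To confirm which tags are assigned to an interface, you can list them with the corresponding get subcommand; this is a sketch and should be verified against your ESXi version:
esxcli network ip interface tag get -i vmk2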
Set ESXi hosts BIOS profile and ESXi Host Power Management for High Performance
Where possible, ensure that hosts used to perform migrations are set with BIOS profiles related to maximum performance. For hosts that use Host Power Management controlled within vSphere, check that High Performance
is set.
Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations had an increase of 15 MiB/s in the average datastore read rate.
Avoid additional network load on VMware networks
You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.
By incorporating a virtualization provider, Forklift enables the selection of a specific network, which is accessible on the ESXi hosts, for the purpose of migrating virtual machines to OCP. Selecting this migration network from the ESXi host in the Forklift UI will ensure that the transfer is performed using the selected network as an ESXi endpoint.
It is imperative to ensure that the network selected has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated.
In environments with fast networks, such as 10GbE networks, migration network impacts can be expected to match the rate of ESXi datastore reads.
Control maximum concurrent disk migrations per ESXi host
Set the Forklift MAX_VM_INFLIGHT
variable to control the maximum number of concurrent VM transfers allowed per ESXi host.
Forklift allows for concurrency to be controlled using this variable; by default, it is set to 20.
When setting MAX_VM_INFLIGHT
, consider the maximum number of concurrent VM transfers required per ESXi host. It is also important to consider the type of migration that runs concurrently. A warm migration migrates a running VM over a scheduled period of time.
Warm migrations use snapshots to compare and migrate only the differences between previous snapshots of the disk. The migration of the differences between snapshots happens over specific intervals before a final cut-over of the running VM to OKD occurs.
In Forklift 2.6, MAX_VM_INFLIGHT
reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT
indicates how many concurrent VM transfers are allowed per ESXi host.
-
MAX_VM_INFLIGHT = 20
and 2 ESXi hosts defined in the provider mean that each host can transfer 20 VMs concurrently.
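You can change this value by patching the ForkliftController CR, in the same way as the precopy interval shown earlier. The following sketch assumes the setting is exposed through the controller_max_vm_inflight field of the CR spec:
$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_max_vm_inflight": 20}}' --type=merge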
Migrations are completed faster when migrating multiple VMs concurrently
When multiple VMs from a specific ESXi host are to be migrated, starting concurrent migrations for multiple VMs leads to faster migration times.
Testing demonstrated that migrating 10 VMs (each with a 50 GiB disk containing 35 GiB of data) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another.
It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but it does not show a significant improvement.
-
1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s
-
10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s
-
20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s
These examples show that migrating 10 virtual machines simultaneously is three times faster than migrating identical virtual machines sequentially. The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously.
Migrations complete faster using multiple hosts
Using multiple hosts with registered VMs equally distributed among the ESXi hosts used for migrations leads to faster migration times.
Testing showed that when transferring more than 10 single-disk VMs, each with a 50 GiB disk containing 35 GiB of data, using an additional host can reduce migration time.
-
80 single disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.
-
80 single disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.
These examples show that migrating 80 VMs concurrently from 8 ESXi hosts, 10 from each host, is four times faster than migrating the same VMs from a single ESXi host. Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance. However, this was not tested and is therefore not recommended.
Multiple migration plans compared to a single large migration plan
The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203).
When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start. By breaking up one migration plan into several migration plans, it is possible to start them at the same time.
Comparing migrations of:
-
500 VMs using 8 ESXi hosts in 1 plan,
max_vm_inflight=100
, took 5 hours and 10 minutes. -
800 VMs using 8 ESXi hosts with 8 plans,
max_vm_inflight=100
, took 57 minutes.
Testing showed that by breaking one single large plan into multiple moderately sized plans, for example, 100 VMs per plan, the total migration time can be reduced.
Maximum values tested for cold migrations
-
Maximum number of ESXi hosts tested: 8
-
Maximum number of VMs in a single migration plan: 500
-
Maximum number of VMs migrated in a single test: 5000
-
Maximum number of migration plans performed concurrently: 40
-
Maximum single disk size migrated: 6 TB disk, which contained 3 TB of data
-
Maximum number of disks on a single VM migrated: 50
-
Highest observed single datastore read rate from a single ESXi server: 312 MiB/second
-
Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second
-
Highest observed virtual NIC transfer rate to an OpenShift worker: 327 MiB/second
-
Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed when transferring nonconcurrent migration of 1.5 TB utilized data)
-
Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35/50 GiB used, from a single ESXi host)
-
Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35/50 GiB used, from 8 ESXi servers, 10 VMs from each ESXi host)
Warm migration recommendations
The following recommendations are specific to warm migrations:
Migrate up to 400 disks in parallel
Testing involved migrating 200 VMs in parallel, with 2 disks each, using 8 ESXi hosts, for a total of 400 disks. No tests were run on migration plans migrating more than 400 disks in parallel, so migrating more than this number of disks in parallel is not recommended.
Migrate up to 200 disks in parallel for the fastest rate
Testing was successfully performed on parallel disk migrations with 200, 300, and 400 disks. There was a decrease in the precopy migration rate, approximately 25%, between the tests migrating 200 disks and those migrating 300 and 400 disks.
Therefore, it is recommended to perform parallel disk migrations in groups of 200 or fewer, rather than 300 to 400 disks, unless a 25% decline in precopy speed does not affect your cutover planning.
When possible, set cutover time to be immediately after a migration plan starts
To reduce the overall time of warm migrations, it is recommended to set the cutover to occur immediately after the migration plan is started. This causes Forklift to run only one precopy per VM. This recommendation is valid, no matter how many VMs are in the migration plan.
Increase precopy intervals between snapshots
If you are creating many migration plans with a single VM and have enough time between the migration start and the cutover, increase the value of the controller_precopy_interval
parameter to between 120 and 240 minutes, inclusive. The longer setting will reduce the total number of snapshots and disk transfers per VM before the cutover.
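As a minimal sketch of applying this recommendation, assuming the ForkliftController instance is named forklift-controller and is installed in the openshift-mtv namespace, the interval could be raised to 240 minutes with a patch such as:
oc patch forkliftcontroller forklift-controller -n openshift-mtv \
  --type merge -p '{"spec": {"controller_precopy_interval": 240}}'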
Maximum values tested for warm migrations
-
Maximum number of ESXi hosts tested: 8
-
Maximum number of worker nodes: 12
-
Maximum number of VMs in a single migration plan: 200
-
Maximum number of total parallel disk transfers: 400, with 200 VMs, 6 ESXi hosts, and a transfer rate of 667 MB/s
-
Maximum single disk size migrated: 6 TB disk, which contained 3 TB of data
-
Maximum number of disks on a single VM migrated: 3
-
Maximum number of parallel disk transfers per ESXi host: 68
-
Maximum transfer rate observed of a single disk with no concurrent migrations: 76.5 MB/s
-
Maximum transfer rate observed of multiple disks from a single ESXi host: 253 MB/s (concurrent migration of 10 VMs, 1 disk each, 35/50 GiB used per disk)
-
Total transfer rate observed of multiple disks (210) from 8 ESXi hosts: 802 MB/s (concurrent migration of 70 VMs, 3 disks each, 35/50 GiB used per disk)
Recommendations for migrating VMs with large disks
The following recommendations apply to VMs with data on disk totaling 1 TB or more for each individual disk:
-
Schedule appropriate maintenance windows for migrating virtual machines (VMs) with large disks. Such migrations are sensitive operations and require careful planning of maintenance windows and downtime, preferably during periods of lower storage and network activity.
-
Check that no other migration activities or heavy network or storage activities run during these large virtual machine (VM) migrations. Treat these migrations as a special case and prioritize Forklift activities while they run. Plan to migrate these VMs at a time when there is less activity on the VMs and their related datastores.
-
For large VMs with a high churn rate, which means data is frequently changed in amounts of 100 GB or more between snapshots, consider reducing the warm migration
controller_precopy_interval
from the default, which is 60 minutes. Ensure that this process is started at least 24 hours before the scheduled cutover to allow multiple successful precopy snapshots to complete. When scheduling the cutover, ensure that the maintenance window allows sufficient time for the last snapshot of changes to be copied over and that the cutover process begins at the start of that maintenance window. -
For particularly large single-disk VMs where some downtime is acceptable, select cold migration rather than warm migration, especially when VM snapshots are large.
-
Consider splitting data on particularly large disks to multiple disks, which enables parallel disk migration with Forklift when warm migration is used.
-
If you have large database disks with continuous writes of large amounts of data, where downtime and VM snapshots are not possible, you might need to consider database vendor-specific replication options to handle these specific migrations outside Forklift. Consult your database vendor's documentation if this case applies.
Increasing asynchronous I/O (AIO) sizes and buffer counts for NBD transport mode
This section describes how to change NBD transport NFC parameters to increase migration performance when using Forklift.
Using AIO buffering is only suitable for cold migration use cases.
Key findings
-
The best migration performance was achieved when migrating multiple VMs (10) on a single ESXi host with the following values:
-
VixDiskLib.nfcAio.Session.BufSizeIn64KB=16
-
vixDiskLib.nfcAio.Session.BufCount=4
-
-
The following improvements were noted when using AIO buffer (Asynchronous Buffer Counts) settings:
-
Migration time was reduced by 31.1%, from 0:24:32 to 0:16:54.
-
Read rate was increased from 347.83 MB/s to 504.93 MB/s.
-
-
There was no significant improvement observed when using AIO buffer settings with a single VM.
-
There was no significant improvement observed when using AIO buffer settings with multiple VMs from multiple hosts.
Enabling AIO buffer configuration
-
Ensure that the
forklift-controller
pod in theopenshift-mtv
namespace supports the AIO buffer values. Since the pod name prefix is dynamic, check the pod name first by running the following command:
oc get pods -n openshift-mtv | grep forklift-controller | awk '{print $1}'
The example output is as follows:
forklift-controller-667f57c8f8-qllnx
This is the pod name from the example output; use it in the following steps:
forklift-controller-667f57c8f8-qllnx
-
Check the environment variables of the pod by running:
oc get pod forklift-controller-667f57c8f8-qllnx -n openshift-mtv -o yaml
-
Check for the following lines in the output:
...
- name: VIRT_V2V_EXTRA_ARGS
- name: VIRT_V2V_EXTRA_CONF_CONFIG_MAP
...
-
In the
openshift-mtv namespace
, edit theForkliftController
object to include the AIO buffer values by running the following command:oc edit forkliftcontroller -n openshift-mtv
Add the following under the spec section:
virt_v2v_extra_args: "--vddk-config /mnt/extra-v2v-conf/input.conf"
virt_v2v_extra_conf_config_map: "perf"
Here, perf is the name of the ConfigMap that you create in the next step.
-
Create the required ConfigMap using the following command:
oc -n openshift-mtv create cm perf
-
Convert the desired buffer configuration values to Base64. For example, for 16/4:
echo -e "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nvixDiskLib.nfcAio.Session.BufCount=4" | base64
The output will be similar to the following:
Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo=
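Optionally, you can confirm that the string decodes back to the intended settings by decoding it, for example (the -d flag applies to GNU base64; other platforms might use a different flag):
echo "Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo=" | base64 -d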
-
Update the perf ConfigMap with the Base64 string under the
binaryData
section, for example:
apiVersion: v1
kind: ConfigMap
binaryData:
  input.conf: Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo=
metadata:
  name: perf
  namespace: openshift-mtv
-
Restart the forklift-controller pod to apply the new configuration.
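For example, assuming the pod name found earlier, deleting the pod causes its deployment to recreate it with the new configuration:
oc delete pod forklift-controller-667f57c8f8-qllnx -n openshift-mtv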
-
Ensure the
VIRT_V2V_EXTRA_ARGS
environment variable reflects the updated settings.
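For example, one way to check is to dump the pod definition and filter for the variables, reusing the pod name found earlier:
oc get pod forklift-controller-667f57c8f8-qllnx -n openshift-mtv -o yaml | grep -A 2 VIRT_V2V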
-
Run a migration plan and check the logs of the migration pod. Confirm that the AIO buffer settings are passed as parameters, particularly the
--vddk-config value
.For example:
exec: /usr/bin/virt-v2v … --vddk-config /mnt/extra-v2v-conf/input.conf
Sample log excerpt:
Buffer size calc for 16 value: (16 * 64 * 1024 = 1048576)
nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAio_OpenSession: Opening an AIO session.
nbdkit: vddk[1]: debug: [NFC INFO] NfcAioInitSession: Disabling read-ahead buffer since the AIO buffer size of 1048576 is >= the read-ahead buffer size of 65536. Explicitly setting flag 'NFC_AIO_SESSION_NO_NET_READ_AHEAD'
nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAioInitSession: AIO Buffer Size is 1048576
nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAioInitSession: AIO Buffer Count is 4
The preceding logs were captured when using
debug_level = 4
-
Log in to the migration pod and verify the buffer settings using the following command:
cat /mnt/extra-v2v-conf/input.conf
The example output is as follows:
VixDiskLib.nfcAio.Session.BufSizeIn64KB=16 vixDiskLib.nfcAio.Session.BufCount=4
-
To enable debug logs, convert the configuration to Base64, including a high log level:
echo -e "`VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nVixDiskLib.nfcAio.Session.BufCount=4\nVixDiskLib.nfc.LogLevel=4`" | base64
Adding a high log level will degrade performance and is for debugging purposes only.
Disabling AIO buffer configuration
To disable the AIO buffer configuration, complete the following steps:
-
Edit the ForkliftController object and remove the previously added lines from the spec section:
oc edit forkliftcontroller -n openshift-mtv
-
Remove the following lines:
virt_v2v_extra_args: "--vddk-config /mnt/extra-v2v-conf/input.conf"
virt_v2v_extra_conf_config_map: "perf"
-
Delete the perf ConfigMap that was created earlier:
oc delete cm perf -n openshift-mtv
-
Optional: Restart the forklift-controller pod to ensure that the changes take effect.
Key requirements for AIO Buffer (Asynchronous Buffer Counts) support
Support is based on tests performed using the following versions:
-
vSphere: 7.0.3
-
VDDK: 7.0.3
-
For other VDDK and vSphere versions, check AIO buffer support in the official VMware documentation.
Troubleshooting
This section provides information for troubleshooting common migration issues.
Error messages
This section describes error messages and how to resolve them.
The warm import retry limit reached
error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.
To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.
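You can delete the snapshots in the vSphere UI or from the command line. As one possible sketch, assuming VMware's govc CLI is installed and the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables are already set, the snapshots could be listed and removed as follows:
govc snapshot.tree -vm <vm_name>
govc snapshot.remove -vm <vm_name> <snapshot_name>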
The Unable to resize disk image to required size
error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the space reserved for the root partition.
To resolve this problem, increase the file system overhead in CDI to more than 10%.
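A minimal sketch of such a change, assuming a CDI deployment that is managed through a cluster-scoped CDI custom resource named cdi (on OpenShift Virtualization the equivalent setting is typically managed through the HyperConverged CR), could set the global overhead to 15%:
oc patch cdi cdi --type merge -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.15"}}}}'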
Using the must-gather tool
You can collect logs and information about Forklift custom resources (CRs) by using the must-gather
tool. You must attach a must-gather
data file to all customer cases.
You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.
If you specify a non-existent resource in the filtered must-gather command, no archive file is created.
-
You must be logged in to the KubeVirt cluster as a user with the
cluster-admin
role. -
You must have the OKD CLI (
oc
) installed.
-
Navigate to the directory where you want to store the
must-gather
data. -
Run the
oc adm must-gather
command:$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest
The data is saved as
/must-gather/must-gather.tar.gz
. You can upload this file to a support case on the Red Hat Customer Portal. -
Optional: Run the
oc adm must-gather
command with the following options to gather filtered data:-
Namespace:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \ -- NS=<namespace> /usr/bin/targeted
-
Migration plan:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \ -- PLAN=<migration_plan> /usr/bin/targeted
-
Virtual machine:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \ -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
1 Specify the VM ID as it appears in the Plan
CR.
-
Architecture
This section describes Forklift custom resources, services, and workflows.
Forklift custom resources and services
Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.
-
Provider
CR stores attributes that enable Forklift to connect to and interact with the source and target providers (a minimal example follows this list). -
NetworkMapping
CR maps the networks of the source and target providers. -
StorageMapping
CR maps the storage of the source and target providers. -
Plan
CR contains a list of VMs with the same migration parameters and associated network and storage mappings. -
Migration
CR runs a migration plan.Only one
Migration
CR per migration plan can run at a given time. You can create multipleMigration
CRs for a singlePlan
CR.
-
The
Inventory
service performs the following actions:-
Connects to the source and target providers.
-
Maintains a local inventory for mappings and plans.
-
Stores VM configurations.
-
Runs the
Validation
service if a VM configuration change is detected.
-
-
The
Validation
service checks the suitability of a VM for migration by applying rules. -
The
Migration Controller
service orchestrates migrations.When you create a migration plan, the
Migration Controller
service validates the plan and adds a status label. If the plan fails validation, the plan status isNot ready
and the plan cannot be used to perform a migration. If the plan passes validation, the plan status isReady
and it can be used to perform a migration. After a successful migration, theMigration Controller
service changes the plan status toCompleted
. -
The
Populator Controller
service orchestrates disk transfers using Volume Populators. -
The
Kubevirt Controller
andContainerized Data Import (CDI) Controller
services handle most technical operations.
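The following is a minimal sketch of a vSphere Provider CR to illustrate the attributes mentioned in this list; the URL, secret name, and namespace are placeholder assumptions:
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider-example
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk
  secret:
    name: vsphere-credentials
    namespace: openshift-mtv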
High-level migration workflow
The high-level workflow shows the migration process from the point of view of the user:
-
You create a source provider, a target provider, a network mapping, and a storage mapping.
-
You create a
Plan
custom resource (CR) that includes the following resources:-
Source provider
-
Target provider, if Forklift is not installed on the target cluster
-
Network mapping
-
Storage mapping
-
One or more virtual machines (VMs)
-
-
You run a migration plan by creating a
Migration
CR that references thePlan
CR.If you cannot migrate all the VMs for any reason, you can create multiple
Migration
CRs for the samePlan
CR until all VMs are migrated. -
For each VM in the
Plan
CR, theMigration Controller
service records the VM migration progress in theMigration
CR. -
Once the data transfer for each VM in the
Plan
CR completes, theMigration Controller
service creates aVirtualMachine
CR.When all VMs have been migrated, the
Migration Controller
service updates the status of thePlan
CR toCompleted
. The power state of each source VM is maintained after migration.
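The following is a minimal sketch of the Migration CR referenced in this workflow; the plan name and namespace are placeholder assumptions:
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: migration-example
  namespace: openshift-mtv
spec:
  plan:
    name: plan-example
    namespace: openshift-mtv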
Detailed migration workflow
You can use the detailed migration workflow to troubleshoot a failed migration.
The workflow describes the following steps:
Warm migration or migration to a remote OpenShift cluster:
-
When you create the
Migration
custom resource (CR) to run a migration plan, theMigration Controller
service creates aDataVolume
CR for each source VM disk.For each VM disk:
-
The
Containerized Data Importer (CDI) Controller
service creates a persistent volume claim (PVC) based on the parameters specified in theDataVolume
CR. -
If the
StorageClass
has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by theStorageClass
provisioner. -
The
CDI Controller
service creates animporter
pod. -
The
importer
pod streams the VM disk to the PV.After the VM disks are transferred:
-
The
Migration Controller
service creates aconversion
pod with the PVCs attached to it when importing from VMWare.The
conversion
pod runsvirt-v2v
, which installs and configures device drivers on the PVCs of the target VM. -
The
Migration Controller
service creates aVirtualMachine
CR for each source virtual machine (VM), connected to the PVCs. -
If the VM ran on the source environment, the
Migration Controller
powers on the VM, theKubeVirt Controller
service creates avirt-launcher
pod and aVirtualMachineInstance
CR.The
virt-launcher
pod runsQEMU-KVM
with the PVCs attached as VM disks.
Cold migration from oVirt or OpenStack to the local OpenShift cluster:
-
When you create a
Migration
custom resource (CR) to run a migration plan, theMigration Controller
service creates for each source VM disk aPersistentVolumeClaim
CR, and anOvirtVolumePopulator
when the source is oVirt, or anOpenstackVolumePopulator
CR when the source is OpenStack.For each VM disk:
-
The
Populator Controller
service creates a temporary persistent volume claim (PVC).
If the
StorageClass
has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by theStorageClass
provisioner.-
The
Migration Controller
service creates a dummy pod to bind all PVCs. The name of the pod containspvcinit
.
-
-
The
Populator Controller
service creates apopulator
pod. -
The
populator
pod transfers the disk data to the PV.After the VM disks are transferred:
-
The temporary PVC is deleted, and the initial PVC points to the PV with the data.
-
The
Migration Controller
service creates aVirtualMachine
CR for each source virtual machine (VM), connected to the PVCs. -
If the VM ran on the source environment, the
Migration Controller
powers on the VM, theKubeVirt Controller
service creates avirt-launcher
pod and aVirtualMachineInstance
CR.The
virt-launcher
pod runsQEMU-KVM
with the PVCs attached as VM disks.
Cold migration from VMware to the local OpenShift cluster:
-
When you create a
Migration
custom resource (CR) to run a migration plan, theMigration Controller
service creates aDataVolume
CR for each source VM disk.For each VM disk:
-
The
Containerized Data Importer (CDI) Controller
service creates a blank persistent volume claim (PVC) based on the parameters specified in theDataVolume
CR. -
If the
StorageClass
has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by theStorageClass
provisioner.
For all VM disks:
-
The
Migration Controller
service creates a dummy pod to bind all PVCs. The name of the pod containspvcinit
. -
The
Migration Controller
service creates aconversion
pod for all PVCs. -
The
conversion
pod runsvirt-v2v
, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.After the VM disks are transferred:
-
The
Migration Controller
service creates aVirtualMachine
CR for each source virtual machine (VM), connected to the PVCs. -
If the VM ran on the source environment, the
Migration Controller
powers on the VM, theKubeVirt Controller
service creates avirt-launcher
pod and aVirtualMachineInstance
CR.The
virt-launcher
pod runsQEMU-KVM
with the PVCs attached as VM disks.
How MTV uses the virt-v2v tool
Forklift uses the virt-v2v
tool to convert the disk image of a VM into a format compatible with KubeVirt. The tool makes migrations easier because it automatically performs the tasks needed to make your VMs work with KubeVirt, such as enabling paravirtualized VirtIO drivers in the converted virtual machine, if possible, and installing the QEMU guest agent.
virt-v2v
is included in Red Hat Enterprise Linux (RHEL) versions 7 and later.
Main functions of virt-v2v in MTV migrations
During migration, Forklift uses virt-v2v
to collect metadata about VMs, make necessary changes to VM disks, and copy the disks containing the VMs to KubeVirt.
virt-v2v
makes the following changes to VM disks to prepare them for migration:
-
Additions:
-
Injection of VirtIO drivers, for example, network or disk drivers.
-
Preparation of hypervisor-specific tools or agents, for example, a QEMU guest agent installation.
-
Modification of boot configuration, for example, updated bootloader or boot entries.
-
-
Removals:
-
Unnecessary or former hypervisor-specific files, for example, VMware tools or VirtualBox additions.
-
Old network driver configurations, for example, removing VMware-specific NIC drivers.
-
Configuration settings that are incompatible with the target system, for example, old boot settings.
-
If you are migrating from VMware or from OVA files, virt-v2v
also sets the IP addresses of the VMs, either during the migration or during the first reboot after migration.
You can also run pre-defined Ansible hooks before or after a migration using Forklift. For more information, see Adding hooks to an MTV migration plan. These hooks do not necessarily use virt-v2v.
Customizing, removing, and installing files
Forklift uses virt-v2v
to perform additional guest customizations during the conversion, such as the following actions:
-
Customization to preserve IP addresses
-
Customization to preserve drive letters
For RHEL-based guests,
For more information, see the man reference pages.
Permissions and virt-v2v
virt-v2v
does not require permissions or access credentials for the guest operating system itself because virt-v2v
is not run against a running VM, but only against the disks of a VM.
Logs and custom resources
You can download logs and custom resource (CR) information for troubleshooting. For more information, see the detailed migration workflow.
Collected logs and custom resource information
You can download logs and custom resource (CR) yaml
files for the following targets by using the OKD web console or the command-line interface (CLI):
-
Migration plan: Web console or CLI.
-
Virtual machine: Web console or CLI.
-
Namespace: CLI only.
The must-gather
tool collects the following logs and CR files in an archive file:
-
CRs:
-
DataVolume
CR: Represents a disk mounted on a migrated VM. -
VirtualMachine
CR: Represents a migrated VM. -
Plan
CR: Defines the VMs and storage and network mapping. -
Job
CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.
-
-
Logs:
-
importer
pod: Disk-to-data-volume conversion log. Theimporter
pod naming convention isimporter-<migration_plan>-<vm_id><5_char_id>
, for example,importer-mig-plan-ed90dfc6-9a17-4a8btnfh
, whereed90dfc6-9a17-4a8
is a truncated oVirt VM ID andbtnfh
is the generated 5-character ID. -
conversion
pod: VM conversion log. Theconversion
pod runsvirt-v2v
, which installs and configures device drivers on the PVCs of the VM. Theconversion
pod naming convention is<migration_plan>-<vm_id><5_char_id>
. -
virt-launcher
pod: VM launcher log. When a migrated VM is powered on, thevirt-launcher
pod runsQEMU-KVM
with the PVCs attached as VM disks. -
forklift-controller
pod: The log is filtered for the migration plan, virtual machine, or namespace specified by themust-gather
command. -
forklift-must-gather-api
pod: The log is filtered for the migration plan, virtual machine, or namespace specified by themust-gather
command. -
hook-job
pod: The log is filtered for hook jobs. Thehook-job
naming convention is<migration_plan>-<vm_id><5_char_id>
, for example,plan2j-vm-3696-posthook-4mx85
orplan2j-vm-3696-prehook-mwqnl
.Empty or excluded log files are not included in the
must-gather
archive file.
-
must-gather
└── namespaces
    ├── target-vm-ns
    │   ├── crs
    │   │   ├── datavolume
    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
    │   │   └── virtualmachine
    │   │       ├── test-test-rhel8-2disks2nics.yaml
    │   │       └── test-x2019.yaml
    │   └── logs
    │       ├── importer-mig-plan-vm-7595-tkhdz
    │       │   └── current.log
    │       ├── importer-mig-plan-vm-7595-5qvqp
    │       │   └── current.log
    │       ├── importer-mig-plan-vm-8325-xccfw
    │       │   └── current.log
    │       ├── mig-plan-vm-7595-4glzd
    │       │   └── current.log
    │       └── mig-plan-vm-8325-4zw49
    │           └── current.log
    └── openshift-mtv
        ├── crs
        │   └── plan
        │       └── mig-plan-cold.yaml
        └── logs
            ├── forklift-controller-67656d574-w74md
            │   └── current.log
            └── forklift-must-gather-api-89fc7f4b6-hlwb6
                └── current.log
Downloading logs and custom resource information from the web console
You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) from the OKD web console.
-
In the OKD web console, click Migration → Plans for virtualization.
-
Click Get logs beside a migration plan name.
-
In the Get logs window, click Get logs.
The logs are collected. A
Log collection complete
message is displayed. -
Click Download logs to download the archive file.
-
To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.
Accessing logs and custom resource information from the command line
You can access logs and information about custom resources (CRs) from the command line by using the must-gather
tool. You must attach a must-gather
data file to all customer cases.
You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.
If you specify a non-existent resource in the filtered must-gather command, no archive file is created.
-
You must be logged in to the KubeVirt cluster as a user with the
cluster-admin
role. -
You must have the OKD CLI (
oc
) installed.
-
Navigate to the directory where you want to store the
must-gather
data. -
Run the
oc adm must-gather
command:$ kubectl adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest
The data is saved as
/must-gather/must-gather.tar.gz
. You can upload this file to a support case on the Red Hat Customer Portal. -
Optional: Run the
oc adm must-gather
command with the following options to gather filtered data:-
Namespace:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \ -- NS=<namespace> /usr/bin/targeted
-
Migration plan:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \ -- PLAN=<migration_plan> /usr/bin/targeted
-
Virtual machine:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \ -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
1 You must specify the VM name, not the VM ID, as it appears in the Plan
CR.
-
Telemetry
Telemetry
Red Hat uses telemetry to collect anonymous usage data from Forklift installations to help us improve the usability and efficiency of Forklift.
Forklift collects the following data:
-
Migration plan status: The number of migrations. Includes those that failed, succeeded, or were canceled.
-
Provider: The number of migrations per provider. Includes oVirt, vSphere, OpenStack, OVA, and KubeVirt providers.
-
Mode: The number of migrations by mode. Includes cold and warm migrations.
-
Target: The number of migrations by target. Includes local and remote migrations.
-
Plan ID: The ID number of the migration plan. The number is assigned by Forklift.
Metrics are calculated every 10 seconds and are reported per week, per month, and per year.
Additional information
Forklift performance addendum
The data provided here was collected from testing in Red Hat Labs and is provided for reference only.
Overall, these numbers represent best-case scenarios.
The observed performance of migration can differ from these results and depends on several factors.
ESXi performance
Migration was tested using the same ESXi host.
In each iteration, the total number of VMs was increased to show the impact of concurrent migration on duration.
The results show that migration time increases linearly with the total number of VMs (50 GiB disk, 70% utilization).
The optimal number of VMs per ESXi host is 10.
Test Case Description | MTV | VDDK | max_vm inflight | Migration Type | Total Duration
---|---|---|---|---|---
cold migration, 10 VMs, Single ESXi, Private Network [2] | 2.6 | 7.0.3 | 100 | cold | 0:21:39
cold migration, 20 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 0:41:16
cold migration, 30 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:00:59
cold migration, 40 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:23:02
cold migration, 50 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:46:24
cold migration, 80 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 2:42:49
cold migration, 100 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 3:25:15
In each iteration, the number of ESXi hosts was increased, to show that increasing the number of ESXi hosts improves the migration time (50 GiB disk, 70% utilization).
Test Case Description | MTV | VDDK | max_vm inflight | Migration Type | Total Duration
---|---|---|---|---|---
cold migration, 100 VMs, Single ESXi, Private Network [3] | 2.6 | 7.0.3 | 100 | cold | 3:25:15
cold migration, 100 VMs, 4 ESXs (25 VMs per ESX), Private Network | 2.6 | 7.0.3 | 100 | cold | 1:22:27
cold migration, 100 VMs, 5 ESXs (20 VMs per ESX), Private Network, 1 DataStore | 2.6 | 7.0.3 | 100 | cold | 1:04:57
Different migration network performance
In each iteration, the Migration Network
was changed, using the Provider, to find the fastest network for migration.
The results show that there is no degradation when using management networks compared to non-management networks when all interfaces and network speeds are the same.
Test Case Description | MTV | VDDK | max_vm inflight | Migration Type | Total Duration
---|---|---|---|---|---
cold migration, 10 VMs, Single ESXi, MGMT Network | 2.6 | 7.0.3 | 100 | cold | 0:21:30
cold migration, 10 VMs, Single ESXi, Private Network [4] | 2.6 | 7.0.3 | 20 | cold | 0:21:20
cold migration, 10 VMs, Single ESXi, Default Network | 2.6.2 | 7.0.3 | 20 | cold | 0:21:30