Planning your migration

You can use the Migration Toolkit for Virtualization (MTV) to plan your migration of virtual machines from the following source providers to OpenShift Virtualization destination providers:

  • VMware vSphere

  • oVirt

  • OpenStack

  • Open Virtual Appliances (OVAs) that were created by VMware vSphere

  • Remote KubeVirt clusters

OVA migration is validated for migrating supported guest operating systems exported from VMware vSphere. For third-party networking or security appliances, check with the vendor for native QCOW2 or KVM images.

Types of migration

Forklift supports three types of migration: cold, warm, and live.

  • Cold migration is available for all of the source providers listed above. This type of migration migrates VMs that are powered off and does not require shared storage.

  • Warm migration is available only for VMware vSphere and oVirt. This type of migration migrates VMs that are powered on and does require shared storage.

    These two types of migration are discussed in detail in About cold and warm migration.

  • Live migration is available only for migrations between KubeVirt clusters or between namespaces on the same KubeVirt cluster. It requires Forklift version 2.10 or later and KubeVirt version 4.20 or later.

    Live migration is discussed in detail in Live migration in Forklift.

Cold and warm migration in Forklift

Forklift supports cold migration for VMs that are shut down and warm migration for running VMs with minimal downtime.

About cold and warm migration

Choose cold migration for VMs that are shut down and warm migration for running VMs. Cold migration works with VMware vSphere, oVirt, OpenStack, Open Virtual Appliances (OVAs) created by VMware vSphere, and remote KubeVirt clusters. Warm migration works with VMware vSphere and oVirt.

Cold migration

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

Cold migration converts each VM to be compatible with OKD before transferring it. If a VM cannot be converted, the migration fails immediately (fail fast). All disk blocks are copied once in a sequential process.

VMware only: In cold migrations, if a package manager cannot be used during the migration, Forklift does not install the qemu-guest-agent daemon on the migrated VMs. This has some impact on the functionality of the migrated VMs, but overall, they are still expected to function.

To enable Forklift to automatically install qemu-guest-agent on the migrated VMs, ensure that your package manager can install the daemon during the first boot of the VM after migration.

If that is not possible, use your preferred automated or manual procedure to install qemu-guest-agent manually.
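One way to arrange installation during the first boot after migration is a cloud-init fragment like the following sketch. This assumes a cloud-init-enabled Linux guest with access to a package repository; the package name may differ by distribution:

```yaml
#cloud-config
# Install the guest agent on first boot (assumes repository access).
packages:
  - qemu-guest-agent
# Start the agent immediately and enable it for subsequent boots.
runcmd:
  - [systemctl, enable, --now, qemu-guest-agent]
```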

Warm migration

Warm migration copies most of the data while the source virtual machines (VMs) remain running, minimizing downtime.

The migration process has two stages:

  • Precopy stage: Most data is copied while VMs continue running. VM disks are copied incrementally using changed block tracking (CBT) snapshots.

  • Cutover stage: VMs are shut down and the remaining data is migrated. Data stored in RAM is not migrated.

Warm migration transfers snapshots to OKD first, then converts the VM during the cutover stage. This means disk blocks may be copied multiple times if the VM has high utilization, but VMs remain available during most of the migration process. You must enable CBT for each source VM and each VM disk before starting a warm migration.

Warm migration stages

Warm migration operates in two stages: the precopy stage, where most data is copied while VMs run, and the cutover stage, where VMs are shut down to complete the migration.

Precopy stage

The VMs are not shut down during the precopy stage.

The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.
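As a sketch, the interval can be set in the ForkliftController custom resource. The `controller_precopy_interval` parameter name and the 30-minute value shown here are assumptions; verify the parameter name for your Forklift version:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Interval, in minutes, between CBT precopy snapshots (default: 60).
  controller_precopy_interval: 30
```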

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

Cutover stage

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration custom resource (CR).

The duration of the cutover stage depends on how much data has changed since the last snapshot was created during the precopy stage.
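A scheduled cutover is expressed as a timestamp in the Migration CR. The following is a minimal sketch; the resource names are illustrative, and only the `cutover` field is the scheduling mechanism described above:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: my-warm-migration        # illustrative name
  namespace: openshift-mtv
spec:
  plan:
    name: my-warm-plan           # illustrative plan name
    namespace: openshift-mtv
  # ISO 8601 UTC timestamp at which the cutover stage starts.
  cutover: "2026-01-15T02:00:00Z"
```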

Migration speed comparison

Compare cold and warm migration speeds to choose the best migration type for your workload. Both have similar transfer speeds, but warm migration copies data in the background with hourly snapshots while the VMs keep running and requires a shutdown only at the cutover stage, whereas cold migration requires the VMs to be shut down for the whole process.

  • The observed speeds for the warm migration single disk transfer and disk conversion are approximately the same as for cold migration.

  • The benefit of warm migration is that the transfer of the snapshot happens in the background while the VM is running.

  • By default, snapshots are taken every 60 minutes. If VMs change substantially, more data needs to be transferred than in a cold migration, in which the VM is shut down.

  • The cutover stage, meaning the final VM shutdown and last snapshot transfer, depends on how much the VM has changed since the last snapshot.

Live migration reduces downtime even further than warm migration, but live migration is available only for migration between KubeVirt clusters or between namespaces on the same KubeVirt cluster. Therefore it is not included in the comparison above.

Choosing a migration type

Choose the migration type that best fits your requirements for downtime, migration speed, and source VM compatibility.

Cold migration

A cold migration moves a shutdown VM between hosts. It is the default migration type in Forklift.

Use cold migration when:

  • Downtime is acceptable: You can afford to shut down VMs during migration. The VM remains shut down for the entire duration of the data transfer.

  • The VM is not a critical production workload: For VMs used for development, testing, or other non-essential tasks, the downtime is unlikely to have a major business impact.

  • You are migrating VMs with large amounts of data on a single disk: Since the VM is offline, there are no live changes to track, making the data copy a one-time, full-disk transfer.

  • You are migrating from OpenStack or OVA sources: Warm migration is not supported for these source types.

  • Changed block tracking (CBT) cannot be enabled on the source VMs: Warm migration requires CBT to track changes incrementally.

  • You need to ensure a clean state: Because the VM is fully shut down, there is no risk of data changes or I/O operations during the migration.

  • You want the fastest migration for individual VMs with minimal complexity: Cold migration is the simplest and most straightforward method.

Cold migration shuts down the source VMs and copies all data in a single operation. Each VM is converted to be compatible with OKD before transfer, which means migrations fail immediately if conversion is not possible (fail fast).

Warm migration

A warm migration moves an active VM between hosts with minimal downtime. This is not live migration.

You must enable changed block tracking (CBT) for each source VM and each VM disk before starting a warm migration. CBT allows the migration to copy only the data that has changed since the last copy, which enables incremental data transfer while the VM runs.
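For VMware vSphere sources, CBT is controlled by advanced VM configuration parameters. As a sketch, based on VMware's documented advanced settings, the following entries enable CBT for a VM and for one of its disks; `scsi0:0` is an example disk node, and the VM typically must be powered off with no existing snapshots for the change to take effect:

```
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"
```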

Warm migration is only supported when migrating from VMware vSphere or oVirt.

Use warm migration when:

  • You must minimize downtime for a critical workload: The process is designed to reduce the service interruption to a few minutes, or even just seconds, during the final cutover phase.

  • The VM is a production server or a business-critical application: For applications that need continuous availability, warm migration is the preferred choice to ensure business continuity.

  • You are migrating VMs with large amounts of data spread across multiple disks: Parallel disk transfers reduce overall migration time.

  • You need to stage the migration over a period of time: Warm migration copies the majority of the VM data (the precopy stage) while the VM is still running. This allows you to perform the bulk of the data transfer during business hours without impacting users.

  • You have a pre-planned maintenance window for the final cutover: Even with warm migration, there is a brief period of downtime when the VM is shut down to perform the final data synchronization. You can schedule this cutover for a time with the least impact.

Decision guide

Use the following guide to select the appropriate migration type:

Table 1. Migration type decision guide
Your priority Recommended migration type Key consideration

Minimize downtime

Warm migration

Requires CBT enabled on source VMs and disks. Most data transfers while VMs run.

Fastest migration for single-disk VMs

Cold migration

VMs are shut down during entire migration. Each disk block copied once.

Migrating from OpenStack or OVA

Cold migration

Warm migration is not supported for these source types.

Cannot enable CBT

Cold migration

Warm migration requires CBT to track changes incrementally.

Multi-disk VMs with even data distribution

Warm migration

Parallel disk transfers reduce overall migration time despite longer total duration.

Fail-fast validation

Cold migration

Conversion happens first, so incompatible VMs fail immediately without data transfer.

Clean state required

Cold migration

VM is fully shut down with no risk of data changes during migration.

Production workload requiring continuous availability

Warm migration

Service interruption reduced to seconds or minutes during cutover.

Pre-planned maintenance window

Warm migration

Bulk data transfer during business hours, final cutover during maintenance window.

OpenStack and OVA sources do not support warm migration. Remote KubeVirt clusters support only cold migration.

Live migration in Forklift

You can use live migration to migrate VMs between KubeVirt clusters or namespaces on the same KubeVirt cluster with minimal downtime. Live migration makes it easier to perform Day 2 tasks, such as maintenance and workload balancing after you have migrated your VMs to KubeVirt.

Live migration is supported by Forklift version 2.10.0 and later. It requires KubeVirt 4.20 or later on both your source and target clusters.

There is a known issue with live migration: Migration between namespaces on the same KubeVirt cluster fails because of MAC address collisions.

Benefits of live migration

Use live migration to perform Day 2 operations like maintenance and workload balancing with minimal service disruption. You can migrate VMs between KubeVirt clusters and namespaces while they run, avoiding the need for scheduled downtime.

Live migration has the following benefits:

  • Additional migration functionality: Live migration supports migrating virtual machines (VMs) between KubeVirt clusters and between namespaces on the same KubeVirt clusters, making Day 2 operations easier and safer to perform.

  • Improved service continuity: Live migration lets you quickly migrate VMs from one cluster to another, allowing you to eliminate the need for scheduled downtime during cluster maintenance or upgrades. This allows you to provide more consistent and reliable services.

  • Greater operational flexibility: Live migration allows your IT team to manage your infrastructure dynamically without harming business operations. Your team can respond to changing demands or perform necessary maintenance without complex, disruptive procedures.

  • Enhanced performance and scalability: Live migration gives you the ability to balance workloads across clusters. This helps ensure that applications have the resources they need, leading to better overall system performance and scalability.

Live migration, Forklift, and KubeVirt

Live migration is a joint operation between Forklift and KubeVirt that leverages the strengths of Forklift when you migrate VMs from one KubeVirt cluster to another.

Tasks and responsibilities are divided between Forklift and KubeVirt:

  • Forklift manages the high-level orchestration that is needed to perform a live migration of KubeVirt VMs from one cluster to another.

  • KubeVirt is responsible for the low-level migration mechanics, such as the actual state and storage transfer between the clusters.

Orchestration is done by the ForkliftController component of Forklift, rather than by KubeVirt, because ForkliftController is already designed to manage the migration pipeline, which includes the following responsibilities:

  • Build an inventory of source resources and map them to the destination cluster.

  • Create and run the migration plan.

  • Ensure that all necessary shared resources, such as instance types, SSH keys, secrets, and config maps, are available and accessible on the destination cluster.

Limitations of live migration

Perform a live migration to migrate VMs between KubeVirt clusters or between namespaces on the same KubeVirt cluster with a minimum of downtime.

Limitations
  • Live migration is available only for migrations between KubeVirt clusters or between namespaces on the same KubeVirt cluster. It is not available for any other source provider, whether the provider is supported by Forklift or not.

  • Live migration does not establish connectivity between KubeVirt clusters. Establishing such connectivity is the responsibility of the cluster administrator.

  • Live migration does not migrate resources unrelated to VMs, such as services, routes, or other application components, that may be necessary for application availability after a migration.

Live migration workflow

Live migration follows a unique workflow. Understanding how Forklift orchestrates with KubeVirt helps you troubleshoot migration issues.

  1. Start: When you click Start plan, Forklift initiates the migration plan.

  2. PreHook: If you added a pre-migration hook, Forklift runs it now.

  3. Create empty DataVolumes: Forklift creates empty target DataVolumes in the target KubeVirt cluster and relies on KubeVirt to handle the actual storage migration.

  4. Ensure resources: Forklift copies all secrets or config maps that are mounted by a source VM to the target namespace.

  5. Create target VMs: Forklift creates target VMs in a running state and creates a VirtualMachineInstanceMigration resource on each cluster. The VMs have a special KubeVirt annotation that indicates they are to be started in migration target mode.

  6. Wait for state transfer: Forklift waits for KubeVirt to handle the state transfer and for the destination VMs to report as ready. KubeVirt also handles the shutdown of the source VMs after the state transfer.

  7. PostHook: If you added a post-migration hook, Forklift runs it now.

  8. Completed: Forklift indicates that the migration is finished.

Software requirements for migration

Review the following software requirements to ensure that your environment is prepared for migration.

Forklift has software requirements for all providers as well as specific software requirements per provider.

You must install compatible versions of OKD and KubeVirt.

Storage support and default modes

Forklift uses the following default volume and access modes for supported storage.

Table 2. Default volume and access modes
Provisioner Volume mode Access mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

  • Filesystem volume mode

    Filesystem volume mode is slower than Block volume mode.

  • ReadWriteOnce access mode

    ReadWriteOnce access mode does not support live virtual machine migration.

See Enabling a statically-provisioned storage class for details on editing the storage profile.
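These defaults can be recorded in the storage class's CDI StorageProfile. The following is a minimal sketch assuming a storage class named my-static-storage-class:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: my-static-storage-class   # must match the storage class name
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce               # does not support live VM migration
    volumeMode: Filesystem        # slower than Block volume mode
```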

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in the Containerized Data Importer (CDI) to be more than 10%. The default overhead that is assumed by CDI does not completely include the reserved space for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.

When you migrate from OpenStack, or when you run a cold migration from oVirt to the OKD cluster that Forklift is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead.

If the configured file system overhead, which defaults to 10%, is too low, the disk transfer fails due to lack of space. In that case, increase the file system overhead.

In some cases, however, you might want to decrease the file system overhead to reduce storage consumption.

You can change the file system overhead by changing the value of the controller_filesystem_overhead parameter in the spec portion of the forklift-controller CR, as described in Configuring the MTV Operator.
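As a sketch, the overhead might be raised from the default 10% to 15% as follows. The surrounding CR fields are assumptions; only the controller_filesystem_overhead parameter is taken from the description above:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Percentage of file system overhead reserved per persistent volume
  # (default: 10). Raised here because of the EXT4 root-partition reserve.
  controller_filesystem_overhead: 15
```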

Network prerequisites

Network prerequisites apply to all migrations from a source provider to KubeVirt.

Prerequisites
  • Do not change IP addresses, VLANs, and other network configuration settings during a migration. The MAC addresses of the virtual machines (VMs) are preserved during migration.

  • The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

  • If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network. For more information, see network attachment definition.
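A network attachment definition is a NetworkAttachmentDefinition CR. The following is a minimal bridge-based sketch; the resource name, namespace, and bridge name are assumptions that must match your environment:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vm-network-1           # illustrative name for the destination network
  namespace: openshift-mtv     # illustrative namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vm-network-1",
      "type": "bridge",
      "bridge": "br1"
    }
```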

Ports

The firewalls must enable traffic over the following ports:

Table 3. Network ports required for migrating from VMware vSphere
Port Protocol Source Destination Purpose

443

TCP

OKD nodes

VMware vCenter

VMware provider inventory

Disk transfer authentication

443

TCP

OKD nodes

VMware ESXi hosts

Disk transfer authentication

902

TCP

OKD nodes

VMware ESXi hosts

Disk transfer data copy

Table 4. Network ports required for migrating from oVirt
Port Protocol Source Destination Purpose

443

TCP

OKD nodes

oVirt Engine

oVirt provider inventory

Disk transfer authentication

54322

TCP

OKD nodes

oVirt hosts

Disk transfer data copy

Table 5. Network ports required for migrating from OpenStack
Port Protocol Source Destination Purpose

8776

Cinder

OKD nodes

OpenStack hosts

Block storage API

8774

Nova

OKD nodes

OpenStack hosts

Virtualization API

5000

Keystone

OKD nodes

OpenStack hosts

Authentication API

9696

Neutron

OKD nodes

OpenStack hosts

Network API

9292

Glance

OKD nodes

OpenStack hosts

Image service API

Table 6. Network ports required for migrating from Open Virtual Appliance (OVA) files
Port Protocol Source Destination Purpose

2049

TCP

OKD nodes

Server containing the OVA files

NFS service

111

TCP or UDP

OKD nodes

Server containing the OVA files

RPC Portmapper, only needed for NFSv4.0

Table 7. Network ports required for migrating from KubeVirt
Port Protocol Source Destination Purpose

6443

API

OKD nodes

KubeVirt host

Access API to get information from a VM’s manifest

443

TCP

OKD nodes

KubeVirt host

Download VM data using the virtualMachineExport resource

Source VM prerequisites

Source VM prerequisites apply to all migrations from a source provider to KubeVirt.

Prerequisites
  • ISO images and CD-ROMs are unmounted.

  • Each NIC contains an IPv4 address, an IPv6 address, or both.

  • The OS of each VM is certified and supported as a guest OS for conversions.

    You can check that the OS is supported by referring to the table in Converting virtual machines from other hypervisors to KVM with virt-v2v. See the columns of the table that refer to RHEL 8 hosts and RHEL 9 hosts.

Source VM migration considerations

Review these considerations when planning your migration of VMs from a source provider to KubeVirt.

VM naming
  • DNS compliance in KubeVirt: VM names must be DNS-compliant and unique in the KubeVirt environment. Forklift automatically adjusts non-compliant VM names in the target cluster. Alternatively, you can rename target VMs in the Forklift UI. For information about renaming VMs, see Renaming virtual machines.
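As an illustration of what "DNS-compliant" means in practice (an RFC 1123 DNS label: lowercase alphanumerics and hyphens, starting and ending with an alphanumeric, at most 63 characters), a name can be checked with a small shell function. This sketch is not part of Forklift:

```shell
# Return 0 if the given name is a valid RFC 1123 DNS label, 1 otherwise.
is_dns1123() {
  [ -n "$1" ] || return 1        # must not be empty
  [ ${#1} -le 63 ] || return 1   # at most 63 characters
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
}

is_dns1123 "web-server-01" && echo "compliant"
is_dns1123 "Web_Server_01" || echo "not compliant"
```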

Windows-specific considerations
  • VSS requirement for Windows warm migrations: For VMs running Microsoft Windows, the Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications. When performing a warm migration of a Microsoft Windows VM from VMware, you must start VSS on the Windows guest OS for the snapshot and Quiesce guest file system to succeed. If you do not start VSS on the Windows guest OS, the snapshot creation during the Warm migration fails with the following error:

    An error occurred while taking a snapshot: Failed to restart the virtual machine

    If you set the VSS service to Manual and start a snapshot creation with Quiesce guest file system = yes, the VMware Snapshot provider service requests VSS to start the shadow copy in the background.

  • Measured Boot limitation: Microsoft Windows VMs, which use the Measured Boot feature, cannot be migrated. Measured Boot is a mechanism to prevent any kind of device changes by checking each start-up component, including the firmware, all the way to the boot driver. For more information, see Measured Boot in the Microsoft documentation.

    The alternative to migration is to re-create the Windows VM directly on KubeVirt.

  • Secure Boot limitation: VMs with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot would prevent the VMs from booting on the destination provider. Secure boot is a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM).

    Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot in the Microsoft documentation.

Operating system compatibility
  • Limited support for dual-boot OS VMs: Forklift has limited support for the migration of dual-boot OS VMs. In the case of a dual-boot OS VM, Forklift attempts to convert the first boot disk it finds. Alternatively, you can specify the root device in the Forklift UI.

Forklift encryption support

Migrate encrypted VMs to maintain security during migration to Forklift. You can migrate Linux VMs encrypted with Linux Unified Key Setup (LUKS) and Windows VMs encrypted with BitLocker.

Provider-specific requirements for migration

Review the specific software requirements per source provider.

oVirt prerequisites

The following prerequisites apply to oVirt migrations:

  • To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; any broader administrator or superuser permissions also work.

You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration will fail.

  • To migrate virtual machines:

    • You must have one of the following:

      • oVirt admin permissions. These permissions allow you to migrate any virtual machine in the system.

      • DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.

    • You must use a compatible version of oVirt.

    • You must have the Engine CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Engine Apache CA certificate.

      You can obtain the Engine CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.

    • If you are migrating a virtual machine with a direct logical unit number (LUN) disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

  • Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

  • LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.

OpenStack prerequisites

To migrate from OpenStack to KubeVirt, verify you have a compatible OpenStack version and configure authentication. You can use token authentication, application credentials, or standard username and password credentials with Forklift.

You migrate VMs from OpenStack source providers by using the command-line interface (CLI) in the same way as you migrate other VMs, except for how you prepare the Secret manifest.

Using token authentication with an OpenStack source provider

You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.

Forklift supports both of the following types of token authentication:

  • Token with user ID

  • Token with user name

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

Prerequisites

Have an OpenStack account.

Procedure
  1. In the dashboard of the OpenStack web console, click Project > API Access.

  2. Expand Download OpenStack RC file and click OpenStack RC file.

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    OS_AUTH_URL
    OS_PROJECT_ID
    OS_PROJECT_NAME
    OS_DOMAIN_NAME
    OS_USERNAME
  3. To get the data needed for token authentication, run the following command:

    $ openstack token issue

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

  4. Create a Secret manifest similar to the following:

    • For authentication using a token with user ID:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-tokenid
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: token
        token: <token_from_openstack_token_output>
        projectID: <projectID_from_openstack_token_output>
        userID: <userID_from_openstack_token_output>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF
    • For authentication using a token with user name:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-tokenname
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: token
        token: <token_from_openstack_token_output>
        domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
        projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
        username: <OS_USERNAME_from_openstack_rc_file>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF

Using application credential authentication with an OpenStack source provider

You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.

Forklift supports both of the following types of application credential authentication:

  • Application credential ID

  • Application credential name

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

Prerequisites

You have an OpenStack account.

Procedure
  1. In the dashboard of the OpenStack web console, click Project > API Access.

  2. Expand Download OpenStack RC file and click OpenStack RC file.

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    OS_AUTH_URL
    OS_PROJECT_ID
    OS_PROJECT_NAME
    OS_DOMAIN_NAME
    OS_USERNAME
  3. To get the data needed for application credential authentication, run the following command:

    $ openstack application credential create --role member --role reader --secret redhat forklift

    The output, referred to here as <openstack_credential_output>, includes:

    • The id and secret that you need for authentication using an application credential ID

    • The name and secret that you need for authentication using an application credential name

  4. Create a Secret manifest similar to the following:

    • For authentication using the application credential ID:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-appid
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: applicationcredential
        applicationCredentialID: <id_from_openstack_credential_output>
        applicationCredentialSecret: <secret_from_openstack_credential_output>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF
    • For authentication using the application credential name:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-appname
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: applicationcredential
        applicationCredentialName: <name_from_openstack_credential_output>
        applicationCredentialSecret: <secret_from_openstack_credential_output>
        domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
        username: <OS_USERNAME_from_openstack_rc_file>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF

VMware prerequisites

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.

Forklift cannot migrate VMware vSphere 6 and VMware vSphere 7 VMs to a FIPS-compliant KubeVirt cluster.

The following prerequisites apply to VMware migrations:

  • You must use a compatible version of VMware vSphere.

  • You must be logged in as a user with at least the minimal set of VMware privileges.

  • To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.

  • The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

  • If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.

  • If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the Network File Copy (NFC) service memory of the host.

  • It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

  • The target namespace must have network connectivity to the VMware source environment. NetworkPolicies that block egress connections from the target namespace prevent migration from succeeding.

For virtual machines (VMs) running Microsoft Windows, Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications. When performing a warm migration of a Microsoft Windows virtual machine from VMware, you must start VSS on the Windows guest operating system in order for the snapshot and Quiesce guest file system to succeed.

If you do not start VSS on the Windows guest operating system, the snapshot creation during the warm migration fails with the following error: An error occurred while taking a snapshot: Failed to restart the virtual machine.

If you set the VSS service to Manual and then start a snapshot creation with Quiesce guest file system = yes, the VMware Snapshot Provider service requests VSS to start the shadow copy in the background.

In the case of a power outage, data might be lost for a VM with hibernation disabled. However, if hibernation is not disabled, the migration fails.
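If changed block tracking (CBT) is not already enabled for a VM, it can be set through the VM's advanced configuration parameters, for example with the govc CLI. This is a sketch: the VM name and disk key are illustrative, and the setting takes effect only after a power cycle or snapshot operation.

```shell
# Assumes govc is configured through GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD.
# Enable CBT at the VM level and for the first SCSI disk (keys are illustrative).
govc vm.change -vm <vm_name> -e ctkEnabled=true
govc vm.change -vm <vm_name> -e scsi0:0.ctkEnabled=true
```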

VMware privileges

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

Table 8. VMware privileges
Privilege Description

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

Virtual machine.Guest operating system management by VIX API

Allows managing a virtual machine by the VMware Virtual Infrastructure eXtension (VIX) API.

Virtual machine.Provisioning privileges:

All Virtual machine.Provisioning privileges are required.

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

Datastore privileges:

Datastore.Browse datastore

Allows exploring the contents of a datastore.

Datastore.Low level file operations

Allows performing low-level file operations - read, write, delete, and rename - in a datastore.

Sessions privileges:

Sessions.Validate session

Allows verification of the validity of a session.

Cryptographic privileges:

Cryptographic.Decrypt

Allows decryption of an encrypted virtual machine.

Cryptographic.Direct access

Allows access to encrypted resources.

Create a role in VMware with the permissions described in the preceding table and then apply this role to the Inventory section, as described in Creating a VMware role to grant MTV privileges.

Creating a VMware role to grant MTV privileges

You can create a role in VMware to grant privileges for Forklift and then grant those privileges to users with that role.

The procedure that follows explains how to do this in general. For detailed instructions, see VMware documentation.

Procedure
  1. In the vCenter Server UI, create a role that includes the set of privileges described in the table in VMware prerequisites.

  2. In the vSphere inventory UI, grant privileges for users with this role to the appropriate vSphere logical objects at one of the following levels:

    1. At the user or group level: Assign privileges to the appropriate logical objects in the data center and use the Propagate to child objects option.

    2. At the object level: Apply the same role individually to all the relevant vSphere logical objects involved in the migration, for example, hosts, vSphere clusters, data centers, or networks.
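As an alternative to the vCenter UI, the same role creation and permission assignment can be sketched with the govc CLI. The role name, principal, data center path, and privilege subset shown here are illustrative; use the full privilege set from the table in VMware prerequisites.

```shell
# Create a role with a subset of the required privileges (the privilege IDs are
# assumptions based on vSphere privilege identifiers) and grant it on a data
# center, propagating to child objects.
govc role.create ForkliftMigrator \
  VirtualMachine.Interact.PowerOff \
  VirtualMachine.Interact.PowerOn \
  Datastore.Browse \
  Sessions.ValidateSession
govc permissions.set -principal migrator@vsphere.local \
  -role ForkliftMigrator -propagate=true /MyDatacenter
```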

Creating a VDDK image

It is strongly recommended that you use Forklift with the VMware Virtual Disk Development Kit (VDDK) SDK when transferring virtual disks from VMware vSphere.

Creating a VDDK image, although optional, is strongly recommended. Using Forklift without VDDK can result in significantly slower migrations.

To make use of this feature, you download the VDDK, build a VDDK image, and push the VDDK image to your image registry.

Because the VDDK package contains symbolic links, you must create the VDDK image on a file system that preserves symbolic links (symlinks).

Storing the VDDK image in a public registry might violate the VMware license terms.

Prerequisites
  • OKD image registry.

  • You have podman installed.

  • You are working on a file system that preserves symbolic links (symlinks).

  • If you are using an external registry, KubeVirt must be able to access it.

Procedure
  1. Create and navigate to a temporary directory:

    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
  2. In a browser, navigate to the VMware VDDK version 8 download page.

  3. Select version 8.0.1 and click Download.

    To migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.
  4. Save the VDDK archive file in the temporary directory.

  5. Extract the VDDK archive:

    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
  6. Create a Dockerfile:

    $ cat > Dockerfile <<EOF
    FROM registry.access.redhat.com/ubi8/ubi-minimal
    USER 1001
    COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    RUN mkdir -p /opt
    ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    EOF
  7. Build the VDDK image:

    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
  8. Push the VDDK image to the registry:

    $ podman push <registry_route_or_server_path>/vddk:<tag>
  9. Ensure that the image is accessible to your KubeVirt environment.
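After pushing the image, you typically point the vSphere provider at it. The following fragment is a sketch that assumes the vddkInitImage field of the Forklift Provider API; the provider and secret names are illustrative.

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider                 # example name
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk   # example vCenter SDK endpoint
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
  secret:
    name: vsphere-secret                 # example Secret with vCenter credentials
    namespace: openshift-mtv
```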

Increasing the NFC service memory of an ESXi host

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the Network File Copy (NFC) service memory of the host. Otherwise, the migration fails because the NFC service memory is limited to 10 parallel connections.

Procedure
  1. Log in to the ESXi host as root.

  2. Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    ...
          <nfcsvc>
             <path>libnfcsvc.so</path>
             <enabled>true</enabled>
             <maxMemory>1000000000</maxMemory>
             <maxStreamMemory>10485760</maxStreamMemory>
          </nfcsvc>
    ...
  3. Restart hostd:

    # /etc/init.d/hostd restart

    You do not need to reboot the host.

VDDK validator containers need requests and limits

If cluster or project resource quotas are set, you must ensure that the quota is sufficient for the Forklift pods to perform the migration.

The defaults, which you can override in the ForkliftController custom resource (CR), are listed as follows. If necessary, you can adjust these defaults.

These settings are highly dependent on your environment. If many migrations run at once and the quotas are too low for them, the migrations can fail. The required quota also correlates with the MAX_VM_INFLIGHT setting, which determines how many VMs or disks are migrated at once.

The following defaults can be overridden in the ForkliftController CR:

  • Defaults that affect both cold and warm migrations:

    Cold migration is likely to be more resource intensive because it performs the disk copy. For warm migration, you can potentially reduce the requests.

    • virt_v2v_container_limits_cpu: 4000m

    • virt_v2v_container_limits_memory: 8Gi

    • virt_v2v_container_requests_cpu: 1000m

    • virt_v2v_container_requests_memory: 1Gi

      Cold and warm migration using virt-v2v can be resource-intensive. For more details, see Compute power and RAM.

  • Defaults that affect any migrations with hooks:

    • hooks_container_limits_cpu: 1000m

    • hooks_container_limits_memory: 1Gi

    • hooks_container_requests_cpu: 100m

    • hooks_container_requests_memory: 150Mi

  • Defaults that affect any OVA migrations:

    • ova_container_limits_cpu: 1000m

    • ova_container_limits_memory: 1Gi

    • ova_container_requests_cpu: 100m

    • ova_container_requests_memory: 150Mi
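For example, to override some of these defaults, add the corresponding parameters to the spec section of the ForkliftController CR. The values below are illustrative, not recommendations:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Lower virt-v2v requests for a warm-migration-heavy environment (example values)
  virt_v2v_container_requests_cpu: "500m"
  virt_v2v_container_requests_memory: "512Mi"
  # Raise the hook container memory limit (example value)
  hooks_container_limits_memory: "2Gi"
```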

Open Virtual Appliance (OVA) prerequisites

Open Virtual Appliance (OVA) migrations to KubeVirt require VMware vSphere files in NFS-shared directories. OVA files can be compressed Open Virtualization Format (OVF) packages with .ova extensions or extracted packages. Forklift scans root and first-level subfolders for compressed packages, and up to second-level subfolders for extracted packages.

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere. Moreover, converting a vendor OVA may invalidate vendor support agreements.

To ensure stability and vendor support, always prioritize importing the vendor’s native QCOW2 image by using either the KubeVirt "Upload Image" or the KubeVirt "Import from URL" workflow rather than using the Forklift OVA path.

Prerequisites
  • The NFS share is writable by the QEMU group (GID 107) if you plan to use the web upload feature for OVA files.

  • The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    • In one or more compressed OVF packages that hold all the VM information.

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      For example, if the NFS share is /nfs, then:

      • The folder /nfs is scanned.

      • The folder /nfs/subfolder1 is scanned.

      • However, /nfs/subfolder1/subfolder2 is not scanned.

    • In extracted OVF packages.

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages.

      However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      For example, if the NFS share is /nfs, then:

      • The OVF file /nfs/vm.ovf is scanned.

      • The OVF file /nfs/subfolder1/vm.ovf is scanned.

      • The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.

      • However, the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

  • If you plan to upload OVA files using the web browser, ensure that each .ova file has a unique filename.

You can optionally configure OVA file upload by web browser to upload OVA files directly to the NFS share. For more information, see Configuring OVA file upload by web browser.
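On the NFS server, granting the QEMU group write access can be sketched as follows. The export path is illustrative, and the commands assume root privileges on the server.

```shell
# Make the OVA export writable by the QEMU group (GID 107)
# so that web uploads of .ova files succeed. /nfs/ova is an example path.
chgrp -R 107 /nfs/ova
chmod -R g+rwX /nfs/ova
```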

KubeVirt prerequisites

To migrate between KubeVirt clusters, verify that both clusters have matching Forklift versions and the source uses KubeVirt 4.16 or later. You can migrate forward to newer KubeVirt versions if both are compatible with your Forklift version.

It is strongly recommended to migrate only between clusters with the same version of KubeVirt, although migration from an earlier version of KubeVirt to a later one is supported.

KubeVirt live migration prerequisites

In addition to the regular KubeVirt prerequisites, live migration has the following prerequisites:

  • Forklift 2.10.0 or later installed. Forklift treats all KubeVirt migrations run on Forklift 2.9 or earlier as cold migrations, even if they are configured as live migrations.

  • KubeVirt 4.20.0 or later installed on both source and target clusters.

  • The DecentralizedLiveMigration feature gate is listed in the featureGates section of the YAML of the KubeVirt resource on both clusters. You must have cluster-admin privileges to set this field.

  • Connectivity between the clusters must be established, including connectivity for state transfer. Technologies such as Submariner can be used for this purpose.

  • The target cluster has VirtualMachineInstanceTypes and VirtualMachinePreferences that match those used by the VMs on the source cluster.
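In an upstream KubeVirt deployment, the feature gate is typically listed in the KubeVirt CR, as in the following sketch. In OpenShift Virtualization, feature gates may instead be managed through the HyperConverged CR, so verify the correct location for your environment.

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt        # example namespace
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - DecentralizedLiveMigration
```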

Software compatibility guidelines

You must install compatible software versions. The table that follows lists the relevant software versions for this version of Forklift.

Table 9. Compatible software versions
Forklift OKD KubeVirt VMware vSphere oVirt OpenStack

2.11

4.21, 4.20, 4.19

4.21, 4.20, 4.19

6.5 or later

4.4 SP1 or later

16.1 or later

Migration from oVirt 4.3

Forklift was tested only with oVirt 4.4 SP1. Migration from oVirt 4.3 has not been tested with Forklift 2.11. While not supported, basic migrations from oVirt 4.3 are expected to work; migrations from oVirt 4.3.11 were tested with Forklift 2.3 and might work in practice in many environments using Forklift 2.11.

It is therefore recommended to upgrade oVirt Manager to the supported version listed in the preceding table before migrating to KubeVirt.

OpenShift Operator Life Cycles

For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.

Installing and configuring the Forklift Operator

You can install the Forklift Operator by using the OKD web console or CLI. Forklift version 2.4 and later includes the Forklift plugin for the web console.

Installing the Forklift Operator by using the OKD web console

You can install the Forklift Operator by using the OKD web console.

Prerequisites
  • OKD 4.21, 4.20, 4.19 installed.

  • KubeVirt Operator installed on an OpenShift migration target cluster.

  • You must be logged in as a user with cluster-admin permissions.

Procedure
  1. In the OKD web console, click Operators > OperatorHub.

  2. Use the Filter by keyword field to search for forklift-operator.

    The Forklift Operator is a Community Operator. Red Hat does not support Community Operators.

  3. Click Migration Toolkit for Virtualization Operator and then click Install.

  4. Click Create ForkliftController when the button becomes active.

  5. Click Create.

    Your ForkliftController appears in the list that is displayed.

  6. Click Workloads > Pods to verify that the Forklift pods are running.

  7. Click Operators > Installed Operators to verify that Migration Toolkit for Virtualization Operator appears in the konveyor-forklift project with the status Succeeded.

    When the plugin is ready, you are prompted to reload the page. The Migration menu item is automatically added to the navigation bar, displayed on the left of the OKD web console.

Installing the Forklift Operator by using the command-line interface

You can install the Forklift Operator by using the command-line interface (CLI).

Prerequisites
  • OKD 4.21, 4.20, 4.19 installed.

  • KubeVirt Operator installed on an OpenShift migration target cluster.

  • You must be logged in as a user with cluster-admin permissions.

Procedure
  1. Create the konveyor-forklift project:

    $ cat << EOF | kubectl apply -f -
    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: konveyor-forklift
    EOF
  2. Create an OperatorGroup CR called migration:

    $ cat << EOF | kubectl apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: migration
      namespace: konveyor-forklift
    spec:
      targetNamespaces:
        - konveyor-forklift
    EOF
  3. Create a Subscription CR for the Operator:

    $ cat << EOF | kubectl apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: forklift-operator
      namespace: konveyor-forklift
    spec:
      channel: development
      installPlanApproval: Automatic
      name: forklift-operator
      source: community-operators
      sourceNamespace: openshift-marketplace
      startingCSV: "konveyor-forklift-operator.2.11.0"
    EOF
  4. Create a ForkliftController CR:

    $ cat << EOF | kubectl apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: ForkliftController
    metadata:
      name: forklift-controller
      namespace: konveyor-forklift
    spec:
      olm_managed: true
    EOF
  5. Verify that the Forklift pods are running:

    $ kubectl get pods -n konveyor-forklift

    Example output:

NAME                                                    READY   STATUS    RESTARTS   AGE
forklift-api-bb45b8db4-cpzlg                            1/1     Running   0          6m34s
forklift-controller-7649db6845-zd25p                    2/2     Running   0          6m38s
forklift-must-gather-api-78fb4bcdf6-h2r4m               1/1     Running   0          6m28s
forklift-operator-59c87cfbdc-pmkfc                      1/1     Running   0          28m
forklift-ui-plugin-5c5564f6d6-zpd85                     1/1     Running   0          6m24s
forklift-validation-7d84c74c6f-fj9xg                    1/1     Running   0          6m30s
forklift-volume-populator-controller-85d5cb64b6-mrlmc   1/1     Running   0          6m36s

Configuring the Forklift Operator

Configure the Forklift Operator through the Settings section of the Overview page or by modifying the ForkliftController custom resource (CR). Some configurations are available only in the CR.

You can only use the string values "true" and "false" for feature gates, such as plugins and some services, in the ForkliftController CR. The ForkliftController API does not accept boolean values for these settings.

For information about the Settings section of the Overview page, see Settings tab.

Procedure
  • Change a parameter’s value in the spec section of the ForkliftController CR by adding the parameter and value as follows:

    spec:
      <parameter: value>
    • Replace <parameter: value> with one of the parameters from the Forklift Operator parameters table. For more information, see Forklift Operator parameters.
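For example, the following spec fragment raises the migration concurrency and enables a feature gate. Note the quoted string for the feature gate; the feature_ui_plugin name is illustrative of the string-valued feature-gate parameters:

```yaml
spec:
  controller_max_vm_inflight: 30
  feature_ui_plugin: "true"   # feature gates take the strings "true"/"false", not booleans
```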

Forklift Operator parameters

The Forklift Operator parameters table contains a description of each Forklift Operator parameter and its default value.

Table 10. Forklift Operator parameters
Parameter Description Default value

controller_max_vm_inflight

The maximum number of disks or VMs that can transfer or migrate simultaneously. Varies with provider. For more information, see Configuring the controller_max_vm_inflight parameter.

20

must_gather_api_cleanup_max_age

The duration in hours for retaining must gather reports before they are automatically deleted.

-1 (disabled)

controller_container_limits_cpu

The CPU limit allocated to the main controller container.

500m

controller_container_limits_memory

The memory limit allocated to the main controller container.

800Mi

controller_precopy_interval

The interval in minutes at which a new snapshot is requested before initiating a warm migration.

60

controller_snapshot_status_check_rate_seconds

The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration.

10

controller_filesystem_overhead

Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem.

ForkliftController CR only.

10

controller_block_overhead

Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based. It can be used when data, such as encryption headers, is written to the persistent volumes in addition to the content of the virtual disk.

ForkliftController CR only.

0

vsphere_osmap_configmap_name

Config map for vSphere source providers. This config map maps the operating system of the incoming VM to a KubeVirt preference name. This config map needs to be in the namespace where the Forklift Operator is deployed.

To see the list of preferences in your KubeVirt environment, open the OpenShift web console and click Virtualization > Preferences.

Add values to the config map when this parameter has the default value, forklift-vsphere-osmap. To override or delete values, specify a config map that is different from forklift-vsphere-osmap.

ForkliftController CR only.

forklift-vsphere-osmap

ovirt_osmap_configmap_name

Config map for oVirt source providers. This config map maps the operating system of the incoming VM to a KubeVirt preference name. This config map needs to be in the namespace where the Forklift Operator is deployed.

To see the list of preferences in your KubeVirt environment, open the OpenShift web console and click Virtualization > Preferences.

You can add values to the config map when this parameter has the default value, forklift-ovirt-osmap. To override or delete values, specify a config map that is different from forklift-ovirt-osmap.

ForkliftController CR only.

forklift-ovirt-osmap

controller_retain_precopy_importer_pods

Whether to retain importer pods so that the Containerized Data Importer (CDI) does not delete them during migration.

ForkliftController CR only.

false

controller_transfer_network

The NetworkAttachmentDefinition used for data transmission.

Default transfer network

Configuring the controller_max_vm_inflight parameter

The value of the controller_max_vm_inflight parameter, which is shown in the UI as Max concurrent virtual machine migrations, varies by the source provider of the migration:

  • For all migrations except Open Virtual Appliance (OVA) or VMware migrations, the parameter specifies the maximum number of disks that Forklift can transfer simultaneously. In these migrations, Forklift migrates the disks in parallel. This means that if the combined number of disks that you want to migrate is greater than the value of the setting, additional disks must wait until the queue is free, without regard for whether a VM has finished migrating.

    For example, if the value of the parameter is 15, and VM A has 5 disks, VM B has 5 disks, and VM C has 6 disks, all the disks except for the 16th disk start migrating at the same time. Once any of them has migrated, the 16th disk can be migrated, even though not all the disks on VM A and the disks on VM B have finished migrating.

  • For OVA migrations, the parameter specifies the maximum number of VMs that Forklift can migrate simultaneously, meaning that all additional disks must wait until at least one VM has been completely migrated.

    For example, if the value of the parameter is 2, and VM A has 5 disks, VM B has 5 disks, and VM C has 6 disks, all the disks on VM C must wait to migrate until either all the disks on VM A or on VM B finish migrating.

  • For VMware migrations, the parameter has the following meanings:

    • Cold migration:

      • To local KubeVirt: VMs for each ESXi host that can migrate simultaneously.

      • To remote KubeVirt: Disks for each ESXi host that can migrate simultaneously.

    • Warm migration: Disks for each ESXi host that can migrate simultaneously.

Migrating virtual machines by using the OKD web console

Use the Forklift user interface to migrate virtual machines (VMs) from VMware vSphere, oVirt, OpenStack, Open Virtual Appliances (OVAs) that were created by VMware vSphere, or KubeVirt clusters. For all migrations, you specify the source provider, the destination provider, and the migration plan. The specific procedures vary per provider.

You must ensure that all prerequisites are met. For more information, see Software requirements for migration.

VMware only: You must have the minimal set of VMware privileges.

VMware only: Creating a VMware Virtual Disk Development Kit (VDDK) image will increase migration speed.

Navigating MTV pages

The Forklift user interface provides several main pages to help you manage your VM migrations.

You can access Forklift in the OKD web console by clicking Migration for Virtualization in the left navigation menu under the Virtualization section.

The main pages include:

Providers

View and manage your source and target virtualization providers. Add new providers, test connections, and monitor provider health.

Migration plans

Create, view, and manage migration plans. Monitor the status of running migrations and view migration history. Access the plan details page to configure settings, select VMs, and customize migration parameters.

Mappings for virtualization

Create and manage network and storage mappings that define how source resources map to target resources in the destination environment.

Overview (administrators only)

View system-wide migration statistics, health information, and configure Forklift settings. This page provides charts showing migration history, VM status, and plan status across all migrations.

Each page provides filtering, sorting, and search capabilities to help you manage your resources efficiently.

The MTV user interface

The Forklift user interface is integrated into the OKD web console.

In the left panel, you can choose a page related to a component of the migration process, for example, Providers. Or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.

The Tips and tricks panel

The Tips and tricks panel provides contextual guidance and best practices to help you plan and execute your migrations successfully.

You can access the panel by clicking the Tips and tricks link in the upper-right corner of all Forklift pages in the web console.

The panel includes a dropdown menu where you can select from the following topics:

  • Creating a provider

  • Migrating your virtual machines

  • Choosing the right migration type

  • Creating a network mapping

  • Creating a storage mapping

  • Optimizing migration speed

  • Troubleshooting

Each topic includes explanations of key terminology and considerations, with links to Forklift documentation, Forklift performance recommendations, Red Hat Customer Support, and Red Hat KubeVirt Administration training.

The MTV Overview page

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

If you have Administrator privileges, you can access the Overview page by clicking Migration for Virtualization > Overview in the OKD web console.

The Overview page has five tabs:

  • Overview

  • YAML

  • Health

  • History

  • Settings

Overview tab

The Overview tab helps you quickly create providers and find information about the whole system:

  • The upper pane contains the Welcome section, which includes buttons that let you open the Create provider UI for each vendor (VMware, Open Virtual Appliance, OpenStack, oVirt, and KubeVirt). You can close this section by clicking the Options menu kebab in the upper-right corner and selecting Hide from view. You can reopen it by clicking Show the welcome card in the upper-right corner.

  • In the center-left pane is a "donut" chart named Virtual machines. This chart shows the number of running, failed, and successful virtual machine migrations that Forklift ran for the time interval that you select. You can choose a different interval by clicking the list in the upper-right corner of the pane. The options are: Last 24 hours, Last 10 days, Last 31 days, and All. By clicking on each division of the chart, you can navigate to the History tab for information about the migrations.

    Data for this chart includes only the most recent run of a migration plan that was modified due to a failure. For example, if a plan with 3 VMs fails 4 times, then this chart shows that 3 VMs failed, not 12.

  • In the center-right pane is an area chart named Migration history. This chart shows the number of migrations that succeeded, failed, or were running during the interval shown in the title of the chart. You can choose a different interval by clicking the Options menu kebab in the upper-right corner of the pane. The options are: Last 24 hours, Last 10 days, and Last 31 days. By clicking on each division of the chart, you can navigate to the History tab for information about the migrations.

  • In the lower-left pane is a "donut" chart named Migration plans. This chart shows the current number of migration plans grouped by their status. This includes plans that were not started, cannot be started, are incomplete, archived, paused, or have an unknown status. By clicking the Show all plans link, you can quickly navigate to the Migration plans page.

    Since a single migration might involve many virtual machines, the number of migrations performed using Forklift might vary significantly from the number of migrated virtual machines.

  • In the lower-right pane is a table named Forklift health. This table lists all of the Forklift pods. The most important one, forklift-controller, is first. The remaining pods are listed in alphabetical order. The View all link opens the Health tab. The status and creation time of each pod are listed. You can also see a link to the logs of each pod.

YAML tab

The YAML tab displays the ForkliftController custom resource (CR) that defines the operation of the Forklift Operator. You can modify the CR in this tab.

Health tab

The Health tab has two panes:

  • In the upper pane, there is a table named Health. It lists all the Forklift pods. The most important one, forklift-controller, is first. The remaining pods are listed in alphabetical order. For each pod, the status, and creation time of the pod are listed, and there is a link to the logs of the pod.

  • In the lower pane, there is a table named Conditions. It lists the possible condition types (states) of the Forklift Operator, the status of each type, the last time the condition was updated, the reason for the update, and a message about the condition.

History tab

The History tab displays information about migrations.

  • In the upper-left of the page, there is a filter that you can use to display only migrations of a certain status, for example, Succeeded.

  • To the right of the filter is the Group by plan toggle switch, which lets you display either all migrations or view only the most recent migration run per plan within the specified time range.

Settings tab

The table that follows describes the settings that are visible in the Settings tab, their default values, and other possible values that can be set or chosen, if needed.

Table 11. Forklift settings
Setting Description Default value Additional values

Maximum concurrent VM migrations

Varies with provider as follows:

  • For all migrations except OVA or VMware migrations: The maximum number of disks that Forklift can transfer simultaneously.

  • For OVA migrations: The maximum number of VMs that Forklift can migrate simultaneously.

  • For VMware migrations, the setting has the following meanings:

    • Cold migration:

      • To local KubeVirt: VMs for each ESXi host that can migrate simultaneously.

      • To remote KubeVirt: Disks for each ESXi host that can migrate simultaneously.

    • Warm migration: Disks for each ESXi host that can migrate simultaneously.

      See Configuring the controller_max_vm_inflight parameter for a detailed explanation of this setting.

20.

Adjustable by either using the + and - keys to set a different value or by clicking the textbox and entering a new value.

Controller main container CPU limit

The CPU limit that is allocated to the main controller container, in milliCPUs (m).

500 m.

Adjustable by selecting another value from the list. Options: 200 m, 500 m, 2000 m, 8000 m.

Controller main container memory limit

The memory limit that is allocated to the main controller container in mebibytes (Mi).

800 Mi.

Adjustable by selecting another value from the list. Options: 200 Mi, 800 Mi, 2000 Mi, 8000 Mi.

Controller inventory container memory limit

The memory limit that is allocated to the inventory controller container in mebibytes (Mi).

1000 Mi.

Adjustable by selecting another value from the list. Options: 400 Mi, 1000 Mi, 2000 Mi, 8000 Mi.

Precopy interval (minutes)

The interval in minutes at which a new snapshot is requested before initiating a warm migration.

60 minutes.

Adjustable by selecting another value from the list. Options: 5 minutes, 30 minutes, 60 minutes, 120 minutes.

Snapshot polling interval

The interval, in seconds, at which the system checks the status of snapshot creation or removal during a warm migration.

10 seconds.

Adjustable by choosing another value from the list. Options: 1 second, 5 seconds, 10 seconds, 60 seconds.

Controller transfer network

The NetworkAttachmentDefinition used for data transmission.

Default transfer network

Adjustable by choosing another value from the list.
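Most of these settings correspond to fields in the spec section of the ForkliftController CR shown in the YAML tab. The following sketch shows two of them; the field names and values are illustrative, so verify the exact keys in the YAML tab of your cluster before editing:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  controller_max_vm_inflight: 20   # Maximum concurrent VM migrations
  controller_precopy_interval: 60  # Precopy interval, in minutes
```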

Choosing a different controller transfer network

Some enterprise environments isolate provider APIs, such as the vSphere API or the OpenStack API, on a dedicated transfer network segment. Choosing a different controller transfer network in these environments gives you reliable connectivity and migration capability for Forklift.

Prerequisites
  • You have Forklift Administrator privileges.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Overview.

  2. Click Settings and scroll down to Controller transfer network.

  3. Choose a different controller transfer network from the list and click Save.

  4. To verify that your choice was recorded, perform the following actions:

    1. Click the YAML tab of the Overview page.

      The Forklift Controller YAML opens.

    2. Verify that the value of the spec:controller_transfer_network section is correct.

      The spec:controller_transfer_network section of the YAML does not appear when the default transfer network is used.
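When a non-default transfer network is selected, the CR contains a section similar to the following sketch. The value format shown, a <namespace>/<network_attachment_definition> reference, is an assumption; confirm the actual value in the YAML tab after saving your choice:

```yaml
spec:
  controller_transfer_network: <namespace>/<network_attachment_definition>
```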

Preparing VMs for migration

Before running a migration plan, you can prepare your virtual machines to ensure a successful migration. The Forklift web console provides tools to help you configure VM settings in advance.

Common preparation tasks include:

  • Renaming VMs: Ensure VM names comply with DNS naming requirements for the target KubeVirt environment.

  • Configuring power states: Set the target power state for VMs after migration to control whether they start automatically or remain powered off.

You can perform these tasks after creating a migration plan but before running the migration. The changes you make apply to the target VMs in the destination environment.

Renaming virtual machines

VM names must be DNS-compliant and unique in the KubeVirt environment. When you migrate source VMs, Forklift automatically adjusts noncompliant VM names in the target cluster to compliant names. Alternatively, you can rename target VMs in the Forklift UI.

Valid names consist only of lowercase alphanumeric characters (a-z, 0-9) and hyphens (-), with no leading or trailing hyphens, no consecutive hyphens, and a maximum of 63 characters.

Table 12. Examples of VM naming conflicts
Source VM name Naming conflict Target VM name

App_VM_01

Contains uppercase letters and underscores

app-vm-01

-app-server

Starts with a hyphen

app-server

app--server

Contains consecutive hyphens

app-server

long-name-for-server-that-contains-more-than-sixty-three-characters

Exceeds the 63-character length limit

db-server-prod-01
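The naming rules and examples above can be sketched as a small shell helper. This is a hypothetical illustration, not Forklift code; Forklift's actual adjustment logic may differ, for example in how it guarantees uniqueness or shortens names that exceed the length limit:

```shell
# Hypothetical helper that applies the DNS naming rules described above.
sanitize() {
  name=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr '_' '-')
  name=$(printf '%s' "$name" | sed -E 's/[^a-z0-9-]//g')   # drop invalid characters
  name=$(printf '%s' "$name" | sed -E 's/-+/-/g')          # collapse consecutive hyphens
  name=$(printf '%s' "$name" | sed -E 's/^-+//; s/-+$//')  # strip leading/trailing hyphens
  printf '%s\n' "$name" | cut -c1-63                       # enforce the 63-character limit
}

sanitize "App_VM_01"    # app-vm-01
sanitize "-app-server"  # app-server
sanitize "app--server"  # app-server
```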

Procedure
  1. In the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.

  2. Open the Plan details page for your migration plan.

  3. Click the Virtual machines tab to view a table of all VMs from the configured source provider.

  4. If the Target name column does not already show in the VM table, click Manage columns to select Target name and display the column.

  5. Identify noncompliant names in the list of VMs by checking the alerts in the Concerns column.

  6. To rename a VM, click the More icon at the end of the row for the VM, and click Edit target name.

  7. Enter and save a new name for the VM. Ensure that the new name consists only of lowercase alphanumeric characters (a-z, 0-9) and hyphens (-), with no leading or trailing hyphens, no consecutive hyphens, and a maximum of 63 characters.

Configuring the target power state of VMs

You can configure the post-migration power state of VMs in advance of a migration. Plan for VMs to start up automatically, remain powered off, or preserve the power state of the source VM. For example, if you are migrating a VM that is running in the source environment, you can ensure that the VM is powered off post-migration to preserve start-up dependencies between VMs.

You can set the power state of a target VM to off, on, or auto either in the MTV UI or in the spec.vms section of the Plan Custom Resource (CR). The auto setting preserves the power state of the source VM.

You can apply different post-migration power states to different VMs in a single migration plan.

Procedure
  1. Configure the power state for target VMs in the Forklift UI.

    1. In the Create migration plan wizard, navigate to Other settings under Additional setup in the left navigation pane.

    2. Scroll down to VM target power state.

    3. Select Auto, Powered on, or Powered off from the dropdown menu to apply the setting to all VMs in your migration plan. You can customize the power state for selected VMs when your plan is created.

    4. Click Next, and verify that the correct power state shows for VM target power state under Other settings (optional).

    5. When you create your migration plan, click Migration plans in the left navigation menu, and open the Plan details page for your migration plan.

    6. Verify that the correct power state shows for the VM target power state field. You can click the Edit icon to change the power state.

    7. Click the Virtual machines tab to view a table of all VMs from the configured source provider.

    8. Click Manage columns to select Target power state and display the column.

    9. To change the power state for a specific VM, click the More icon at the end of the row for the VM, and click Edit target power state.

    10. Select and save a new power state for the VM.

  2. Configure the power state for target VMs in the YAML file.

    1. In the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.

    2. Open the Plan details page for your migration plan.

    3. Click the YAML tab to open the Plan custom resource (CR) for your migration plan.

    4. For each VM under vms in the YAML file, enter the target power state. In this example, you set a different target power state for each VM:

      Example:

        vms:
          - id: vm-1
            targetPowerState: off
          - id: vm-2
            targetPowerState: on
          - id: vm-3
            targetPowerState: auto

Migrating virtual machines by using the command-line interface

You migrate VMs to KubeVirt from the command-line by creating Forklift custom resources (CRs). The CRs and the migration procedure vary by source provider.

You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.
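For example, a namespace-scoped CR such as a Plan carries both fields in its metadata (placeholder values shown):

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan_name>
  namespace: <namespace>
```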

To migrate to or from an OKD cluster that is different from the one the migration plan is defined on, you must have a KubeVirt service account token with cluster-admin privileges.

You must ensure that all prerequisites are met. For more information, see Software requirements for migration.

Permissions needed by non-administrators to work with migration plan components

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

Table 13. Example migration plan roles and their privileges
Role Description

plans.forklift.konveyor.io-v1beta1-view

Can view migration plans but not create, delete, or modify them

plans.forklift.konveyor.io-v1beta1-edit

Can create, delete, or modify individual migration plans (all edit permissions)

plans.forklift.konveyor.io-v1beta1-admin

All edit privileges and the ability to delete the entire collection of migration plans

Predefined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).

As a more comprehensive example, you can grant non-administrators a set of per-namespace permissions that allows them to do the following:

  • Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

  • Attach providers created by administrators to storage maps, network maps, and migration plans

With this set of permissions, non-administrators cannot create providers or change system settings.

Table 14. Example permissions required for non-administrators to work with migration plan components but not create providers
Actions API group Resource

get, list, watch, create, update, patch, delete

forklift.konveyor.io

plans

get, list, watch, create, update, patch, delete

forklift.konveyor.io

migrations

get, list, watch, create, update, patch, delete

forklift.konveyor.io

hooks

get, list, watch

forklift.konveyor.io

providers

get, list, watch, create, update, patch, delete

forklift.konveyor.io

networkmaps

get, list, watch, create, update, patch, delete

forklift.konveyor.io

storagemaps

get, list, watch

forklift.konveyor.io

forkliftcontrollers

create, patch, delete

Empty string

secrets

To create migration plans, non-administrators must have the create permissions that are part of the edit roles for network maps and storage maps, even when using a template for a network map or a storage map.
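The permissions in the table can be expressed as a namespaced RBAC Role. The following is a sketch; the role name is illustrative, and you bind the role to a user or group with a RoleBinding:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: migration-plan-editor  # illustrative name
  namespace: <namespace>
rules:
  - apiGroups: ["forklift.konveyor.io"]
    resources: ["plans", "migrations", "hooks", "networkmaps", "storagemaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["forklift.konveyor.io"]
    resources: ["providers", "forkliftcontrollers"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]  # core API group, for secrets
    resources: ["secrets"]
    verbs: ["create", "patch", "delete"]
```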

Mapping networks and storage in migration plans

You can create network maps and storage maps in Forklift to map source networks and disk storage to KubeVirt networks and storage classes.

About network maps in migration plans

Create network maps in Forklift to map source networks to KubeVirt networks. You can create two types of network map: maps for a specific migration plan and maps for use by any migration plan.

Plan-specific maps

Plan-specific maps are owned by that plan. You can create this kind of map in the Network maps step of the Plan creation wizard.

Ownerless maps

Maps created for use by any migration plan are said to be ownerless. You can create this kind of map in the Network maps page of the Migration for Virtualization section of the KubeVirt web console.

You, or anyone working in the same project, can use ownerless maps when creating a migration plan in the Plan creation wizard. When you choose one of these unowned maps for a migration plan, Forklift creates a copy of the map and defines your migration plan as the owner of that copy. Any changes you make to the copy do not affect the original map, nor do they apply to any other plan that uses a copy of the map.

Both types of network map for a project are shown in the Network maps page, but there is an important difference in the information displayed in the Owner column of that page for each:

  • Maps created in the Network maps step of the Plan creation wizard are shown as being owned by the migration plan.

  • Maps created in the Network maps page of the Migration for Virtualization section of the KubeVirt web console are shown as having no owner.

About storage maps in migration plans

Create storage maps in Forklift to map source disk storages to KubeVirt storage classes. You can create two types of storage map: maps for a specific migration plan and maps for use by any migration plan.

Plan-specific maps

Plan-specific maps are owned by that plan. You can create this kind of map in the Storage maps step of the Plan creation wizard.

Ownerless maps

Maps created for use by any migration plan are said to be ownerless. You can create this kind of map in the Storage maps page of the Migration for Virtualization section of the KubeVirt web console.

You, or anyone working in the same project, can use ownerless maps when creating a migration plan in the Plan creation wizard. When you choose one of these unowned maps for a migration plan, Forklift creates a copy of the map and defines your migration plan as the owner of that copy. Any changes you make to the copy do not affect the original map, nor do they apply to any other plan that uses a copy of the map.

Both types of storage map for a project are shown in the Storage maps page, but there is an important difference in the information displayed in the Owner column of that page for each:

  • Maps created in the Storage maps step of the Plan creation wizard are shown as being owned by the migration plan.

  • Maps created in the Storage maps page of the Migration for Virtualization section of the KubeVirt web console are shown as having no owner.

Creating ownerless storage maps in the Forklift UI

You can create ownerless storage maps by using the Forklift UI to map source disk storage to KubeVirt storage classes.

You can create this type of map by using one of the following methods:

  • Create with form, selecting items such as a source provider from lists

  • Create with YAML, either by entering YAML or JSON definitions or by attaching files containing the same

Planning a migration of virtual machines from VMware vSphere

Create a VMware vSphere migration plan by setting up network maps, configuring source and destination providers with migration networks, and defining the migration plan in the Forklift UI.

Creating ownerless network maps in the Forklift UI

You can create ownerless network maps by using the Forklift UI to map source networks to KubeVirt networks.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Network maps.

  2. Click Create network map to open the Create network map page.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    $ cat << EOF | kubectl apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod
          source:
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    • type: Allowed values are pod, multus, and ignored. Use ignored to avoid attaching VMs to this network for this migration.

    • source: You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (moRef). For more information about retrieving the moRef, see Retrieving a VMware vSphere moRef in Migrating your virtual machines.

    • <network_attachment_definition>: Specify a network attachment definition for each additional KubeVirt network.

    • <network_attachment_definition_namespace>: Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of network maps.

Creating ownerless storage maps using the form page of the Forklift UI

You can create ownerless storage maps by using the form page of the Forklift UI.

Prerequisites
Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with form.

  3. Specify the following:

    • Map name: Name of the storage map.

    • Project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Source storage: Select from the list.

    • Target storage: Select from the list.

  4. Optional: If this is a storage map for a migration using storage copy offload, specify the following offload options:

    • Offload plugin: Select vSphere XCOPY from the list.

    • Storage secret: Select from the list.

    • Storage product: Select from the list.

      Storage copy offload is a feature that allows you to migrate VMware virtual machines (VMs) that are in a storage array network (SAN) more efficiently. This feature makes use of the command vmkfstools on the ESXi host, which invokes the XCOPY command on the storage array using an Internet Small Computer Systems Interface (iSCSI) or Fibre Channel (FC) connection. Storage copy offload lets you copy data inside a SAN more efficiently than copying the data over a network. For Forklift 2.11, storage copy offload is available as GA for cold migration and as a Technology Preview feature for warm migration.

      For more information about storage copy offload, see Migrating VMware virtual machines by using storage copy offload.

  5. Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.

  6. Click Create.

    Your map appears in the list of storage maps.

Creating ownerless storage maps using YAML or JSON definitions in the Forklift UI

You can create ownerless storage maps by using YAML or JSON definitions in the Forklift UI.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with YAML.

    The Create StorageMap page opens.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    $ cat << EOF | kubectl apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            id: <source_datastore>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    • accessMode: Allowed values are ReadWriteOnce and ReadWriteMany.

    • id: Specify the VMware vSphere datastore moRef, for example, f2737930-b567-451a-9ceb-2887f6207009. For more information about retrieving the moRef, see Retrieving a VMware vSphere moRef in Migrating your virtual machines.

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of storage maps.

Migrating VMware virtual machines by using storage copy offload

You can migrate VMware virtual machines (VMs) that are in a storage array network (SAN) more efficiently by using a method called storage copy offload. This method accelerates migration and reduces the load on your network.

VMware’s vSphere Storage APIs-Array Integration (VAAI) includes a command named vmkfstools. This command sends the XCOPY command, which is part of the SCSI protocol. The XCOPY command lets you copy data inside a SAN more efficiently than copying the data over a network. The command is executed by a populator named vsphere-xcopy-volume-populator.

Forklift 2.10.0 leverages this command as the basis for storage copy offload, which clones your VMs' data to the storage hardware instead of transmitting it between Forklift and KubeVirt. This improved migration saves both time and resources.

You enable storage copy offload by configuring the storage map in your migration plan to point to your storage array instead of the network you usually use for migration. When you start the migration plan, Forklift migrates your VMs by copying them to the storage array you choose and using XCOPY to copy them directly to KubeVirt, instead of transmitting the contents of your VMs to KubeVirt.

The storage copy offload feature has some unique configuration prerequisites, which are discussed in Planning and running storage copy offload migrations. After you configure your system, you can run migration plans that use storage copy offload from either the Forklift UI or the CLI. Instructions for using storage copy offload have been integrated into the procedures for migrating VMware VMs for both the UI and the CLI.

You must ensure that your migration plans do not mix VDDK mappings with copy-offload mappings. Because the migration controller copies disks either through CDI volumes (VDDK) or through Volume Populators (copy-offload), all storage pairs in the plan must either include copy-offload details (a Secret + product) or none of them must. Otherwise, the plan fails.

For Forklift 2.11, storage copy offload is available as GA for cold migration and as a Technology Preview feature for warm migration.

Storage copy offload for warm migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

How storage copy offload works

Without storage copy offload, Forklift migrates a virtual disk as follows:

  1. Forklift reads the disk from the source storage.

  2. Forklift sends the data over a network to KubeVirt.

  3. KubeVirt writes the data to its storage.

    This method can be slow and consume significant network and host resources.

With storage copy offload, the process is streamlined:

  1. Forklift initiates a disk transfer request.

  2. Instead of sending the data, Forklift instructs the storage array that backs the vSphere Virtual Machine File System (VMFS) datastore holding the source VMs to perform a direct copy from the source storage to the target volume, on the same array, in the correct storage class.

    The storage array handles the cloning of the VM disk internally, often at a much higher speed than a network-based transfer.

Forklift includes a specialized volume populator named vsphere-xcopy-volume-populator that interacts directly with VMware’s VAAI. This allows Forklift to trigger the high-speed, array-level data copy operation for supported storage systems.

The storage arrays must be among the supported storage providers listed in the following section. Otherwise, XCOPY falls back to a network disk copy on the ESXi host. Although this fallback is usually considerably faster than a standard migration using a VDDK image over the network, it is not as quick as a properly configured storage copy offload migration.

Supported storage providers

The following storage providers support storage copy offload:

  • Hitachi Vantara

  • NetApp ONTAP

  • Pure Storage FlashArray

  • Dell PowerMax

  • Dell PowerFlex

  • Dell PowerStore

  • HPE 3PAR

  • HPE Primera

  • Infinidat Infinibox

  • IBM FlashSystem

Planning and running storage copy offload migrations

You need to perform the following steps when you plan and run storage copy offload migrations:

Procedure
  1. Before your first migration, choose and implement a cloning method. This step is discussed in Cloning methods used by storage copy offload and in the sections that follow it.

  2. For each migration, follow the procedure in either Migrating VMware vSphere VMs in the UI by using storage copy offload or Migrating VMware vSphere VMs in the CLI by using storage copy offload.

  3. If you encounter problems that are specific to storage copy offload, consult Troubleshooting storage copy offload.

Cloning methods used by storage copy offload

You can use either of the following two cloning (copying) methods to run storage copy offload migrations: vSphere Installation Bundle (VIB) or SSH. Both use the volume populator named vsphere-xcopy-volume-populator to perform vmkfstools clone operations on ESXi hosts.

vSphere Installation Bundle (VIB) is the default method. This method uses a custom VIB installed on ESXi hosts to expose vmkfstools operations through the vSphere API.

SSH is the recommended method. This method uses SSH to run vmkfstools commands directly on ESXi hosts. It is useful when VIB installation is not possible, and it offers the advantages that follow.

Advantages of the SSH method

The SSH method offers you the following advantages:

  • No VIB installation: Does not require custom VIB deployment on ESXi hosts

  • Standard SSH: Uses the standard ESXi SSH service with no custom components

  • Security: Uses secure key-based authentication with command restrictions

  • Compatibility: Works with any ESXi version that supports SSH

  • Flexibility: Easier to troubleshoot and monitor SSH connections

Setting up storage copy offload using the VIB

You can set up storage copy offload by using the vSphere Installation Bundle (VIB). This is the default method for running storage copy offload migrations.

If you use this method, you must install the VIB on every ESXi host that you use for copy-offload operations.

Prerequisites
  • Podman or Docker installed on your local machine.

  • Root user SSH access to ESXi hosts.

  • SSH private key for ESXi authentication. This can be the same key as used for the SSH clone method, if you use both.

  • Optional: vSphere credentials. These allow you to auto-discover ESXi hosts.

Procedure
  1. Configure the VIB clone method in your Provider CR by setting settings:esxiCloneMethod to "vib".

    Example Provider CR
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: my-vsphere-provider
      namespace: openshift-mtv
    spec:
      type: vsphere
      url: https://vcenter.example.com
      secret:
        name: vsphere-credentials
        namespace: openshift-mtv
      settings:
        esxiCloneMethod: "vib"
  2. Install the VIB by using the vib-installer utility included in the container image, by using one of the following methods:

    1. Auto-discover ESXi hosts from vSphere and install the VIB by running the following commands:

      $ podman run -it --rm \
        --entrypoint /bin/vib-installer \
        -v $HOME/.ssh/id_rsa:/tmp/esxi_key:Z \
        -e GOVMOMI_USERNAME='administrator@vsphere.local' \
        -e GOVMOMI_PASSWORD='your-password' \
        -e GOVMOMI_HOSTNAME='vcenter.example.com' \
        -e GOVMOMI_INSECURE='true' \
        $(oc get deployment forklift-volume-populator-controller  -n openshift-mtv -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "VSPHERE_XCOPY_VOLUME_POPULATOR_IMAGE")].value}') \
        --ssh-key-file /tmp/esxi_key \
        --datacenter MyDatacenter
    2. Specify ESXi hosts manually and install the VIB by running the following commands:

      $ podman run -it --rm \
        --entrypoint /bin/vib-installer \
        -v $HOME/.ssh/id_rsa:/tmp/esxi_key:Z \
        $(oc get deployment forklift-volume-populator-controller  -n openshift-mtv -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "VSPHERE_XCOPY_VOLUME_POPULATOR_IMAGE")].value}') \
        --ssh-key-file /tmp/esxi_key \
        --esxi-hosts 'esxi1.example.com,esxi2.example.com,esxi3.example.com'

      Run vib-installer --help for a list of all available flags. Flags match the main populator naming conventions and support environment variables such as SSH_KEY_FILE, ESXI_HOSTS, and GOVMOMI_USERNAME.

      For alternative VIB installation methods using Ansible, see Esxcli plugin that wraps vmkfstools.

Setting up storage copy offload by using an SSH key

You can use either an automatically generated SSH key or a manually generated SSH key for your storage copy offload migrations.

Although SSH keys are automatically generated when you choose the SSH method, you can also generate SSH keys manually.

Procedures for both options are given in the sections that follow.

Important notes and security considerations
  • All public keys must include command restrictions for security.

  • The command path in the restrictions must match the secure script path: /vmfs/volumes/{datastore-name}/secure-vmkfstools-wrapper.py.

  • You must install the SSH key in each ESXi host in your migration environment.

  • SSH service must be enabled on all target ESXi hosts.

  • To support ESXi access control, commands are restricted to vmkfstools operations only.
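
The restrictions above can be illustrated with a short sketch. This example generates a throwaway key pair and builds a restricted authorized_keys entry; the datastore name datastore1 is a placeholder for your environment:

```shell
# Generate a throwaway RSA key pair for illustration (no passphrase).
ssh-keygen -t rsa -b 2048 -f demo_key -N '' -q

# Prepend the forced command and option restrictions that every public key
# must carry. The command path must match the secure script path on the datastore.
echo "command=\"python /vmfs/volumes/datastore1/secure-vmkfstools-wrapper.py\",no-port-forwarding,no-agent-forwarding,no-X11-forwarding $(cat demo_key.pub)" > restricted_demo.pub

cat restricted_demo.pub
```

An entry in this form limits the key to invoking the wrapper script only, with port, agent, and X11 forwarding disabled.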

Security recommendations

Follow these security recommendations:

  • Use separate key pairs for different environments.

  • Rotate keys periodically.

  • Consider using shorter-lived keys for enhanced security.

Setting up storage copy offload by using automatically generated SSH keys

By default, when you use the SSH method for setting up storage copy migrations, SSH keys are automatically generated when you create or update a relevant vSphere provider.

These keys have the following characteristics:

  • 2048-bit RSA keys

  • Stored in separate Kubernetes Secrets in the provider’s namespace

  • Automatically injected into migration pods as needed

SSH keys are stored in secrets with predictable names based on the name of your vSphere provider:

Table 15. Patterns of SSH secret names
Secret type Naming pattern Contains

Private key

offload-ssh-keys-{provider-name}-private

private-key: RSA private key in PEM format

Public key

offload-ssh-keys-{provider-name}-public

public-key: SSH public key in authorized_keys format

Example: For a provider with the name vcenter-example, the secrets would be offload-ssh-keys-vcenter-example-private and offload-ssh-keys-vcenter-example-public.

Prerequisites
  • Ensure that SSH traffic is permitted from the KubeVirt network to the ESXi hosts.

Procedure
  1. Configure the SSH clone method in your Provider CR by setting settings:esxiCloneMethod to "ssh".

    Example Provider CR
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: my-vsphere-provider
      namespace: openshift-mtv
    spec:
      type: vsphere
      url: https://vcenter.example.com
      secret:
        name: vsphere-credentials
        namespace: openshift-mtv
      settings:
        esxiCloneMethod: "ssh"
  2. Find the SSH secrets for your vSphere Provider by running one of the following commands:

    1. List all SSH key secrets in the provider’s namespace by running the following command:

      $ oc get secrets -l app.kubernetes.io/component=ssh-keys -n openshift-mtv
    2. View a specific private or public key secret by running the following command:

      $ oc get secret <name_of_private_or_public_key> -o yaml -n openshift-mtv
  3. Optional: If needed, generate a new key pair to replace the auto-generated key pair by running the following command:

    $ ssh-keygen -t rsa -b 4096 -f custom_esxi_key -N ""

    This procedure is simpler than generating the key pair manually, as described in TBD.

  4. Optional: If needed, you can replace either a private key secret or a public key secret by running one of the following commands:

    1. Replace a private key secret by running the following command:

      $ oc create secret generic <name_of_private_key> \
        --from-file=private-key=custom_esxi_key \
        --dry-run=client -o yaml | oc replace -f - -n openshift-mtv
    2. Replace a public key secret by running the following command:

      $ oc create secret generic <name_of_public_key> \
        --from-file=public-key=custom_esxi_key.pub \
        --dry-run=client -o yaml | oc replace -f - -n openshift-mtv
  5. Optional: Configure the SSH timeout by adding it to your provider secret, which is the main storage credentials secret, by running the following command:

    $ oc patch secret <provider_credentials> -p '{"data":{"SSH_TIMEOUT_SECONDS":"'$(echo -n "60" | base64)'"}}' -n <provider_namespace>
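
    The patch embeds a base64-encoded value because Kubernetes stores Secret data base64-encoded. A quick sketch of the encoding step:

```shell
# Kubernetes Secret data values are base64-encoded; -n avoids a trailing newline.
timeout_b64=$(echo -n "60" | base64)
echo "$timeout_b64"   # NjA=
```

Decoding with base64 -d recovers the original value when you inspect the secret later.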
Setting up storage copy offload by using manually generated SSH keys

You can manually generate restricted SSH keys to use for storage copy offload migrations. After you generate the keys, you can then add the public key to your ESXi hosts.

Prerequisites
  • Ensure that SSH traffic is permitted from the KubeVirt network to the ESXi hosts.

  • Ensure you have network access from your local machine to the ESXi host.

Procedure
  1. Configure the SSH clone method in your Provider CR by setting settings:esxiCloneMethod to "ssh".

    Example Provider CR
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: my-vsphere-provider
      namespace: openshift-mtv
    spec:
      type: vsphere
      url: https://vcenter.example.com
      secret:
        name: vsphere-credentials
        namespace: openshift-mtv
      settings:
        esxiCloneMethod: "ssh"
  2. Get the public key from the auto-generated secret by performing the following steps:

    1. Get a list of SSH key secrets by running the following command:

      $ oc get secrets -l app.kubernetes.io/component=ssh-keys -n <namespace_with_key>
    2. Extract the public key you want by running the following command:

      $ oc get secret <your_public_key> \
        -o jsonpath='{.data.public-key}' -n <namespace_with_key> | base64 -d > esxi_public_key.pub
    3. View the public key by running the following command:

      $ cat esxi_public_key.pub
  3. Prepare the restricted key entry by performing the following steps:

    1. Prefix the public key with command restrictions by running the following command:

      $ echo 'command="python /vmfs/volumes/<datastore_name>/secure-vmkfstools-wrapper.py",no-port-forwarding,no-agent-forwarding,no-X11-forwarding '$(cat esxi_public_key.pub) > restricted_key.pub

      This command prepends the restriction options to the public key and writes the result to restricted_key.pub.

    2. View the final restricted key by running the following command:

      $ cat restricted_key.pub
  4. Install the restricted key on the ESXi host directly by running the following command:

    $ cat restricted_key.pub | ssh root@<your_ESXi_host_IP> \
      'cat >> /etc/ssh/keys-root/authorized_keys'
  5. Verify the installation by performing the following steps:

    1. Extract the private key from the Secret by running the following command:

      $ oc get secret <your_private_key> \
        -o jsonpath='{.data.private-key}' -n <namespace_with_key> | base64 -d > esxi_private_key
    2. Set the permissions of your private key by running the following command:

      $ chmod 600 esxi_private_key
    3. Test the connection by running the following command:

      $ ssh -i esxi_private_key root@<your_ESXi_host_IP>

      If the installation was successful, you are connected to the ESXi host with restricted commands.

    4. Run a test command that is restricted to the secure script to verify the connection.

  6. Clean up the local key files by running the following command:

    $ rm -f esxi_public_key.pub restricted_key.pub esxi_private_key

Migrating VMware vSphere VMs in the UI by using storage copy offload

You can use the storage copy offload feature of Forklift to migrate VMware vSphere virtual machines (VMs) faster than by other methods.

Prerequisites

In addition to the regular VMware prerequisites, storage copy offload has the following additional prerequisites:

  • One of the following storage systems, configured:

    • Hitachi Vantara

    • NetApp ONTAP

    • Pure Storage FlashArray

    • Dell PowerMax

    • Dell PowerFlex

    • Dell PowerStore

    • HPE 3PAR or HPE Primera

    • Infinidat InfiniBox

    • IBM FlashSystem

  • A working Container Storage Interface (CSI) driver connected to one of these storage systems and to KubeVirt

  • A configured VMware vSphere provider

  • vSphere users must have a role that includes the following privileges (suggested name: StorageOffloader):

    • Global

      • Settings

    • Datastore

      • Browse datastore

      • Low level file operations

    • Host Configuration

      • Advanced settings

      • Query patch

      • Storage partition configuration

Procedure
  1. In the Forklift Operator, set the value of feature_copy_offload to true in forklift-controller by running the following command:

    $ oc patch forkliftcontrollers.forklift.konveyor.io forklift-controller --type merge -p '{"spec": {"feature_copy_offload": "true"}}' -n openshift-mtv
  2. Create a Secret in the namespace in which the migration provider is set up, usually openshift-mtv. Include the credentials from the appropriate vendor in your Secret.

    Table 16. Credentials for a Hitachi storage copy offload Secret
    Key Description Mandatory? Default

    GOVMOMI_HOSTNAME

    hostname or URL of the vSphere API (string).

    Yes.

    NA.

    GOVMOMI_USERNAME

    User name of the vSphere API (string).

    Yes.

    NA.

    GOVMOMI_PASSWORD

    Password of the vSphere API (string).

    Yes.

    NA.

    STORAGE_HOSTNAME

    The hostname or URL of the storage vendor API (string).

    Yes.

    NA.

    STORAGE_USERNAME

    The username of the storage vendor API (string).

    Yes.

    NA.

    STORAGE_PASSWORD

    The password of the storage vendor API (string).

    Yes.

    NA.

    STORAGE_PORT

    The port of the storage vendor API (string).

    Yes.

    NA.

    STORAGE_ID

    Storage array serial number (string).

    Yes.

    NA.

    HOSTGROUP_ID_LIST

    List of IO ports and host group IDs, for example, CL1-A,1:CL2-B,2:CL4-A,1:CL6-A,1.

    Yes.

    NA.

    Table 17. Credentials for a NetApp ONTAP storage copy offload Secret
    Key Description Mandatory? Default

    STORAGE_HOSTNAME

    IP or URL of the host (string). Either enter the management IP for the entire cluster or enter a dedicated storage virtual machine management logical interface (SVM LIF).

    Yes.

    NA.

    STORAGE_USERNAME

    The user’s name (string).

    Yes.

    NA.

    STORAGE_PASSWORD

    The user’s password (string).

    Yes.

    NA.

    STORAGE_SKIP_SSL_VERIFICATION

    If set to true, SSL verification is not performed (true, false).

    No.

    false.

    ONTAP_SVM

    The storage virtual machine (SVM) to be used in all client interactions. It can be taken from the config.ontap_config.svm field of the trident.netapp.io/v1 TridentBackend resource.

    Yes.

    NA.

    Table 18. Credentials for a Pure FlashArray storage copy offload Secret
    Key Description Mandatory? Default

    STORAGE_HOSTNAME

    IP or URL of the host (string).

    Yes.

    NA.

    STORAGE_USERNAME

    The user’s name (string).

    Yes.

    NA.

    STORAGE_PASSWORD

    The user’s password (string).

    Yes.

    NA.

    STORAGE_SKIP_SSL_VERIFICATION

    If set to true, SSL verification is not performed (true, false).

    No.

    false

    PURE_CLUSTER_PREFIX

    The cluster prefix is set in the StorageCluster resource. Retrieve it by running printf "px_%.8s" $(oc get storagecluster -A -o=jsonpath='{.items[?(@.spec.cloudStorage.provider=="pure")].status.clusterUid}') in the CLI.

    Yes.

    NA.

    Table 19. Credentials for a Dell PowerMax storage copy offload Secret
    Key Description Mandatory? Default

    STORAGE_HOSTNAME

    IP or URL of the host (string).

    Yes.

    NA.

    STORAGE_USERNAME

    The user’s name (string).

    Yes.

    NA.

    STORAGE_PASSWORD

    The user’s password (string).

    Yes.

    NA.

    STORAGE_SKIP_SSL_VERIFICATION

    If set to true, SSL verification is not performed (true, false).

    No.

    false

    POWERMAX_SYMMETRIX_ID

    The Symmetrix ID of the storage array. Can be taken from the config map under the powermax namespace, which the CSI driver uses.

    Yes.

    NA.

    POWERMAX_PORT_GROUP_NAME

    The port group to use for masking view creation.

    Yes.

    NA.

    Table 20. Credentials for a Dell PowerFlex storage copy offload Secret
    Key Description Mandatory? Default

    STORAGE_HOSTNAME

    IP or URL of the host (string).

    Yes.

    NA.

    STORAGE_USERNAME

    The user’s name (string).

    Yes.

    NA.

    STORAGE_PASSWORD

    The user’s password (string).

    Yes.

    NA.

    STORAGE_SKIP_SSL_VERIFICATION

    If set to true, SSL verification is not performed (true, false).

    No.

    false.

    POWERFLEX_SYSTEM_ID

    The system ID of the storage array. Can be taken from vxflexos-config in the vxflexos namespace or in the openshift-operators namespace.

    Yes.

    NA.

    Table 21. Credentials for a Dell PowerStore storage copy offload Secret
    Key Description Mandatory? Default

    STORAGE_HOSTNAME

    IP or URL of the host (string)

    Yes.

    NA.

    STORAGE_USERNAME

    The user’s name (string).

    Yes.

    NA.

    STORAGE_PASSWORD

    The user’s password (string)

    Yes

    NA

    STORAGE_SKIP_SSL_VERIFICATION

    If set to true, SSL verification is not performed (true, false).

    No

    false

    Table 22. Credentials for an HPE 3PAR or HPE Primera storage copy offload Secret
    Key Description Mandatory? Default

    STORAGE_HOSTNAME

    Must include the full URL with protocol. For HPE 3PAR, must also include the Web Services API (WSAPI) port. Use the HPE 3PAR command cli% showwsapi to determine the correct WSAPI port. HPE 3PAR systems default to port 8080 for both HTTP and HTTPS connections; HPE Primera defaults to port 443 (SSL/HTTPS). Depending on configured certificates, you might need to skip SSL verification. Example: https://192.168.1.1:8080.

    Yes

    NA

    STORAGE_USERNAME

    The user’s name (string)

    Yes

    NA

    STORAGE_PASSWORD

    The user’s password (string)

    Yes

    NA

    STORAGE_SKIP_SSL_VERIFICATION

    If set to true, SSL verification is not performed (true, false).

    No

    false

    Table 23. Credentials for an Infinidat InfiniBox storage copy offload Secret
    Key Description Mandatory? Default

    STORAGE_HOSTNAME

    IP or URL of the host (string)

    Yes

    NA

    STORAGE_USERNAME

    The user’s name (string)

    Yes

    NA

    STORAGE_PASSWORD

    The user’s password (string)

    Yes

    NA

    STORAGE_SKIP_SSL_VERIFICATION

    If set to true, SSL verification is not performed (true, false).

    No

    false

    Table 24. Credentials for an IBM FlashSystem storage copy offload Secret
    Key Description Mandatory? Default

    STORAGE_HOSTNAME

    IP or URL of the host (string)

    Yes

    NA

    STORAGE_USERNAME

    The user’s name (string)

    Yes

    NA

    STORAGE_PASSWORD

    The user’s password (string)

    Yes

    NA

    STORAGE_SKIP_SSL_VERIFICATION

    If set to true, SSL verification is not performed (true, false).

    No

    false
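
    The credential keys in these tables are supplied through a standard Kubernetes Secret. As an illustration only, a Secret for NetApp ONTAP offload might look like the following; all values are placeholders, and stringData lets you supply plain-text values that the cluster stores base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ontap-offload-credentials    # placeholder name
  namespace: openshift-mtv           # namespace of the migration provider
type: Opaque
stringData:
  STORAGE_HOSTNAME: svm-lif.example.com
  STORAGE_USERNAME: admin
  STORAGE_PASSWORD: example-password
  STORAGE_SKIP_SSL_VERIFICATION: "false"
  ONTAP_SVM: svm1
```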

  3. In the UI, complete the following steps:

    1. Create an ownerless storage map by using the procedure in Creating ownerless storage maps using the form page of the Forklift UI. Use the Offload plugin named vSphere XCOPY.

    2. Create a migration plan by using the procedure in Creating a VMware vSphere migration plan by using the MTV wizard.

Adding a VMware vSphere source provider

You can migrate VMware vSphere VMs from VMware vCenter or from a VMware ESX/ESXi server without going through vCenter.

Considerations
  • EMS enforcement is disabled for migrations with VMware vSphere source providers in order to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements. Therefore, users should consider whether migrations from vSphere source providers risk their compliance with FIPS. Supported versions of vSphere are specified in Software compatibility guidelines.

  • Anti-virus software can cause migrations to fail. It is strongly recommended to remove such software from source VMs before you start a migration.

  • Forklift does not support migrating VMware Non-Volatile Memory Express (NVMe) disks.

  • If you input any value of maximum transmission unit (MTU) besides the default value in your migration network, you must also input the same value in the OKD transfer network that you use. For more information about the OKD transfer network, see Creating a VMware vSphere migration plan using the Forklift wizard.

Prerequisites
  • It is strongly recommended to create a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters. A VDDK image accelerates migration and reduces the risk of a plan failing. If you are not using VDDK and a plan fails, retry with VDDK installed. For more information, see Creating a VDDK image.

Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.

Procedure
  1. Access the Create provider page for VMware by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

      3. Click VMware.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click VMware.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click VMware when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    1. Provider details

      • Provider resource name: Name of the source provider.

      • Endpoint type: Select the vSphere provider endpoint type. Options: vCenter or ESXi. You can migrate virtual machines from vCenter, from an ESX/ESXi server that is not managed by vCenter, or from an ESX/ESXi server that is managed by vCenter without going through vCenter.

      • URL: URL of the SDK endpoint of the vCenter on which the source VM is mounted. Ensure that the URL includes the sdk path, usually /sdk. For example, https://vCenter-host-example.com/sdk. If a certificate for FQDN is specified, the value of this field needs to match the FQDN in the certificate.

      • VDDK init image: VDDKInitImage path. It is strongly recommended to create a VDDK init image to accelerate migrations. For more information, see Creating a VDDK image.

        Do one of the following:

        • Select Skip VMware Virtual Disk Development Kit (VDDK) SDK acceleration (not recommended).

        • Enter the path in the VDDK init image text box. Format: <registry_route_or_server_path>/vddk:<tag>.

        • Upload a VDDK archive and build a VDDK init image from the archive by doing the following:

          • Click Browse next to the VDDK init image archive text box, select the desired file, and click Select.

          • Click Upload.

            The URL of the uploaded archive is displayed in the VDDK init image archive text box.

    2. Provider credentials

      • Username: vCenter user or ESXi user. For example, user@vsphere.local.

      • Password: vCenter user password or ESXi user password.

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.

      After you confirm it, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

    It might take a few minutes for the provider to have the status Ready.

  6. Optional: Add access to the UI of the provider:

    1. On the Providers page, click the provider.

      The Provider details page opens.

    2. Click the Edit icon under External UI web link.

    3. Enter the link and click Save.

      If you do not enter a link, Forklift attempts to calculate the correct link.

      • If Forklift succeeds, the hyperlink of the field points to the calculated link.

      • If Forklift does not succeed, the field remains empty.

Selecting a migration network for a VMware source provider

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.

If you input any value of maximum transmission unit (MTU) besides the default value in your migration network, you must also input the same value in the OKD transfer network that you use. For more information about the OKD transfer network, see Creating a migration plan.

Prerequisites
  • The migration network must have sufficient throughput for disk transfer, with a minimum speed of 10 Gbps.

  • The migration network must be accessible to the KubeVirt nodes through the default gateway.

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

  • The target namespace must have network connectivity to the VMware source environment.

    Migration pods run in the target namespace and require outbound access to the VMware API and ESXi hosts. If you use NetworkPolicies to restrict egress connections from the target namespace, you must configure policies that allow connections to VMware. This requirement applies whether you use the pod network, user-defined networks (UDNs), or cluster user-defined networks (CUDNs) in the target namespace.

  • The migration network should have jumbo frames enabled.
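
The egress requirement described above can be sketched as a NetworkPolicy. This is an illustration only; the namespace and CIDR values are placeholders for your target namespace and your vCenter and ESXi addresses (443 is the vSphere API port, 902 the ESXi NFC disk-transfer port):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-vmware
  namespace: target-namespace        # placeholder target namespace
spec:
  podSelector: {}                    # applies to all pods, including migration pods
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 192.0.2.0/24      # placeholder vCenter and ESXi subnet
      ports:
        - protocol: TCP
          port: 443                  # vSphere API
        - protocol: TCP
          port: 902                  # ESXi NFC disk transfer
```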

Procedure
  1. In the OKD web console, click Migration for Virtualization > Providers.

  2. Click the host number in the Hosts column beside a provider to view a list of hosts.

  3. Select one or more hosts and click Select migration network.

  4. Specify the following fields:

    • Network: Network name

    • ESXi host admin username: For example, root

    • ESXi host admin password: Password

  5. Click Save.

  6. Verify that the status of each host is Ready.

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

Adding a KubeVirt destination provider

You can use a KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster on which Forklift is deployed to another cluster, or from a remote cluster to the cluster on which Forklift is deployed.

Prerequisites
Procedure
  1. Access the Create KubeVirt provider interface by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

      3. Click KubeVirt.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click KubeVirt.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click KubeVirt when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider

    • URL: URL of the endpoint of the API server

    • Service account bearer token: Token for a service account with cluster-admin privileges

      If both URL and Service account bearer token are left blank, the local OKD cluster is used.

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.

      After you confirm it, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

Selecting a migration network for a KubeVirt provider

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.

In Forklift version 2.9 and earlier, Forklift used the pod network as the default network.

In version 2.10.0 and later, Forklift detects whether you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN as the default network of the migration’s namespace, you do not need to select a new default network when you create your migration plan.

Forklift supports using UDNs for all providers except KubeVirt.

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Providers.

  2. Click the KubeVirt provider whose migration network you want to change.

    The Provider details page opens.

  3. Click the Networks tab.

  4. Click Set default transfer network.

  5. Select a default transfer network from the list and click Save.

  6. Configure a gateway in the network used for Forklift migrations by completing the following steps:

    1. In the OKD web console, click Networking > NetworkAttachmentDefinitions.

    2. Select the appropriate default transfer network NAD.

    3. Click the YAML tab.

    4. Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: localnet-network
        namespace: mtv-test
        annotations:
          forklift.konveyor.io/route: <IP address>
      • The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.

    5. Click Save.

Creating a VMware vSphere migration plan by using the MTV wizard

You can migrate VMware vSphere virtual machines (VMs) from VMware vCenter or from a VMware ESX or ESXi server by using the Forklift plan creation wizard.

The wizard is designed to lead you step-by-step in creating a migration plan.

Limitations
  • Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. Excluding these VMs prevents concurrent disk access to the storage that the guest points to.

  • A plan cannot contain more than 500 VMs or 500 disks.

Forklift cannot migrate VMware vSphere 6 and VMware vSphere 7 VMs to a FIPS-compliant KubeVirt cluster.

Prerequisites
  • Have a VMware source provider and a KubeVirt destination provider. For more information, see Adding a VMware vSphere source provider or Adding a KubeVirt destination provider.

  • If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.

  • If you are using a user-defined network (UDN), note the name of its namespace as defined in KubeVirt.

Procedure
  1. On the OKD web console, click Migration for Virtualization > Migration plans.

  2. Click Create plan.

    The Create migration plan wizard opens.

  3. On the General page, specify the following fields:

    • Plan name: Enter a name.

    • Plan project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Target project: Click the list and do one of the following:

      1. Select an existing project from the list.

      2. Create a new project by clicking Create project and doing the following:

        1. Enter the Name of the project. A project name must consist of lowercase alphanumeric characters or -. A project name must start and end with alphanumeric characters. For example, my-name or 123-abc.

        2. Optional: Enter a Display name for the project.

        3. Optional: Enter a Description of the project.

        4. Click Create project.

  4. Click Next.

  5. On the Virtual machines page, select the virtual machines you want to migrate and click Next.

  6. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.

  7. On the Network map page, choose one of the following options:

    • Use an existing network map: Select an existing network map from the list.

      These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.

      You can create an ownerless network map, which you and others can use for additional migration plans, in the Network maps section of the UI.

      • Source network: Select from the list.

      • Target network: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Network map name: Enter a name or let Forklift automatically generate a name for the network map.

  8. Click Next.

  9. On the Storage map page, choose one of the following options:

    • Use an existing storage map: Select an existing storage map from the list.

      These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.

      You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage maps section of the UI.

      • Source storage: Select from the list.

      • Target storage: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Storage map name: Enter a name or let Forklift automatically generate a name for the storage map.

  10. Click Next.

  11. On the Migration type page, choose one of the following:

    • Cold migration (default)

    • Warm migration

  12. Click Next.

  13. On the Other settings (optional) page, specify any of the following settings that are appropriate for your plan. All are optional.

    • Disk decryption passphrases: For disks encrypted using Linux Unified Key Setup (LUKS).

      • Enter a decryption passphrase for a LUKS-encrypted device.

      • To add another passphrase, click Add passphrase and add a passphrase.

      • Repeat as needed.

        You do not need to enter the passphrases in a specific order. For each LUKS-encrypted device, Forklift tries each passphrase until one unlocks the device.

    • Transfer Network: The network used to transfer the VMs to KubeVirt. This is the default transfer network of the provider.

      • Verify that the transfer network is in the selected target project.

      • To choose a different transfer network, select a different transfer network from the list.

      • Optional: To configure another OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.

        To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.

      • To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.

    • Preserve static IPs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP during migration.

      • To preserve static IPs, select the Preserve the static IPs checkbox.

        Forklift then issues a warning message about any VMs whose vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere. This causes the vNIC properties to be reported to Forklift.

    • Root device: Applies to multi-boot VM migrations only. By default, Forklift uses the first bootable device detected as the root device.

      • To specify a different root device, enter it in the text box.

        Forklift uses the following format for disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format would be: /dev/sdb2. After you enter the boot device, click Save.

        If the conversion fails because the boot device provided is incorrect, you can find the correct device by checking the conversion pod logs.

    • Shared disks: Applies to cold migrations only. Shared disks are disks that are attached to multiple VMs and that use the multi-writer option. These characteristics make shared disks difficult to migrate. By default, Forklift migrates shared disks.

      Migrating shared disks might slow down the migration process.

      • To migrate shared disks in the migration plan, verify that the Shared disks checkbox is selected.

      • To avoid migrating shared disks, clear the Shared disks checkbox.

  14. Click Next.

  15. On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.

  16. To add a hook, select the appropriate Enable hook checkbox.

  17. Enter the Hook runner image.

  18. Enter the Ansible playbook of the hook in the window.

    You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.

  19. Click Next.

  20. On the Review and create page, review the information displayed.

  21. Edit any item by doing the following:

    1. Click its Edit step link.

      The wizard opens to the page where you defined the item.

    2. Edit the item.

    3. Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.

  22. When you finish reviewing the details of the plan, click Create plan.

    Forklift validates your plan. When your plan is validated, the Plan details page for your plan opens. This page contains important settings that do not appear in the wizard.
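The wizard saves your choices as a Plan custom resource, which you can inspect and edit later on the YAML tab of the Plan details page. The following minimal sketch uses placeholder names throughout; verify the field names against the Plan CRD of your Forklift version:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: my-plan                   # the plan name entered on the General page
  namespace: openshift-mtv        # the plan project
spec:
  provider:
    source:
      name: vmware-source         # placeholder source provider
      namespace: openshift-mtv
    destination:
      name: host                  # placeholder KubeVirt destination provider
      namespace: openshift-mtv
  map:
    network:
      name: my-plan-network-map   # the map created or copied by the wizard
      namespace: openshift-mtv
    storage:
      name: my-plan-storage-map
      namespace: openshift-mtv
  targetNamespace: my-target-project
  warm: false                     # false for cold migration (the default)
  vms:
    - id: vm-1001                 # source VM identifier
      name: my-vm
```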

Next steps

Configuring VMware migration plan settings

After you create a migration plan using the Forklift wizard, the Plan details page opens. This page contains important settings that do not appear in the wizard but can affect your migration. You can configure these settings immediately after creating the plan or return to configure them later before running the plan.

Prerequisites
Procedure
  1. On the Plan details page for your plan, review the Plan settings section.

    The Plan settings section includes settings that you specified in the Other settings (optional) page of the wizard and some additional optional settings. The steps below refer to the additional optional settings, but all of the settings can be edited by clicking the Options menu kebab, making the change, and then clicking Save.

  2. Check the following items in the Plan settings section of the page:

    1. Volume name template: Specifies a template for the volume interface name for the VMs in your plan.

      The template follows the Go template syntax and has access to the following variables:

      • .PVCName: Name of the PVC mounted to the VM using this volume

      • .VolumeIndex: Sequential index of the volume interface (0-based)

        Examples

      • "disk-{{.VolumeIndex}}"

      • "pvc-{{.PVCName}}"

        Generated names cannot exceed 63 characters.

      • To specify a volume name template for all the VMs in your plan, do the following:

        • Click the Edit icon.

        • Click Enter custom naming template.

        • Enter the template according to the instructions. Be sure that your template generates names that follow RFC 1123, but do not include uppercase letters.

        • Click Save.

      • To specify a different volume name template only for specific VMs, do the following:

        • Click the Virtual Machines tab.

        • Select the desired VMs.

        • Click the Options menu kebab of the VM.

        • Select Edit Volume name template.

        • Enter the template according to the instructions. Be sure that your template generates names that follow RFC 1123, but do not include uppercase letters.

        • Click Save.

          Changes you make on the Virtual Machines tab override any changes on the Plan details page.

    2. PVC name template: Specifies a template for the name of the persistent volume claim (PVC) for the VMs in your plan.

      The template follows the Go template syntax and has access to the following variables:

      • .VmName: Name of the VM

      • .PlanName: Name of the migration plan

      • .DiskIndex: Initial volume index of the disk

      • .RootDiskIndex: Index of the root disk

        Examples

      • "{{.VmName}}-disk-{{.DiskIndex}}"

      • "{{if eq .DiskIndex .RootDiskIndex}}root{{else}}data{{end}}-{{.DiskIndex}}"

        Generated names cannot exceed 63 characters.

      • To specify a PVC name template for all the VMs in your plan, do the following:

        • Click the Edit icon.

        • Click Enter custom naming template.

        • Enter the template according to the instructions. Be sure that your template generates names that follow RFC 1123, but do not include uppercase letters.

        • Click Save.

      • To specify a PVC name template only for specific VMs, do the following:

        • Click the Virtual Machines tab.

        • Select the desired VMs.

        • Click the Options menu kebab of the VM.

        • Select Edit PVC name template.

        • Enter the template according to the instructions. Be sure that your template generates names that follow RFC 1123, but do not include uppercase letters.

        • Click Save.

          Changes you make on the Virtual Machines tab override any changes on the Plan details page.

    3. Network name template: Specifies a template for the network interface name for the VMs in your plan.

      The template follows the Go template syntax and has access to the following variables:

      • .NetworkName: If the target network is multus, add the name of the Multus Network Attachment Definition. Otherwise, leave this variable empty.

      • .NetworkNamespace: If the target network is multus, add the namespace where the Multus Network Attachment Definition is located.

      • .NetworkType: Network type. Options: multus or pod.

      • .NetworkIndex: Sequential index of the network interface (0-based).

        Examples

      • "net-{{.NetworkIndex}}"

      • "{{if eq .NetworkType "pod"}}pod{{else}}multus-{{.NetworkIndex}}{{end}}"

        Generated names cannot exceed 63 characters.

      • To specify a network name template for all the VMs in your plan, do the following:

        • Click the Edit icon.

        • Click Enter custom naming template.

        • Enter the template according to the instructions. Be sure that your template generates names that follow RFC 1123, but do not include uppercase letters.

        • Click Save.

      • To specify a different network name template only for specific VMs, do the following:

        • Click the Virtual Machines tab.

        • Select the desired VMs.

        • Click the Options menu kebab of the VM.

        • Select Edit Network name template.

        • Enter the template according to the instructions. Be sure that your template generates names that follow RFC 1123, but do not include uppercase letters.

        • Click Save.

          Changes you make on the Virtual Machines tab override any changes on the Plan details page.

          Forklift does not validate VM names generated by the templates you enter on the Plan details page. Migrations that include VMs whose names include uppercase letters or that violate RFC 1123 rules fail automatically. To avoid failures, you might want to run a Go script that uses the sprig methods that Forklift supports. For tables documenting the methods that Forklift supports, see /documentation/doc-Migrating_your_virtual_machines/master.html?assembly_migrating-from-vmware#mtv-template-utility_vmware[Forklift template utility for VMware VM names].

    4. Raw copy mode: By default, during migration, virtual machines (VMs) are converted using a tool named virt-v2v that makes them compatible with KubeVirt. For more information about the virt-v2v conversion process, see How Forklift uses the virt-v2v tool. Raw copy mode copies VMs without converting them. This allows for faster conversions, migrating VMs running a wider range of operating systems, and supporting migrating disks encrypted using Linux Unified Key Setup (LUKS) without needing keys. However, VMs migrated using raw copy mode might not function properly on KubeVirt.

      • To use raw copy mode for your migration plan, do the following:

        • Click the Edit icon.

        • Toggle the Raw copy mode switch to enable it.

        • Optional: Configure the Use compatibility mode setting:

          When you enable Use compatibility mode (default), Forklift uses compatibility devices (SATA bus, E1000E NICs, USB) to ensure the VM can boot on KubeVirt.

          When you disable Use compatibility mode, Forklift uses pre-installed VirtIO devices on the source VM for better performance.

          Only disable Use compatibility mode if VirtIO drivers are already installed in the source VM. VMs without pre-installed VirtIO drivers do not boot on KubeVirt if you disable compatibility mode.

        • Click Save.

    5. VM target node selector, VM target labels, and VM target affinity rules are options that support VM target scheduling, a feature that lets you direct Forklift to migrate virtual machines (VMs) to specific nodes or workloads (pods) of KubeVirt as well as to schedule when to power on the VMs.

      For more information on the feature in general, see Target VM scheduling options.

      For more details on using the feature with the UI, see Scheduling target VMs from the user interface.

      • VM target node selector allows you to create mandatory exact match key-value label pairs that the target node must possess. If no node on the cluster has all the labels specified, the VM is not scheduled and it remains in a Pending state until there is space on a node that fits these key-value label pairs.

        • To use the VM target node selector for your migration plan, do the following:

          • Click the Edit icon.

          • Enter a key-value label pair. For example, to require that all VMs in the plan be migrated to your east data center, enter dataCenter as your key and east as your value.

          • To add another key-value label pair, click + and enter another key-value pair.

          • Click Save.

      • VM target labels allows you to apply organizational or operational labels to migrated VMs for identification and management. For example, you can use a target VM label to specify a different scheduler for your migrated VMs.

        • To use VM target labels for your migration plan, do the following:

          • Click the Edit icon.

          • Enter one or more VM target labels.

          • Click Save.

      • VM target affinity rules: Target affinity rules let you use conditions to either require or prefer scheduling on specific nodes or workloads (pods).

        Target anti-affinity rules let you prevent VMs from being scheduled to run on selected workloads (pods) or prefer that they are not scheduled. These kinds of rules offer more flexible placement control than rigid Node Selector rules because they support conditionals such as In or NotIn.

        For example, you could require that a VM be powered on "only if it is migrated to node A or if it is migrated to an SSD disk, but it cannot be migrated to a node for which license-tier=silver is true."

        Additionally, both target affinity and target anti-affinity rules allow you to include both hard and soft conditions in the same rule. A hard condition is a requirement, and a soft condition is a preference. The previous example used only hard conditions. A rule that states that "a VM can be powered on if it is migrated to node A or if it is migrated to an SSD disk, but it is preferred not to migrate it to a node for which license-tier=silver is true" is an example of a rule that uses soft conditions.

        Forklift supports target affinity rules at both the node level and the workload (pod) level. It supports anti-affinity rules at the workload (pod) level only.

        • To use VM target affinity rules for your migration plan, do the following:

          • Click the Edit icon.

          • Click Add affinity rule.

          • Select the Type of affinity rule from the list. Valid options: Node Affinity, Workload (pod) Affinity, Workload (pod) Anti-Affinity.

          • Select the Condition from the list. Valid options: Preferred during scheduling (soft condition), Required during scheduling (hard condition).

          • Soft condition only: Enter a numerical Weight. The higher the weight, the stronger the preference. Valid options: whole numbers from 1 to 100.

          • Enter a Topology key, the key for the node label that the system uses to denote the domain.

          • Optional: Select the Workload labels that you want to set by doing the following:

            • Enter a Key.

            • Select an Operator from the list. Valid options: Exists, DoesNotExist, In, and NotIn.

            • Enter a Value.

          • To add another label, click Add expression and add another key-value pair with an operator.

          • Click Save affinity rule.

          • To add another affinity rule, click Add affinity rule. Rules with a preferred condition will stack with an AND relation between them. Rules with a required condition will stack with an OR relation between them.

            Forklift validates any changes you made on this page.

  3. In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:

    • Migration history: Details about successful and unsuccessful attempts to run the plan

    • Conditions: Any changes that need to be made to the plan so that it can run successfully

  4. When you have fixed all conditions listed, you can run your plan from the Plans page.

    The Plan details page also includes five additional tabs, which are described in the table that follows:

    Table 25. Tabs of the Plan details page
    • YAML: Editable Plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs

    • Virtual Machines: The VMs the plan migrates

    • Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs

    • Mappings: Editable specification of the network and storage maps used by your plan

    • Hooks: Updatable specification of the hooks used by your plan, if any
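On the YAML tab, the plan settings described above correspond to fields on the Plan spec. The following fragment is a sketch only; all field names and values below are assumptions to verify against the Plan CRD of your Forklift version:

```yaml
# Plan spec fragment (assumed field names; verify against your Plan CRD)
spec:
  pvcNameTemplate: "{{.VmName}}-disk-{{.DiskIndex}}"
  volumeNameTemplate: "disk-{{.VolumeIndex}}"
  networkNameTemplate: "net-{{.NetworkIndex}}"
  targetNodeSelector:
    dataCenter: east            # exact-match label the target node must have
  targetLabels:
    environment: production     # organizational label applied to migrated VMs
  targetAffinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
```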

Migration of LUKS-encrypted disks

If you have virtual machines (VMs) with LUKS-encrypted disks in your source VMware vSphere environment, you can migrate them to Red Hat KubeVirt by enabling Network-Bound Disk Encryption (NBDE) with Clevis. Alternatively, you can manually add passwords for LUKS-encrypted devices in your migration plan.

If you enable NBDE, passwords are retrieved from the Clevis server. When you manually add LUKS passwords to your migration plan, you provide the list of passwords, and you cannot use NBDE to retrieve them. The two methods for migrating LUKS-encrypted disks are incompatible. You must use either NBDE or manual LUKS passwords to migrate LUKS-encrypted disks.

MTV transfers the data of VMs with LUKS-encrypted disks from the source environment to the target KubeVirt cluster. MTV reads only the blocks that are required for guest conversion, decrypting the required blocks and re-encrypting them locally with the same key.

Clevis is a client-side framework that automates the decryption of LUKS volumes by binding a LUKS key slot to a policy. During the migration of VMs with LUKS-encrypted disks, MTV authenticates with the configured network service by requesting the key to unlock the LUKS-encrypted disk from the Tang server. The automatic retrieval of the key allows the VM to boot without a manual passphrase entry from an administrator.
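As an illustration of this binding, you can confirm that a Tang server is reachable by fetching its key advertisement, and bind a LUKS key slot on the source VM to the Tang server policy with Clevis. The server URL and device path below are placeholders:

```shell
# Confirm that the Tang server is reachable; Tang serves its signing-key
# advertisement at /adv. The URL is a placeholder.
curl -sf http://tang.example.com/adv

# Bind a LUKS key slot on the encrypted device to the Tang server policy.
# Clevis prompts for an existing LUKS passphrase; the device path is a
# placeholder.
clevis luks bind -d /dev/sdb2 tang '{"url": "http://tang.example.com"}'
```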

Benefits of NBDE
  • Automation: Eliminates the need to enter keys manually during migration.

  • Enhanced security: Maintains the security of VMs throughout their migration lifecycle by preserving LUKS encryption from the source to the destination.

  • Seamless operation: Ensures that VMs with encrypted disks can be brought online in the new OpenShift Virtualization environment with minimal interruption.

Enabling Network-Bound Disk Encryption with Clevis

When you enable Network-Bound Disk Encryption (NBDE) with Clevis, the Tang server manages the keys for Linux Unified Key Setup (LUKS)-encrypted disks during a migration. If you do not use NBDE to migrate LUKS-encrypted disks from your source environment, you can manually add passwords for LUKS-encrypted devices instead. You must use either NBDE or manual LUKS passwords to migrate LUKS-encrypted disks.

You can enable NBDE with Clevis either in the MTV UI or in the YAML file for your migration plan:

  • In the MTV UI, you must select either NBDE with Clevis or LUKS passphrases. You can have only one encryption type, and you apply the setting to all VMs in your migration plan.

  • In the YAML file for your migration plan, you can combine encryption types and apply the setting to selected VMs in the YAML file.

Prerequisites
  • The Tang server is accessible from your OpenShift cluster and from the migration network.

  • You have a LUKS key slot bound to the Tang server policy.

    For MTV to access the keys from the Tang server, the keys must be on a different subnet range than a user-defined network (UDN).
Procedure
  1. Enable NBDE with Clevis in the MTV UI.

    1. In the Create migration plan wizard, navigate to Other settings under Additional setup in the left navigation pane.

    2. Select Use NBDE/Clevis.

      If you are not using NBDE with Clevis, add passphrases for LUKS-encrypted devices instead, so that Forklift can decrypt the disks during the migration.

    3. Click Next, and verify that Use NBDE/Clevis shows as Enabled under Other settings (optional).

    4. When you create your migration plan, click Migration plans in the left navigation menu, and open the Plan details page for your migration plan.

    5. Click the Edit icon for Disk decryption under Plan settings.

    6. Verify that Use network-bound disk encryption (NBDE/Clevis) is selected.

      If you are not using NBDE with Clevis, verify that the passphrases for LUKS-encrypted devices are added.

  2. Enable NBDE with Clevis in the YAML file.

    1. Click Migration plans in the left navigation menu and open the Plan details page for your migration plan.

    2. Click the YAML tab to open the Plan custom resource (CR) for your migration plan.

    3. For each VM under vms in the YAML file, enter the encryption type. In this example, you set nbdeClevis as the encryption type for vm-1, LUKS passphrase as the encryption type for vm-2, and no encryption type for vm-3:

      Example:

      vms:
        - id: vm-1
          name: vm-1-esx8.0-rhel8.10-raid1
          targetPowerState: on
          nbdeClevis: true
        - id: vm-2
          name: vm-2-esx8.0-rhel8.10-raid1
          luks: { name: 'test-secret-1' }
        - id: vm-3
          name: vm-3-esx8.0-rhel8.10-raid1

About scheduling importer pods

Forklift uses virt-v2v converter pods, or importer pods, to transfer data from VMware source virtual machines (VMs) to target VMs.

By default, KubeVirt assigns the nodes to which these importer pods transfer data. However, for cold migrations from VMware VMs, you can schedule the destination nodes for the importer pods.

Scheduling importer pods is supported only for cold migrations from VMware VMs. It is not supported for warm migrations from VMware or for migrations from other vendors.

In Forklift 2.11, scheduling importer pods is available only for migrations from the command-line interface. You schedule the importer pods in the Plan CR, as described in step 8 of /documentation/doc-Migrating_your_virtual_machines/master.html?assembly_migrating-from-vmware_mtv#proc_migrating-vms-cli-vmware_vmware[Running a VMware vSphere migration from the command-line].

Planning a migration of virtual machines from oVirt

Create an oVirt migration plan by setting up network maps, configuring source and destination providers with migration networks, and defining the migration plan in the Forklift UI.

Creating ownerless network maps in the Forklift UI

You can create ownerless network maps by using the Forklift UI to map source networks to KubeVirt networks.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Network maps.

  2. Click Create network map to open the Create network map page.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod
          source:
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    • type: Allowed values are pod and multus.

    • source: You can use either the id or the name parameter to specify the source network. For id, specify the oVirt network Universal Unique ID (UUID).

    • <network_attachment_definition>: Specify a network attachment definition for each additional KubeVirt network.

    • <network_attachment_definition_namespace>: Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of network maps.

Creating ownerless storage maps using the form page of the Forklift UI

You can create ownerless storage maps by using the form page of the Forklift UI.

Prerequisites
Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with form.

  3. Specify the following:

    • Map name: Name of the storage map.

    • Project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Source storage: Select from the list.

    • Target storage: Select from the list.

  4. Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.

  5. Click Create.

    Your map appears in the list of storage maps.

Creating ownerless storage maps using YAML or JSON definitions in the Forklift UI

You can create ownerless storage maps by using YAML or JSON definitions in the Forklift UI.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with YAML.

    The Create StorageMap page opens.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            id: <source_storage_domain>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    • accessMode: Allowed values are ReadWriteOnce and ReadWriteMany.

    • id: Specify the oVirt storage domain UUID, for example, f2737930-b567-451a-9ceb-2887f6207009.

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of storage maps.

Adding an oVirt source provider

You can add an oVirt source provider by using the OKD web console.

Prerequisites
  • Engine CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Engine Apache CA certificate

Procedure
  1. Access the Create provider page for oVirt by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

      3. Click oVirt.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click oVirt.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click oVirt when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider.

    • URL: URL of the API endpoint of the oVirt Manager (RHVM) on which the source VM is mounted. Ensure that the URL includes the path leading to the RHVM API server, usually /ovirt-engine/api. For example, https://rhv-host-example.com/ovirt-engine/api.

    • Username: Username.

    • Password: Password.

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      After you confirm it, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

  6. Optional: Add access to the UI of the provider:

    1. On the Providers page, click the provider.

      The Provider details page opens.

    2. Click the Edit icon under External UI web link.

    3. Enter the link and click Save.

      If you do not enter a link, Forklift attempts to calculate the correct link.

      • If Forklift succeeds, the hyperlink of the field points to the calculated link.

      • If Forklift does not succeed, the field remains empty.
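The provider you created in the console corresponds to a Provider custom resource backed by a Secret, which you can also create from the CLI. The following is a minimal sketch, assuming the forklift.konveyor.io/v1beta1 Provider API and the oVirt secret keys (user, password, cacert, insecureSkipVerify) used by recent Forklift releases; all angle-bracket values are placeholders:

```yaml
# Sketch only: field names assume the forklift.konveyor.io/v1beta1 API.
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  labels:
    createdForProviderType: ovirt
type: Opaque
stringData:
  user: <user>
  password: <password>
  insecureSkipVerify: "false"
  cacert: |
    <engine_ca_certificate>
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: ovirt
  url: https://rhv-host-example.com/ovirt-engine/api
  secret:
    name: <secret>
    namespace: <namespace>
```

You can apply both resources with kubectl apply -f, in the same way as the NetworkMap and StorageMap examples elsewhere in this document.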

Adding a KubeVirt destination provider

You can use a KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster or from a remote cluster to the cluster that Forklift is deployed on.

Procedure
  1. Access the Create KubeVirt provider interface by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

      3. Click KubeVirt.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click KubeVirt.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click KubeVirt when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider

    • URL: URL of the endpoint of the API server

    • Service account bearer token: Token for a service account with cluster-admin privileges

      If both URL and Service account bearer token are left blank, the local OKD cluster is used.

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      After you confirm it, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.
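A KubeVirt destination provider can likewise be declared as a Provider custom resource and a Secret. The following is a minimal sketch, assuming the forklift.konveyor.io/v1beta1 Provider API with type openshift and the url and token secret keys used by recent Forklift releases; all angle-bracket values are placeholders, and omitting url and token targets the local cluster, as described above:

```yaml
# Sketch only: field names assume the forklift.konveyor.io/v1beta1 API.
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  labels:
    createdForProviderType: openshift
type: Opaque
stringData:
  url: <api_server_url>
  token: <service_account_bearer_token>
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <destination_provider>
  namespace: <namespace>
spec:
  type: openshift
  url: <api_server_url>
  secret:
    name: <secret>
    namespace: <namespace>
```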

Selecting a migration network for a KubeVirt provider

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.

In Forklift version 2.9 and earlier, Forklift used the pod network as the default network.

In version 2.10.0 and later, Forklift detects whether you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN to be the migration’s namespace, you do not need to select a new default network when you create your migration plan.

Forklift supports using UDNs for all providers except KubeVirt.

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Providers.

  2. Click the KubeVirt provider whose migration network you want to change.

    The Provider details page opens.

  3. Click the Networks tab.

  4. Click Set default transfer network.

  5. Select a default transfer network from the list and click Save.

  6. Configure a gateway in the network used for Forklift migrations by completing the following steps:

    1. In the OKD web console, click Networking > NetworkAttachmentDefinitions.

    2. Select the appropriate default transfer network NAD.

    3. Click the YAML tab.

    4. Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: localnet-network
        namespace: mtv-test
        annotations:
          forklift.konveyor.io/route: <IP address>
      • The NetworkAttachmentDefinition must configure an IP address for the interface, either through the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the gateway set in the annotation.

    5. Click Save.
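In recent Forklift releases, the default transfer network selected in the steps above is recorded as an annotation on the Provider resource, so it can also be set declaratively. The annotation key below follows Forklift, but the exact value format is an assumption to verify against your version:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <destination_provider>
  namespace: openshift-mtv
  annotations:
    # Name of the NetworkAttachmentDefinition to use as the default transfer
    # network; verify whether your version expects <name> or <namespace>/<name>.
    forklift.konveyor.io/defaultTransferNetwork: <network_attachment_definition>
spec:
  type: openshift
```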

Creating an oVirt migration plan by using the MTV wizard

You can migrate oVirt virtual machines (VMs) by using the Forklift plan creation wizard.

The wizard is designed to lead you step-by-step in creating a migration plan.

Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.

Excluding these virtual machines prevents concurrent disk access to the storage that the guest points to.

A plan cannot contain more than 500 VMs or 500 disks.

When you click Create plan on the Review and create page of the wizard, Forklift validates your plan. If everything is OK, the Plan details page for your plan opens. This page contains settings that do not appear in the wizard, but are important. Be sure to read and follow the instructions for this page carefully, even though it is outside the plan creation wizard. The page can be opened later, any time before you run the plan, so you can come back to it if needed.

Prerequisites
  • Have an oVirt source provider and a KubeVirt destination provider. For more information, see Adding an oVirt source provider or Adding a KubeVirt destination provider.

  • If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.

  • If you are using a user-defined network (UDN), note the name of the namespace as defined in KubeVirt.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Migration plans.

  2. Click Create plan.

    The Create migration plan wizard opens.

  3. On the General page, specify the following fields:

    • Plan name: Enter a name.

    • Plan project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Target project: Click the list and do one of the following:

      1. Select an existing project from the list.

      2. Create a new project by clicking Create project and doing the following:

        1. Enter the Name of the project. A project name must consist of lowercase alphanumeric characters or -. A project name must start and end with alphanumeric characters. For example, my-name or 123-abc.

        2. Optional: Enter a Display name for the project.

        3. Optional: Enter a Description of the project.

        4. Click Create Project.

  4. Click Next.

  5. On the Virtual machines page, select the virtual machines you want to migrate and click Next.

  6. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.

  7. On the Network map page, choose one of the following options:

    • Use an existing network map: Select an existing network map from the list.

      These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.

      You can create an ownerless network map, which you and others can use for additional migration plans, in the Network maps section of the UI.

      • Source network: Select from the list.

      • Target network: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Network map name: Enter a name or let Forklift automatically generate a name for the network map.

  8. Click Next.

  9. On the Storage map page, choose one of the following options:

    • Use an existing storage map: Select an existing storage map from the list.

      These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.

      You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage maps section of the UI.

      • Source storage: Select from the list.

      • Target storage: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Storage map name: Enter a name or let Forklift automatically generate a name for the storage map.

  10. Click Next.

  11. On the Migration type page, choose one of the following:

    • Cold migration (default)

    • Warm migration

  12. Click Next.

  13. On the Other settings (optional) page, you have the option to change the Transfer network of your migration plan.

    The transfer network is the network used to transfer the VMs to KubeVirt. The default value is the default transfer network of the provider.

    • Verify that the transfer network is in the selected target project.

    • To choose a different transfer network, select a different transfer network from the list.

    • Optional: To configure another OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.

      To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.

    • To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.

  14. Click Next.

  15. On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.

  16. To add a hook, select the appropriate Enable hook checkbox.

  17. Enter the Hook runner image.

  18. Enter the Ansible playbook of the hook in the window.

    You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.

  19. Click Next.

  20. On the Review and create page, review the information displayed.

  21. Edit any item by doing the following:

    1. Click its Edit step link.

      The wizard opens to the page where you defined the item.

    2. Edit the item.

    3. Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.

  22. When you finish reviewing the details of the plan, click Create plan. Forklift validates your plan.

    When your plan is validated, the Plan details page for your plan opens in the Details tab.

    The Plan settings section of the page includes settings that you specified in the Other settings (optional) page and some additional optional settings. The steps below refer to the additional optional settings, but all of the settings can be edited by clicking the Options menu kebab, making the change, and then clicking Save.

  23. Check the following item on the Plan settings section of the page:

    • Preserve CPU mode: Generally, the CPU model (type) for oVirt VMs is set at the cluster level. However, the CPU model can be set at the VM level, which is called a custom CPU model.

      By default, MTV sets the CPU model on the destination cluster as follows: MTV preserves custom CPU settings for VMs that have them. For VMs without custom CPU settings, MTV does not set the CPU model. Instead, the CPU model is later set by OpenShift Virtualization.

      To preserve the cluster-level CPU model of your oVirt VMs, do the following:

      1. Click the Options menu kebab.

      2. Toggle the Whether to preserve the CPU model switch.

      3. Click Save.

        Forklift validates any changes you made on this page.

  24. In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:

    • Migration history: Details about successful and unsuccessful attempts to run the plan

    • Conditions: Any changes that need to be made to the plan so that it can run successfully

  25. When you have fixed all conditions listed, you can run your plan from the Plans page.

    The Plan details page also includes five additional tabs, which are described in the table that follows:

    Table 26. Tabs of the Plan details page

    • YAML: Editable plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs

    • Virtual Machines: The VMs the plan migrates

    • Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs

    • Mappings: Editable specification of the network and storage maps used by your plan

    • Hooks: Updatable specification of the hooks used by your plan, if any
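The plan that the wizard builds corresponds to a Plan custom resource, which you can also create directly. The following is a minimal sketch for an oVirt cold migration, assuming the forklift.konveyor.io/v1beta1 Plan API as used by recent Forklift releases; the preserveClusterCpuModel field mirrors the Preserve CPU mode setting described above, and all angle-bracket values are placeholders:

```yaml
# Sketch only: field names assume the forklift.konveyor.io/v1beta1 API.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan>
  namespace: <namespace>
spec:
  warm: false                      # true selects warm migration
  preserveClusterCpuModel: true    # optional: preserve the cluster-level CPU model
  targetNamespace: <target_project>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map:
    network:
      name: <network_map>
      namespace: <namespace>
    storage:
      name: <storage_map>
      namespace: <namespace>
  vms:
    - id: <source_vm_id>
```

A plan created this way appears in the Migration plans page of the UI alongside plans created with the wizard.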

Planning a migration of virtual machines from OpenStack

Create an OpenStack migration plan by setting up network maps, configuring source and destination providers with migration networks, and defining the migration plan in the Forklift UI.

Creating ownerless network maps in the Forklift UI

You can create ownerless network maps by using the Forklift UI to map source networks to KubeVirt networks.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Network maps.

  2. Click Create network map to open the Create network map page.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    $ cat << EOF | kubectl apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod
          source:
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    • type: Allowed values are pod and multus.

    • source: You can use either the id or the name parameter to specify the source network. For id, specify the OpenStack network UUID.

    • <network_attachment_definition>: Specify a network attachment definition for each additional KubeVirt network.

    • <network_attachment_definition_namespace> is required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of network maps.

Creating ownerless storage maps using the form page of the Forklift UI

You can create ownerless storage maps by using the form page of the Forklift UI.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with form.

  3. Specify the following:

    • Map name: Name of the storage map.

    • Project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Source storage: Select from the list.

    • Target storage: Select from the list.

  4. Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.

  5. Click Create.

    Your map appears in the list of storage maps.

Creating ownerless storage maps using YAML or JSON definitions in the Forklift UI

You can create ownerless storage maps by using YAML or JSON definitions in the Forklift UI.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with YAML.

    The Create StorageMap page opens.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    $ cat << EOF | kubectl apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            id: <source_volume_type>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    • accessMode: Allowed values are ReadWriteOnce and ReadWriteMany.

    • id: Specify the OpenStack volume_type UUID, for example, f2737930-b567-451a-9ceb-2887f6207009.

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of storage maps.

Adding an OpenStack source provider

You can add an OpenStack source provider by using the OKD web console.

When you migrate an image-based VM from an OpenStack provider, a snapshot is created for the image that is attached to the source VM, and the data from the snapshot is copied over to the target VM. As a result, the target VM has the same state as the source VM at the time the snapshot was created.

Procedure
  1. Access the Create provider page for OpenStack by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

      3. Click OpenStack.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click OpenStack.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenStack when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider.

    • URL: URL of the OpenStack Identity (Keystone) endpoint. For example, http://controller:5000/v3.

    • Authentication type: Choose one of the following methods of authentication and supply the information related to your choice. For example, if you choose Application credential ID as the authentication type, the Application credential ID and the Application credential secret fields become active, and you need to supply the ID and the secret.

      • Application credential ID

        • Application credential ID: OpenStack application credential ID

        • Application credential secret: OpenStack application credential Secret

      • Application credential name

        • Application credential name: OpenStack application credential name

        • Application credential secret: OpenStack application credential Secret

        • Username: OpenStack username

        • Domain: OpenStack domain name

      • Token with user ID

        • Token: OpenStack token

        • User ID: OpenStack user ID

        • Project ID: OpenStack project ID

      • Token with user Name

        • Token: OpenStack token

        • Username: OpenStack username

        • Project: OpenStack project

        • Domain name: OpenStack domain name

      • Password

        • Username: OpenStack username

        • Password: OpenStack password

        • Project: OpenStack project

        • Domain: OpenStack domain name

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      After you confirm it, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

  6. Optional: Add access to the UI of the provider:

    1. On the Providers page, click the provider.

      The Provider details page opens.

    2. Click the Edit icon under External UI web link.

    3. Enter the link and click Save.

      If you do not enter a link, Forklift attempts to calculate the correct link.

      • If Forklift succeeds, the hyperlink of the field points to the calculated link.

      • If Forklift does not succeed, the field remains empty.
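An OpenStack provider can also be declared as a Provider custom resource backed by a Secret. The following is a minimal sketch using password authentication, assuming the forklift.konveyor.io/v1beta1 Provider API and the OpenStack secret keys (username, password, projectName, domainName, regionName) used by recent Forklift releases; all angle-bracket values are placeholders:

```yaml
# Sketch only: secret keys assume password authentication.
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  username: <username>
  password: <password>
  projectName: <project>
  domainName: <domain>
  regionName: <region>
  insecureSkipVerify: "false"
  cacert: |
    <ca_certificate>
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: openstack
  url: http://controller:5000/v3
  secret:
    name: <secret>
    namespace: <namespace>
```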

Adding a KubeVirt destination provider

You can use a KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster or from a remote cluster to the cluster that Forklift is deployed on.

Procedure
  1. Access the Create KubeVirt provider interface by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

      3. Click KubeVirt.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click KubeVirt.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click KubeVirt when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects, otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider

    • URL: URL of the endpoint of the API server

    • Service account bearer token: Token for a service account with cluster-admin privileges

      If both URL and Service account bearer token are left blank, the local OKD cluster is used.

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.

      After you confirm it, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

Selecting a migration network for a KubeVirt provider

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.

In Forklift version 2.9 and earlier, Forklift used the pod network as the default network.

In version 2.10.0 and later, Forklift detects whether you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN to be the migration’s namespace, you do not need to select a new default network when you create your migration plan.

Forklift supports using UDNs for all providers except KubeVirt.

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Providers.

  2. Click the KubeVirt provider whose migration network you want to change.

    The Provider details page opens.

  3. Click the Networks tab.

  4. Click Set default transfer network.

  5. Select a default transfer network from the list and click Save.

  6. Configure a gateway in the network used for Forklift migrations by completing the following steps:

    1. In the OKD web console, click Networking > NetworkAttachmentDefinitions.

    2. Select the appropriate default transfer network NAD.

    3. Click the YAML tab.

    4. Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: localnet-network
        namespace: mtv-test
        annotations:
          forklift.konveyor.io/route: <IP address>
      • The NetworkAttachmentDefinition must also configure an IP address for the interface, either through the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.

    5. Click Save.
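The annotation added in the steps above can also be generated or applied from the command line. The following is a minimal shell sketch; the gateway IP, NAD name, and namespace are example values taken from the YAML above, not requirements:

```shell
# Render the annotations stanza that the procedure adds to the
# transfer-network NAD. The gateway IP is an example value.
render_route_annotation() {
  cat <<EOF
metadata:
  annotations:
    forklift.konveyor.io/route: $1
EOF
}
render_route_annotation "192.168.100.1"
```

On a live cluster, the same result can be achieved directly with `kubectl annotate --overwrite net-attach-def localnet-network -n mtv-test forklift.konveyor.io/route=192.168.100.1`.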

Creating an OpenStack migration plan by using the MTV wizard

You can migrate OpenStack virtual machines (VMs) by using the Forklift plan creation wizard.

The wizard is designed to lead you step-by-step in creating a migration plan.

Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.

Excluding these VMs prevents concurrent disk access to the storage that the guest points to.

A plan cannot contain more than 500 VMs or 500 disks.

When you click Create plan on the Review and create page of the wizard, Forklift validates your plan. If everything is OK, the Plan details page for your plan opens. This page contains settings that do not appear in the wizard, but are important. Be sure to read and follow the instructions for this page carefully, even though it is outside the plan creation wizard. The page can be opened later, any time before you run the plan, so you can come back to it if needed.

Prerequisites
  • Have an OpenStack source provider and a KubeVirt destination provider. For more information, see Adding an OpenStack source provider or Adding a KubeVirt destination provider.

  • If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.

  • If you are using a user-defined network (UDN), note the name of its namespace as defined in KubeVirt.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Migration plans.

  2. Click Create plan.

    The Create migration plan wizard opens.

  3. On the General page, specify the following fields:

    • Plan name: Enter a name.

    • Plan project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Target project: Click the list and do one of the following:

      1. Select an existing project from the list.

      2. Create a new project by clicking Create project and doing the following:

        1. Enter the Name of the project. A project name must consist of lowercase alphanumeric characters or -. A project name must start and end with alphanumeric characters. For example, my-name or 123-abc.

        2. Optional: Enter a Display name for the project.

        3. Optional: Enter a Description of the project.

        4. Click Create project.

  4. Click Next.
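The project-name rules in the previous step follow the standard Kubernetes (RFC 1123) label pattern. The following shell sketch checks a candidate name; the helper function is ours for illustration, not part of the product:

```shell
# Return success if the name is lowercase alphanumerics and '-',
# starting and ending with an alphanumeric character.
is_valid_project_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}
is_valid_project_name "my-name" && echo "my-name: valid"
is_valid_project_name "My_Name" || echo "My_Name: invalid"
```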

  5. On the Virtual machines page, select the virtual machines you want to migrate and click Next.

  6. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
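The subnet check in the previous step can be done by hand; the following is a minimal bash sketch with example addresses (none of the values come from the product):

```shell
# Decide whether an IP address (the provider endpoint) falls inside a
# CIDR subnet (the UDN subnet).
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}
in_cidr() {
  local ip=$1 net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}
# Example: a provider at 10.0.5.4 is inside a UDN subnet of 10.0.0.0/16,
# so the migration would fail; 192.168.1.10 is outside and is fine.
in_cidr 10.0.5.4 10.0.0.0/16 && echo "inside the UDN subnet: migration fails"
in_cidr 192.168.1.10 10.0.0.0/16 || echo "outside the UDN subnet: OK"
```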

  7. On the Network map page, choose one of the following options:

    • Use an existing network map: Select an existing network map from the list.

      These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.

      You can create an ownerless network map, which you and others can use for additional migration plans, in the Network maps section of the UI.

      • Source network: Select from the list.

      • Target network: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Network map name: Enter a name or let Forklift automatically generate a name for the network map.

  8. Click Next.

  9. On the Storage map page, choose one of the following options:

    • Use an existing storage map: Select an existing storage map from the list.

      These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.

      You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage maps section of the UI.

      • Source storage: Select from the list.

      • Target storage: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Storage map name: Enter a name or let Forklift automatically generate a name for the storage map.

  10. Click Next.

  11. On the Other settings (optional) page, you have the option to change the Transfer network of your migration plan.

    The transfer network is the network used to transfer the VMs to KubeVirt. By default, the plan uses the default transfer network of the provider.

    • Verify that the transfer network is in the selected target project.

    • To choose a different transfer network, select a different transfer network from the list.

    • Optional: To configure another OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.

      To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.

    • To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.

  12. Click Next.

  13. On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.

  14. To add a hook, select the appropriate Enable hook checkbox.

  15. Enter the Hook runner image.

  16. Enter the Ansible playbook of the hook in the window.

    You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.

  17. Click Next.

  18. On the Review and create page, review the information displayed.

  19. Edit any item by doing the following:

    1. Click its Edit step link.

      The wizard opens to the page where you defined the item.

    2. Edit the item.

    3. Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.

  20. When you finish reviewing the details of the plan, click Create plan. Forklift validates your plan.

    When your plan is validated, the Plan details page for your plan opens in the Details tab.

  21. In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:

    • Migration history: Details about successful and unsuccessful attempts to run the plan

    • Conditions: Any changes that need to be made to the plan so that it can run successfully

  22. When you have fixed all conditions listed, you can run your plan from the Plans page.

    The Plan details page also includes five additional tabs, which are described in the table that follows:

    Table 27. Tabs of the Plan details page

    • YAML: Editable plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs

    • Virtual Machines: The VMs the plan migrates

    • Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs

    • Mappings: Editable specification of the network and storage maps used by your plan

    • Hooks: Updatable specification of the hooks used by your plan, if any

Planning a migration of virtual machines from OVA

Create an OVA migration plan by setting up network maps, configuring source and destination providers with migration networks, and defining the migration plan in the Forklift UI.

Creating ownerless network maps in the Forklift UI

You can create ownerless network maps by using the Forklift UI to map source networks to KubeVirt networks.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Network maps.

  2. Click Create network map to open the Create network map page.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    $ cat << EOF | kubectl apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod
          source:
            id: <source_network_id>
        - destination:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
          source:
            id: <source_network_id>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    • type: Allowed values are pod and multus.

    • source: Specify the OVA network Universal Unique ID (UUID).

    • <network_attachment_definition>: Specify a network attachment definition for each additional KubeVirt network.

    • <network_attachment_definition_namespace>: Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.
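Because the source must be the OVA network UUID mentioned above, it can help to sanity-check the value before applying the map. The helper below and the sample value are ours for illustration:

```shell
# Return success if the argument matches the canonical 8-4-4-4-12 UUID shape.
is_uuid() {
  printf '%s' "$1" | grep -Eiq '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
}
is_uuid "123e4567-e89b-12d3-a456-426614174000" && echo "looks like a UUID"
```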

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of network maps.

Creating ownerless storage maps using the form page of the Forklift UI

You can create ownerless storage maps by using the form page of the Forklift UI.

Prerequisites
Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with form.

  3. Specify the following:

    • Map name: Name of the storage map.

    • Project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Source storage: Select from the list.

    • Target storage: Select from the list.

  4. Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.

  5. Click Create.

    Your map appears in the list of storage maps.

Creating ownerless storage maps using YAML or JSON definitions in the Forklift UI

You can create ownerless storage maps by using YAML or JSON definitions in the Forklift UI.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with YAML.

    The Create StorageMap page opens.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    $ cat << EOF | kubectl apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            name: Dummy storage for source provider <provider_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    • accessMode: Allowed values are ReadWriteOnce and ReadWriteMany.

    • name: For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing <provider_name> with the actual name of the provider.
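Because the exact phrase matters, generating it can be safer than retyping it. A trivial sketch; the provider name shown is an example:

```shell
# Compose the literal source-storage name the StorageMap expects for OVA.
dummy_storage_name() {
  echo "Dummy storage for source provider $1"
}
dummy_storage_name "my-ova-provider"
```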

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of storage maps.

Adding an Open Virtual Appliance (OVA) source provider

You can add Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the OKD web console.

Procedure
  1. Access the Create provider page for Open Virtual Appliance by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.

      3. Click Open Virtual Appliance.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click Open Virtual Appliance.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click Open Virtual Appliance when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider

    • URL: URL of the NFS file share that serves the OVA

      After creating the provider, you can optionally configure OVA file upload by web browser to upload OVA files directly to the NFS share. For more information, see Configuring OVA file upload by web browser.

  3. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

    An error message might appear stating that an error has occurred. You can ignore this message.
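The wizard fields above correspond to a Provider custom resource. The following is a minimal sketch, assuming a default openshift-mtv namespace and an NFS URL in host:/path form; all values are placeholders:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: my-ova-provider        # Provider resource name (example)
  namespace: openshift-mtv     # assumed default namespace
spec:
  type: ova
  url: 10.10.0.10:/ova         # NFS file share that serves the OVA (example)
```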

Adding a KubeVirt destination provider

You can use a KubeVirt provider as both a source provider and a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.

Prerequisites
Procedure
  1. Access the Create KubeVirt provider interface by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.

      3. Click KubeVirt.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click KubeVirt.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click KubeVirt when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider

    • URL: URL of the endpoint of the API server

    • Service account bearer token: Token for a service account with cluster-admin privileges

      If both URL and Service account bearer token are left blank, the local OKD cluster is used.
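For reference, the equivalent Provider CR for a remote KubeVirt cluster might look like the following sketch. The type openshift is Forklift’s provider type for KubeVirt/OKD clusters; all names and the URL are placeholders:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: kubevirt-target
  namespace: openshift-mtv
spec:
  type: openshift                    # Forklift provider type for KubeVirt/OKD
  url: https://api.target-cluster.example.com:6443
  secret:
    name: kubevirt-target-secret     # holds the service account bearer token
    namespace: openshift-mtv
```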

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.

      After you confirm it, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

Selecting a migration network for a KubeVirt provider

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.

In Forklift version 2.9 and earlier, Forklift used the pod network as the default network.

In version 2.10.0 and later, Forklift detects whether you have selected a user-defined network (UDN) as your default network. Therefore, if the UDN is set as the default network of the migration’s namespace, you do not need to select a new default network when you create your migration plan.

Forklift supports using UDNs for all providers except KubeVirt.

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Providers.

  2. Click the KubeVirt provider whose migration network you want to change.

    The Provider details page opens.

  3. Click the Networks tab.

  4. Click Set default transfer network.

  5. Select a default transfer network from the list and click Save.

  6. Configure a gateway in the network used for Forklift migrations by completing the following steps:

    1. In the OKD web console, click Networking > NetworkAttachmentDefinitions.

    2. Select the appropriate default transfer network NAD.

    3. Click the YAML tab.

    4. Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: localnet-network
        namespace: mtv-test
        annotations:
          forklift.konveyor.io/route: <IP address>
      • The NetworkAttachmentDefinition must also configure an IP address for the interface, either through the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.

    5. Click Save.

Creating an Open Virtual Appliance (OVA) migration plan by using the MTV wizard

You can migrate virtual machines (VMs) from Open Virtual Appliance (OVA) files that were created by VMware vSphere by using the Forklift plan creation wizard.

The wizard is designed to lead you step-by-step in creating a migration plan.

Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.

Excluding these VMs prevents concurrent disk access to the storage that the guest points to.

A plan cannot contain more than 500 VMs or 500 disks.

When you click Create plan on the Review and create page of the wizard, Forklift validates your plan. If everything is OK, the Plan details page for your plan opens. This page contains settings that do not appear in the wizard, but are important. Be sure to read and follow the instructions for this page carefully, even though it is outside the plan creation wizard. The page can be opened later, any time before you run the plan, so you can come back to it if needed.

Prerequisites
  • Have an OVA source provider and a KubeVirt destination provider. For more information, see Adding an Open Virtual Appliance (OVA) source provider or Adding a KubeVirt destination provider.

  • If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.

  • If you are using a user-defined network (UDN), note the name of its namespace as defined in KubeVirt.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Migration plans.

  2. Click Create plan.

    The Create migration plan wizard opens.

  3. On the General page, specify the following fields:

    • Plan name: Enter a name.

    • Plan project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Target project: Click the list and do one of the following:

      1. Select an existing project from the list.

      2. Create a new project by clicking Create project and doing the following:

        1. Enter the Name of the project. A project name must consist of lowercase alphanumeric characters or -. A project name must start and end with alphanumeric characters. For example, my-name or 123-abc.

        2. Optional: Enter a Display name for the project.

        3. Optional: Enter a Description of the project.

        4. Click Create project.

  4. Click Next.

  5. On the Virtual machines page, select the virtual machines you want to migrate and click Next.

  6. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.

  7. On the Network map page, choose one of the following options:

    • Use an existing network map: Select an existing network map from the list.

      These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.

      You can create an ownerless network map, which you and others can use for additional migration plans, in the Network maps section of the UI.

      • Source network: Select from the list.

      • Target network: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Network map name: Enter a name or let Forklift automatically generate a name for the network map.

  8. Click Next.

  9. On the Storage map page, choose one of the following options:

    • Use an existing storage map: Select an existing storage map from the list.

      These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.

      You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage maps section of the UI.

      • Source storage: Select from the list.

      • Target storage: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Storage map name: Enter a name or let Forklift automatically generate a name for the storage map.

  10. Click Next.

  11. On the Other settings (optional) page, you have the option to change the Transfer network of your migration plan.

    The transfer network is the network used to transfer the VMs to KubeVirt. By default, the plan uses the default transfer network of the provider.

    • Verify that the transfer network is in the selected target project.

    • To choose a different transfer network, select a different transfer network from the list.

    • Optional: To configure another OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.

      To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.

    • To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.

  12. Click Next.

  13. On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.

  14. To add a hook, select the appropriate Enable hook checkbox.

  15. Enter the Hook runner image.

  16. Enter the Ansible playbook of the hook in the window.

    You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.

  17. Click Next.

  18. On the Review and create page, review the information displayed.

  19. Edit any item by doing the following:

    1. Click its Edit step link.

      The wizard opens to the page where you defined the item.

    2. Edit the item.

    3. Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.

  20. When you finish reviewing the details of the plan, click Create plan. Forklift validates your plan.

    When your plan is validated, the Plan details page for your plan opens in the Details tab.

  21. In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:

    • Migration history: Details about successful and unsuccessful attempts to run the plan

    • Conditions: Any changes that need to be made to the plan so that it can run successfully

  22. When you have fixed all conditions listed, you can run your plan from the Plans page.

    The Plan details page also includes five additional tabs, which are described in the table that follows:

    Table 28. Tabs of the Plan details page

    • YAML: Editable plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs

    • Virtual Machines: The VMs the plan migrates

    • Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs

    • Mappings: Editable specification of the network and storage maps used by your plan

    • Hooks: Updatable specification of the hooks used by your plan, if any

Configuring OVA file upload by web browser

You can configure Open Virtual Appliance (OVA) file upload by web browser to upload an OVA file directly to an NFS share. To configure OVA file upload, you first enable OVA appliance management in the ForkliftController custom resource (CR) and then enable OVA upload for each OVA provider. When you enable OVA upload for an OVA provider, the Upload local OVA files option populates on the provider’s Details page in the MTV UI.

Prerequisites
  • You have an NFS share to point the OVA provider at.

  • You have enough storage space in your NFS share.

  • The NFS share is writable by the QEMU group (GID 107).

  • You have a valid .ova file to upload.

  • Your .ova file has a unique file name.
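A rough pre-flight sketch for the prerequisites above; the OVA_SHARE variable is an assumption, and a temporary directory stands in for the NFS mount here:

```shell
# Check group-writability and free space on the share before uploading.
share="${OVA_SHARE:-$(mktemp -d)}"   # point OVA_SHARE at your NFS mount
chmod g+w "$share"                   # the QEMU group (GID 107) needs write access
stat -c 'perms=%A gid=%g' "$share"
df -P "$share" | awk 'NR==2 {print "free_kb=" $4}'
```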

Procedure
  1. In the Red Hat OpenShift web console, click Operators > Installed Operators.

  2. Click Migration Toolkit for Virtualization Operator.

    The Operator Details page opens in the Details tab.

  3. Click the ForkliftController tab, and open the forklift-controller resource.

  4. Click the forklift-controller YAML tab, and add the feature_ova_appliance_management field to the spec section of the forklift-controller custom resource (CR):

    Example:

    spec:
      ...
      feature_ova_appliance_management: 'true'

    Wait for the operator to redeploy the controller after updating the forklift-controller CR.
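Putting step 4 together, the CR might look like the following sketch. The resource name forklift-controller and the openshift-mtv namespace are the usual defaults, but are assumptions here:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller     # assumed default name
  namespace: openshift-mtv      # assumed default namespace
spec:
  feature_ova_appliance_management: 'true'
```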

  5. In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.

  6. Click the provider to open the Details page. Click the provider’s YAML tab.

    For information about creating a provider, see Adding an Open Virtual Appliance (OVA) source provider.

  7. Scroll down the provider’s YAML file, and add the applianceManagement field to the spec section. Set applianceManagement to 'true':

    Example:

    spec:
      secret:
        ...
      settings:
        applianceManagement: 'true'
      type: ova
      ...
    • A temporary ConnectionTestFailed error message displays while the update is processing. You can ignore the error message.

  8. Click the provider’s Details tab, and scroll down to the Conditions section. Verify that ApplianceManagementEnabled shows as True in the list of conditions.

  9. In the Upload local OVA files section, click Browse to find a valid .ova file.

  10. Click Upload.

    A success message confirms the file upload. After several seconds, the number of virtual machines increases under the Provider inventory section.

Planning a migration of virtual machines from KubeVirt

Create a KubeVirt migration plan by setting up network maps, configuring source and destination providers with migration networks, and defining the migration plan in the Forklift UI.

Creating ownerless network maps in the Forklift UI

You can create ownerless network maps by using the Forklift UI to map source networks to KubeVirt networks.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Network maps.

  2. Click Create network map to open the Create network map page.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod
          source:
            name: <network_name>
            type: pod
        - destination:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
          source:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    • type: Allowed values are pod and multus.

    • <network_attachment_definition>: Specify a network attachment definition for each additional KubeVirt network. Specify the namespace either by using the namespace property or with a name built as follows: <network_namespace>/<network_name>.

    • <network_attachment_definition_namespace>: Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.
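
As an alternative to the namespace property, the same multus mapping can reference the network attachment definition by a namespaced name, as the previous note describes. The names in this sketch are placeholders:

```yaml
map:
  - destination:
      name: my-namespace/my-nad   # equivalent to name: my-nad plus namespace: my-namespace
      type: multus
    source:
      name: my-namespace/my-nad
      type: multus
```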

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of network maps.

Creating ownerless storage maps using the form page of the Forklift UI

You can create ownerless storage maps by using the form page of the Forklift UI.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with form.

  3. Specify the following:

    • Map name: Name of the storage map.

    • Project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Source storage: Select from the list.

    • Target storage: Select from the list.

  4. Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.

  5. Click Create.

    Your map appears in the list of storage maps.

Creating ownerless storage maps using YAML or JSON definitions in the Forklift UI

You can create ownerless storage maps by using YAML or JSON definitions in the Forklift UI.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Storage maps.

  2. Click Create storage map > Create with YAML.

    The Create StorageMap page opens.

  3. Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.

  4. If you enter YAML definitions, use the following:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode>
          source:
            name: <storage_class>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    • accessMode: Allowed values are ReadWriteOnce and ReadWriteMany.
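
For example, a filled-in map section that sends two source storage classes to a single target storage class might look like the following sketch. The storage class names are placeholders:

```yaml
map:
  - destination:
      storageClass: target-rwx-class
      accessMode: ReadWriteMany
    source:
      name: source-class-a
  - destination:
      storageClass: target-rwx-class
      accessMode: ReadWriteMany
    source:
      name: source-class-b
```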

  5. Optional: To download your input, click Download.

  6. Click Create.

    Your map appears in the list of storage maps.

Adding a Red Hat KubeVirt source provider

You can use a Red Hat KubeVirt provider as both source and destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.

The OKD cluster version of the source provider must be 4.16 or later.
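
Behind the UI, this procedure creates a Provider custom resource. A minimal sketch of such a CR for a remote KubeVirt source is shown below for orientation; the names, URL, and secret reference are placeholders, and the type value openshift is an assumption about how Forklift represents KubeVirt providers, so verify it against the YAML of a provider created in your installation:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: kubevirt-source
  namespace: openshift-mtv
spec:
  type: openshift            # assumed type value for KubeVirt providers
  url: https://api.source-cluster.example.com:6443
  secret:
    name: kubevirt-source-secret
    namespace: openshift-mtv
```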

Procedure
  1. Access the Create KubeVirt provider interface by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects that you are authorized to work with.

      3. Click KubeVirt.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click KubeVirt.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click KubeVirt when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects that you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the source provider

    • URL: URL of the endpoint of the API server

    • Service account bearer token: Token for a service account with cluster-admin privileges

      If both URL and Service account bearer token are left blank, the local OKD cluster is used.
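
One way to obtain such a bearer token is to create a dedicated service account on the source cluster and bind it to the cluster-admin cluster role. The names and namespace in this sketch are placeholders; after applying the manifest, you can generate a token with, for example, kubectl create token forklift-migration -n openshift-mtv:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: forklift-migration
  namespace: openshift-mtv
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: forklift-migration-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: forklift-migration
    namespace: openshift-mtv
```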

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.

      After you confirm the certificate, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

  6. Optional: Add access to the UI of the provider:

    1. On the Providers page, click the provider.

      The Provider details page opens.

    2. Click the Edit icon under External UI web link.

    3. Enter the link and click Save.

      If you do not enter a link, Forklift attempts to calculate the correct link.

      • If Forklift succeeds, the hyperlink of the field points to the calculated link.

      • If Forklift does not succeed, the field remains empty.

Adding a KubeVirt destination provider

Use a Red Hat KubeVirt provider as both source and destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster or from a remote cluster to the cluster that Forklift is deployed on.

Procedure
  1. Access the Create KubeVirt provider interface by doing one of the following:

    1. In the OKD web console, click Migration for Virtualization > Providers.

      1. Click Create Provider.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects that you are authorized to work with.

      3. Click KubeVirt.

    2. If you have Administrator privileges, in the OKD web console, click Migration for Virtualization > Overview.

      1. In the Welcome pane, click KubeVirt.

        If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click KubeVirt when the Welcome pane opens.

      2. Select a Project from the list. The default project shown depends on the active project of Forklift.

        If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.

        If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects that you are authorized to work with.

  2. Specify the following fields:

    • Provider resource name: Name of the destination provider

    • URL: URL of the endpoint of the API server

    • Service account bearer token: Token for a service account with cluster-admin privileges

      If both URL and Service account bearer token are left blank, the local OKD cluster is used.

  3. Choose one of the following options for validating CA certificates:

    • Use a custom CA certificate: Migrate after validating a custom CA certificate.

    • Use the system CA certificate: Migrate after validating the system CA certificate.

    • Skip certificate validation: Migrate without validating a CA certificate.

      1. To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.

      2. To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.

      3. To skip certificate validation, toggle the Skip certificate validation switch to the right.

  4. Optional: Ask Forklift to fetch a custom CA certificate from the provider’s API endpoint URL.

    1. Click Fetch certificate from URL. The Verify certificate window opens.

    2. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.

      After you confirm the certificate, the CA certificate is used to validate subsequent communication with the API endpoint.

  5. Click Create provider to add and save the provider.

    The provider appears in the list of providers.

Selecting a migration network for a KubeVirt provider

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.

In Forklift version 2.9 and earlier, Forklift used the pod network as the default network.

In version 2.10.0 and later, Forklift detects whether you have selected a user-defined network (UDN) as your default network. Therefore, if you set a UDN as the default network of the migration’s namespace, you do not need to select a new default network when you create your migration plan.

Forklift supports using UDNs for all providers except KubeVirt.

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Providers.

  2. Click the KubeVirt provider whose migration network you want to change.

    The Provider details page opens.

  3. Click the Networks tab.

  4. Click Set default transfer network.

  5. Select a default transfer network from the list and click Save.

  6. Configure a gateway in the network used for Forklift migrations by completing the following steps:

    1. In the OKD web console, click Networking > NetworkAttachmentDefinitions.

    2. Select the appropriate default transfer network NAD.

    3. Click the YAML tab.

    4. Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: localnet-network
        namespace: mtv-test
        annotations:
          forklift.konveyor.io/route: <IP address>
      • The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.

    5. Click Save.
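
Putting the steps above together, a NAD that both carries the route annotation and assigns a static IP address to the interface might look like the following sketch. It uses the macvlan CNI plugin with static IPAM as one example of configuring the IP address; the interface name, addresses, and gateway are placeholders for your environment:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: 192.168.10.1   # gateway used for Forklift migrations
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "static",
        "addresses": [
          { "address": "192.168.10.5/24" }
        ]
      }
    }
```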

Creating a KubeVirt migration plan by using the MTV wizard

You can migrate KubeVirt virtual machines (VMs) by using the Forklift plan creation wizard.

The wizard is designed to lead you step-by-step in creating a migration plan.

Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.

This prevents concurrent disk access to the storage the guest points to.

A plan cannot contain more than 500 VMs or 500 disks.

When you click Create plan on the Review and create page of the wizard, Forklift validates your plan. If everything is OK, the Plan details page for your plan opens. This page contains settings that do not appear in the wizard, but are important. Be sure to read and follow the instructions for this page carefully, even though it is outside the plan creation wizard. The page can be opened later, any time before you run the plan, so you can come back to it if needed.
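
The wizard ultimately produces a Plan custom resource. A rough sketch of the shape of such a CR is shown below for orientation; all names are placeholders, and you can confirm the exact fields on the plan’s YAML tab after creation:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: kubevirt-plan
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: kubevirt-source
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    network:
      name: kubevirt-plan-network-map
      namespace: openshift-mtv
    storage:
      name: kubevirt-plan-storage-map
      namespace: openshift-mtv
  targetNamespace: my-target-project
  vms:
    - name: vm-example
```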

Prerequisites
  • You have a KubeVirt source provider and a KubeVirt destination provider. For more information, see Adding a Red Hat KubeVirt source provider or Adding a KubeVirt destination provider.

  • If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Migration plans.

  2. Click Create plan.

    The Create migration plan wizard opens.

  3. On the General page, specify the following fields:

    • Plan name: Enter a name.

    • Plan project: Select from the list.

    • Source provider: Select from the list.

    • Target provider: Select from the list.

    • Target project: Click the list and do one of the following:

      1. Select an existing project from the list.

      2. Create a new project by clicking Create project and doing the following:

        1. Enter the Name of the project. A project name must consist of lowercase alphanumeric characters or -. A project name must start and end with alphanumeric characters. For example, my-name or 123-abc.

        2. Optional: Enter a Display name for the project.

        3. Optional: Enter a Description of the project.

        4. Click Create Project.

  4. Click Next.

  5. On the Virtual machines page, select the virtual machines you want to migrate and click Next.

  6. On the Network map page, choose one of the following options:

    • Use an existing network map: Select an existing network map from the list.

      These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and target provider as the ones you want to use in your plan.

    • Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.

      You can create an ownerless network map, which you and others can use for additional migration plans, in the Network maps section of the UI.

      • Source network: Select from the list.

      • Target network: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Network map name: Enter a name or let Forklift automatically generate a name for the network map.

  7. Click Next.

  8. On the Storage map page, choose one of the following options:

    • Use an existing storage map: Select an existing storage map from the list.

      These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.

      You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage maps section of the UI.

      • Source storage: Select from the list.

      • Target storage: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Storage map name: Enter a name or let Forklift automatically generate a name for the storage map.

  9. Click Next.

  10. On the Other settings (optional) page, you have the option to change the Transfer network of your migration plan.

    The transfer network is the network used to transfer the VMs to KubeVirt. The initial value is the default transfer network of the provider.

    • Verify that the transfer network is in the selected target project.

    • To choose a different transfer network, select a different transfer network from the list.

    • Optional: To configure another OKD network in the OKD web console, click Networking > NetworkAttachmentDefinitions.

      To learn more about the different types of networks OKD supports, see Additional Networks in OpenShift Container Platform.

    • To adjust the maximum transmission unit (MTU) of the OKD transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.

  11. Click Next.

  12. On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.

  13. To add a hook, select the appropriate Enable hook checkbox.

  14. Enter the Hook runner image.

  15. Enter the Ansible playbook of the hook in the window.

    You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.

  16. Click Next.

  17. On the Review and create page, review the information displayed.

  18. Edit any item by doing the following:

    1. Click its Edit step link.

      The wizard opens to the page where you defined the item.

    2. Edit the item.

    3. Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.

  19. When you finish reviewing the details of the plan, click Create plan. Forklift validates your plan.

    When your plan is validated, the Plan details page for your plan opens in the Details tab.

  20. In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:

    • Migration history: Details about successful and unsuccessful attempts to run the plan

    • Conditions: Any changes that need to be made to the plan so that it can run successfully

  21. When you have fixed all conditions listed, you can run your plan from the Plans page.

    The Plan details page also includes five additional tabs, which are described in the table that follows:

    Table 29. Tabs of the Plan details page
    • YAML: Editable YAML Plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs

    • Virtual Machines: The VMs the plan migrates

    • Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs

    • Mappings: Editable specification of the network and storage maps used by your plan

    • Hooks: Updatable specification of the hooks used by your plan, if any

Creating a migration plan for a live migration by using the Forklift wizard

You create a migration plan for a live migration of virtual machines (VMs) almost exactly as you create a migration plan for a cold migration. The only difference is that you select Live migration on the Migration type page.

Prerequisites

As described in KubeVirt live migration prerequisites.

Procedure
  1. In the OKD web console, click Migration for Virtualization > Migration plans.

  2. Click Create plan.

    The Create migration plan wizard opens.

  3. On the General page, specify the following fields:

    • Plan name: Enter a name.

    • Plan project: Select from the list.

    • Source provider: Select from the list. You can use any KubeVirt provider. You do not need to create a new one for a live migration.

    • Target provider: Select from the list. Be sure to select the correct KubeVirt target provider.

    • Target project: Click the list and do one of the following:

      1. Select an existing project from the list.

      2. Create a new project by clicking Create project and doing the following:

        1. Enter the Name of the project. A project name must consist of lowercase alphanumeric characters or -. A project name must start and end with alphanumeric characters. For example, my-name or 123-abc.

        2. Optional: Enter a Display name for the project.

        3. Optional: Enter a Description of the project.

        4. Click Create project.

  4. Click Next.

  5. On the Virtual machines page, select the virtual machines you want to migrate and ensure they are powered on. A live migration fails if any of its VMs are powered off.

  6. Click Next.

  7. On the Network map page, choose one of the following options:

    • Use an existing network map: Select an existing network map from the list.

      These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.

      If you select an existing map, be sure it has the same source provider and target provider as the ones you want to use in your plan.

    • Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.

      You can create an ownerless network map, which you and others can use for additional migration plans, in the Network maps section of the UI.

      • Source network: Select from the list.

      • Target network: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Network map name: Enter a name or let Forklift automatically generate a name for the network map.

  8. Click Next.

  9. On the Storage map page, choose one of the following options:

    • Use an existing storage map: Select an existing storage map from the list.

      These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.

      If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.

    • Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.

      You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage maps section of the UI.

      • Source storage: Select from the list.

      • Target storage: Select from the list.

        If needed, click Add mapping to add another mapping.

      • Storage map name: Enter a name or let Forklift automatically generate a name for the storage map.

  10. Click Next.

  11. On the Migration type page, choose Live migration.

    If Live migration does not appear as an option, return to the General page and verify that both the source provider and target provider are KubeVirt clusters.

    If they are both KubeVirt clusters, have someone with cluster-admin privileges ensure that the prerequisites for live migration are met. For more information, see KubeVirt live migration prerequisites.

  12. Click Next.

  13. On the Other settings (optional) page, click Next without doing anything else.

  14. On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.

  15. To add a hook, select the appropriate Enable hook checkbox.

  16. Enter the Hook runner image.

  17. Enter the Ansible playbook of the hook in the window.

    You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.

  18. Click Next.

  19. On the Review and create page, review the information displayed.

  20. Edit any item by doing the following:

    1. Click its Edit step link.

      The wizard opens to the page where you defined the item.

    2. Edit the item.

    3. Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.

  21. When you finish reviewing the details of the plan, click Create plan. Forklift validates your plan.

    When your plan is validated, the Plan details page for your plan opens in the Details tab.

  22. In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:

    • Migration history: Details about successful and unsuccessful attempts to run the plan

    • Conditions: Any changes that need to be made to the plan so that it can run successfully

  23. When you have fixed all conditions listed, you can run your plan from the Plans page.

    The Plan details page also includes five additional tabs, which are described in the table that follows:

    Table 30. Tabs of the Plan details page
    • YAML: Editable YAML Plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs

    • Virtual Machines: The VMs the plan migrates

    • Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs

    • Mappings: Editable specification of the network and storage maps used by your plan

    • Hooks: Updatable specification of the hooks used by your plan, if any