Migrating your virtual machines
Performing a migration
When you have planned your migration by using Forklift, you can migrate virtual machines from the following source providers to KubeVirt destination providers:
- VMware vSphere
- oVirt
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote KubeVirt clusters
Migrating from VMware vSphere
Run your VMware migration plan from the MTV UI or from the command line.
Running a migration plan in the MTV UI
You can run a migration plan and view its progress in the OKD web console.
Prerequisites:
- A valid migration plan.

- In the OKD web console, click Migration for Virtualization > Migration plans.
  The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
- Click Start beside a migration plan to start the migration.
- Click Start in the confirmation window that opens.
  The plan's Status changes to Running, and the migration's progress is displayed.
Warm migration only:
- The precopy stage starts.
  A PreFlightInspection during warm migration enables early detection of issues during guest conversion. If an issue is detected, the warm migration fails before VM shutdown, and you can adjust VM settings or skip guest conversion. Potential errors during the PreFlightInspection include the following:
  - Missing LUKS passwords
  - Unsupported OS
  - Unsupported file system
- To set the cutover of the plan, perform the following steps:
  - In the Migration type column of the migration plan, click Schedule cutover.
  - In the Schedule cutover window, set the date and time of the cutover, and then click Set cutover.
- To edit a cutover, perform the following steps:
  - In the Migration type column of the migration plan, click Edit cutover.
  - In the Edit cutover window, set the date and time of the cutover, and then click Set cutover.
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
- Optional: Click the links in the migration's Status to see its overall status and the status of each VM:
  - The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
  - The link on the right opens the Virtual machines tab of the Plan details page. For each VM, the tab displays the following data:
    - The name of the VM
    - The start and end times of the migration
    - The amount of data copied
    - A progress pipeline for the VM's migration
vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.
- Optional: To view your migration's logs, either as it is running or after it is completed, perform the following actions:
  - Click the Virtual machines tab.
  - Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
    The VM's details are displayed.
  - In the Pods section, in the Pod links column, click the Logs link.
    The Logs tab opens.
    Logs are not always available. The following are common reasons for logs not being available:
    - The migration is from KubeVirt to KubeVirt. In this case, virt-v2v is not involved, so no pod is required.
    - No pod was created.
    - The pod was deleted.
    - The migration failed before running the pod.
  - To see the raw logs, click the Raw link.
  - To download the logs, click the Download link.
Migration plan options
On the Migration plans page of the OKD web console, you can click the Options menu beside a migration plan to access the following options:
- Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
  - All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
  - The plan's mapping on the Mappings tab.
  - The hooks listed on the Hooks tab.
- Start migration: Active only if relevant.
- Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
- Set cutover: Warm migrations only. Active only if relevant. Clicking Set cutover opens the Set cutover window, which allows you to set the date and time for a cutover.
- Edit cutover: Change the date or time of a scheduled cutover. Active only if relevant. Clicking Edit cutover opens the Edit cutover window, which allows you to set a new date or time for a cutover.
- Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
  - Migrate VMs to a different namespace.
  - Edit an archived migration plan.
  - Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
- Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
  Archiving is irreversible. However, you can duplicate an archived plan.
- Delete: Permanently remove a migration plan. You cannot delete a running migration plan.
  Deletion is irreversible.
  Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before deleting it.
  The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI:
  - If you created them using the UI, the migration plan and its mappings no longer appear in the UI.
  - If you created them using the CLI, the mappings might still appear in the UI. This is because mappings created in the CLI can be used by more than one migration plan, but mappings created in the UI can be used in only one migration plan.
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
- In the OKD web console, click Migration for Virtualization > Migration plans.
- Click the name of a running migration plan to view the migration details.
- Select one or more VMs and click Cancel.
- Click Yes, cancel to confirm the cancellation.
  In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
- Restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Running a VMware vSphere migration from the command line
You can migrate from a VMware vSphere source provider by using the command-line interface (CLI).
Considerations:
- Anti-virus software can cause migrations to fail. It is strongly recommended to remove such software from source VMs before you start a migration.
- Forklift does not support migrating VMware Non-Volatile Memory Express (NVMe) disks.
- To migrate virtual machines (VMs) that have shared disks, see Migrating virtual machines with shared disks.
- Forklift cannot migrate VMware vSphere 6 and VMware vSphere 7 VMs to a FIPS-compliant KubeVirt cluster.
- If you are using a user-defined network (UDN), note the name of its namespace as defined in KubeVirt.
- Create a Secret manifest for the source provider credentials:

  $ cat << EOF | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: <secret>
    namespace: <namespace>
    ownerReferences:
      - apiVersion: forklift.konveyor.io/v1beta1
        kind: Provider
        name: <provider_name>
        uid: <provider_uid>
    labels:
      createdForProviderType: vsphere
      createdForResourceType: providers
  type: Opaque
  stringData:
    user: <user>
    password: <password>
    insecureSkipVerify: <"true"/"false">
    cacert: |
      <ca_certificate>
    url: <api_end_point>
  EOF

  where:
  - ownerReferences: An optional section in which you can specify a provider's name and uid.
  - <user>: Specifies the vCenter user or the ESX/ESXi user.
  - <password>: Specifies the password of the vCenter user or the ESX/ESXi user.
  - <"true"/"false">: Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration, and the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
  - cacert: Specifies the CA certificate. When this field is not set and certificate verification is enabled, Forklift attempts to use the system CA.
  - <api_end_point>: Specifies the API endpoint URL of the vCenter or the ESX/ESXi, for example, https://<vCenter_host>/sdk.
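As an illustration, a filled-in sketch of such a Secret. All concrete values here (the name vsphere-credentials, the host vcenter.example.com, the user and password) are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-credentials          # hypothetical name
  namespace: openshift-mtv
  labels:
    createdForProviderType: vsphere
    createdForResourceType: providers
type: Opaque
stringData:
  user: administrator@vsphere.local  # example vCenter user
  password: changeme                 # example password
  insecureSkipVerify: "false"        # verify the vCenter certificate
  cacert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  url: https://vcenter.example.com/sdk
```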
- Create a Provider manifest for the source provider:

  $ cat << EOF | kubectl apply -f -
  apiVersion: forklift.konveyor.io/v1beta1
  kind: Provider
  metadata:
    name: <source_provider>
    namespace: <namespace>
  spec:
    type: vsphere
    url: <api_end_point>
    settings:
      vddkInitImage: <VDDK_image>
      sdkEndpoint: vcenter
    secret:
      name: <secret>
      namespace: <namespace>
  EOF

  where:
  - <api_end_point>: Specifies the URL of the API endpoint, for example, https://<vCenter_host>/sdk.
  - <VDDK_image>: Specifies the VDDK image. This parameter is optional, but it is strongly recommended to create a VDDK image to accelerate migrations. Follow the OpenShift documentation to specify the VDDK image you created.
  - sdkEndpoint: Specifies the type of endpoint used by the provider's SDK. Options: vcenter or esxi.
  - <secret>: Specifies the name of the provider Secret CR.
- Create a Host manifest:

  $ cat << EOF | kubectl apply -f -
  apiVersion: forklift.konveyor.io/v1beta1
  kind: Host
  metadata:
    name: <vmware_host>
    namespace: <namespace>
  spec:
    provider:
      namespace: <namespace>
      name: <source_provider>
    id: <source_host_mor>
    ipAddress: <source_network_ip>
  EOF

  where:
  - <source_provider>: Specifies the name of the VMware vSphere Provider CR.
  - <source_host_mor>: Specifies the Managed Object Reference (moRef) of the VMware vSphere host. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
  - <source_network_ip>: Specifies the IP address of the VMware vSphere migration network.
- Create a NetworkMap manifest to map the source and destination networks:

  $ cat << EOF | kubectl apply -f -
  apiVersion: forklift.konveyor.io/v1beta1
  kind: NetworkMap
  metadata:
    name: <network_map>
    namespace: <namespace>
  spec:
    map:
      - destination:
          name: <network_name>
          type: pod
        source:
          id: <source_network_id>
          name: <source_network_name>
      - destination:
          name: <network_attachment_definition>
          namespace: <network_attachment_definition_namespace>
          type: multus
        source:
          id: <source_network_id>
          name: <source_network_name>
    provider:
      source:
        name: <source_provider>
        namespace: <namespace>
      destination:
        name: <destination_provider>
        namespace: <namespace>
  EOF

  where:
  - type: Specifies the network type. Allowed values are pod, multus, and ignored. Use ignored to avoid attaching VMs to this network for this migration.
  - source: Specifies the source network. You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (moRef). To retrieve the moRef, see Retrieving a VMware vSphere moRef.
  - <network_attachment_definition>: Specifies a network attachment definition (NAD) for each additional KubeVirt network.
  - <network_attachment_definition_namespace>: Specifies the namespace of the KubeVirt NAD. Required only when type is multus.
  - namespace: Specifies the namespace. If you are using a user-defined network (UDN), its namespace is defined in KubeVirt.
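A filled-in sketch of such a map, assuming one source network mapped to the pod network and a second mapped to a hypothetical NAD named vlan-100 in a hypothetical my-vms namespace; the moRef, network, and provider names are example values:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: vmware-network-map       # hypothetical name
  namespace: openshift-mtv
spec:
  map:
    - destination:
        name: pod
        type: pod
      source:
        id: network-33           # example vSphere network moRef
    - destination:
        name: vlan-100           # hypothetical NAD
        namespace: my-vms
        type: multus
      source:
        name: VM Network         # example vSphere network name
  provider:
    source:
      name: vsphere-provider     # hypothetical Provider name
      namespace: openshift-mtv
    destination:
      name: host                 # example destination provider
      namespace: openshift-mtv
```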
- Create a StorageMap manifest to map source and destination storage:

  $ cat << EOF | kubectl apply -f -
  apiVersion: forklift.konveyor.io/v1beta1
  kind: StorageMap
  metadata:
    name: <storage_map>
    namespace: <namespace>
  spec:
    map:
      - destination:
          storageClass: <storage_class>
          accessMode: <access_mode>
        offloadPlugin:
          vsphereXcopyConfig:
            secretRef: <Secret_for_the_storage_vendor_product>
            storageVendorProduct: <storage_vendor_product>
        source:
          id: <source_datastore>
    provider:
      source:
        name: <source_provider>
        namespace: <namespace>
      destination:
        name: <destination_provider>
        namespace: <namespace>
  EOF

  where:
  - <access_mode>: Specifies the access mode. Optional for storage copy offload, required for other VMware migrations. Allowed values are ReadWriteOnce and ReadWriteMany.
  - offloadPlugin: Specifies labels and values used in storage copy offload migrations. This section, through and including storageVendorProduct, is only for storage copy offload migrations.
  - <Secret_for_the_storage_vendor_product>: Specifies the Secret that contains the storage provider credentials.
  - <storage_vendor_product>: Storage copy offload only. Specifies the name of the storage product used in the migration, for example, vantara for Hitachi Vantara. Valid strings are listed in the table that follows this CR.
    Storage copy offload is a feature that allows you to migrate VMware virtual machines (VMs) that are in a storage area network (SAN) more efficiently. This feature uses the vmkfstools command on the ESXi host, which invokes the XCOPY command on the storage array over an Internet Small Computer Systems Interface (iSCSI) or Fibre Channel (FC) connection. Storage copy offload lets you copy data inside a SAN more efficiently than copying the data over a network. For Forklift 2.11, storage copy offload is available as GA for cold migration and as a Technology Preview feature for warm migration. For more information, see Migrating VMware virtual machines by using storage copy offload.
  - <source_datastore>: Specifies the VMware vSphere datastore moRef, for example, f2737930-b567-451a-9ceb-2887f6207009. To retrieve the moRef, see Retrieving a VMware vSphere moRef.

  Table 1. Storage copy offload only: Supported storage vendors and their identifying strings in the CLI

  | Vendor                     | Identifying string (value of storageVendorProduct) |
  | Hitachi Vantara            | vantara        |
  | NetApp                     | ontap          |
  | Hewlett Packard Enterprise | primera3par    |
  | Pure Storage               | pureFlashArray |
  | Dell (PowerFlex)           | powerflex      |
  | Dell (PowerMax)            | powermax       |
  | Dell (PowerStore)          | powerstore     |
  | Infinidat                  | infinibox      |
  | IBM                        | flashsystem    |
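For a migration that does not use storage copy offload, the offloadPlugin section is omitted entirely. A minimal sketch, with hypothetical names and an example storage class standing in for your own:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: vmware-storage-map       # hypothetical name
  namespace: openshift-mtv
spec:
  map:
    - destination:
        storageClass: ocs-storagecluster-ceph-rbd   # example storage class
        accessMode: ReadWriteOnce
      source:
        id: datastore-15         # example datastore moRef
  provider:
    source:
      name: vsphere-provider     # hypothetical Provider name
      namespace: openshift-mtv
    destination:
      name: host                 # example destination provider
      namespace: openshift-mtv
```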
- Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

  $ cat << EOF | kubectl apply -f -
  apiVersion: forklift.konveyor.io/v1beta1
  kind: Hook
  metadata:
    name: <hook>
    namespace: <namespace>
  spec:
    image: quay.io/kubev2v/hook-runner
    serviceAccount: <service_account>
    playbook: |
      LS0tCi0gbm...
  EOF

  where:
  - <service_account>: Specifies the OKD service account. This parameter is optional. Use the serviceAccount parameter to modify any cluster resources.
  - playbook: Specifies the Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
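The playbook value is the plain Base64 encoding of the playbook file. A sketch of producing it with standard tools; playbook.yml and its contents are hypothetical, and tr -d '\n' keeps the output on a single line (equivalent to GNU base64 -w0):

```shell
# Write a hypothetical playbook, then encode it for the Hook CR's playbook field.
printf -- '---\n- hosts: localhost\n' > playbook.yml
base64 < playbook.yml | tr -d '\n'   # paste this output as the playbook value
```

Decoding the value with base64 -d recovers the original playbook, which is a quick way to verify what an existing Hook CR runs.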
- Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
  You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
  Configuring the IP address enables the interface to reach the configured gateway.

  $ oc edit NetworkAttachmentDefinition <name_of_the_NAD_to_edit>

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: <name_of_transfer_network>
    namespace: <namespace>
    annotations:
      forklift.konveyor.io/route: <IP_address>
- Create a Plan manifest for the migration:

  $ cat << EOF | kubectl apply -f -
  apiVersion: forklift.konveyor.io/v1beta1
  kind: Plan
  metadata:
    name: <plan>
    namespace: <namespace>
  spec:
    warm: false
    provider:
      source:
        name: <source_provider>
        namespace: <namespace>
      destination:
        name: <destination_provider>
        namespace: <namespace>
    map:
      network:
        name: <network_map>
        namespace: <namespace>
      storage:
        name: <storage_map>
        namespace: <namespace>
    preserveStaticIPs: true
    networkNameTemplate: <network_interface_template>
    pvcNameTemplate: <pvc_name_template>
    pvcNameTemplateUseGenerateName: true
    skipGuestConversion: false
    targetAffinity: <target_affinity_rule>
    targetLabels:
      label: <label>
    targetNodeSelector:
      <key>: <value>
    targetNamespace: <target_namespace>
    convertorLabels: <importer_converter_labels>
    convertorNodeSelector:
      <key>: <value>
    convertorAffinity: <importer_affinity_rule>
    useCompatibilityMode: true
    volumeNameTemplate: <volume_name_template>
    vms:
      - id: <source_vm1>
      - name: <source_vm2>
        networkNameTemplate: <network_interface_template_for_this_vm>
        pvcNameTemplate: <pvc_name_template_for_this_vm>
        volumeNameTemplate: <volume_name_template_for_this_vm>
        targetName: <target_name>
        hooks:
          - hook:
              namespace: <namespace>
              name: <hook>
            step: <step>
  EOF

  where:
  - <plan>: Specifies the name of the Plan CR.
  - warm: Specifies whether the migration is warm (true) or cold (false). If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage runs.
  - map: Specifies the network map and the storage map used by the plan.
  - network: Specifies a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
  - <network_map>: Specifies the name of the NetworkMap CR.
  - storage: Specifies a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
  - <storage_map>: Specifies the name of the StorageMap CR.
  - preserveStaticIPs: Specifies whether to preserve static IP addresses. By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP address linked to the interface name in the guest VM lose their IP address. To avoid this, set preserveStaticIPs to true. Forklift issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere so that the vNIC properties are reported to Forklift.
  - networkNameTemplate: Specifies a template for the network interface name for the VMs in your plan. This parameter is optional. The template follows the Go template syntax and has access to the following variables:
    - .NetworkName: If the target network is multus, add the name of the Multus network attachment definition. Otherwise, leave this variable empty.
    - .NetworkNamespace: If the target network is multus, add the namespace where the Multus network attachment definition is located.
    - .NetworkType: Specifies the network type. Options: multus or pod.
    - .NetworkIndex: Sequential index of the network interface (0-based).
    Examples:
    - "net-{{.NetworkIndex}}"
    - "{{if eq .NetworkType "pod"}}pod{{else}}multus-{{.NetworkIndex}}{{end}}"
    Variable names cannot exceed 63 characters. VM names generated by templates must not include uppercase letters or violate RFC 1123 rules. These rules apply to network name templates, PVC name templates, VM name templates, and volume name templates.
    Forklift does not validate VM names generated by the templates described here. Migrations that include VMs whose names include uppercase letters or that violate RFC 1123 rules fail automatically. To avoid failures, you might want to run a Go script that uses the sprig methods that Forklift supports. For tables documenting the methods that Forklift supports, see Forklift template utility for VMware VM names.
  - pvcNameTemplate: Specifies a template for the persistent volume claim (PVC) name for a plan. This parameter is optional. The template follows the Go template syntax and has access to the following variables:
    - .VmName: Name of the VM.
    - .PlanName: Name of the migration plan.
    - .DiskIndex: Initial volume index of the disk.
    - .RootDiskIndex: Index of the root disk.
    - .Shared: Options: true for a shared volume, false for a non-shared volume.
    Examples:
    - "{{.VmName}}-disk-{{.DiskIndex}}"
    - "{{if eq .DiskIndex .RootDiskIndex}}root{{else}}data{{end}}-{{.DiskIndex}}"
    - "{{if .Shared}}shared-{{end}}{{.VmName}}-{{.DiskIndex}}"
  - pvcNameTemplateUseGenerateName: Specifies whether to add alphanumeric characters to the name of a PVC.
    - When set to true, Forklift adds one or more randomly generated alphanumeric characters to the name of the PVC to ensure that all PVCs have unique names.
    - When set to false, if you specify a pvcNameTemplate, Forklift does not add such characters to the name of the PVC.
    If you set pvcNameTemplateUseGenerateName to false, the generated PVC name might not be unique and might cause conflicts.
  - skipGuestConversion: Specifies whether VMs are converted before migration by using the virt-v2v tool, which makes the VMs compatible with KubeVirt.
    - When set to false, the default value, Forklift migrates VMs using virt-v2v.
    - When set to true, Forklift migrates VMs using raw copy mode, which copies the VMs without converting them first.
    Raw copy mode copies VMs without converting them with virt-v2v. This provides faster migrations for VMs running a wider range of operating systems and supports migrating disks encrypted using Linux Unified Key Setup (LUKS) without needing keys. However, VMs migrated using raw copy mode might not function properly on KubeVirt. For more information on virt-v2v, see How Forklift uses the virt-v2v tool.
  - targetAffinity: Specifies a VM target affinity rule that is entered in the lines following this label. This parameter is optional.
    targetAffinity, targetLabels, and targetNodeSelector are labels that support VM target scheduling, a feature that lets you direct Forklift to migrate virtual machines (VMs) to specific nodes or workloads (pods) of KubeVirt, as well as to schedule when the VMs are powered on. For more information on the feature in general, see Target VM scheduling options. For more details on using the feature with the CLI, including an example YAML snippet, see Scheduling target VMs from the command-line interface.
  - targetLabels: Specifies organizational or operational labels for migrated VMs for identification and management. This parameter is optional.
  - targetNodeSelector: Specifies the key-value pairs that must be matched for VMs to be scheduled on nodes. This parameter is optional.
  - convertorLabels: Cold migrations only. Specifies organizational or operational labels for the virt-v2v convertor pods (importer pods) for identification and management. This parameter is optional.
    To ensure proper system functionality, system-managed labels override any user-defined labels with the same keys. System-managed labels include migration, plan, vmID, and forklift.app.
    For more details on labels and selectors in Kubernetes, see Labels and Selectors.
    convertorLabels, convertorNodeSelector, and convertorAffinity are fields that support scheduling the virt-v2v conversion pod (importer pod) for cold migrations from VMware providers. With this feature, you can set the convertorLabels, convertorNodeSelector, and convertorAffinity fields that control the labels, nodeSelector, and affinity of the convertor pod.
    For more information on importer pods, see About scheduling importer pods.
  - convertorNodeSelector: Cold migrations only. Specifies the key-value pairs that must be matched for data to be transferred by the virt-v2v convertor pods (importer pods) to the specified target nodes. This parameter is optional. With this feature, you can dedicate specific nodes for disk conversion workloads that require high I/O performance or network access to the source VMware infrastructure.
    For more details on node selectors in Kubernetes, see nodeSelector.
  - convertorAffinity: Cold migrations only. Specifies a hard-affinity or a soft-affinity rule for virt-v2v convertor pods (importer pods). This parameter is optional. Affinity rules can be used to optimize placement for disk conversion performance, such as co-locating with storage or ensuring network proximity to VMware infrastructure for cold migration data transfers.
    For more information on affinity rules in Kubernetes, see Affinity and anti-affinity.
  - useCompatibilityMode: Determines whether the migration uses VirtIO devices or compatibility devices when skipGuestConversion is true (raw copy mode). This setting has no effect when skipGuestConversion is false, because standard virt-v2v conversion always uses VirtIO devices.
    - When you set useCompatibilityMode to true, the default, Forklift uses compatibility devices (SATA bus, E1000E NICs, USB) to ensure that the VMs can boot after migration.
    - When you set useCompatibilityMode to false, Forklift uses pre-installed VirtIO devices on source VMs for better performance. VMs without pre-installed VirtIO drivers do not boot on KubeVirt if you disable compatibility mode.
  - volumeNameTemplate: Specifies a template for the volume interface name for the VMs in your plan. This parameter is optional. The template follows the Go template syntax and has access to the following variables:
    - .PVCName: Name of the PVC mounted to the VM using this volume.
    - .VolumeIndex: Sequential index of the volume interface (0-based).
    Examples:
    - "disk-{{.VolumeIndex}}"
    - "pvc-{{.PVCName}}"
  - vms: Specifies the source VMs. Use either the id or the name parameter to specify the source VMs. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
  - <source_vm1>: Specifies the VMware vSphere VM moRef. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
  - networkNameTemplate: Specifies a network interface name for the specific VM. Overrides the value set in spec:networkNameTemplate. Variables and examples are as in spec:networkNameTemplate. This parameter is optional.
  - pvcNameTemplate: Specifies a PVC name for the specific VM. Overrides the value set in spec:pvcNameTemplate. Variables and examples are as in spec:pvcNameTemplate. This parameter is optional.
  - volumeNameTemplate: Specifies a volume name for the specific VM. Overrides the value set in spec:volumeNameTemplate. Variables and examples are as in spec:volumeNameTemplate. This parameter is optional.
  - targetName: Specifies the name of the target VM. Forklift automatically generates a name for the target VM. You can override this name by using this parameter and entering a new name. The name you enter must be unique, and it must be a valid Kubernetes subdomain. Otherwise, the migration fails automatically. This parameter is optional.
  - hooks: Specifies up to two hooks for a migration. Each hook must run during a separate migration step. This parameter is optional.
  - <hook>: Specifies the name of the Hook CR.
  - <step>: Specifies the type of hook. Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
    When you migrate a VMware 7 VM to an OKD 4.13+ platform that uses CentOS 7.9, the names of the network interfaces change, and the static IP configuration for the VM no longer works.
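Most of the fields above are optional. A minimal cold-migration Plan sketch, with hypothetical resource names and relying on defaults for everything else:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: my-plan                  # hypothetical name
  namespace: openshift-mtv
spec:
  warm: false
  targetNamespace: my-vms        # hypothetical target namespace
  provider:
    source:
      name: vsphere-provider     # hypothetical Provider name
      namespace: openshift-mtv
    destination:
      name: host                 # example destination provider
      namespace: openshift-mtv
  map:
    network:
      name: vmware-network-map   # hypothetical NetworkMap name
      namespace: openshift-mtv
    storage:
      name: vmware-storage-map   # hypothetical StorageMap name
      namespace: openshift-mtv
  vms:
    - name: my-vm-1              # source VM, specified by name
```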
- Create a Migration manifest to run the Plan CR:

  $ cat << EOF | kubectl apply -f -
  apiVersion: forklift.konveyor.io/v1beta1
  kind: Migration
  metadata:
    name: <name_of_migration_cr>
    namespace: <namespace>
  spec:
    plan:
      name: <name_of_plan_cr>
      namespace: <namespace>
    cutover: <optional_cutover_time>
  EOF

  If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
  When you specify the user permissions only on the VM, the forklift-controller consistently fails to reconcile the migration plan and returns an HTTP 500 error.
  In Forklift, you must add permissions at the data center level, which includes the storage, networks, switches, and so on, that are used by the VM. You must then propagate the permissions to the child elements.
  If you do not want to add this level of permissions, you must manually add the permissions to each required object on the VM host.
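The cutover value above is a timestamp. A sketch of generating one in the required ISO 8601 format, assuming GNU date; here the offset is fixed at UTC, whereas the +09:00 in the example came from a local time zone:

```shell
# Print a cutover time two hours from now, in ISO 8601 with a UTC offset.
date -u -d '+2 hours' '+%Y-%m-%dT%H:%M:%S+00:00'
```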
Running a VMware vSphere migration from the command line by using storage copy offload
You can use the storage copy offload feature of Forklift to migrate VMware vSphere virtual machines (VMs) faster than by other methods.
In addition to the regular VMware prerequisites, storage copy offload has the following additional prerequisites:
- One of the following storage systems, configured:
  - Hitachi Vantara
  - NetApp ONTAP
  - Pure Storage FlashArray
  - Dell PowerMax
  - Dell PowerFlex
  - Dell PowerStore
  - HPE 3PAR or HPE Primera
  - Infinidat InfiniBox
  - IBM FlashSystem
- A working Container Storage Interface (CSI) driver connected to the storage system and to KubeVirt
- A configured VMware vSphere provider
- vSphere users must have a role that includes the following privileges (suggested name: StorageOffloader):
  - Global
    - Settings
  - Datastore
    - Browse datastore
    - Low level file operations
  - Host > Configuration
    - Advanced settings
    - Query patch
    - Storage partition configuration
- In the Forklift Operator, set the value of feature_copy_offload to true in the forklift-controller by running the following command:

  $ oc patch forkliftcontrollers.forklift.konveyor.io forklift-controller \
    --type merge -p '{"spec": {"feature_copy_offload": "true"}}' -n openshift-mtv
- Create a Secret in the namespace in which the migration provider is set up, usually openshift-mtv. Include the credentials from the appropriate vendor in your Secret.

  Table 2. Credentials for a Hitachi Vantara storage copy offload Secret

  | Key | Description | Mandatory? | Default |
  | GOVMOMI_HOSTNAME | Hostname or URL of the vSphere API (string). | Yes | NA |
  | GOVMOMI_USERNAME | User name for the vSphere API (string). | Yes | NA |
  | GOVMOMI_PASSWORD | Password for the vSphere API (string). | Yes | NA |
  | STORAGE_HOSTNAME | The hostname or URL of the storage vendor API (string). | Yes | NA |
  | STORAGE_USERNAME | The user name for the storage vendor API (string). | Yes | NA |
  | STORAGE_PASSWORD | The password for the storage vendor API (string). | Yes | NA |
  | STORAGE_PORT | The port of the storage vendor API (string). | Yes | NA |
  | STORAGE_ID | Storage array serial number (string). | Yes | NA |
  | HOSTGROUP_ID_LIST | List of IO ports and host group IDs, for example, CL1-A,1:CL2-B,2:CL4-A,1:CL6-A,1. | Yes | NA |
  Table 3. Credentials for a NetApp ONTAP storage copy offload Secret

  | Key | Description | Mandatory? | Default |
  | STORAGE_HOSTNAME | IP or URL of the host (string). Either enter the management IP for the entire cluster or enter a dedicated storage virtual machine management logical interface (SVM LIF). | Yes | NA |
  | STORAGE_USERNAME | The user's name (string). | Yes | NA |
  | STORAGE_PASSWORD | The user's password (string). | Yes | NA |
  | STORAGE_SKIP_SSL_VERIFICATION | If set to true, SSL verification is not performed (true, false). | No | false |
  | ONTAP_SVM | The storage virtual machine (SVM) to be used in all client interactions. It can be taken from the config.ontap_config.svm field of the trident.netapp.io/v1 TridentBackend resource. | Yes | NA |
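As an illustration, a minimal sketch of an ONTAP offload Secret built from the keys in Table 3; the name and all values are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ontap-offload-secret             # hypothetical name
  namespace: openshift-mtv
type: Opaque
stringData:
  STORAGE_HOSTNAME: "10.0.0.10"          # example SVM management LIF
  STORAGE_USERNAME: "admin"              # example user
  STORAGE_PASSWORD: "changeme"           # example password
  STORAGE_SKIP_SSL_VERIFICATION: "false"
  ONTAP_SVM: "svm1"                      # example SVM name
```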
  Table 4. Credentials for a Pure Storage FlashArray storage copy offload Secret

  | Key | Description | Mandatory? | Default |
  | STORAGE_HOSTNAME | IP or URL of the host (string). | Yes | NA |
  | STORAGE_USERNAME | The user's name (string). | Yes | NA |
  | STORAGE_PASSWORD | The user's password (string). | Yes | NA |
  | STORAGE_SKIP_SSL_VERIFICATION | If set to true, SSL verification is not performed (true, false). | No | false |
  | PURE_CLUSTER_PREFIX | The cluster prefix set in the StorageCluster resource. Retrieve it by running printf "px_%.8s" $(oc get storagecluster -A -o=jsonpath='{.items[?(@.spec.cloudStorage.provider=="pure")].status.clusterUid}') in the CLI. | Yes | NA |
  Table 5. Credentials for a Dell PowerMax storage copy offload Secret

  | Key | Description | Mandatory? | Default |
  | STORAGE_HOSTNAME | IP or URL of the host (string). | Yes | NA |
  | STORAGE_USERNAME | The user's name (string). | Yes | NA |
  | STORAGE_PASSWORD | The user's password (string). | Yes | NA |
  | STORAGE_SKIP_SSL_VERIFICATION | If set to true, SSL verification is not performed (true, false). | No | false |
  | POWERMAX_SYMMETRIX_ID | The Symmetrix ID of the storage array. Can be taken from the config map in the powermax namespace, which the CSI driver uses. | Yes | NA |
  | POWERMAX_PORT_GROUP_NAME | The port group to use for masking view creation. | Yes | NA |
Table 6. Credentials for a Dell PowerFlex storage copy offload Secret

| Key | Description | Mandatory? | Default |
|---|---|---|---|
| STORAGE_HOSTNAME | IP or URL of the host (string). | Yes | NA |
| STORAGE_USERNAME | The user's name (string). | Yes | NA |
| STORAGE_PASSWORD | The user's password (string). | Yes | NA |
| STORAGE_SKIP_SSL_VERIFICATION | If set to true, SSL verification is not performed (true, false). | No | false |
| POWERFLEX_SYSTEM_ID | The system ID of the storage array. Can be taken from vxflexos-config in the vxflexos namespace or in the openshift-operators namespace. | Yes | NA |
Table 7. Credentials for a Dell PowerStore storage copy offload Secret

| Key | Description | Mandatory? | Default |
|---|---|---|---|
| STORAGE_HOSTNAME | IP or URL of the host (string). | Yes | NA |
| STORAGE_USERNAME | The user's name (string). | Yes | NA |
| STORAGE_PASSWORD | The user's password (string). | Yes | NA |
| STORAGE_SKIP_SSL_VERIFICATION | If set to true, SSL verification is not performed (true, false). | No | false |

Table 8. Credentials for an HPE 3PAR or HPE Primera storage copy offload Secret

| Key | Description | Mandatory? | Default |
|---|---|---|---|
| STORAGE_HOSTNAME | Must include the full URL with protocol. For HPE 3PAR, must also include the Web Services API (WSAPI) port. Use the HPE 3PAR command `cli% showwsapi` to determine the correct WSAPI port. HPE 3PAR systems default to port 8080 for both HTTP and HTTPS connections; HPE Primera defaults to port 443 (SSL/HTTPS). Depending on configured certificates, you might need to skip SSL verification. Example: https://192.168.1.1:8080. | Yes | NA |
| STORAGE_USERNAME | The user's name (string). | Yes | NA |
| STORAGE_PASSWORD | The user's password (string). | Yes | NA |
| STORAGE_SKIP_SSL_VERIFICATION | If set to true, SSL verification is not performed (true, false). | No | false |

Table 9. Credentials for an Infinidat InfiniBox storage copy offload Secret

| Key | Description | Mandatory? | Default |
|---|---|---|---|
| STORAGE_HOSTNAME | IP or URL of the host (string). | Yes | NA |
| STORAGE_USERNAME | The user's name (string). | Yes | NA |
| STORAGE_PASSWORD | The user's password (string). | Yes | NA |
| STORAGE_SKIP_SSL_VERIFICATION | If set to true, SSL verification is not performed (true, false). | No | false |

Table 10. Credentials for an IBM FlashSystem storage copy offload Secret

| Key | Description | Mandatory? | Default |
|---|---|---|---|
| STORAGE_HOSTNAME | IP or URL of the host (string). | Yes | NA |
| STORAGE_USERNAME | The user's name (string). | Yes | NA |
| STORAGE_PASSWORD | The user's password (string). | Yes | NA |
| STORAGE_SKIP_SSL_VERIFICATION | If set to true, SSL verification is not performed (true, false). | No | false |
-
In the CLI, complete the following steps:
-
Create a
StorageMap custom resource (CR) according to the following example:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: copy-offload
  namespace: openshift-mtv
spec:
  map:
    - destination:
        accessMode: ReadWriteMany
        storageClass: <storage_class>
      offloadPlugin:
        vsphereXcopyConfig:
          secretRef: <Secret_for_the_storage_vendor_product>
          storageVendorProduct: <storage_vendor_product>
      source:
        id: <datastore_ID>
  provider:
    destination:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: host
      namespace: openshift-mtv
      uid: <ID_of_provider_host>
    source:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <name_of_vSphere_provider>
      namespace: openshift-mtv
      uid: <ID_of_vSphere_provider>
```

where:
<access_mode>
-
Specifies the access mode. Optional for storage copy offload, required for other VMware migrations. Allowed values are ReadWriteOnce and ReadWriteMany.
<storage_class>
-
Specifies the storage class for the target Persistent Volume Claim (PVC) of the VM.
<Secret_for_the_storage_vendor_product>
-
Specifies the Secret that contains the storage provider credentials.
<storage_vendor_product>
-
Specifies the string that identifies the storage product. Valid strings are listed in the table that follows this CR.
<datastore_ID>
-
Specifies the datastore ID as set by VMware vSphere.
Table 11. Supported storage vendors and their identifying strings in the CLI

| Vendor | Identifying string (value of the storageVendorProduct label) |
|---|---|
| Hitachi Vantara | vantara |
| NetApp | ontap |
| Hewlett Packard Enterprise | primera3par |
| Pure Storage | pureFlashArray |
| Dell (PowerFlex) | powerflex |
| Dell (PowerMax) | powermax |
| Dell (PowerStore) | powerstore |
| Infinidat | infinibox |
| IBM | flashsystem |
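For illustration, a copy offload Secret for one of these vendors might look like the following minimal sketch. All names and values are placeholders, and the keys come from the NetApp ONTAP table (Table 3). Reference the Secret's name in the secretRef field of the StorageMap CR.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ontap-offload-credentials  # placeholder name
  namespace: openshift-mtv
type: Opaque
stringData:
  STORAGE_HOSTNAME: "203.0.113.10"        # cluster management IP or SVM LIF
  STORAGE_USERNAME: "admin"               # placeholder
  STORAGE_PASSWORD: "changeme"            # placeholder
  STORAGE_SKIP_SSL_VERIFICATION: "false"
  ONTAP_SVM: "svm1"                       # placeholder SVM name
```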
-
Create a migration plan by using the procedure in Running a VMware vSphere migration from the command-line.
-
In the
Plan CR, modify the spec:map:storage portion of the CR as follows:

```yaml
spec:
  map:
    storage:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: StorageMap
      name: <storage_map_in_StorageMap_CR>
      namespace: <namespace>
```
-
-
Retrieving a VMware vSphere moRef
When you migrate VMs with a VMware vSphere source provider by using Forklift from the command line, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.
You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.
-
Retrieve the routes for the project:
$ oc get route -n openshift-mtv
-
Retrieve the
Inventory service route:

$ kubectl get route <inventory_service> -n konveyor-forklift
-
Retrieve the access token:
$ TOKEN=$(oc whoami -t) -
Retrieve the moRef of a VMware vSphere provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere -k -
Retrieve the datastores of a VMware vSphere source provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/ -k
-
Example output: In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.

```json
[
  {
    "id": "datastore-11",
    "parent": {
      "kind": "Folder",
      "id": "group-s5"
    },
    "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
    "revision": 46,
    "name": "v2v_general_porpuse_ISCSI_DC",
    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
  },
  {
    "id": "datastore-730",
    "parent": {
      "kind": "Folder",
      "id": "group-s5"
    },
    "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
    "revision": 46,
    "name": "f01-h27-640-SSD_2",
    "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
  },
  ...
]
```
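Rather than scanning a response like the one above by eye, you can filter a saved response programmatically. The following is a minimal Python sketch, assuming the response has the shape shown in the example output; the helper name moref_by_name is illustrative.

```python
import json

def moref_by_name(datastores, name):
    """Return the moRef (id field) of the entity with the given name, or None."""
    for ds in datastores:
        if ds.get("name") == name:
            return ds["id"]
    return None

# Sample response, abbreviated to the fields this sketch uses.
response = json.loads("""
[
  {"id": "datastore-11", "name": "v2v_general_porpuse_ISCSI_DC"},
  {"id": "datastore-730", "name": "f01-h27-640-SSD_2"}
]
""")

print(moref_by_name(response, "f01-h27-640-SSD_2"))  # datastore-730
```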
-
Migrating virtual machines with shared disks
You can migrate VMware virtual machines (VMs) with shared disks by using the Forklift. This functionality is available only for cold migrations and is not available for shared boot disks.
Shared disks are disks that are attached to more than one VM and that use the multi-writer option. As a result of these characteristics, shared disks are difficult to migrate.
In certain situations, applications in VMs require shared disks. Databases and clustered file systems are the primary use cases for shared disks.
Forklift version 2.7.11 or later includes a parameter named migrateSharedDisks in Plan custom resources (CRs) that instructs Forklift to either migrate shared disks or to skip them during migration, as follows:
-
If set to
true, Forklift migrates the shared disks. Forklift uses the regular cold migration flow with virt-v2v and labels the shared persistent volume claims (PVCs).
-
If set to
false, Forklift skips the shared disks. Forklift uses the KubeVirt Containerized-Data-Importer (CDI) for disk transfer.
After the disk transfer, Forklift automatically attempts to locate the already-migrated shared disks and their PVCs and to attach them to the VMs.
By default, migrateSharedDisks is set to true.
To successfully migrate VMs with shared disks, create two Plan CRs as follows:
-
In the first, set
migrateSharedDisks to true. Forklift migrates the following:
-
All shared disks.
-
For each shared disk, one of the VMs that is attached to it. If possible, choose VMs so that the plan does not contain any shared disks that are connected to more than one VM. See the following figures for further guidance.
-
All unshared disks attached to the VMs you choose for this plan.
-
-
In the second, set
migrateSharedDisks to false. Forklift migrates the following:
-
All other VMs.
-
The unshared disks of the VMs in the second
Plan CR.
-
When Forklift migrates a VM that has a shared disk attached to it, it does not check whether it has already migrated that shared disk. Therefore, it is important to allocate the VMs between the two plans so that each shared disk is migrated once and only once.
To understand how to assign VMs and shared disks to each of the Plan CRs, consider the two figures that follow. In both, migrateSharedDisks is set to true for plan1 and set to false for plan2.
In the first figure, the VMs and shared disks are assigned correctly:
plan1 migrates VMs 2 and 4, shared disks 1, 2, and 3, and the non-shared disks of VMs 2 and 4. VMs 2 and 4 are included in this plan, because they connect to all the shared disks once each.
plan2 migrates VMs 1 and 3 and their non-shared disks. plan2 does not migrate the shared disks connected to VMs 1 and 3 because migrateSharedDisks is set to false.
Forklift migrates each VM and its disks as follows:
-
From
plan1:
-
VM 2, shared disks 1 and 2, and the non-shared disks attached to VM 2.
-
VM 4, shared disk 3, and the non-shared disks attached to VM 4.
-
-
From
plan2:
-
VM 1 and the non-shared disks attached to it.
-
VM 3 and the non-shared disks attached to it.
-
The result is that all the VMs, all the shared disks, and all the non-shared disks are migrated, and each disk is migrated only once. Forklift is able to reattach all VMs to their disks, including the shared disks.
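The allocation rule can also be checked programmatically. The following Python sketch, under the assumption of an illustrative topology like the one described for the first figure, searches for a set of VMs whose shared disks cover every shared disk exactly once; if no such set exists, the topology contains a cyclic shared-disk dependency.

```python
from itertools import combinations

# Shared-disk attachments (illustrative topology): VM -> shared disks it uses.
attachments = {
    "vm1": {"disk1"},
    "vm2": {"disk1", "disk2"},
    "vm3": {"disk2", "disk3"},
    "vm4": {"disk3"},
}

def plan1_vms(attachments):
    """Return a set of VMs whose shared disks cover each shared disk exactly
    once, or None if no such set exists (a cyclic shared-disk dependency)."""
    all_disks = set().union(*attachments.values())
    vms = sorted(vm for vm, disks in attachments.items() if disks)
    for r in range(1, len(vms) + 1):
        for combo in combinations(vms, r):
            covered = [d for vm in combo for d in attachments[vm]]
            # Exact cover: no disk repeated, every disk present.
            if len(covered) == len(set(covered)) == len(all_disks):
                return set(combo)
    return None

print(plan1_vms(attachments))  # one valid selection for the first plan
```

The remaining VMs go into the second plan with migrateSharedDisks set to false.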
In the second figure, the VMs and shared disks are not assigned correctly:
In this case, Forklift migrates each VM and its disks as follows:
-
From
plan1:
-
VM 2, shared disks 1 and 2, and the non-shared disks attached to VM 2.
-
VM 3, shared disks 2 and 3, and the non-shared disks attached to VM 3.
-
-
From
plan2:
-
VM 1 and the non-shared disks attached to it.
-
VM 4 and the non-shared disks attached to it.
-
This migration "succeeds", but it results in a problem: Shared disk 2 is migrated twice by the first Plan CR. You can resolve this problem by using one of the two workarounds that are discussed in the Known issues section, which follows the procedure.
-
In Forklift, create a migration plan for the shared disks, the minimum number of VMs connected to them, and the unshared disks of those VMs.
-
On the VMware cluster, power off all VMs attached to the shared disks.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
-
Select the desired plan to open the Plan details page.
-
Click the YAML tab of the plan.
-
Verify that
migrateSharedDisks is set to true as in the example that follows:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: transfer-shared-disks
  namespace: openshift-mtv
spec:
  map:
    network:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: NetworkMap
      name: vsphere-7gxbs
      namespace: openshift-mtv
      uid: a3c83db3-1cf7-446a-b996-84c618946362
    storage:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: StorageMap
      name: vsphere-mqp7b
      namespace: openshift-mtv
      uid: 20b43d4f-ded4-4798-b836-7c0330d552a0
  migrateSharedDisks: true
  provider:
    destination:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: host
      namespace: openshift-mtv
      uid: abf4509f-1d5f-4ff6-b1f2-18206136922a
    source:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: vsphere
      namespace: openshift-mtv
      uid: be4dc7ab-fedd-460a-acae-a850f6b9543f
  targetNamespace: openshift-mtv
  vms:
    - id: vm-69
      name: vm-1-with-shared-disks
```
-
Start the migration of the first plan and wait for it to finish.
-
Create a second
Plan CR to migrate all the other VMs and their unshared disks to the same target namespace as the first.
In the Migration plans page of the OKD web console, select the new plan to open the Plan details page.
-
Click the YAML tab of the plan.
-
Set
migrateSharedDisks to false as in the example that follows:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: skip-shared-disks
  namespace: openshift-mtv
spec:
  map:
    network:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: NetworkMap
      name: vsphere-7gxbs
      namespace: openshift-mtv
      uid: a3c83db3-1cf7-446a-b996-84c618946362
    storage:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: StorageMap
      name: vsphere-mqp7b
      namespace: openshift-mtv
      uid: 20b43d4f-ded4-4798-b836-7c0330d552a0
  migrateSharedDisks: false
  provider:
    destination:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: host
      namespace: openshift-mtv
      uid: abf4509f-1d5f-4ff6-b1f2-18206136922a
    source:
      apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: vsphere
      namespace: openshift-mtv
      uid: be4dc7ab-fedd-460a-acae-a850f6b9543f
  targetNamespace: openshift-mtv
  vms:
    - id: vm-71
      name: vm-2-with-shared-disks
```
-
Start the migration of the second plan and wait for it to finish.
-
Verify that all shared disks are attached to the same VMs as they were before migration and that none are duplicated. In case of problems, see the discussion of known issues that follows.
Known issue: Cyclic shared disk dependencies
When you migrate shared disks, virtual machines (VMs) with cyclic shared disk dependencies cannot be migrated successfully.
Explanation: When migrateSharedDisks is set to true, Forklift migrates each VM in the plan, one by one, and any shared disks attached to it, without determining if a shared disk was already migrated.
In the case of 2 VMs sharing one disk, there is no problem. Forklift transfers the shared disk and attaches the 2 VMs to the shared disk after the migration.
However, if there is a cyclic dependency of shared disks between 3 or more VMs, Forklift either duplicates or omits one of the shared disks. The figure that follows illustrates the simplest version of this problem.
In this case, the VMs and shared disks cannot be migrated in the same Plan CR. Although this problem could be solved using migrateSharedDisks and 2 Plan CRs, it illustrates the basic issue that must be avoided in migrating VMs with shared disks.
Workarounds for VMs with shared disk dependencies
As discussed previously, it is important to try to create 2 Plan CRs in which each shared disk is migrated once. However, if your migration does result in a shared disk either being duplicated or not being transferred, you can use one of the following workarounds:
-
Duplicate one of the shared disks
-
"Remove" one of the shared disks
Duplicate a shared disk
In the figure that follows, VMs 2 and 3 are migrated with the shared disks in the first plan, and VM 1 is migrated in the second plan. This eliminates the cyclic dependencies, but there is a disadvantage to this workaround: it duplicates shared disk 3. The solution is to remove the duplicated PV and migrate VM 1 again.
Advantage: The source VMs are not affected.
Disadvantage: One shared disk gets transferred twice, so you need to manually delete the duplicate disk and reconnect VM 3 to shared disk 3 in Red Hat OpenShift after the migration.
"Remove" a shared disk
The figure that follows shows an alternative solution: Remove the link to one of the shared disks from one source VM. Doing this breaks the cyclic dependencies. Note that in the current VMware UI, removing the link is referred to as "removing" the disk.
In this case, VM 2 and 3 are migrated with the shared disks in the first plan, but the link between VM 3 and shared disk 3 is removed. As before, VM 1 is migrated in the second plan.
Doing this breaks the cyclic dependencies, but this workaround has a drawback: VM 3 is disconnected from shared disk 3 and remains disconnected after the migration. The solution is to manually reattach shared disk 3 to VM 3 after the migration finishes.
Advantage: No disks are duplicated.
Disadvantage: You need to modify VM 3 by removing its link to shared disk 3 before the migration, and you need to manually reconnect VM 3 to shared disk 3 in OKD after the migration.
Forklift template utility for VMware VM names
You can use the template utility of Forklift to generate names for your virtual machines (VMs). Using generated names reduces the possibility of problems with your VMs after their migration to KubeVirt.
The tables that follow describe string and mathematical functions that you can use in templates that rename VMs for use with Forklift.
The function names and examples follow Go template (Sprig) syntax; the examples show representative output.

| Function | Description | Example |
|---|---|---|
| lower | Converts a string to lowercase. | {{ lower "MyVM" }} → "myvm" |
| upper | Converts a string to uppercase. | {{ upper "myvm" }} → "MYVM" |
| contains | Checks if a string contains a specific substring. | {{ contains "data" "database" }} → true |
| replace | Replaces occurrences in a string. | {{ replace "_" "-" "my_vm" }} → "my-vm" |
| trim | Removes whitespace from both ends of a string. | {{ trim "  vm  " }} → "vm" |
| trimAll | Removes specified characters from both ends of a string. | {{ trimAll "-" "--vm--" }} → "vm" |
| trimSuffix | Removes the specified suffix from a string, if it is present. | {{ trimSuffix "-old" "vm-old" }} → "vm" |
| trimPrefix | Removes the specified prefix from a string, if it is present. | {{ trimPrefix "vm-" "vm-test" }} → "test" |
| title | Converts a string to title case. | {{ title "my vm" }} → "My Vm" |
| untitle | Converts a string in title case to lowercase. | {{ untitle "Hello World" }} → "hello world" |
| repeat | Repeats a string n times. | {{ repeat 2 "vm" }} → "vmvm" |
| substr | Extracts the substring from index a (inclusive) to index b (exclusive). | {{ substr 0 2 "vm-test" }} → "vm" |
| nospace | Removes all whitespace from a string. | {{ nospace "v m 1" }} → "vm1" |
| trunc | Truncates a string to the specified length. | {{ trunc 2 "vm-test" }} → "vm" |
| initials | Extracts the first letter of each word in a string. | {{ initials "virtual machine" }} → "vm" |
| hasPrefix | Checks if a string starts with the specified prefix. | {{ hasPrefix "vm-" "vm-test" }} → true |
| hasSuffix | Checks if a string ends with the specified suffix. | {{ hasSuffix "-test" "vm-test" }} → true |
| regexReplaceAll | Replaces matches using regular expressions with submatch expansion. | {{ regexReplaceAll "a(x*)b" "-ab-axxb-" "${1}W" }} → "-W-xxW-" |
| Function | Description | Example |
|---|---|---|
| add | Sum of the numbers that follow. | {{ add 1 2 3 }} → 6 |
| add1 | Increment by 1. | {{ add1 4 }} → 5 |
| sub | Subtract the second number from the first. | {{ sub 5 3 }} → 2 |
| div | Integer division (remainder discarded). | {{ div 10 3 }} → 3 |
| mod | Modulo operation. | {{ mod 10 3 }} → 1 |
| mul | Multiply the numbers that follow. | {{ mul 2 3 4 }} → 24 |
| max | Return the largest of the following integers. | {{ max 1 5 3 }} → 5 |
| min | Return the smallest of the following integers. | {{ min 1 5 3 }} → 1 |
| floor | Round the following number down to the nearest integer. | {{ floor 3.7 }} → 3 |
| ceil | Round the following number up to the nearest integer. | {{ ceil 3.2 }} → 4 |
| round | Round the following number to the specified number of decimal places. | {{ round 3.14159 2 }} → 3.14 |
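As a hypothetical combined example, a name template might chain several of these functions. Here .VmName stands for a source VM name variable (illustrative), turning a name such as My_Test_VM into my-test-vm:

```
{{ replace "_" "-" (lower .VmName) | trunc 63 }}
```

Truncating to 63 characters keeps the result within the Kubernetes resource name length limit for DNS labels.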
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration from the command-line interface
You can use the command-line interface (CLI) to cancel an entire migration while a migration is in progress.
-
Delete the
Migration CR:

$ kubectl delete migration <migration> -n <namespace>

where:
<migration>-
Specifies the name of the
Migration CR.
Canceling the migration of specific VMs from the command-line interface
You can use the command-line interface (CLI) to cancel the migration of specific virtual machines (VMs) while a migration is in progress.
-
Add the specific VMs to the
spec.cancel block of the Migration manifest, following this example:

```
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
...
spec:
  cancel:
  - id: vm-102
  - id: vm-203
    name: rhel8-vm
EOF
```

where:
id or name
-
Specifies a VM by using the id key or the name key. The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.
-
Retrieve the
Migration custom resource (CR) to monitor the progress of the remaining VMs, following this example:

$ kubectl get migration/<migration> -n <namespace> -o yaml
Migrating from oVirt
Run your oVirt migration plan from the MTV UI or from the command-line.
Running a migration plan in the MTV UI
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Warm migration only:
-
The precopy stage starts.
A
PreFlightInspection during warm migration enables early detection of issues during guest conversion. If an issue is detected, the warm migration fails before VM shutdown, and you can adjust VM settings or skip guest conversion. Potential errors during the PreFlightInspection include the following:
-
Missing LUKS passwords
-
Unsupported OS
-
Unsupported FS
-
-
-
To set the cutover of the plan, perform the following steps:
-
In the Migration type column of the migration plan, click Schedule cutover.
-
In the Schedule cutover window, set the date and time of the cutover, and then click Set cutover.
-
-
To edit a cutover, perform the following steps:
-
In the Migration type column of the migration plan, click Edit cutover.
-
In the Edit cutover window, set the date and time of the cutover, and then click Set cutover.
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual machines tab of the Plan details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2v is not involved, so no pod is required.
-
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
Migration plan options
On the Migration plans page of the OKD web console, you can click the Options menu
beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Active only if relevant.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Set cutover: Warm migrations only. Active only if relevant. Clicking Set cutover opens the Set cutover window, which allows you to set the date and time for a cutover.
-
Edit cutover: Change the date or time of a scheduled cutover. Active only if relevant. Clicking Edit cutover opens the Edit cutover window, which allows you to set a new date or time for a cutover.
-
Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive is irreversible. However, you can duplicate an archived plan.
-
Delete: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.
The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
-
Restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Running an oVirt migration from the command-line
You can migrate from an oVirt source provider by using the command-line interface (CLI).
-
If you are using a user-defined network (UDN), note the name of its namespace as defined in KubeVirt.
-
If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster can access the backend storage.
-
Create a
Secret manifest for the source provider credentials:

```
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences:
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: ovirt
    createdForResourceType: providers
type: Opaque
stringData:
  user: <user>
  password: <password>
  insecureSkipVerify: <"true"/"false">
  cacert: |
    <ca_certificate>
  url: <api_end_point>
EOF
```

where:
ownerReferences-
Is an optional section in which you can specify a provider’s
name and uid.
<user>
-
Specifies the oVirt Engine user.
<password>-
Specifies the user’s password.
<"true"/"false">-
Specifies
"true" to skip certificate verification, and "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification makes the migration insecure, and the certificate is not required. An insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
cacert
-
Specifies the CA cert object. Enter the Engine CA certificate, unless it was replaced by a third-party certificate, in which case, enter the Engine Apache CA certificate. You can retrieve the Engine CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
<api_end_point>-
Specifies the API endpoint URL, for example,
https://<engine_host>/ovirt-engine/api.
-
Create a
Provider manifest for the source provider:

```
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: ovirt
  url: <api_end_point>
  secret:
    name: <secret>
    namespace: <namespace>
EOF
```

where:
<api_end_point>-
Specifies the URL of the API endpoint, for example,
https://<engine_host>/ovirt-engine/api.
<secret>
-
Specifies the name of the provider
Secret CR.
-
Create a
NetworkMap manifest to map the source and destination networks:

```
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod
      source:
        id: <source_network_id>
        name: <source_network_name>
    - destination:
        name: <network_attachment_definition>
        namespace: <network_attachment_definition_namespace>
        type: multus
      source:
        id: <source_network_id>
        name: <source_network_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

where:
type-
Specifies the network type. Allowed values are
pod and multus.
source
-
Specifies the source network. You can use either the
id or the name parameter to specify the source network. For id, specify the oVirt network Universal Unique ID (UUID).
<network_attachment_definition>
-
Specifies a network attachment definition (NAD) for each additional KubeVirt network.
<network_attachment_definition_namespace>-
Specifies the namespace of the KubeVirt NAD. Required only when
type is multus.
namespace
-
Specifies the namespace. If you are using a user-defined network (UDN), its namespace is defined in KubeVirt.
-
Create a
StorageMap manifest to map source and destination storage:

```
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode>
      source:
        id: <source_storage_domain>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
```

where:
<access_mode>-
Specifies the access mode. Allowed values are
ReadWriteOnce and ReadWriteMany.
<source_storage_domain>
-
Specifies the oVirt storage domain UUID. For example,
f2737930-b567-451a-9ceb-2887f6207009.
-
Optional: Create a
Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

```
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/kubev2v/hook-runner
  serviceAccount: <service account>
  playbook: |
    LS0tCi0gbm...
EOF
```

where:
<service account>-
Specifies the OKD service account. This is an optional label. Use the
serviceAccount parameter to modify any cluster resources.
playbook
-
Specifies the Base64-encoded Ansible Playbook. If you specify a playbook, the
image must include an ansible-runner. You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
-
Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <name_of_transfer_network>
  namespace: <namespace>
  annotations:
    forklift.konveyor.io/route: <IP_address>
```
-
Create a
Plan manifest for the migration:

```
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan>
  namespace: <namespace>
spec:
  preserveClusterCpuModel: true
  warm: false
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map:
    network:
      name: <network_map>
      namespace: <namespace>
    storage:
      name: <storage_map>
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms:
    - id: <source_vm1>
    - name: <source_vm2>
      hooks:
        - hook:
            namespace: <namespace>
            name: <hook>
          step: <step>
EOF
```

where:
<plan>-
Specifies the name of the
Plan CR.
preserveClusterCpuModel
-
Specifies whether a custom CPU model is used, as detailed in the note that follows.
warm-
Specifies whether the migration is warm or cold. If you specify a warm migration without specifying a value for the
cutover parameter in the Migration manifest, only the precopy stage will run.
map
-
Specifies the network map and the storage map used by the plan.
network-
Specifies a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
<network_map>-
Specifies the name of the
NetworkMapCR. storage-
Specifies a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
<storage_map>-
Specifies the name of the
StorageMapCR. vms-
Specifies the source VM. Use either the
idor thenameparameter to specify the source VMs. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails. <source_vm1>-
Specifies the oVirt VM UUID.
hooks-
Specifies up to two hooks for a migration. Each hook must run during a separate migration step. This is an optional label.
<hook>-
Specifies the name of the
HookCR. <step>-
Specifies the type of hook. Allowed values are
PreHook, before the migration plan starts, orPostHook, after the migration is complete.-
If the migrated machine is set with a custom CPU model, it is set with that CPU model in the destination cluster, regardless of the setting of preserveClusterCpuModel.
If the migrated machine is not set with a custom CPU model:
-
If preserveClusterCpuModel is set to true, Forklift checks the CPU model of the VM when it runs in oVirt, based on the cluster's configuration, and then sets the migrated VM with that CPU model.
-
If preserveClusterCpuModel is set to false, Forklift does not set a CPU type, and the VM is set with the default CPU model of the destination cluster.
-
-
-
Create a Migration manifest to run the Plan CR:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF

If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00.
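For example, a well-formed cutover timestamp can be generated with GNU date; the 30-minute delay below is an arbitrary illustration, not a recommended value:

```shell
# Sketch: produce an ISO 8601 timestamp with a UTC offset, suitable for the
# Migration CR's spec.cutover field. Requires GNU date (Linux); the
# "+30 minutes" delay is an arbitrary example.
cutover=$(date --iso-8601=seconds -d '+30 minutes')
echo "$cutover"
```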
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration from the command-line interface
You can use the command-line interface (CLI) to cancel an entire migration while a migration is in progress.
-
Delete the Migration CR:

$ kubectl delete migration <migration> -n <namespace>

where:
<migration>-
Specifies the name of the Migration CR.
Canceling the migration of specific VMs from the command-line interface
You can use the command-line interface (CLI) to cancel the migration of specific virtual machines (VMs) while a migration is in progress.
-
Add the specific VMs to the spec.cancel block of the Migration manifest, following this example:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
...
spec:
  cancel:
    - id: vm-102
    - id: vm-203
    - name: rhel8-vm
EOF

where:
id or name-
Specifies a VM by using the id key or the name key. The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.
-
Retrieve the Migration custom resource (CR) to monitor the progress of the remaining VMs, following this example:

$ kubectl get migration/<migration> -n <namespace> -o yaml
Migrating from OpenStack
Run your OpenStack migration plan from the MTV UI or from the command-line.
Running a migration plan in the MTV UI
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
+
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual machines tab of the Plan details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2v is not involved, so no pod is required.
-
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
Migration plan options
On the Migration plans page of the OKD web console, you can click the Options menu
beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Start the plan. This option is active only when the plan can be started.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive is irreversible. However, you can duplicate an archived plan.
-
Delete: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.
The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
-
Restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Running an OpenStack migration from the command-line
You can migrate from an OpenStack source provider by using the command-line interface (CLI).
-
If you are using a user-defined network (UDN), note the name of its namespace as defined in KubeVirt.
-
Create a Secret manifest for the source provider credentials:

$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences:
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: openstack
    createdForResourceType: providers
type: Opaque
stringData:
  user: <user>
  password: <password>
  insecureSkipVerify: <"true"/"false">
  domainName: <domain_name>
  projectName: <project_name>
  regionName: <region_name>
  cacert: |
    <ca_certificate>
  url: <api_end_point>
EOF

where:
ownerReferences-
Is an optional section in which you can specify a provider's name and uid.
<user>-
Specifies the OpenStack user.
<password>-
Specifies the OpenStack user's password.
<"true"/"false">-
Specifies "true" to skip certificate verification or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the migration proceeds insecurely and a certificate is not required. An insecure migration sends the transferred data over an insecure connection, and potentially sensitive data could be exposed.
Specifies the CA certificate. When this field is not set and skipping certificate verification is disabled, Forklift attempts to use the system CA.
<api_end_point>-
Specifies the API endpoint URL, for example,
https://<identity_service>/v3.
-
Create a Provider manifest for the source provider:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: openstack
  url: <api_end_point>
  secret:
    name: <secret>
    namespace: <namespace>
EOF

where:
<api_end_point>-
Specifies the URL of the API endpoint.
<secret>-
Specifies the name of the provider Secret CR.
-
Create a NetworkMap manifest to map the source and destination networks:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod
      source:
        id: <source_network_id>
        name: <source_network_name>
    - destination:
        name: <network_attachment_definition>
        namespace: <network_attachment_definition_namespace>
        type: multus
      source:
        id: <source_network_id>
        name: <source_network_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF

where:
type-
Specifies the network type. Allowed values are pod and multus.
source-
Specifies the source network. You can use either the id or the name parameter to specify the source network. For id, specify the OpenStack network UUID.
<network_attachment_definition>-
Specifies a network attachment definition (NAD) for each additional KubeVirt network.
<network_attachment_definition_namespace>-
Specifies the namespace of the KubeVirt NAD. Required only when type is multus.
namespace-
Specifies the namespace. If you are using a user-defined network (UDN), its namespace is defined in KubeVirt.
-
Create a StorageMap manifest to map source and destination storage:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode>
      source:
        id: <source_volume_type>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF

where:
<access_mode>-
Specifies the access mode. Allowed values are ReadWriteOnce and ReadWriteMany.
<source_volume_type>-
Specifies the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
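Because a malformed UUID in the StorageMap only surfaces later as a migration error, a quick format check before applying the manifest can help; the value below is the example UUID from the text:

```shell
# Sanity-check that a candidate volume_type value looks like a UUID before
# using it in the StorageMap. The value is the example UUID from the text.
vt="f2737930-b567-451a-9ceb-2887f6207009"
echo "$vt" | grep -Eq '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$' \
  && echo "looks like a UUID"
```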
-
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/kubev2v/hook-runner
  serviceAccount: <service account>
  playbook: |
    LS0tCi0gbm...
EOF

where:
<service account>-
Specifies the OKD service account. This parameter is optional. Use the serviceAccount parameter to modify any cluster resources.
playbook-
Specifies the Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner. You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
-
Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <name_of_transfer_network>
  namespace: <namespace>
  annotations:
    forklift.konveyor.io/route: <IP_address>
-
Create a Plan manifest for the migration:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan>
  namespace: <namespace>
spec:
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map:
    network:
      name: <network_map>
      namespace: <namespace>
    storage:
      name: <storage_map>
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms:
    - id: <source_vm1>
    - name: <source_vm2>
      hooks:
        - hook:
            namespace: <namespace>
            name: <hook>
          step: <step>
EOF

where:
<plan>-
Specifies the name of the Plan CR.
map-
Specifies only one network map and one storage map per plan.
network-
Specifies a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
<network_map>-
Specifies the name of the NetworkMap CR.
storage-
Specifies a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
<storage_map>-
Specifies the name of the StorageMap CR.
vms-
Specifies the source VMs. Accepts either the id or the name parameter to specify the source VMs. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
<source_vm1>-
Specifies the OpenStack VM UUID.
hooks-
Specifies up to two hooks for a VM. Each hook must run during a separate migration step. This parameter is optional.
<hook>-
Specifies the name of the Hook CR.
<step>-
Specifies the type of hook. Allowed values are PreHook, which runs before the migration plan starts, or PostHook, which runs after the migration is complete.
-
Create a Migration manifest to run the Plan CR:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF

If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00.
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration from the command-line interface
You can use the command-line interface (CLI) to cancel an entire migration while a migration is in progress.
-
Delete the Migration CR:

$ kubectl delete migration <migration> -n <namespace>

where:
<migration>-
Specifies the name of the Migration CR.
Canceling the migration of specific VMs from the command-line interface
You can use the command-line interface (CLI) to cancel the migration of specific virtual machines (VMs) while a migration is in progress.
-
Add the specific VMs to the spec.cancel block of the Migration manifest, following this example:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
...
spec:
  cancel:
    - id: vm-102
    - id: vm-203
    - name: rhel8-vm
EOF

where:
id or name-
Specifies a VM by using the id key or the name key. The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.
-
Retrieve the Migration custom resource (CR) to monitor the progress of the remaining VMs, following this example:

$ kubectl get migration/<migration> -n <namespace> -o yaml
Migrating from OVA
Run your OVA migration plan from the MTV UI or from the command-line.
OVA scope and limitations
OVA migration is validated for migrating supported guest operating systems exported from VMware vSphere. For third-party networking or security appliances, check with the vendor for native QCOW2 or KVM images.
The OVA import process uses virt-v2v to prepare guest operating systems for KVM. Vendor-supplied appliances often use proprietary bootloaders or disk layouts that are incompatible with this conversion. Moreover, converting a vendor OVA might invalidate vendor support agreements.
To ensure stability and vendor support, always prioritize importing the vendor’s native QCOW2 image by using either the KubeVirt "Upload Image" or the KubeVirt "Import from URL" workflow rather than using the Forklift OVA path.
Running a migration plan in the MTV UI
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
+
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual machines tab of the Plan details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2v is not involved, so no pod is required.
-
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
Migration plan options
On the Migration plans page of the OKD web console, you can click the Options menu
beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Start the plan. This option is active only when the plan can be started.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive is irreversible. However, you can duplicate an archived plan.
-
Delete: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.
The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
-
Restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Running an Open Virtual Appliance (OVA) migration from the command-line
You can migrate from Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the command-line interface (CLI).
-
If you are using a user-defined network (UDN), note the name of its namespace as defined in KubeVirt.
-
Create a Secret manifest for the source provider credentials:

$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences:
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: ova
    createdForResourceType: providers
type: Opaque
stringData:
  url: <nfs_server:/nfs_path>
EOF

where:
ownerReferences-
Is an optional section in which you can specify a provider's name and uid.
<nfs_server:/nfs_path>-
Specifies the nfs_server, which is the IP address or hostname of the server where the share was created, and the nfs_path, which is the path on the server where the OVA files are stored.
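To illustrate the <nfs_server:/nfs_path> format (the server and path below are made-up examples), the value splits at the first colon:

```shell
# Hypothetical values; substitute your own NFS server and export path.
url="nfs.example.com:/exports/ova"

server=${url%%:*}   # text before the first ":" -> the NFS server
path=${url#*:}      # text after the first ":"  -> the export path

echo "server=$server path=$path"
```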
-
Create a Provider manifest for the source provider:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <source_provider>
  namespace: <namespace>
spec:
  type: ova
  url: <nfs_server:/nfs_path>
  secret:
    name: <secret>
    namespace: <namespace>
EOF

where:
<nfs_server:/nfs_path>-
Specifies the nfs_server, which is the IP address or hostname of the server where the share was created, and the nfs_path, which is the path on the server where the OVA files are stored.
<secret>-
Specifies the name of the provider Secret CR.
-
Create a NetworkMap manifest to map the source and destination networks:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod
      source:
        id: <source_network_id>
    - destination:
        name: <network_attachment_definition>
        namespace: <network_attachment_definition_namespace>
        type: multus
      source:
        id: <source_network_id>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF

where:
type-
Specifies the network type. Allowed values are pod and multus.
<source_network_id>-
Specifies the OVA network Universal Unique ID (UUID).
<network_attachment_definition>-
Specifies a network attachment definition (NAD) for each additional KubeVirt network.
<network_attachment_definition_namespace>-
Specifies the namespace of the KubeVirt NAD. Required only when type is multus.
namespace-
Specifies the namespace. If you are using a user-defined network (UDN), its namespace is defined in KubeVirt.
-
Create a StorageMap manifest to map source and destination storage:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode>
      source:
        name: Dummy storage for source provider <provider_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF

where:
<access_mode>-
Specifies the access mode. Allowed values are ReadWriteOnce and ReadWriteMany.
name-
Specifies the source storage. For OVA, the StorageMap can map only a single storage, with which all the disks from the OVA are associated, to a storage class at the destination. For this reason, the storage is referred to in the UI as Dummy storage for source provider <provider_name>. In the StorageMap CR, write the phrase exactly as it appears above, replacing <provider_name> with the actual name of the provider.
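Because the phrase must match exactly, you can generate it from the provider name instead of typing it; the provider name ova-provider below is a made-up example:

```shell
# Sketch: build the exact source storage name that the OVA StorageMap
# expects. "ova-provider" is a hypothetical provider name; substitute the
# actual name of your provider.
provider_name="ova-provider"
source_storage_name="Dummy storage for source provider ${provider_name}"
echo "$source_storage_name"
```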
-
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/kubev2v/hook-runner
  serviceAccount: <service account>
  playbook: |
    LS0tCi0gbm...
EOF

where:
<service account>-
Specifies the OKD service account. This parameter is optional. Use the serviceAccount parameter to modify any cluster resources.
playbook-
Specifies the Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner. You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
-
Enter the following command to create the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <name_of_transfer_network>
  namespace: <namespace>
  annotations:
    forklift.konveyor.io/route: <IP_address>
-
Create a Plan manifest for the migration:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan>
  namespace: <namespace>
spec:
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map:
    network:
      name: <network_map>
      namespace: <namespace>
    storage:
      name: <storage_map>
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms:
    - id: <source_vm1>
    - name: <source_vm2>
      hooks:
        - hook:
            namespace: <namespace>
            name: <hook>
          step: <step>
EOF

where:
<plan>-
Specifies the name of the Plan CR.
map-
Specifies only one network map and one storage map per plan.
network-
Specifies a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
<network_map>-
Specifies the name of the NetworkMap CR.
storage-
Specifies a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
<storage_map>-
Specifies the name of the StorageMap CR.
vms-
Specifies the source VMs and their hooks. Accepts either the id or the name parameter to specify the source VMs. If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
<source_vm1>-
Specifies the OVA VM UUID.
hooks-
Specifies up to two hooks for a migration. Each hook must run during a separate migration step. This parameter is optional.
<hook>-
Specifies the name of the Hook CR.
<step>-
Specifies the type of hook. Allowed values are PreHook, which runs before the migration plan starts, or PostHook, which runs after the migration is complete.
-
Create a Migration manifest to run the Plan CR:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cutover: <optional_cutover_time>
EOF

If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00.
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration from the command-line interface
You can use the command-line interface (CLI) to cancel an entire migration while a migration is in progress.
-
Delete the Migration CR:

$ kubectl delete migration <migration> -n <namespace>

where:
<migration>-
Specifies the name of the Migration CR.
Canceling the migration of specific VMs from the command-line interface
You can use the command-line interface (CLI) to cancel the migration of specific virtual machines (VMs) while a migration is in progress.
-
Add the specific VMs to the spec.cancel block of the Migration manifest, following this example:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
...
spec:
  cancel:
    - id: vm-102
    - id: vm-203
    - name: rhel8-vm
EOF

where:
id or name-
Specifies a VM by using the id key or the name key. The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.
-
Retrieve the Migration custom resource (CR) to monitor the progress of the remaining VMs, following this example:

$ kubectl get migration/<migration> -n <namespace> -o yaml
Migrating from KubeVirt
Run your KubeVirt migration plan from the MTV UI or from the command-line.
Running a migration plan in the MTV UI
You can run a migration plan and view its progress in the OKD web console.
-
Valid migration plan.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
-
Click Start beside a migration plan to start the migration.
-
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.
-
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
-
The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
-
The link on the right opens the Virtual machines tab of the Plan details page. For each VM, the tab displays the following data:
-
The name of the VM
-
The start and end times of the migration
-
The amount of data copied
-
A progress pipeline for the VM’s migration
-
-
-
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
-
Click the Virtual machines tab.
-
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
-
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Logs are not always available. The following are common reasons for logs not being available:
-
The migration is from KubeVirt to KubeVirt. In this case,
virt-v2vis not involved, so no pod is required. -
No pod was created.
-
The pod was deleted.
-
The migration failed before running the pod.
-
-
To see the raw logs, click the Raw link.
-
To download the logs, click the Download link.
-
Migration plan options
On the Migration plans page of the OKD web console, you can click the Options menu
beside a migration plan to access the following options:
-
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
-
All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
-
The plan’s mapping on the Mappings tab.
-
The hooks listed on the Hooks tab.
-
-
Start migration: Start the migration plan. This option is active only when the plan is in a state that allows it to be started.
-
Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
-
Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
-
Migrate VMs to a different namespace.
-
Edit an archived migration plan.
-
Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
-
-
Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Archive is irreversible. However, you can duplicate an archived plan.
-
Delete: Permanently remove a migration plan. You cannot delete a running migration plan.
Delete is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before deleting it.
The results of archiving and then deleting a migration plan vary depending on whether you created the plan and its storage and network mappings by using the CLI or the UI.
-
If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
-
If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
-
Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
-
Click the name of a running migration plan to view the migration details.
-
Select one or more VMs and click Cancel.
-
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
-
Restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Running a Red Hat KubeVirt migration from the command-line
You can use a Red Hat KubeVirt provider as either a source provider or a destination provider. You can migrate from a KubeVirt source provider by using the command-line interface (CLI).
The OKD cluster version of the source provider must be 4.16 or later.
-
Create a
Secretmanifest for the source provider credentials:$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: openshift createdForResourceType: providers type: Opaque stringData: token: <token> password: <password> insecureSkipVerify: <"true"/"false"> cacert: | <ca_certificate> url: <api_end_point> EOFwhere:
ownerReferences-
Is an optional section in which you can specify a provider’s
nameanduid. <token>-
Specifies a token for a service account with
cluster-adminprivileges. If bothtokenandurlare left blank, the local OKD cluster is used. <password>-
Specifies the user password.
<"true"/"false">-
Specifies
"true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the certificate is not required, but the migration is insecure: the transferred data is sent over an insecure connection, and potentially sensitive data could be exposed. cacert-
Specifies the CA cert object. When this field is not set and skip certificate verification is disabled, Forklift attempts to use the system CA.
<api_end_point>-
Specifies the URL of the endpoint of the API server.
-
Create a
Providermanifest for the source provider:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: openshift url: <api_end_point> secret: name: <secret> namespace: <namespace> EOFwhere:
<api_end_point>-
Specifies the URL of the endpoint of the API server.
<secret>-
Specifies the name of the provider
SecretCR.
-
Create a
NetworkMapmanifest to map the source and destination networks:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod source: name: <network_name> type: pod - destination: name: <network_attachment_definition> namespace: <network_attachment_definition_namespace> type: multus source: name: <network_attachment_definition> namespace: <network_attachment_definition_namespace> type: multus provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOFwhere:
type-
Specifies the network type. Allowed values are
pod,ignored, andmultus. <network_attachment_definition>-
Specifies the network name. When the
typeismultus, use the name of the KubeVirt network attachment definition (NAD). <network_attachment_definition_namespace>-
Specifies the namespace of the KubeVirt NAD. Required only when the
typeismultus.
-
Create a
StorageMapmanifest to map source and destination storage:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> source: name: <storage_class> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOFwhere:
<access_mode>-
Specifies the access mode. Allowed values are
ReadWriteOnceandReadWriteMany.
-
Optional: Create a
Hookmanifest to run custom code on a VM during the phase specified in thePlanCR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/kubev2v/hook-runner serviceAccount: <service_account> playbook: | LS0tCi0gbm... EOFwhere:
<service_account>-
Specifies the OKD service account. This is an optional parameter. Use the
serviceAccountparameter to modify any cluster resources. playbook-
Specifies the Base64-encoded Ansible Playbook. If you specify a playbook, the
imagemust include anansible-runner.You can use the default
hook-runnerimage or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
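The Base64 value for the playbook field can be produced with standard tools. The following is a minimal sketch, assuming GNU coreutils; the playbook content and file path are hypothetical examples, not part of the product:

```shell
# Write a minimal playbook (hypothetical content for illustration)
cat > /tmp/playbook.yml << 'EOF'
- hosts: localhost
  tasks:
    - name: Print a message
      debug:
        msg: "hook ran"
EOF

# Base64-encode it on a single line for the Hook CR's playbook field
encoded=$(base64 -w0 /tmp/playbook.yml)
echo "$encoded"
```

You can then paste the value of `encoded` into the playbook field of the Hook manifest.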
-
Enter the following command to edit the network attachment definition (NAD) of the transfer network used for Forklift migrations.
You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.
Configuring the IP address enables the interface to reach the configured gateway.
$ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit> apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name_of_transfer_network> namespace: <namespace> annotations: forklift.konveyor.io/route: <IP_address> -
Create a
Planmanifest for the migration:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: network: name: <network_map> namespace: <namespace> storage: name: <storage_map> namespace: <namespace> targetNamespace: <target_namespace> vms: - name: <source_vm> namespace: <namespace> hooks: - hook: namespace: <namespace> name: <hook> step: <step> EOFwhere:
<plan>-
Specifies the name of the
PlanCR. map-
Specifies only one network map and one storage map per plan.
network-
Specifies a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
<network_map>-
Specifies the name of the
NetworkMapCR. storage-
Specifies a storage mapping, even if the VMs to be migrated do not have assigned disk images. The mapping can be empty in this case.
<storage_map>-
Specifies the name of the
StorageMapCR. hooks-
Specifies up to two hooks for a VM. Each hook must run during a separate migration step. This is an optional parameter.
<hook>-
Specifies the name of the
HookCR. <step>-
Specifies the type of hook. Allowed values are
PreHook, before the migration plan starts, orPostHook, after the migration is complete.
-
Create a
Migrationmanifest to run thePlanCR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOFIf you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example,
2024-04-04T01:23:45.678+09:00.
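A timestamp in the required format can be generated with GNU date. The following is a minimal sketch, assuming a cutover two hours from now, expressed in UTC:

```shell
# Generate an ISO 8601 cutover time with an explicit UTC offset,
# suitable for the cutover field of the Migration CR (GNU date)
cutover=$(date -u -d '+2 hours' '+%Y-%m-%dT%H:%M:%S%:z')
echo "$cutover"
```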
Running a Red Hat KubeVirt live migration from the command-line
You can perform a live migration by using the command-line interface (CLI). The procedure for live migration is identical to the procedure for other migrations between KubeVirt clusters except for the addition of the type label in the Plan CR. For a live migration, the type label must be set to live.
Ensure that you meet the prerequisites for live migration. For more information, see KubeVirt live migration prerequisites.
-
Create a
Secretmanifest for the source provider credentials:$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: openshift createdForResourceType: providers type: Opaque stringData: token: <token> password: <password> insecureSkipVerify: <"true"/"false"> cacert: | <ca_certificate> url: <api_end_point> EOFwhere:
ownerReferences-
Is an optional section in which you can specify a provider’s
nameanduid. <token>-
Specifies a token for a service account with
cluster-adminprivileges. If bothtokenandurlare left blank, the local OKD cluster is used. <password>-
Specifies the user password.
<"true"/"false">-
Specifies
"true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the certificate is not required, but the migration is insecure: the transferred data is sent over an insecure connection, and potentially sensitive data could be exposed. cacert-
Specifies the CA cert object. When this field is not set and skip certificate verification is disabled, Forklift attempts to use the system CA.
<api_end_point>-
Specifies the URL of the endpoint of the API server.
-
Create a
Providermanifest for the source provider:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: openshift url: <api_end_point> secret: name: <secret> namespace: <namespace> EOFwhere:
<api_end_point>-
Specifies the URL of the endpoint of the API server.
<secret>-
Specifies the name of the provider
SecretCR.
-
Create a
NetworkMapmanifest to map the source and destination networks:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod source: name: <network_name> type: pod - destination: name: <network_attachment_definition> namespace: <network_attachment_definition_namespace> type: multus source: name: <network_attachment_definition> namespace: <network_attachment_definition_namespace> type: multus provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOFwhere:
type-
Specifies the network type. Allowed values are
pod,ignored, andmultus. <network_attachment_definition>-
Specifies the network name. When the
typeismultus, use the name of the KubeVirt network attachment definition (NAD). <network_attachment_definition_namespace>-
Specifies the namespace of the KubeVirt NAD. Required only when the
typeismultus.
-
Create a
StorageMapmanifest to map source and destination storage:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> source: name: <storage_class> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOFwhere:
<access_mode>-
Specifies the access mode. Allowed values are
ReadWriteOnceandReadWriteMany.
-
Optional: Create a
Hookmanifest to run custom code on a VM during the phase specified in thePlanCR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/kubev2v/hook-runner serviceAccount: <service_account> playbook: | LS0tCi0gbm... EOFwhere:
serviceAccount-
Specifies the OKD service account. This is an optional parameter. Use the
serviceAccountparameter to modify any cluster resources. playbook-
Specifies the Base64-encoded Ansible Playbook. If you specify a playbook, the
imagemust include anansible-runner.You can use the default
hook-runnerimage or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
-
Create a
Planmanifest for the migration:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: network: name: <network_map> namespace: <namespace> storage: name: <storage_map> namespace: <namespace> type: live targetNamespace: <target_namespace> vms: - name: <source_vm> namespace: <namespace> hooks: - hook: namespace: <namespace> name: <hook> step: <step> EOFwhere:
<plan>-
Specifies the name of the
PlanCR. map-
Specifies only one network map and one storage map per plan.
network-
Specifies a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
<network_map>-
Specifies the name of the
NetworkMapCR. storage-
Specifies a storage mapping, even if the VMs to be migrated do not have assigned disk images. The mapping can be empty in this case.
<storage_map>-
Specifies the name of the
StorageMapCR. type-
Specifies the type of migration. Must be set to
live. hooks-
Specifies up to two hooks for a VM. Each hook must run during a separate migration step. This is an optional parameter.
<hook>-
Specifies the name of the
HookCR. <step>-
Specifies the hook step. Allowed values are
PreHook, before the migration plan starts, orPostHook, after the migration is complete.
-
Create a
Migrationmanifest to run thePlanCR:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> EOFThe
cutoverfield is irrelevant for live migrations, so it is not included in theMigrationCR of this procedure.
Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration from the command-line interface
You can use the command-line interface (CLI) to cancel an entire migration while a migration is in progress.
-
Delete the
MigrationCR:$ kubectl delete migration <migration> -n <namespace>where:
<migration>-
Specifies the name of the
MigrationCR.
Canceling the migration of specific VMs from the command-line interface
You can use the command-line interface (CLI) to cancel the migration of specific virtual machines (VMs) while a migration is in progress.
-
Add the specific VMs to the
spec.cancelblock of theMigrationmanifest, following this example:$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <migration> namespace: <namespace> ... spec: cancel: - id: vm-102 - id: vm-203 name: rhel8-vm EOFwhere:
idorname-
Specifies a VM by using the
idkey or thenamekey.The value of the
idkey is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.
-
Retrieve the
Migrationcustom resource (CR) to monitor the progress of the remaining VMs, following this example:$ kubectl get migration/<migration> -n <namespace> -o yaml
Advanced migration options
Perform advanced migration operations, such as changing precopy snapshot intervals for warm migration, creating custom rules for validation, or adding hooks to your migration plan.
Changing precopy intervals for warm migration
You can change the snapshot interval by patching the ForkliftController custom resource (CR).
-
Patch the
ForkliftControllerCR:$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <interval_in_minutes>}}' --type=mergewhere:
<interval_in_minutes>-
Specifies the precopy interval in minutes. The default value is
60.You do not need to restart the
forklift-controllerpod.
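Because the patch payload is plain JSON, you can build it from a variable before applying it. The following is a minimal sketch; the interval value and controller name are placeholders:

```shell
# Build the merge-patch payload for the precopy interval (in minutes)
interval=30
patch=$(printf '{"spec": {"controller_precopy_interval": %d}}' "$interval")
echo "$patch"

# Then apply it, for example:
# kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift \
#   -p "$patch" --type=merge
```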
Creating custom rules for the Validation service
The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.
You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.
About Rego files
Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.
Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.
The following .rego file example checks for distributed resource scheduling enabled (has_drs_enabled) in the cluster of a VMware VM:
package io.konveyor.forklift.vmware
has_drs_enabled {
input.host.cluster.drsEnabled
}
concerns[flag] {
has_drs_enabled
flag := {
"category": "Information",
"label": "VM running in a DRS-enabled cluster",
"assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
}
}where:
package io.konveyor.forklift.vmware-
Is the package namespace in this example. The package namespaces are
io.konveyor.forklift.vmwarefor VMware andio.konveyor.forklift.ovirtfor oVirt. input.host.cluster.drsEnabled-
Is the query parameter in this example. Query parameters are based on the
inputkey of theValidationservice JSON.
For information about Rego queries and rules and examples of Rego rules, see the following resources:
-
Policy Language in the Open Policy Agent documentation.
-
OPA Rules Files in the
forkliftdocumentation. -
VMware .rego files in the
forkliftrepository.
Checking the default validation rules
Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.
Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.
-
Connect to the terminal of the
Validationpod:$ kubectl rsh <validation_pod> -
Go to the OPA policies directory for your provider:
$ cd /usr/share/opa/policies/io/konveyor/forklift/<provider>where:
<provider>-
Specifies the provider. Valid options:
vmwareorovirt.
-
Search for the default policies:
$ grep -R "default" *
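Before deploying a custom rule, you can check mechanically for a redefined default by comparing default names across rule files. A minimal sketch with hypothetical file names and contents:

```shell
# Create two sample rule files that redefine the same default (hypothetical)
dir=$(mktemp -d)
printf 'default valid_input = false\n' > "$dir/default_rule.rego"
printf 'default valid_input = true\n'  > "$dir/custom_rule.rego"

# Print any default name that is defined more than once
dups=$(grep -h '^default ' "$dir"/*.rego | awk '{print $2}' | sort | uniq -d)
echo "$dups"
```

Any name printed by the last command is a default that your custom rule redefines, which would prevent the Validation service from starting.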
Creating validation rules
To ensure that your custom validation rules persist across pod restarts, scaling events, and Forklift upgrades, deploy the rules by using a ConfigMap. By default, the Validation service reads validation rules from a ConfigMap named forklift-validation-config in the konveyor-forklift namespace. You can optionally customize the ConfigMap name by updating the ForkliftController CR.
Validation rules are based on VM attributes collected by the Provider Inventory service. The Provider Inventory service presents provider-specific VM properties as simplified attributes for the validation engine. You can then create Rego queries based on the attributes, and add the queries to the ConfigMap to apply validation rules across different source environments.
For example, in a validation rule that checks if a VMware VM has NUMA node affinity configured, you have these elements:
-
VMware API path:
MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"]. -
Provider Inventoryservice attribute with a list value:Inventory Attribute Example Value numa.nodeAffinity["True"]or[](empty list if not configured) -
Rego query based on the attribute:
count(input.numaNodeAffinity) != 0For information about Rego files, see About Rego files.
-
Create a
ConfigMapnamedforklift-validation-configin the konveyor-forklift namespace:If you want to use a different ConfigMap name, you must also configure the
forklift-controllerCR. For more information, see step 2.Example:
$ cat << EOF | kubectl apply -f - apiVersion: v1 kind: ConfigMap metadata: name: forklift-validation-config namespace: konveyor-forklift data: vmware_multiple_disks.rego: |- package <provider_package> has_multiple_disks { count(input.disks) > 1 } concerns[flag] { has_multiple_disks flag := { "category": "<Information>", "label": "Multiple disks detected", "assessment": "Multiple disks detected on this VM." } } EOF-
<provider_package>: The provider package name. Valid values areio.konveyor.forklift.vmwarefor VMware andio.konveyor.forklift.ovirtfor oVirt. -
count(input.disks): Your Rego query. -
category: Valid values areCritical,Warning, andInformation.
-
-
Optional: If you are using a custom ConfigMap name instead of the default
forklift-validation-config, add thevalidation_configmap_nameparameter to thespecsection of theForkliftControllerCR:spec: ... validation_configmap_name: <custom_configmap_name> ...-
Replace
<custom_configmap_name>with the name of your ConfigMap.
-
-
Stop the
Validationpod by scaling theforklift-validationdeployment to0:$ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-validation -
Start the
Validationpod by scaling theforklift-validationdeployment to1:$ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-validation -
Check the
Validationpod log to verify that the pod started:$ kubectl logs -f <validation_pod>If the custom rule conflicts with a default rule, the
Validationpod does not start. -
Remove the source provider:
$ kubectl delete provider <provider> -n konveyor-forklift -
Add the source provider to apply the new rule. For information about adding a source provider, see the sections about adding a source provider in Chapters 10-14 of Planning your migration to Red Hat OpenShift Virtualization:
$ cat << EOF | kubectl apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <provider> namespace: konveyor-forklift spec: type: <provider_type> url: <api_end_point> secret: name: <secret> namespace: konveyor-forklift EOF-
<provider_type>: Valid values areovirt,vsphere, andopenstack. -
<api_end_point>: The API endpoint URL, for example,https://<vCenter_host>/sdkfor vSphere,https://<engine_host>/ovirt-engine/apifor oVirt, orhttps://<identity_service>/v3for OpenStack. -
<secret>: The name of the providerSecretCR.
-
Update the inventory rules version after creating a custom rule so that the Provider Inventory service detects the changes and validates the VMs. For more information, see Updating the inventory rules version.
Updating the inventory rules version
You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.
The rules version is recorded in a rules_version.rego file for each provider.
-
Retrieve the current rules version:
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_versionThe output looks like the following example:
{ "result": { "rules_version": 5 } } -
Connect to the terminal of the
Validationpod:$ kubectl rsh <validation_pod> -
Update the rules version in the
/usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.regofile. -
Log out of the
Validationpod terminal. -
Verify the updated rules version:
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_versionThe output looks like the following example:
{ "result": { "rules_version": 6 } }
Retrieving the Inventory service JSON
You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.
You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.
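For example, given a response shaped like the example output later in this section, you can pull out the attribute that a rule such as input.snapshot.kind evaluates. A minimal sketch using sed; the JSON fragment is abridged for illustration:

```shell
# Abridged Inventory response (structure follows the example output in this section)
cat > /tmp/vm.json << 'EOF'
{ "input": { "snapshot": { "kind": "VirtualMachineSnapshot", "id": "snapshot-3034" } } }
EOF

# Extract the value that the query parameter input.snapshot.kind refers to
kind=$(sed -n 's/.*"snapshot": { "kind": "\([^"]*\)".*/\1/p' /tmp/vm.json)
echo "$kind"
```

In practice, a JSON processor such as jq is more robust than sed for this kind of extraction.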
-
Retrieve the routes for the project:
$ oc get route -n konveyor-forklift
Retrieve the
Inventoryservice route:$ kubectl get route <inventory_service> -n konveyor-forklift -
Retrieve the access token:
$ TOKEN=$(oc whoami -t) -
Trigger an HTTP GET request, for example, by using curl:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k -
Retrieve the
UUIDof a provider:$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -kwhere:
<provider>-
Specifies the type of provider. Allowed values are
vsphere,ovirt, andopenstack.
-
Retrieve the VMs of a provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k -
Retrieve the details of a VM:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -kExample output{ "input": { "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431", "id": "vm-431", "parent": { "kind": "Folder", "id": "group-v22" }, "revision": 1, "name": "iscsi-target", "revisionValidated": 1, "isTemplate": false, "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-33" } ], "disks": [ { "key": 2000, "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk", "datastore": { "kind": "Datastore", "id": "datastore-63" }, "capacity": 17179869184, "shared": false, "rdm": false }, { "key": 2001, "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk", "datastore": { "kind": "Datastore", "id": "datastore-63" }, "capacity": 10737418240, "shared": false, "rdm": false } ], "concerns": [], "policyVersion": 5, "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49", "firmware": "bios", "powerState": "poweredOn", "connectionState": "connected", "snapshot": { "kind": "VirtualMachineSnapshot", "id": "snapshot-3034" }, "changeTrackingEnabled": false, "cpuAffinity": [ 0, 2 ], "cpuHotAddEnabled": true, "cpuHotRemoveEnabled": false, "memoryHotAddEnabled": false, "faultToleranceEnabled": false, "cpuCount": 2, "coresPerSocket": 1, "memoryMB": 2048, "guestName": "Red Hat Enterprise Linux 7 (64-bit)", "balloonedMemory": 0, "ipAddress": "10.19.2.96", "storageUsed": 30436770129, "numaNodeAffinity": [ "0", "1" ], "devices": [ { "kind": "RealUSBController" } ], "host": { "id": "host-29", "parent": { "kind": "Cluster", "id": "domain-c26" }, "revision": 1, "name": "IP address or host name of the vCenter host or oVirt Engine host", "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29", "status": "green", "inMaintenance": false, "managementServerIp": "10.19.2.96", "thumbprint": <thumbprint>, "timezone": "UTC", "cpuSockets": 2, "cpuCores": 16, 
"productName": "VMware ESXi", "productVersion": "6.5.0", "networking": { "pNICs": [ { "key": "key-vim.host.PhysicalNic-vmnic0", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic1", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic2", "linkSpeed": 10000 }, { "key": "key-vim.host.PhysicalNic-vmnic3", "linkSpeed": 10000 } ], "vNICs": [ { "key": "key-vim.host.VirtualNic-vmk2", "portGroup": "VM_Migration", "dPortGroup": "", "ipAddress": "192.168.79.13", "subnetMask": "255.255.255.0", "mtu": 9000 }, { "key": "key-vim.host.VirtualNic-vmk0", "portGroup": "Management Network", "dPortGroup": "", "ipAddress": "10.19.2.13", "subnetMask": "255.255.255.128", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk1", "portGroup": "Storage Network", "dPortGroup": "", "ipAddress": "172.31.2.13", "subnetMask": "255.255.0.0", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk3", "portGroup": "", "dPortGroup": "dvportgroup-48", "ipAddress": "192.168.61.13", "subnetMask": "255.255.255.0", "mtu": 1500 }, { "key": "key-vim.host.VirtualNic-vmk4", "portGroup": "VM_DHCP_Network", "dPortGroup": "", "ipAddress": "10.19.2.231", "subnetMask": "255.255.255.128", "mtu": 1500 } ], "portGroups": [ { "key": "key-vim.host.PortGroup-VM Network", "name": "VM Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0" }, { "key": "key-vim.host.PortGroup-Management Network", "name": "Management Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0" }, { "key": "key-vim.host.PortGroup-VM_10G_Network", "name": "VM_10G_Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-VM_Storage", "name": "VM_Storage", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-VM_DHCP_Network", "name": "VM_DHCP_Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": "key-vim.host.PortGroup-Storage Network", "name": "Storage Network", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1" }, { "key": 
"key-vim.host.PortGroup-VM_Isolated_67", "name": "VM_Isolated_67", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2" }, { "key": "key-vim.host.PortGroup-VM_Migration", "name": "VM_Migration", "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2" } ], "switches": [ { "key": "key-vim.host.VirtualSwitch-vSwitch0", "name": "vSwitch0", "portGroups": [ "key-vim.host.PortGroup-VM Network", "key-vim.host.PortGroup-Management Network" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic4" ] }, { "key": "key-vim.host.VirtualSwitch-vSwitch1", "name": "vSwitch1", "portGroups": [ "key-vim.host.PortGroup-VM_10G_Network", "key-vim.host.PortGroup-VM_Storage", "key-vim.host.PortGroup-VM_DHCP_Network", "key-vim.host.PortGroup-Storage Network" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic2", "key-vim.host.PhysicalNic-vmnic0" ] }, { "key": "key-vim.host.VirtualSwitch-vSwitch2", "name": "vSwitch2", "portGroups": [ "key-vim.host.PortGroup-VM_Isolated_67", "key-vim.host.PortGroup-VM_Migration" ], "pNICs": [ "key-vim.host.PhysicalNic-vmnic3", "key-vim.host.PhysicalNic-vmnic1" ] } ] }, "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-34" }, { "kind": "Network", "id": "network-57" }, { "kind": "Network", "id": "network-33" }, { "kind": "Network", "id": "dvportgroup-47" } ], "datastores": [ { "kind": "Datastore", "id": "datastore-35" }, { "kind": "Datastore", "id": "datastore-63" } ], "vms": null, "networkAdapters": [], "cluster": { "id": "domain-c26", "parent": { "kind": "Folder", "id": "group-h23" }, "revision": 1, "name": "mycluster", "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26", "folder": "group-h23", "networks": [ { "kind": "Network", "id": "network-31" }, { "kind": "Network", "id": "network-34" }, { "kind": "Network", "id": "network-57" }, { "kind": "Network", "id": "network-33" }, { "kind": "Network", "id": "dvportgroup-47" } ], "datastores": [ { "kind": "Datastore", "id": "datastore-35" }, { "kind": 
"Datastore", "id": "datastore-63" } ], "hosts": [ { "kind": "Host", "id": "host-44" }, { "kind": "Host", "id": "host-29" } ], "dasEnabled": false, "dasVms": [], "drsEnabled": true, "drsBehavior": "fullyAutomated", "drsVms": [], "datacenter": null } } } }
About hooks for Forklift migration plans
You can add hooks to a Forklift migration plan to perform automated operations on a VM, either before or after you migrate it.
You can add hooks to Forklift migration plans by using either the Forklift CLI or the Forklift user interface, which is located in the OKD web console.
- Hook types
-
-
Pre-migration hooks: Hooks that perform operations on a VM that is located on a provider. This prepares the VM for migration.
-
Post-migration hooks: Hooks that perform operations on a VM that has migrated to KubeVirt.
-
- Hook configuration
-
-
Default hook image: The default hook image for a Forklift hook is
quay.io/kubev2v/hook-runner. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary. -
Hook execution: An Ansible Playbook that is provided as part of a migration hook is mounted into the hook container as a
ConfigMap. The hook container is run as a job on the relevant cluster in the openshift-mtv namespace. When you add a hook, you must specify the name of the hook and whether it is a pre-migration hook or a post-migration hook. -
Service account: You can optionally specify a service account when adding a hook. When you add a hook by using the OKD web console, you can specify the service account name in the Service account field when creating or editing the migration plan. When you add a hook by using the CLI, you can specify the service account in the
Hook CR. If you specify a service account, it must have the appropriate RBAC permissions to manage cluster resources and at least write access for the openshift-mtv namespace where hooks execute.
-
| For a hook to run on a VM, the VM must be started and available using SSH. |
Migration hook workflow
The illustration that follows shows the general process of using a migration hook.
Process:
-
Input your Ansible hook and credentials.
-
Input an Ansible hook image to the Forklift controller using either the UI or the CLI.
-
In the UI, specify the
ansible-runner and enter the playbook.yml that contains the hook. -
In the CLI, input the hook image, which specifies the playbook that runs the hook.
-
-
If you need additional data to run the playbook inside the pod, such as SSH data, create a Secret that contains credentials for the VM. The Secret is not mounted to the pod, but is called by the playbook.
This Secret is not the same as the
Secret CR that contains the credentials of your source provider.
-
-
The Forklift controller creates the
ConfigMap, which contains:-
workload.yml, which contains information about the VMs. -
playbook.yml, the raw string playbook you want to run. -
plan.yml, which is the Plan CR. The
ConfigMap contains the name of the VM and instructs the playbook what to do.
-
-
The Forklift controller creates a job that starts the user-specified image.
-
Mounts the
ConfigMap to the container. The Ansible hook imports the Secret that the user previously entered.
-
-
The job runs a pre-migration hook or a post-migration hook as follows:
-
For a pre-migration hook, the job logs into the VMs on the source provider using SSH and runs the hook.
-
For a post-migration hook, the job logs into the VMs on KubeVirt using SSH and runs the hook.
-
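The generated ConfigMap described above can be sketched as follows. This is an illustration only: the ConfigMap name is generated by the Forklift controller, and the data values are abbreviated placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <hook_configmap> # name is generated by the Forklift controller
  namespace: openshift-mtv
data:
  workload.yml: |
    # information about the VMs
  playbook.yml: |
    # the raw string playbook you provided
  plan.yml: |
    # the Plan CR for the migration
```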
Adding a migration hook to a migration plan using the OKD web console
You can add a migration hook to an existing migration plan by using the OKD web console. For example, you can create a hook to install the cloud-init service on a VM and write a file before migration. You can optionally specify a service account for the hook directly in the web console.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan.
-
Migration plan.
-
Migration hook file, whose contents you copy and paste into the web console.
-
File containing the
Secret for the source provider. -
(Optional) OKD service account called by the hook. The service account must have at least write access for the
openshift-mtv namespace where hooks execute. For information about creating a service account, see Understanding and creating service accounts. -
SSH access to the VMs that you want to migrate, with the public key installed on the VMs.
-
VMs running on Microsoft Server only: Remote Execution enabled.
-
In the OKD web console, click Migration for Virtualization > Migration plans and then click the migration plan you want to add the hook to.
-
Click Hooks.
-
For a pre-migration hook, perform the following steps:
-
In the Pre migration hook section, toggle the Enable hook switch to Enable pre migration hook.
-
Enter the Hook runner image. If you are specifying the
spec.playbook, you need to use an image that has an ansible-runner. -
Optional: Enter the Service account name. The service account must have the necessary RBAC permissions to manage cluster resources and at least write access for the
openshift-mtv namespace where hooks execute. -
Paste your hook as a YAML file in the Ansible playbook text box.
-
-
For a post-migration hook, perform the following steps:
-
In the Post migration hook section, toggle the Enable hook switch to Enable post migration hook.
-
Enter the Hook runner image. If you are specifying the
spec.playbook, you need to use an image that has an ansible-runner. -
Optional: Enter the Service account name. The service account must have the necessary RBAC permissions to manage cluster resources and at least write access for the
openshift-mtv namespace where hooks execute. -
Paste your hook as a YAML file in the Ansible playbook text box.
-
-
At the top of the tab, click Update hooks.
The following example hook ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file.
- name: Main
  hosts: localhost
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
    - k8s_info:
        api_version: v1
        kind: Secret
        name: privkey
        namespace: openshift-mtv
      register: ssh_credentials
    - name: Ensure SSH directory exists
      file:
        path: ~/.ssh
        state: directory
        mode: 0750
    - name: Create SSH key
      copy:
        dest: ~/.ssh/id_rsa
        content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
        mode: 0600
    - add_host:
        name: "{{ vm.ipaddress }}" # ALT "{{ vm.guestnetworks[2].ip }}"
        ansible_user: root
        groups: vms
- hosts: vms
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
    - name: Stop MariaDB
      service:
        name: mariadb
        state: stopped
    - name: Create Test File
      copy:
        dest: /premigration.txt
        content: "Migration from {{ provider.source.name }} of {{ vm.vm1.vm0.id }} has finished\n"
        mode: 0644
Adding a migration hook to a migration plan using the CLI
You can use a Hook CR to add a pre-migration hook or a post-migration hook to an existing migration plan by using the Forklift CLI. For example, you can create a Hook custom resource (CR) to install the cloud-init service on a VM and write a file before migration.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan. Each hook needs its own Hook CR, but a Plan CR contains data for all the hooks it uses. You can retrieve additional information stored in a secret or in a ConfigMap by using a k8s module.
-
Migration plan.
-
Migration hook image or the playbook containing the hook image.
-
File containing the Secret for the source provider.
-
(Optional) OKD service account called by the hook. The service account must have at least write access for the
openshift-mtv namespace where hooks execute. For information about creating a service account, see Understanding and creating service accounts. -
SSH access to the VMs that you want to migrate, with the public key installed on the VMs.
-
VMs running on Microsoft Server only: Remote Execution enabled.
-
If needed, create a Secret with an SSH private key for the VM.
-
Choose an existing key or generate a key pair.
-
Install the public key on the VM.
-
Encode the private key in the Secret to base64.
apiVersion: v1
data:
  key: VGhpcyB3YXMgZ2Vu...
kind: Secret
metadata:
  name: ssh-credentials
  namespace: openshift-mtv
type: Opaque
-
-
Encode your playbook by concatenating the playbook file and piping it to base64, for example:
$ cat playbook.yml | base64 -w0 -
Create a Hook CR:
$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/kubev2v/hook-runner
  serviceAccount: <service_account>
  playbook: |
    <playbook>
EOF

where:
- <service_account>
-
Optional: Specifies the OKD service account. The
serviceAccount must be provided if you want to manipulate any resources of the cluster.

You can set the service account directly in the OKD web console when creating or editing a migration plan by using the Service account field. See Adding a migration hook using the web console for the simplified UI workflow. The CLI/YAML method documented here is still supported for advanced use cases and automation.
- <playbook>
-
Specifies the Base64-encoded Ansible Playbook to use. For example,
LS0tCi0gbm.... If you specify a playbook, the image must include an ansible-runner. You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

To decode an attached playbook, retrieve the resource with custom output and pipe it to base64. For example:
$ oc get -n konveyor-forklift hook playbook -o \
  go-template='{{ .spec.playbook }}' | base64 -d
-
In the
Plan CR of the migration, for each VM, add the following section to the end of the CR:

vms:
  - id: <vm_id>
    hooks:
      - hook:
          namespace: <namespace>
          name: <name_of_hook>
        step: <type_of_hook>

where:
- <type_of_hook>
-
Specifies the type of hook. Options are
PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration. For a PreHook to run on a VM, the VM must be started and available using SSH.

The following example hook ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file.
- name: Main
  hosts: localhost
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
    - k8s_info:
        api_version: v1
        kind: Secret
        name: privkey
        namespace: openshift-mtv
      register: ssh_credentials
    - name: Ensure SSH directory exists
      file:
        path: ~/.ssh
        state: directory
        mode: 0750
    - name: Create SSH key
      copy:
        dest: ~/.ssh/id_rsa
        content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
        mode: 0600
    - add_host:
        name: "{{ vm.ipaddress }}" # ALT "{{ vm.guestnetworks[2].ip }}"
        ansible_user: root
        groups: vms
- hosts: vms
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
    - name: Stop MariaDB
      service:
        name: mariadb
        state: stopped
    - name: Create Test File
      copy:
        dest: /premigration.txt
        content: "Migration from {{ provider.source.name }} of {{ vm.vm1.vm0.id }} has finished\n"
        mode: 0644
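If you want to sanity-check the encoding step used earlier in this procedure, the following local sketch confirms that the base64 roundtrip is lossless. The throwaway playbook.yml written to the current directory is an assumption for illustration only:

```shell
# Create a tiny stand-in playbook for demonstration purposes
printf -- '- name: Main\n  hosts: localhost\n' > playbook.yml

# Encode without line wrapping, as used for the Hook CR playbook field
encoded=$(base64 -w0 < playbook.yml)
echo "$encoded"

# Decoding must reproduce the original file exactly
echo "$encoded" | base64 -d | diff - playbook.yml && echo "roundtrip OK"
```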
About user defined networks
Beginning with Forklift 2.10, you can use a user-defined network (UDN) as your default network for migrations from all providers, except for KubeVirt. This flexibility allows you to migrate virtual machines (VMs) to KubeVirt more consistently.
Forklift has been redesigned to make it easy for you to migrate VMs to UDN namespaces. Once you configure your UDN in KubeVirt, you can specify it as your default network in the migration plan mapping. Forklift is now able to distinguish a UDN from a conventional pod network.
You can use this feature for creating migration plans by using the OKD web console or by using the Forklift command-line interface. The procedures for creating migration plans have been updated to include the new feature.
Scheduling target VMs
By default, KubeVirt assigns the destination nodes of virtual machines (VMs) during migration. However, you can use the target VM scheduling feature to define the destination nodes and to set specific conditions that control when the VMs are powered on.
About scheduling target VMs
Starting with Forklift 2.10, you can use the target VM scheduling feature to direct Forklift to migrate virtual machines (VMs) to specific nodes of KubeVirt as well as to schedule when to power on the VMs. Using the feature, you can design and enforce rules that you set using either the UI or command-line interface.
Previously, when you migrated VMs to KubeVirt, KubeVirt automatically determined the node the VMs would be migrated to. Although this served many customers' needs, there are certain situations in which it is useful to be able to specify the target node of a VM or the conditions under which the VM is powered on, regardless of the type of migration involved.
Use cases
Target VM scheduling is designed to help you with the following use cases, among others:
-
Business continuity and disaster recovery: You can use scheduling rules to migrate critical VMs to several sites, in different time zones or otherwise geographically separated by significant distances, and to power them on there. This allows you to deploy these VMs as strategic assets for business continuity purposes, such as disaster recovery.
-
Working with fluctuating demands: In situations where demand for a service might vary significantly, rules for scheduling when to spin up VMs based upon demand allow you to use your resources more efficiently.
Target VM scheduling prerequisites
Migrations that use target VM scheduling require the following prerequisites, in addition to the prerequisites for your source provider:
-
Forklift 2.10 or later.
-
Version of KubeVirt that is compatible with your version of Forklift. For Forklift 2.10, the compatible versions of KubeVirt are 4.18, 4.19, and 4.20 only.
-
cluster-admin or equivalent security privileges that allow managing VirtualMachineInstance objects and associated Kubernetes scheduling primitives.
Target VM scheduling options
You can use the following options to schedule when your target VMs are powered on:
-
Node Selector rule: This is the simplest rule. You define a set of mandatory, exact-match key-value label pairs that the target node must possess. If no node in the cluster has all the specified labels, the VM is not scheduled and it remains in a
Pending state until a node that matches the key-value label pairs becomes available. -
Affinity and Anti-Affinity rules: Node Affinity rules let you schedule VMs to run on selected nodes or workloads (pods). Node Anti-affinity rules let you prevent VMs from being scheduled to run on selected workloads (pods).
Node Affinity and Node Anti-Affinity rules offer more flexible placement control than rigid Node Selector rules, because they support conditionals such as
In and NotIn. Additionally, Affinity rules and Anti-Affinity rules allow you to include both hard and soft conditions in the same rule. A hard condition is a requirement, and a soft condition is a preference.
| Affinity rules are supported by Forklift at both the node and the workload (pod) levels, but Anti-Affinity rules are supported at the workload (pod) level only. |
-
Custom Scheduler Name: If your KubeVirt environment uses a secondary or specialized scheduler, in addition to the default
kube-scheduler, to handle specific workload types, you can instruct Forklift to apply this custom scheduler name to the target VM’s manifest. This directs the VM to use the specialized logic designed for that workload. This feature is implemented by using the VM target label feature.
By integrating any of these three types of controls into your migration plan, you ensure that the complex scheduling logic required for modern applications is defined upfront, preventing post-migration performance degradation or unexpected scheduling errors.
| Any scheduling rule applied in a migration plan applies to all VMs in it. |
Target VM scheduling in Forklift derives from Kubernetes’s support for Assigning Pods to Nodes.
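To illustrate hard and soft conditions in a single Affinity rule, the following fragment uses the standard Kubernetes node affinity schema. The zone and disktype labels are hypothetical examples, not values from this document:

```yaml
targetAffinity:
  nodeAffinity:
    # Hard condition (requirement): only nodes in zone-a are eligible.
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - zone-a
    # Soft condition (preference): favor SSD-backed nodes when available.
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
            - key: disktype
              operator: In
              values:
                - ssd
```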
Scheduling target VMs from the command-line interface
You can use the command-line interface (CLI) to tell Forklift to migrate virtual machines (VMs) to specific nodes or workloads (pods) of KubeVirt as well as to schedule when the VMs are powered on.
The Forklift CLI supports the following scheduling-related labels, all of which are added to the Plan CR:
-
targetAffinity: Implements placement policies such as co-locating related workloads or, for disaster recovery, ensuring that specific VMs are migrated to different nodes. This type of label uses hard (requirements) and soft (preferences) conditions combined with logical operators, such as and, or, and not, to provide greater flexibility than the targetNodeSelector label described below. -
targetLabels: Applies organizational or operational labels to migrated VMs for identification and management. -
targetNodeSelector: Ensures VMs are scheduled on nodes that are an exact match for key-value pairs you create. This type of label is often used for nodes with special capabilities, such as GPU nodes or storage nodes.
| System-managed labels, such as migration, plan, VM ID, or application labels, override any user-defined labels. |
Migrations that use target VM scheduling require the following prerequisites, in addition to the prerequisites for your source provider:
-
Forklift 2.10 or later.
-
Version of KubeVirt that is compatible with your version of Forklift. For Forklift 2.10, the compatible versions of KubeVirt are 4.18, 4.19, and 4.20 only.
-
cluster-admin or equivalent security privileges that allow managing VirtualMachineInstance objects and associated Kubernetes scheduling primitives.
-
Create custom resources (CRs) for the migration according to the procedure for the provider.
-
In the
Plan CR, add the following labels before spec.targetNamespace. All are optional.

...
  targetAffinity: <affinity_rule> # The affinity rule, which can be quite complex, is entered in the lines that follow this label. See the example that follows.
  targetLabels:
    label: <label>
  targetNodeSelector:
    <key>: <value>
  targetNamespace: <target_namespace>
...

Example:
The following scheduling rule migrates the VMs in the plan to different nodes for disaster recovery:
targetLabels:
label: test1
targetAffinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: label
operator: In
values:
- test1
topologyKey: kubernetes.io/hostname

As a result of the preceding rule, the VMs are migrated according to the resulting spec:
spec:
runStrategy: Always
template:
metadata:
creationTimestamp: null
labels:
app: mtv-rhel8-sanity-ceph-rbd
label: test1
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: label
operator: In
values:
- test1
topologyKey: kubernetes.io/hostname

Scheduling target VMs from the user interface
You can use the Forklift user interface, which is located in the OKD web console, to tell Forklift to migrate virtual machines (VMs) to specific nodes or workloads (pods) of KubeVirt as well as to schedule when the VMs are powered on.
The Virtualization section of the OKD web console supports the following options for scheduling target VMs:
-
VM target node selector: Ensures VMs are scheduled on nodes that are an exact match for key-value pairs you create. This type of label is often used for nodes with special capabilities, such as GPU nodes or storage nodes.
-
VM target labels: Applies organizational or operational labels to migrated VMs for identification and management.
-
VM target affinity rules: Implements placement policies such as co-locating related workloads or, for disaster recovery, ensuring that specific VMs are migrated to different nodes. This type of rule uses hard (requirements) and soft (preferences) conditions combined with logical operators, such as
Exists or DoesNotExist instead of using the rigid key-value pairs used by a VM target node selector. As a result, target affinity rules are more flexible than target node selector rules.

The Forklift UI supports the following affinity rules:
-
Node affinity rules
-
Workload (pod) affinity and anti-affinity rules
-
You configure target VM scheduling options on the Plan details page of the relevant migration plan. The options apply to all VMs that are included in that migration.
Upgrading or uninstalling Forklift
You can upgrade or uninstall Forklift by using the OKD web console or the command-line interface (CLI).
Upgrading Forklift
You can upgrade the Forklift Operator by using the OKD web console to install the new version.
-
In the OKD web console, click Operators → Installed Operators → Migration Toolkit for Virtualization Operator → Subscription.
-
Change the update channel to the correct release.
See Changing update channel in the OKD documentation.
-
Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the
CatalogSource pod:-
Note the catalog source, for example,
redhat-operators. -
From the command line, retrieve the catalog source pod:
$ kubectl get pod -n openshift-marketplace | grep <catalog_source> -
Delete the pod:
$ kubectl delete pod -n openshift-marketplace <catalog_source_pod>

Upgrade status changes from Up to date to Upgrade available.
If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.
-
-
If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.
See Manually approving a pending upgrade in the OKD documentation.
-
If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK
init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware source provider. -
If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the
AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing storage profiles.
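A minimal sketch of such a storage profile edit, assuming a CDI StorageProfile that is named after the NFS storage class; the access and volume modes shown are examples, and the correct values depend on your environment:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <nfs_storage_class> # StorageProfile names match the storage class name
spec:
  claimPropertySets:
    - accessModes:
        - ReadWriteMany # access modes supported by the NFS server
      volumeMode: Filesystem # volume mode appropriate for NFS
```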
Uninstalling Forklift by using the OKD web console
You can uninstall Forklift by using the OKD web console.
-
You must be logged in as a user with
cluster-admin privileges.
-
In the OKD web console, click Operators > Installed Operators.
-
Click Forklift Operator.
The Operator Details page opens in the Details tab.
-
Click the ForkliftController tab.
-
Click Actions and select Delete ForkLiftController.
A confirmation window opens.
-
Click Delete.
The controller is removed.
-
Open the Details tab.
The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it.
-
On the upper-right side of the page, click Actions and select Uninstall Operator.
A confirmation window opens, displaying any operand instances.
-
To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.
If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.
-
Click Uninstall.
The Installed Operators page opens, and the Forklift Operator is removed from the list of installed Operators.
-
Click Home > Overview.
-
In the Status section of the page, click Dynamic Plugins.
The Dynamic Plugins pop-up opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.
-
Click forklift-console-plugin.
The ConsolePlugin details page opens in the Details tab.
-
On the upper right side of the page, click Actions and select Delete ConsolePlugin from the list.
A confirmation window opens.
-
Click Delete.
The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, refresh the Overview page.
Uninstalling Forklift from the command line
You can uninstall Forklift from the command line.
| This action does not remove resources managed by the Forklift Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove these after uninstalling the Forklift Operator, you might need to manually delete the Forklift Operator CRDs. |
-
You must be logged in as a user with
cluster-admin privileges.
-
Delete the
forklift controller by running the following command:

$ oc delete ForkliftController --all -n openshift-mtv
Delete the subscription to the Forklift Operator by running the following command:
$ oc get subscription -o name | grep 'mtv-operator' | xargs oc delete
Delete the
clusterserviceversion for the Forklift Operator by running the following command:

$ oc get clusterserviceversion -o name | grep 'mtv-operator' | xargs oc delete
Delete the plugin console CR by running the following command:
$ oc delete ConsolePlugin forklift-console-plugin -
Optional: Delete the custom resource definitions (CRDs) by running the following command:
$ kubectl get crd -o name | grep 'forklift.konveyor.io' | xargs kubectl delete
Optional: Perform cleanup by deleting the Forklift project by running the following command:
$ oc delete project openshift-mtv
Understanding MTV migration
Understand the Forklift custom resources, services, and workflows that enable virtual machine migration to KubeVirt.
Forklift custom resources and services
The Forklift Operator is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.
Forklift custom resources
-
Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers. -
NetworkMapping CR maps the networks of the source and target providers. -
StorageMapping CR maps the storage of the source and target providers. -
Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings. -
Migration CR runs a migration plan. Only one
Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.
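For example, a minimal Migration CR that runs a Plan CR might look like the following sketch. The names are placeholders, and the namespace is assumed to be openshift-mtv:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: openshift-mtv
spec:
  plan:
    name: <plan> # the Plan CR that this Migration runs
    namespace: openshift-mtv
```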
Forklift services
-
The
Inventory service performs the following actions:-
Connects to the source and target providers.
-
Maintains a local inventory for mappings and plans.
-
Stores VM configurations.
-
Runs the
Validation service if a VM configuration change is detected.
-
-
The
Validation service checks the suitability of a VM for migration by applying rules. -
The
Migration Controller service orchestrates migrations. When you create a migration plan, the
Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed. -
The
Populator Controller service orchestrates disk transfers using Volume Populators. -
The
KubeVirt Controller and Containerized Data Importer (CDI) Controller services handle most technical operations.
High-level migration workflow
The high-level workflow shows the migration process from the point of view of the user:
-
You create a source provider, a target provider, a network mapping, and a storage mapping.
-
You create a
Plan custom resource (CR) that includes the following resources:-
Source provider
-
Target provider, if Forklift is not installed on the target cluster
-
Network mapping
-
Storage mapping
-
One or more virtual machines (VMs)
-
-
You run a migration plan by creating a
Migration CR that references the Plan CR. If you cannot migrate all the VMs for any reason, you can create multiple
Migration CRs for the same Plan CR until all VMs are migrated. -
For each VM in the
Plan CR, the Migration Controller service records the VM migration progress in the Migration CR. -
Once the data transfer for each VM in the
Plan CR completes, the Migration Controller service creates a VirtualMachine CR. When all VMs have been migrated, the
Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.
Detailed migration workflows
You can use the detailed migration workflows to troubleshoot a failed migration.
Warm migration or migration to a remote OpenShift cluster:
-
When you create the
Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

For each VM disk:
-
The
Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR. -
If the
StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner. -
The
CDI Controller service creates an importer pod. -
The
importer pod streams the VM disk to the PV.

After the VM disks are transferred:
-
The
Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware. The
conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM. -
The
Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs. -
If the VM ran on the source environment, the
Migration Controller powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR. The
virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.
Cold migration from oVirt or OpenStack to the local OpenShift cluster:
-
When you create a
Migration custom resource (CR) to run a migration plan, the Migration Controller service creates, for each source VM disk, a PersistentVolumeClaim CR and either an OvirtVolumePopulator CR when the source is oVirt or an OpenstackVolumePopulator CR when the source is OpenStack.

For each VM disk:
-
The
Populator Controller service creates a temporary persistent volume claim (PVC). -
If the
StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner. -
The
Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.
-
-
The
Populator Controllerservice creates apopulatorpod. -
The
populatorpod transfers the disk data to the PV.After the VM disks are transferred:
-
The temporary PVC is deleted, and the initial PVC points to the PV with the data.
-
The
Migration Controllerservice creates aVirtualMachineCR for each source virtual machine (VM), connected to the PVCs. -
If the VM ran on the source environment, the
Migration Controllerpowers on the VM, theKubeVirt Controllerservice creates avirt-launcherpod and aVirtualMachineInstanceCR.The
virt-launcherpod runsQEMU-KVMwith the PVCs attached as VM disks.
Cold migration from VMware to the local OpenShift cluster:
-
When you create a `Migration` custom resource (CR) to run a migration plan, the `Migration Controller` service creates a `DataVolume` CR for each source VM disk.

For each VM disk:
-
The `Containerized Data Importer (CDI) Controller` service creates a blank persistent volume claim (PVC) based on the parameters specified in the `DataVolume` CR.
-
If the `StorageClass` has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the `StorageClass` provisioner.

For all VM disks:
-
The `Migration Controller` service creates a dummy pod to bind all PVCs. The name of the pod contains `pvcinit`.
-
The `Migration Controller` service creates a `conversion` pod for all PVCs.
-
The `conversion` pod runs `virt-v2v`, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

After the VM disks are transferred:
-
The `Migration Controller` service creates a `VirtualMachine` CR for each source virtual machine (VM), connected to the PVCs.
-
If the VM ran on the source environment, the `Migration Controller` powers on the VM, and the `KubeVirt Controller` service creates a `virt-launcher` pod and a `VirtualMachineInstance` CR. The `virt-launcher` pod runs `QEMU-KVM` with the PVCs attached as VM disks.
How MTV uses the virt-v2v tool
Forklift uses the virt-v2v tool to convert the disk image of a virtual machine (VM) into a format compatible with KubeVirt. The tool makes migrations easier because it automatically performs the tasks needed to make your VMs work with KubeVirt. For example, it enables paravirtualized VirtIO drivers in the converted VM, if possible, and installs the QEMU guest agent.
virt-v2v is included in Red Hat Enterprise Linux (RHEL) versions 7 and later.
Main functions of virt-v2v in MTV migrations
During migration, Forklift uses virt-v2v to collect metadata about VMs, make necessary changes to VM disks, and copy the disks containing the VMs to KubeVirt.
virt-v2v makes the following changes to VM disks to prepare them for migration:
-
Additions:
-
Injection of VirtIO drivers, for example, network or disk drivers.
-
Preparation of hypervisor-specific tools or agents, for example, a QEMU guest agent installation.
-
Modification of boot configuration, for example, updated boot loader or boot entries.
-
-
Removals:
-
Unnecessary or former hypervisor-specific files, for example, VMware tools or VirtualBox additions.
-
Old network driver configurations, for example, removing VMware-specific NIC drivers.
-
Configuration settings that are incompatible with the target system, for example, old boot settings.
-
If you are migrating from VMware or from Open Virtual Appliances (OVA) files, virt-v2v also sets their IP addresses either during the migration or during the first reboot of the VMs after migration.
| You can also run predefined Ansible hooks before or after a migration using Forklift. For more information, see About hooks for MTV migration plans. These hooks do not necessarily use virt-v2v. |
Customizing, removing, and installing files
Forklift uses virt-v2v to perform additional guest customizations during the conversion, such as the following actions:
-
Customization to preserve IP addresses
-
Customization to preserve drive letters
| For Red Hat Enterprise Linux (RHEL)-based guests, |
For more information, see the `virt-v2v` man reference pages.
Raw copy mode
In regular cold and warm migrations, Forklift uses a program called virt-v2v to prepare virtual machines (VMs) for migration to KubeVirt after the VMs have been copied from their source provider.
The main function of virt-v2v is to convert the disk image of a VM into a format compatible with KubeVirt.
This program is described in detail in How Forklift uses the virt-v2v tool.
What is important to note here is that although virt-v2v is compatible with major operating systems such as recent versions of Red Hat Enterprise Linux, Windows, and Windows Server, it is not compatible with macOS and some other operating systems.
| For a list of the operating systems that |
As a workaround for migrating VMs that use an operating system that virt-v2v does not support, Forklift includes a feature called raw copy mode. Raw copy mode copies VMs without applying any tool to convert them for use with KubeVirt. The migrated VMs use emulated devices.
Raw copy mode makes migrations more robust, supporting a wider range of operating systems and configurations, for example, VMs with uncommon file systems, or VMs with uncommon encryption technologies or without access to encryption keys.
However, VMs migrated using raw copy mode might not boot on KubeVirt or perform as well as VMs migrated in the regular way.
Therefore, using raw copy mode is a tradeoff: it is a more versatile migration option, but it increases the risk of problems following migration.
Because of this risk, users are asked to request that Red Hat support perform raw copy mode migrations.
-
For information about configuring raw copy mode by using the Forklift user interface, see Configuring VMware migration plan settings in Planning your migration to Red Hat KubeVirt.
-
For information about configuring raw copy mode by using the command-line interface, see the
skipGuestConversionanduseCompatibilityModeparameters in Running a VMware vSphere migration from the command-line. -
For information about device compatibility mode for raw copy migrations, see Device compatibility mode for raw copy migrations.
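Assuming the `skipGuestConversion` and `useCompatibilityMode` parameters referenced above, a raw copy configuration fragment might look like the following sketch. The placement of these fields within the `Plan` CR spec is an assumption here, so verify it against the command-line reference before using it:

```shell
# Hypothetical Plan CR fragment: raw copy mode with VirtIO devices.
# Field placement within the Plan spec is an assumption.
cat <<'EOF' > raw-copy-settings.yaml
spec:
  skipGuestConversion: true    # raw copy mode: skip virt-v2v conversion
  useCompatibilityMode: false  # VirtIO devices instead of SATA/E1000E
EOF
echo "wrote raw-copy-settings.yaml"
```

Setting `useCompatibilityMode: false` is only safe when VirtIO drivers are already installed in the source VM, as described in the next section.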
Device compatibility mode for raw copy migrations
When you use raw copy mode, you can configure which device types Forklift uses for migrated VMs through the Use compatibility mode setting.
- Compatibility devices (default)
-
By default, when you enable raw copy mode, Forklift uses compatibility devices to ensure VMs can boot on KubeVirt:
-
SATA bus for disk controllers
-
E1000E NICs for network interfaces
-
USB controllers
These emulated devices work without requiring additional drivers in the guest operating system, ensuring maximum bootability for migrated VMs.
-
- VirtIO devices
-
For source VMs that already have VirtIO drivers installed, you can disable compatibility mode to use VirtIO devices instead. VirtIO devices provide better performance:
-
VirtIO disk bus provides higher I/O throughput than SATA
-
VirtIO network interface provides better network performance than E1000E
The Use compatibility mode setting only applies when you enable raw copy mode. When you disable raw copy mode (standard V2V conversion), Forklift always uses VirtIO devices regardless of this setting.
Before you disable compatibility mode, verify that VirtIO drivers are installed in the source VM. VMs without VirtIO drivers do not boot on KubeVirt if you disable compatibility mode.
-
- When to disable compatibility mode
-
Consider disabling compatibility mode when:
-
Your source VMs are running modern operating systems with VirtIO drivers pre-installed, for example, recent versions of Red Hat Enterprise Linux or Windows with VirtIO drivers
-
Maximum performance is required for your workload
-
You have verified VirtIO driver presence in the source VM before migration
-
- Configuring compatibility mode
-
You can configure the compatibility mode setting when you create a migration plan. For more information, see Configuring VMware migration plan settings.
Troubleshooting migration
Troubleshoot migration issues by following diagnostic workflows, resolving common errors, and collecting logs for analysis.
Troubleshooting workflow
When troubleshooting migration issues, follow this recommended sequence to identify and resolve problems efficiently.
-
Check the migration progress for a high-level overview of your virtual machine (VM) migration status.
-
Navigate to the Virtual machines tab on your migration plan’s details page.
-
Review the status of each VM to identify where errors are occurring.
You can typically find where the error is occurring at this stage. VMs can be migrated in two different ways:
-
Warm migration: VMs are migrated with minimal downtime while remaining powered on during the precopy stage.
-
Cold migration: VMs are shut down during the entire migration process.
-
-
-
View pod logs for specific information about pod status within Kubernetes.
Pod logs are only available after the image conversion stage has started.
-
On the migration plan’s details page, expand the Migration Resources section.
-
Under the Pod subheading, click View logs.
-
Review the logs for error messages or warnings that indicate the cause of the failure.
-
-
Review forklift controller logs if the migration progress and pod logs are not helpful.
Forklift controller logs capture Forklift-related events and provide detailed information about the migration process.
-
Access the forklift controller logs through the OKD web console or CLI.
-
Search for error messages or warnings related to your migration plan or specific VMs.
-
-
Collect must-gather logs if previous troubleshooting steps do not resolve the issue.
The `must-gather` tool collects comprehensive diagnostic information about your cluster.
-
Navigate to the directory where you want to store the
`must-gather` data:

$ cd /path/to/must-gather-directory
-
Run the
`must-gather` command:

$ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:latest
-
Review the collected logs in the newly created directory.
-
-
Optional: Open a support case with Red Hat if you need assistance from Red Hat Support.
-
Create a compressed file from the must-gather directory:
$ tar -czf must-gather-$(date +%Y%m%d-%H%M%S).tar.gz must-gather.local.* -
Open a support case on the Red Hat Customer Portal.
-
Attach the compressed must-gather file to your support case.
-
Provide a detailed description of the issue, including:
-
Migration plan details
-
Source and target provider information
-
Error messages from the UI or logs
-
Steps to reproduce the issue
-
-
Common migration issues
Review common issues that you might encounter when planning and executing virtual machine migrations, and how to avoid or resolve them.
My virtual machine fails to migrate or behaves unexpectedly after migration. Is the operating system supported?
Check the official list of guest operating systems supported by KubeVirt for your version of OKD. If the operating system is not on the list, it might cause migration failures or unexpected behavior after the migration is complete. The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.
My migration plan shows a Destination network not found error. What should I do?
Verify that you have created a network mapping that correctly links the source network from your source environment to the destination network attachment definition in KubeVirt. If the network map displays a Destination network not found error, you must create a network attachment definition for the destination network before the migration can proceed.
To resolve this issue:
-
Create the required network attachment definition in KubeVirt.
-
Update or recreate your network mapping to reference the correct destination network.
-
Validate that the network mapping shows a
`Ready` status before starting the migration.
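As an illustration of the first resolution step, a minimal `NetworkAttachmentDefinition` for the destination network might look like the following sketch; the name, namespace, and bridge device are hypothetical:

```shell
# Sketch of a NetworkAttachmentDefinition for the missing destination
# network. The name, namespace, and bridge device are hypothetical;
# adapt them to your environment before applying with `oc apply -f`.
cat <<'EOF' > destination-nad.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100
  namespace: target-vm-ns
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "vlan100",
    "type": "bridge",
    "bridge": "br-vlan100"
  }'
EOF
echo "wrote destination-nad.yaml"
```

After the definition exists in the target namespace, the network mapping can reference it as a destination network.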
Why does warm migration fail with a snapshot error?
This often happens when changed block tracking (CBT) is not enabled on the source VM. Warm migration relies on CBT to efficiently track and transfer changes while the VM is running. You must enable CBT on the source VM and on each VM disk in your source environment before starting a warm migration.
For VMware environments:
-
Enable CBT on each source VM that you plan to migrate using warm migration.
-
Enable CBT on each disk attached to the VM.
-
Verify that the VM does not exceed the maximum of 28 CBT snapshots.
| A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. |
My virtual machine migrated successfully, but it does not function properly. What might be wrong?
A common reason VMs do not function properly, even after a successful migration, is that the name of the VM does not meet Kubernetes DNS naming requirements. VM names in KubeVirt must be DNS-compliant and unique.
Invalid VM names include those that:
-
Use periods (`.`) anywhere in the name
-
Use hyphens (`-`) at the start or end of the name
-
Exceed 63 characters in length
-
Use uppercase letters
-
Use a name that differs from the VM’s files or folder name on the datastore
Forklift automatically adjusts non-compliant VM names in the target cluster by replacing invalid characters. Alternatively, you can rename target VMs in the Forklift UI during migration plan creation.
For VMware environments, you can use Storage vMotion to rename the VM before migration. This migration process automatically renames the VM’s files and folder on the datastore to match the new name you have given it in the vSphere Client. Alternatively, you can manually remove the VM from inventory, rename the files and folders, edit the .vmx file to update the references, and then re-add the VM to the inventory.
For more information about renaming VMs in the Forklift UI, see Renaming virtual machines.
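The DNS naming rules above can be checked with a short shell function. This is a sketch: `is_dns_compliant` is a hypothetical helper, and it does not reproduce the exact character substitutions that Forklift applies:

```shell
# Check whether a VM name is a valid DNS label per the rules above:
# at most 63 characters, lowercase alphanumerics and hyphens only,
# no leading or trailing hyphen.
is_dns_compliant() {
  name=$1
  [ "${#name}" -le 63 ] || return 1
  printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

is_dns_compliant "rhel8-web-01" && echo "rhel8-web-01: valid"
is_dns_compliant "My.VM" || echo "My.VM: invalid"
```

Running such a check before creating a migration plan avoids relying on automatic renaming.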
Error messages
Review common error messages encountered during migration. For detailed resolution steps, see the linked troubleshooting procedures.
| Error message | Description | Resolution |
|---|---|---|
| `warm import retry limit reached` | Displayed during a warm migration when a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage. | See Resolving warm import retry limit errors. |
| `Unable to resize disk image to required size` | Displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The default file system overhead assumed by CDI does not include the reserved space for the root partition. | See Resolving disk resize errors. |
| VSS error | Displayed during warm migration of Microsoft Windows VMs when the Volume Shadow Copy Service (VSS) is not running on the guest operating system. | Start the VSS service on the guest operating system before retrying the warm migration. |
| `ConnectionTestFailed` | Displayed when creating an Open Virtual Appliance (OVA) provider. Error messages might appear before the provider status changes to `Ready`. | See Resolving OVA connection test errors. |
| Image pull error | Displayed when a provider is created in a namespace different from `openshift-mtv`, causing migrations to fail with an image pull error. | See Resolving VDDK image pull errors. |
| `Failed to connect via SSH` | Displayed when Forklift cannot establish an SSH connection to the ESXi host during storage copy offload operations. | See Troubleshooting storage copy offload. |
| VM name error | Displayed when a source VM name does not comply with Kubernetes DNS naming requirements. | Rename the VM. For more information, see Renaming virtual machines. |
| `Cannot derive SVM to use; please specify SVM in config file` | Displayed during storage copy offload operations with NetApp storage when ONTAP is not configured correctly. | See Troubleshooting storage copy offload. |
| VDDK image not configured | Displayed when a VM is backed by VMware vSAN storage and no VDDK image is configured. VDDK is mandatory for vSAN migrations. | See Resolving VDDK vSAN errors. |
Resolving warm import retry limit errors
The warm import retry limit reached error occurs during a warm migration when a VMware virtual machine (VM) has reached the maximum number of changed block tracking (CBT) snapshots during the precopy stage.
A VM can support up to 28 CBT snapshots. During warm migration, Forklift creates snapshots at regular intervals (one-hour intervals by default) to track changes incrementally. If the source VM has too many CBT snapshots and the Migration Controller service cannot create a new snapshot, warm migration fails with this error.
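As a sketch, a pre-migration check against the 28-snapshot limit might look like the following; the snapshot list file and its contents are hypothetical stand-ins for output from your own vSphere tooling:

```shell
# Pre-check sketch: warn before warm migration if a VM is already near
# the 28-snapshot CBT limit described above. snapshot-list.txt is a
# hypothetical file, one snapshot name per line, produced by your own
# vSphere tooling.
max_snapshots=28
printf 'snap-1\nsnap-2\nsnap-3\n' > snapshot-list.txt   # stand-in data
count=$(wc -l < snapshot-list.txt)
if [ "$count" -ge "$max_snapshots" ]; then
  echo "VM has $count snapshots; delete some before warm migration"
else
  echo "snapshot count $count is below the limit"
fi
```

A check like this surfaces the problem before the Migration Controller fails to create a new snapshot mid-migration.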
-
You have access to the VMware vSphere environment with appropriate permissions.
-
In the VMware vSphere Client, navigate to the source VM that failed migration.
-
Review the existing snapshots on the VM:
-
Right-click the VM and select Snapshots → Manage Snapshots.
-
Identify CBT snapshots created by Forklift.
-
-
Delete some of the CBT snapshots to bring the total number below 28:
-
Select the snapshots you want to delete.
-
Click Delete to remove the selected snapshots.
The Migration Controller service automatically deletes each snapshot when it is no longer required. You only need to manually delete snapshots if the limit is reached before the controller can clean them up.
-
-
In the Forklift UI, restart the migration plan.
-
Navigate to the migration plan’s details page.
-
Verify that the VM migration progresses through the precopy stage without the retry limit error.
Resolving disk resize errors
The Unable to resize disk image to required size error occurs when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage.
The problem occurs because the default file system overhead that is assumed by the Containerized Data Importer (CDI) does not completely include the reserved space for the root partition on EXT4 file systems.
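The overhead arithmetic can be sketched as follows: with a filesystem overhead fraction o, only (1 - o) of a PVC is usable for the disk image, so the PVC must be at least disk size / (1 - o). A minimal shell illustration using integer percentages (the helper name is hypothetical):

```shell
# Sketch: how a filesystem overhead fraction inflates the PVC size a
# disk image needs. required_pvc_bytes is a hypothetical helper using
# integer percentages to stay within shell arithmetic.
required_pvc_bytes() {
  disk_bytes=$1
  overhead_percent=$2   # 15 corresponds to filesystemOverhead 0.15
  echo $(( disk_bytes * 100 / (100 - overhead_percent) ))
}

required_pvc_bytes 1000000000 15
# prints 1176470588
```

This is why raising the overhead from the default to 0.15, as in the procedure below, results in larger provisioned PVCs.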
-
You have access to the OKD cluster as a user with the
`cluster-admin` role.
-
Increase the file system overhead in CDI to more than 10%:
-
Edit the CDI configuration:
$ oc edit cdi -n openshift-cnv -
Locate the
`filesystemOverhead` configuration section.
-
Set the global file system overhead percentage to a value greater than
`0.10` (10%):

spec:
  config:
    filesystemOverhead:
      global: "0.15"
-
Save and close the editor.
-
-
Retry the migration plan.
-
Navigate to the migration plan’s details page.
-
Verify that the VM migration completes without disk resize errors.
-
After migration, verify that the VM can boot and access all disk volumes.
Resolving OVA connection test errors
When you create an Open Virtual Appliance (OVA) provider in the Forklift UI, ConnectionTestFailed error messages might be displayed before the provider status changes to Ready.
The error messages are misleading and do not accurately reflect the in-progress status of the connection test.
-
Wait for the provider status to update. If the provider configuration is correct, the status will change to
`Ready` despite the temporary error messages.
-
If the status does not change to
`Ready` after several minutes, verify your OVA provider configuration:
-
Check that the OVA URL or path is correct and accessible.
-
Verify network connectivity to the OVA storage location.
-
Ensure that any required authentication credentials are correct.
-
-
If the issue persists, check the forklift controller logs for more detailed error information:
$ oc logs -n openshift-mtv deployment/forklift-controller -
Correct any configuration issues identified and update the provider.
-
Navigate to Providers in the Forklift UI.
-
Verify that the OVA provider shows a status of
Ready. -
Attempt to create a migration plan using the OVA provider to confirm it is accessible.
Resolving VDDK image pull errors
When you create a provider in a namespace different from openshift-mtv, migrations fail with an image pull error.
The VDDK init image URL is located in the openshift-mtv namespace. If you create a provider in a different namespace, there is an error when pulling the image.
-
You have access to the OKD cluster as a user with appropriate permissions.
-
The VDDK image has been created and uploaded.
-
Verify that the VDDK image is accessible from the target namespace:
$ oc get imagestream -n openshift-mtv -
Choose one of the following options:
Option 1: Upload the VDDK image to the provider’s namespace
-
Upload the VDDK image to the same namespace as your provider.
-
Update the provider configuration to reference the VDDK image in the local namespace.
Option 2: Configure image pull secrets for cross-namespace access
-
Create or verify image pull secrets in the provider’s namespace:
$ oc get secrets -n <provider_namespace> -
If necessary, create a service account with access to pull images from the
`openshift-mtv` namespace.
-
Update the provider configuration to use the cross-namespace image reference.
-
-
Update the provider to reference the correct VDDK image location:
-
In the Forklift UI, navigate to Providers.
-
Edit the VMware provider.
-
In the VDDK init image field, enter the correct image URL.
-
Save the provider configuration.
-
-
Retry the migration plan.
-
Navigate to the migration plan’s details page.
-
Verify that the VM migration starts without image pull errors.
-
Check the pod logs to confirm that the VDDK image was pulled successfully.
Resolving VDDK vSAN errors
Virtual machine migrations fail with an error when a VM is backed by VMware vSAN storage and no VDDK image is configured.
VDDK (VMware Virtual Disk Development Kit) is mandatory for migrations from VMware vSAN storage. Migrations do not work without VDDK when a VM is backed by vSAN.
| Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN. |
-
You have access to the VMware VDDK package from VMware.
-
You have access to the OKD cluster as a user with appropriate permissions.
-
Create a VDDK image. For more information, see Creating a VDDK image.
-
Upload the VDDK image to the cluster:
-
Verify that the image is uploaded to the
`openshift-mtv` namespace or the provider’s namespace.
-
Note the image URL for the next step.
-
-
Update the provider configuration to reference the VDDK image:
-
In the Forklift UI, navigate to Providers.
-
Edit the VMware provider.
-
In the VDDK init image field, enter the image URL.
-
Save the provider configuration.
-
-
Verify that the provider status is
`Ready`.
-
Restart the migration plan.
-
Navigate to the migration plan’s details page.
-
Verify that the VM migration starts and progresses without VDDK-related errors.
-
Monitor the migration progress to ensure successful completion.
Troubleshooting storage copy offload
This section describes problems that are unique to storage copy offload and how you can resolve them.
vSphere-ESXi connectivity issues
- Remote ESXi connection fails with a SOAP error
-
Description: Sometimes a remote ESXi execution fails, returning a SOAP error with no apparent root cause message.
Explanation: Because vSphere invokes some SOAP/REST endpoints on the ESXi host, a connection can fail for transient reasons that disappear on the next attempt.

Solution: If the populator fails, retry the populator or restart the migration.
- VIB issues returned with a CLI error
-
Description: Forklift returns the following error:
CLI Fault: The object or item referred to could not be found. <obj xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:vim25" versionId="5.0" xsi:type="LocalizedMethodFault"><fault xsi:type="NotFound"></fault><localizedMessage>The object or item referred to could not be found.</localizedMessage></obj>Explanation: If the VIB is installed, but
`/etc/init.d/hostd` did not restart, then the `vmkfstools` namespace in `esxcli` is either not updated or does not exist. If that namespace does not exist, this is probably the first use of the VIB, right after installation.

Solution: Use SSH to log in to the ESXi host and run `/etc/init.d/hostd restart`. Wait a few seconds until the ESXi host renews its connection with vSphere.
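The retry advice for transient failures above can be sketched as a generic helper; `retry` is a hypothetical function, and the `true` placeholder stands in for whatever populator restart or migration command you need to repeat:

```shell
# Generic retry sketch for transient failures such as the SOAP errors
# described above. The command and attempt count are placeholders.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

retry 3 true && echo "operation succeeded"
```

The helper returns the success of the first attempt that works, or failure after the attempt budget is exhausted.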
SSH error messages
- Manual SSH key connection errors
-
Description: Forklift returns one of the following errors:
`Manual SSH key configuration required`, `Failed to connect via SSH`, or `SSH connection timeout`.

Explanation: SSH is not available on the ESXi host for one of the following reasons:
-
SSH is disabled.
Solution: Manually enable SSH on the ESXi host by using the commands in Setting up storage copy offload using manually generated SSH keys.
-
There is a problem with the network connectivity.
Solution: Verify that the ESXi management network is accessible from the migration pods.
-
Timeout issue (least likely issue)
Solution: Increase the value of
`SSH_TIMEOUT_SECONDS` in the provider `Secret`:
-
Edit the provider Secret:
$ oc edit secret <provider_secret_name> -n openshift-mtv -
Add or update the
`SSH_TIMEOUT_SECONDS` field with a higher value (for example, `300` for 5 minutes).
Save and close the editor.
Verification steps for the preceding solutions:
-
-
To verify that the SSH service is running on an ESXi host, run the following command:
$ vim-cmd hostsvc/get_ssh_status -
To manually test SSH connectivity from a migration pod, run the following command:
$ ssh -i /path/to/<private_key> root@<ESXI_host_IP>
-
NetApp Error
Description: Forklift returns the following error:
Cannot derive SVM to use; please specify SVM in config file
Explanation: ONTAP is not configured correctly.
Solution: Configure your default ONTAP Storage Virtual Machine (SVM) by running the following commands:
-
Show the current configuration for the SVM by running the following command on the ONTAP server:
$ vserver show -vserver ${NAME_OF_SVM} -
Set a management interface for the SVM and enter its
`hostname` in the `STORAGE_HOSTNAME` field by following the instructions in the NetApp Knowledge Base article Trident fails to access ONTAP on SVM level and on Cluster level. The link requires you to log in.
Using the must-gather tool
You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.
You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.
| If you specify a non-existent resource in the filtered |
-
You must be logged in to the KubeVirt cluster as a user with the
`cluster-admin` role.
-
You must have the OKD CLI (oc) installed.
-
Navigate to the directory where you want to store the
`must-gather` data.
-
Run the
`oc adm must-gather` command:

$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest

The data is saved as `/must-gather/must-gather.tar.gz`. You can upload this file to a support case on the Red Hat Customer Portal.
-
Optional: Run the
`oc adm must-gather` command with the following options to gather filtered data:
-
Namespace:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \
  -- NS=<namespace> /usr/bin/targeted
-
Migration plan:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \
  -- PLAN=<migration_plan> /usr/bin/targeted
-
Virtual machine:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \
  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted

where:
- <vm_id>
-
Specifies the VM ID as it appears in the
`Plan` CR.
-
Collected logs and custom resource information
You can download logs and custom resource (CR) YAML files for the following targets by using the OKD web console or the command-line interface (CLI):
-
Migration plan: Web console or CLI.
-
Virtual machine: Web console or CLI.
-
Namespace: CLI only.
The must-gather tool collects the following logs and CR files in an archive file:
-
CRs:
-
`DataVolume` CR: Represents a disk mounted on a migrated VM.
-
`VirtualMachine` CR: Represents a migrated VM.
-
`Plan` CR: Defines the VMs and the storage and network mapping.
-
`Job` CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.
-
-
Logs:
-
`importer` pod: Disk-to-data-volume conversion log. The `importer` pod naming convention is `importer-<migration_plan>-<vm_id><5_char_id>`, for example, `importer-mig-plan-ed90dfc6-9a17-4a8btnfh`, where `ed90dfc6-9a17-4a8` is a truncated oVirt VM ID and `btnfh` is the generated 5-character ID.
-
`conversion` pod: VM conversion log. The `conversion` pod runs `virt-v2v`, which installs and configures device drivers on the PVCs of the VM. The `conversion` pod naming convention is `<migration_plan>-<vm_id><5_char_id>`.
-
`virt-launcher` pod: VM launcher log. When a migrated VM is powered on, the `virt-launcher` pod runs `QEMU-KVM` with the PVCs attached as VM disks.
-
`forklift-controller` pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the `must-gather` command.
-
`forklift-must-gather-api` pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the `must-gather` command.
-
`hook-job` pod: The log is filtered for hook jobs. The `hook-job` naming convention is `<migration_plan>-<vm_id><5_char_id>`, for example, `plan2j-vm-3696-posthook-4mx85` or `plan2j-vm-3696-prehook-mwqnl`.

Empty or excluded log files are not included in the `must-gather` archive file.
-
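The `importer` pod naming convention above can be sketched as a small helper; `importer_pod_name` is a hypothetical function, shown with the document's own example values:

```shell
# Build an importer pod name following the convention
# importer-<migration_plan>-<vm_id><5_char_id> described above.
importer_pod_name() {
  printf 'importer-%s-%s%s\n' "$1" "$2" "$3"
}

importer_pod_name mig-plan ed90dfc6-9a17-4a8 btnfh
# prints importer-mig-plan-ed90dfc6-9a17-4a8btnfh
```

Knowing the convention makes it easier to locate the right log directory inside the must-gather archive shown below.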
The following schematic drawing shows the must-gather archive structure for an example VMware migration plan:
must-gather
└── namespaces
├── target-vm-ns
│ ├── crs
│ │ ├── datavolume
│ │ │ ├── mig-plan-vm-7595-tkhdz.yaml
│ │ │ ├── mig-plan-vm-7595-5qvqp.yaml
│ │ │ └── mig-plan-vm-8325-xccfw.yaml
│ │ └── virtualmachine
│ │ ├── test-test-rhel8-2disks2nics.yaml
│ │ └── test-x2019.yaml
│ └── logs
│ ├── importer-mig-plan-vm-7595-tkhdz
│ │ └── current.log
│ ├── importer-mig-plan-vm-7595-5qvqp
│ │ └── current.log
│ ├── importer-mig-plan-vm-8325-xccfw
│ │ └── current.log
│ ├── mig-plan-vm-7595-4glzd
│ │ └── current.log
│ └── mig-plan-vm-8325-4zw49
│ └── current.log
└── openshift-mtv
├── crs
│ └── plan
│ └── mig-plan-cold.yaml
└── logs
├── forklift-controller-67656d574-w74md
│ └── current.log
└── forklift-must-gather-api-89fc7f4b6-hlwb6
    └── current.log

Downloading logs and custom resource information from the web console
You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) from the OKD web console.
-
In the OKD web console, click Migration for Virtualization > Migration plans.
-
Click Get logs beside a migration plan name.
-
In the Get logs window, click Get logs.
The logs are collected. A `Log collection complete` message is displayed.
-
Click Download logs to download the archive file.
-
To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.
Accessing logs and custom resource information from the command line
You can access logs and information about custom resources (CRs) from the command line by using the must-gather tool. You must attach a must-gather data file to all customer cases.
You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.
| If you specify a non-existent resource in the filtered |
-
You must be logged in to the KubeVirt cluster as a user with the
`cluster-admin` role.
-
You must have the OKD CLI (oc) installed.
-
Navigate to the directory where you want to store the
`must-gather` data.
-
Run the
`oc adm must-gather` command:

$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest

The data is saved as `/must-gather/must-gather.tar.gz`. You can upload this file to a support case on the Red Hat Customer Portal.
-
Optional: Run the
`oc adm must-gather` command with the following options to gather filtered data:
-
Namespace:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \
  -- NS=<namespace> /usr/bin/targeted
-
Migration plan:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \
  -- PLAN=<migration_plan> /usr/bin/targeted
-
Virtual machine:
$ oc adm must-gather --image=quay.io/kubev2v/forklift-must-gather:latest \
  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted

where:
<vm_name>-
Specifies the VM name, not the VM ID, as it appears in the
`Plan` CR.
-
MTV performance recommendations
Review recommendations for network and storage performance, cold and warm migrations, and multiple migrations or single migrations.
The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Forklift, based on findings observed through testing.
The data provided here was collected from testing in Red Hat labs and is provided for reference only.
Overall, these numbers should be considered to show the best-case scenarios.
The observed performance of migration can differ from these results and depends on several factors.
Ensure fast storage and network speeds
Ensure fast storage and network speeds, both for VMware and OKD (OCP) environments.
-
To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast; ensure a 10 GbE network connection and avoid network bottlenecks.
-
Extend the VMware network to the OCP Workers Interface network environment.
-
Ensure that the VMware network offers high throughput (10 Gigabit Ethernet) so that reception rates align with the read rate of the ESXi datastore.
-
Be aware that the migration process consumes significant bandwidth on the migration network. If other services share that network, the migration might impact those services, and those services might reduce migration rates.
-
For example, 200 to 325 MiB/s was the average network transfer rate from the vmnic of each ESXi host transferring data to the OCP interface.
-
Ensure fast datastore read speeds for efficient and performant migrations
Datastore read rates impact total transfer times, so it is essential to ensure fast reads from the ESXi datastore to the ESXi host.
Example in numbers: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.
Endpoint types
Forklift allows for the following vSphere provider options:
-
ESXi endpoint (inventory and disk transfers from ESXi).
-
vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter).
-
vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).
When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.
| As of vSphere 7.0, ESXi hosts can label which network to use for Network Block Device (NBD) transport. This is accomplished by tagging the desired virtual network interface controller (NIC) with the appropriate vSphereBackupNFC tag. For more details, see: (Forklift-1230) |
You can use the following ESXi command, which designates interface vmk2 for NBD backup:
$ esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2

ESXi performance
ESXi performance can be measured for a single ESXi host or for multiple ESXi hosts.
Where possible, ensure that hosts used to perform migrations are set with BIOS profiles related to maximum performance. For hosts whose Host Power Management is controlled within vSphere, check that High Performance is set.
Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations had an increase of 15 MiB/s in the average datastore read rate.
Single ESXi host performance
Migrations were tested by using the same ESXi host.
In each iteration, the total number of VMs was increased to show the impact of concurrent migration on duration.
The results show that migration time scales linearly with the total number of VMs (50 GiB disk, 70% utilization).
The optimal number of VMs per ESXi is 10.
| Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration |
|---|---|---|---|---|---|
| Cold migration, 10 VMs, Single ESXi, Private Network (non-management network) | 2.6 | 7.0.3 | 100 | cold | 0:21:39 |
| Cold migration, 20 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 0:41:16 |
| Cold migration, 30 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:00:59 |
| Cold migration, 40 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:23:02 |
| Cold migration, 50 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:46:24 |
| Cold migration, 80 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 2:42:49 |
| Cold migration, 100 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 3:25:15 |
Multiple ESXi hosts and a single data store
In each iteration, the number of ESXi hosts was increased, to show that increasing the number of ESXi hosts improves the migration time (50 GiB disk, Utilization 70%).
| Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration |
|---|---|---|---|---|---|
| Cold migration, 100 VMs, Single ESXi, Private Network (non-management network) | 2.6 | 7.0.3 | 100 | cold | 3:25:15 |
| Cold migration, 100 VMs, 4 ESXs (25 VMs per ESX), Private Network | 2.6 | 7.0.3 | 100 | cold | 1:22:27 |
| Cold migration, 100 VMs, 5 ESXs (20 VMs per ESX), Private Network, 1 Data Store | 2.6 | 7.0.3 | 100 | cold | 1:04:57 |
Performance using different migration networks
In each test, the migration network was changed, by using the provider, to find the fastest network for migration.
The results indicate no degradation when using management networks compared to non-management networks, provided all interfaces and network speeds are the same.
| Test Case Description | MTV | VDDK | max_vm_inflight | Migration Type | Total Duration |
|---|---|---|---|---|---|
| Cold migration, 10 VMs, Single ESXi, MGMT Network | 2.6 | 7.0.3 | 100 | cold | 0:21:30 |
| Cold migration, 10 VMs, Single ESXi, Private Network (non-management network) | 2.6 | 7.0.3 | 20 | cold | 0:21:20 |
| Cold migration, 10 VMs, Single ESXi, Default Network | 2.6.2 | 7.0.3 | 20 | cold | 0:21:30 |
Avoid additional network load on VMware networks
You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.
When you add a virtualization provider, Forklift enables you to select a specific network that is accessible on the ESXi hosts for migrating virtual machines to OKD. Selecting this migration network from the ESXi host in the Forklift UI ensures that the transfer uses the selected network as an ESXi endpoint.
Ensure that the selected network has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated.
In environments with fast networks, such as 10GbE networks, migration network impacts can be expected to match the rate of ESXi datastore reads.
Control maximum concurrent disk migrations per ESXi host
Set the MAX_VM_INFLIGHT MTV variable to control the maximum number of concurrent VM transfers allowed for the ESXi host.
Forklift allows for concurrency to be controlled by using this variable; by default, it is set to 20.
When setting MAX_VM_INFLIGHT, consider the maximum number of concurrent VM transfers required per ESXi host, as well as the type of migration that is run concurrently.
Warm migrations use snapshots to compare and migrate only the differences between previous snapshots of the disk. The migration of the differences between snapshots happens over specific intervals before a final cut-over of the running VM to OKD occurs.
In Forklift, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers are allowed per ESXi host.
Example:
MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider mean each host can transfer 20 VMs.
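The arithmetic above can be sketched in plain shell (an illustration only; the variable itself is configured on MTV, not computed by it):

```shell
# Cluster-wide concurrency ceiling: MAX_VM_INFLIGHT applies per ESXi host.
total_inflight() {  # args: max_vm_inflight esxi_host_count
  echo $(( $1 * $2 ))
}
total_inflight 20 2   # 20 VMs per host x 2 hosts = 40 concurrent transfers
```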
Migrations are completed faster when migrating multiple VMs concurrently
When multiple VMs from a specific ESXi host are to be migrated by using Forklift, starting concurrent migrations for multiple VMs leads to faster migration times.
Testing demonstrated that migrating 10 VMs (each containing 35 GiB of data, with a total size of 50 GiB) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another.
It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but doing so does not show a significant improvement.
Examples:
-
1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s
-
10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s
-
20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s
| These examples show that migrating 10 virtual machines simultaneously is three times faster than migrating the same virtual machines sequentially. The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously. |
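These rates can be sanity-checked from the durations: migration rate ≈ (VM count × utilized data) / duration. A quick integer-arithmetic check in shell:

```shell
# Approximate migration rate in MiB/s from VM count, utilized GiB per VM,
# and total duration in minutes (integer arithmetic, so rates round down).
rate_mib_s() {  # args: vm_count gib_per_vm minutes
  echo $(( $1 * $2 * 1024 / ($3 * 60) ))
}
rate_mib_s 10 35 22   # ~271, close to the reported 272 MiB/s
rate_mib_s 20 35 42   # ~284 MiB/s
```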
Migrations complete faster using multiple hosts
Using multiple hosts with registered VMs equally distributed among the ESXi hosts used for migrations leads to faster migration times.
Testing showed that when transferring more than 10 single disk VMs, each containing 35 GiB of data out of a 50 GiB total, using additional hosts can reduce migration time.
Examples:
-
80 single disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.
-
80 single disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.
| These examples show that migrating 80 VMs concurrently from 8 ESXi hosts, 10 from each host, is four times faster than migrating the same VMs from a single ESXi host. Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance; however, it was not tested and is therefore not recommended. |
Multiple migration plans compared to a single large migration plan
The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203).
When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start. By breaking up one migration plan into several migration plans, it is possible to start them at the same time.
Comparing migrations of:
-
500 VMs using 8 ESXi hosts in 1 plan,
max_vm_inflight=100, took 5 hours and 10 minutes. -
800 VMs using 8 ESXi hosts with 8 plans,
max_vm_inflight=100, took 57 minutes.
Testing showed that by breaking one single large plan into multiple moderately sized plans, for example, by using 100 VMs per plan, the total migration time can be reduced.
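The benefit of splitting shows up clearly in per-VM terms. A rough comparison of the two runs above (note that the runs migrated different VM counts, so this is only indicative):

```shell
# Seconds of wall-clock time per VM for each run (integer arithmetic).
per_vm_seconds() {  # args: vm_count total_minutes
  echo $(( $2 * 60 / $1 ))
}
per_vm_seconds 500 310   # single plan of 500 VMs: ~37 s per VM
per_vm_seconds 800 57    # eight plans, 800 VMs:   ~4 s per VM
```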
Maximum values tested for cold migrations
The following maximum values were tested for cold migrations:
-
Maximum number of ESXi hosts tested: 8
-
Maximum number of VMs in a single migration plan: 500
-
Maximum number of VMs migrated in a single test: 5000
-
Maximum number of migration plans performed concurrently: 40
-
Maximum single disk size migrated: 6 TB disk, which contained 3 TB of data
-
Maximum number of disks on a single VM migrated: 50
-
Highest observed single datastore read rate from a single ESXi server: 312 MiB/second
-
Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second
-
Highest observed virtual NIC transfer rate to an OpenShift worker: 327 MiB/second
-
Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed when transferring nonconcurrent migration of 1.5 TB utilized data)
-
Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35/50 GiB used, from a single ESXi)
-
Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35/50 GiB used, from 8 ESXi servers, 10 VMs from each ESXi)
Warm migration recommendations
The following recommendations are specific to warm migrations:
-
Migrate up to 400 disks in parallel
Testing involved migrating 200 VMs in parallel, with 2 disks each using 8 ESXi hosts, for a total of 400 disks. No tests were run on migration plans migrating over 400 disks in parallel, so it is not recommended to migrate over this number of disks in parallel.
-
Migrate up to 200 disks in parallel for the fastest rate
Testing was successfully performed on parallel disk migrations with 200, 300, and 400 disks. There was a decrease in the precopy migration rate, approximately 25%, between the tests migrating 200 disks and those migrating 300 and 400 disks.
Therefore, it is recommended to perform parallel disk migrations in groups of 200 or fewer, instead of 300 to 400 disks, unless a decline of 25% in precopy speed does not affect your cutover planning.
-
When possible, set cutover time to be immediately after a migration plan starts
To reduce the overall time of warm migrations, it is recommended to set the cutover to occur immediately after the migration plan is started. This causes Forklift to run only one precopy per VM. This recommendation is valid, no matter how many VMs are in the migration plan.
-
Increase precopy intervals between snapshots
If you are creating many migration plans with a single VM each and have enough time between the migration start and the cutover, increase the value of the controller_precopy_interval parameter to between 120 and 240 minutes, inclusive. The longer interval reduces the total number of snapshots and disk transfers per VM before the cutover.
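For example, the interval can be set in the spec section of the ForkliftController CR; the value shown here is an assumed example within the recommended 120 to 240 minute range:

```yaml
# ForkliftController CR excerpt: lengthen the pause between
# warm-migration precopy snapshots (value in minutes).
spec:
  controller_precopy_interval: 180
```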
Maximum values tested for warm migrations
The following maximum values were tested for warm migrations:
-
Maximum number of ESXi hosts tested: 8
-
Maximum number of worker nodes: 12
-
Maximum number of VMs in a single migration plan: 200
-
Maximum number of total parallel disk transfers: 400, with 200 VMs, 6 ESXis, and a transfer rate of 667 MB/s
-
Maximum single disk size migrated: 6 TB disk, which contained 3 TB of data
-
Maximum number of disks on a single VM migrated: 3
-
Maximum number of parallel disk transfers per ESXi host: 68
-
Maximum transfer rate observed of a single disk with no concurrent migrations: 76.5 MB/s
-
Maximum transfer rate observed of multiple disks from a single ESXi host: 253 MB/s (concurrent migration of 10 VMs, 1 disk each, 35/50 GiB used per disk)
-
Total transfer rate observed of multiple disks (210) from 8 ESXi hosts: 802 MB/s (concurrent migration of 70 VMs, 3 disks each, 35/50 GiB used per disk)
Recommendations for migrating VMs with large disks
The following recommendations are suggested for VMs with data on disk totaling 1 TB or greater for each individual disk:
-
Schedule appropriate maintenance windows for migrating large disk virtual machines (VMs). Such migrations are sensitive operations and might require careful planning of maintenance windows and downtime, especially during periods of lower storage and network activity.
-
Check that no other migration activities or other heavy network or storage activities are run during those large virtual machine (VM) migrations. During those migrations, prioritize Forklift activities. Plan to migrate those VMs to a time when there are fewer activities on those VMs and related datastore.
-
For large VMs with a high churn rate, which means data changes in amounts of 100 GB or more between snapshots, consider reducing the warm migration controller_precopy_interval from the default of 60 minutes. Ensure that this process starts at least 24 hours before the scheduled cutover to allow multiple successful precopy snapshots to complete. When scheduling the cutover, ensure that the maintenance window allows enough time for the last snapshot of changes to be copied over, and begin the cutover process at the start of that maintenance window.
In cases of particularly large single-disk VMs, where some downtime is possible, select cold migrations rather than warm migrations, especially in the case of large VM snapshots.
-
Consider splitting data on particularly large disks to multiple disks, which enables parallel disk migration with Forklift when warm migration is used.
-
If you have large database disks with continuous writes of large amounts of data, where downtime and VM snapshots are not possible, it might be necessary to consider database vendor-specific replication options of the database data to target these specific migrations outside Forklift. Consult the vendor-specific options of your database if this case applies.
Increasing AIO sizes and buffer counts for NBD transport mode
You can change Network Block Device (NBD) transport network file copy (NFC) parameters to increase migration performance when you use Asynchronous Input/Output (AIO) buffering with Forklift.
| Using AIO buffering is only suitable for cold migration use cases. Disable AIO settings before starting warm migrations. For more details, see Disabling AIO buffering. |
Key findings
-
The best migration performance was achieved by migrating multiple (10) virtual machines (VMs) on a single ESXi host with the following values:
-
VixDiskLib.nfcAio.Session.BufSizeIn64KB=16 -
vixDiskLib.nfcAio.Session.BufCount=4
-
-
The following improvements were noted when using AIO buffer settings (asynchronous buffer counts):
-
Migration time was reduced by 31.1%, from 0:24:32 to 0:16:54.
-
Read rate was increased from 347.83 MB/s to 504.93 MB/s.
-
-
There was no significant improvement observed when using AIO buffer settings with a single VM.
-
There was no significant improvement observed when using AIO buffer settings with multiple VMs from multiple hosts.
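The effective AIO buffer size follows directly from the BufSizeIn64KB setting: each unit is 64 KiB. For the recommended value of 16:

```shell
# AIO buffer size in bytes = BufSizeIn64KB * 64 * 1024.
aio_buf_bytes() { echo $(( $1 * 64 * 1024 )); }
aio_buf_bytes 16   # 1048576 bytes (1 MiB)
```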
Key requirements for support for AIO sizes and buffer counts
Support is based upon tests performed using the following versions:
-
vSphere 7.0.3
-
VDDK 7.0.3
Enabling and configuring AIO buffering
You can enable and configure Asynchronous Input/Output (AIO) buffering for use with Forklift.
-
Ensure that the forklift-controller pod in the openshift-mtv namespace supports the AIO buffer values. Because the pod name suffix is dynamic, check the pod name by running the following command:

oc get pods -n openshift-mtv | grep forklift-controller | awk '{print $1}'

Example output:

forklift-controller-667f57c8f8-qllnx
-
Check the environment variables of the pod by running the following command:
oc get pod forklift-controller-667f57c8f8-qllnx -n openshift-mtv -o yaml -
Check for the following lines in the output:
...
- name: VIRT_V2V_EXTRA_ARGS
- name: VIRT_V2V_EXTRA_CONF_CONFIG_MAP
...
In the openshift-mtv namespace, edit the ForkliftController custom resource (CR) by performing the following steps:
-
Access the ForkliftController CR for editing by running the following command:

oc edit forkliftcontroller -n openshift-mtv
-
Add the following lines to the spec section of the ForkliftController CR:

virt_v2v_extra_args: "--vddk-config /mnt/extra-v2v-conf/input.conf"
virt_v2v_extra_conf_config_map: "perf"
-
-
Create the required config map perf by running the following command:

oc -n openshift-mtv create cm perf
Convert the desired buffer configuration values to Base64. For example, for 16/4, run the following command:
echo -e "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nvixDiskLib.nfcAio.Session.BufCount=4" | base64

The output is similar to the following:
Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo= -
In the config map perf, enter the Base64 string in the binaryData section, for example:

apiVersion: v1
kind: ConfigMap
binaryData:
  input.conf: Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo=
metadata:
  name: perf
  namespace: openshift-mtv
Restart the forklift-controller pod to apply the new configuration.
-
Ensure that the VIRT_V2V_EXTRA_ARGS environment variable reflects the updated settings.
-
Run a migration plan and check the logs of the migration pod. Confirm that the AIO buffer settings are passed as parameters, particularly the --vddk-config value. For example, the command line in the logs looks similar to the following:

exec: /usr/bin/virt-v2v … --vddk-config /mnt/extra-v2v-conf/input.conf

If debug_level = 4, the logs include a section similar to the following:

Buffer size calc for 16 value: (16 * 64 * 1024 = 1048576)
nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAio_OpenSession: Opening an AIO session.
nbdkit: vddk[1]: debug: [NFC INFO] NfcAioInitSession: Disabling read-ahead buffer since the AIO buffer size of 1048576 is >= the read-ahead buffer size of 65536. Explicitly setting flag 'NFC_AIO_SESSION_NO_NET_READ_AHEAD'
nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAioInitSession: AIO Buffer Size is 1048576
nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAioInitSession: AIO Buffer Count is 4
Verify that the correct config map values are in the migration pod. Do this by logging into the migration pod and running the following command:
cat /mnt/extra-v2v-conf/input.conf

Example output:

VixDiskLib.nfcAio.Session.BufSizeIn64KB=16
vixDiskLib.nfcAio.Session.BufCount=4
-
Optional: Enable debug logs by running the following command. The command converts the configuration to Base64, including a high log level:
echo -e "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nVixDiskLib.nfcAio.Session.BufCount=4\nVixDiskLib.nfc.LogLevel=4" | base64

Adding a high log level reduces performance and is for debugging purposes only.
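Before pasting a Base64 payload into the config map, you can round-trip it locally to confirm that it decodes to the intended settings:

```shell
# Encode the VDDK settings and decode them again as a sanity check
# (GNU coreutils base64; key names as used earlier in this procedure).
payload=$(printf 'VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nvixDiskLib.nfcAio.Session.BufCount=4\n' | base64 | tr -d '\n')
printf '%s' "$payload" | base64 -d
```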
Disabling AIO buffering
You can disable AIO buffering for cold migrations that use Forklift. You must disable AIO buffering before performing a warm migration with Forklift.
| The procedure that follows assumes the AIO buffering was enabled and configured according to the procedure in Enabling and configuring AIO buffering. |
-
In the openshift-mtv namespace, edit the ForkliftController custom resource (CR) by performing the following steps:
-
Access the ForkliftController CR for editing by running the following command:

oc edit forkliftcontroller -n openshift-mtv
-
Remove the following lines from the spec section of the ForkliftController CR:

virt_v2v_extra_args: "--vddk-config /mnt/extra-v2v-conf/input.conf"
virt_v2v_extra_conf_config_map: "perf"
-
-
Delete the config map named perf:

oc delete cm perf -n openshift-mtv
Optional: Restart the forklift-controller pod to ensure that the changes take effect.
Forklift performance addendum
The data that forms the basis of the preceding recommendations was collected from testing in Red Hat labs and is provided for reference only.
Overall, these numbers should be considered to show the best-case scenarios.
The observed performance of migration can differ from these results and depends on several factors.
Telemetry
Red Hat uses telemetry to collect anonymous usage data from Forklift installations to help us improve the usability and efficiency of Forklift.
Forklift collects the following data:
-
Migration plan status: The number of migrations. Includes those that failed, succeeded, or were canceled.
-
Provider: The number of migrations per provider. Includes oVirt, vSphere, OpenStack, OVA, and KubeVirt providers.
-
Mode: The number of migrations by mode. Includes cold and warm migrations.
-
Target: The number of migrations by target. Includes local and remote migrations.
-
Plan ID: The ID number of the migration plan. The number is assigned by Forklift.
Metrics are calculated every 10 seconds and are reported per week, per month, and per year.