This repository describes how to install FusionCompute on a common PC for testing or learning purposes.
- Machine Number: 7
- Machine Hostname: Isetcom07
- Host Machine IP: 172.16.16.207/24
- CNA01 IP: 172.16.16.11/24
- VRM IP: 172.16.16.12/24
- Default Gateway: 172.16.16.1
FusionCompute is virtualization software deployed on physical servers to virtualize server resources, including CPUs, memory, and network interface cards (NICs). In other words, FusionCompute enables a physical server to simultaneously run multiple isolated VM execution environments, improving resource utilization and meeting flexible, dynamic resource allocation requirements for applications.
The process is simple: it basically consists of installing a Linux OS on one or more common PCs, enabling KVM in the kernel, and finally running the FusionCompute CNA and VRM nodes as VMs on those PCs.
This guide covers only the second installation method and describes how to set up, install, and run FusionCompute.
Please refer to Huawei's website for more information regarding the ISO files. Here is a checklist:
- Enable virtualization on both PCs
- Disable Secure Boot and set the boot mode to UEFI (might come in handy when installing the host Ubuntu OS)
- Have a USB installer for Ubuntu
- Have a ready-to-use Ubuntu machine
The first step is to update and upgrade the machine; in our case it's an Ubuntu 18 host.
$ sudo apt update && sudo apt upgrade -y
FusionCompute relies on hardware virtualization, so to check whether the CPU supports it we can simply run the following command:
$ egrep -c '(svm|vmx)' /proc/cpuinfo
If the result is 0, the CPU does not support virtualization. If the result is greater than or equal to 1, the CPU supports virtualization.
In our case the output is 6.
Kernel-based Virtual Machine (KVM) is a virtualization feature of the Linux kernel that lets a physical Linux machine run virtual machines. A virtual machine is a software environment that acts as an independent computer within another physical computer.
We just need to run the following command
$ sudo apt install qemu qemu-kvm libvirt-bin bridge-utils virt-manager
Upon completion, we just need to restart the machine.
$ sudo reboot
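After rebooting, a quick sanity check (a minimal sketch, assuming the packages above installed without errors) is to confirm that the KVM module is loaded and that libvirt is running:
$ lsmod | grep kvm
$ sudo systemctl status libvirtd
$ sudo virsh list --all
If virsh prints an empty VM table without errors, KVM and libvirt are ready for the next steps.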
In order to get FusionCompute to work, we need to have the 3 machines on the same network.
Now we need to configure the interface and set it to bridge mode so that it can be used and seen by the CNA01 machine (the one we're creating).
In my case I've used: Interface: enp0s04, IP: 172.16.16.207 (Lab Machine #7)
This step was straightforward.
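For reference, a minimal sketch of what the bridge could look like with netplan on Ubuntu 18 (the file name 01-bridge.yaml is arbitrary; brx is the bridge name we reference later in virt-manager, and the NIC name and IP should match your own machine):
# /etc/netplan/01-bridge.yaml (hypothetical file name)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s04:      # physical NIC from the step above
      dhcp4: no
  bridges:
    brx:          # bridge used later as the VM network source
      interfaces: [enp0s04]
      addresses: [172.16.16.207/24]
      gateway4: 172.16.16.1
$ sudo netplan apply
After applying, the host IP moves from enp0s04 to brx, and pinging 172.16.16.1 should still work.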
Now we just have to set the folder that contains the FusionCompute CNA and FusionCompute VRM ISO files.
The result would look something like this:
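Equivalently, the ISO folder can be registered from the command line with virsh (a sketch; /home/isetcom07/isos is just a placeholder for wherever the FusionCompute ISOs actually live):
$ sudo virsh pool-define-as isos dir --target /home/isetcom07/isos
$ sudo virsh pool-start isos
$ sudo virsh pool-autostart isos
$ sudo virsh pool-list --all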
In this step we're going to create a VM from the FusionCompute CNA ISO file.
Now we have to load the ISO file.
We're advised to use >= 4096 MB of RAM and 3 CPU cores.
This disk partition will work as the main drive for the machine, and we're advised to use >= 200.0 GB.
Now we just need to specify the Storage Volume Location
The next step is to resume the installation of FusionCompute CNA01, and for that we can just select the custom storage.
In the next step we need to change the machine name to CNA01 and tick "Customize configuration before install".
Setting the CPU model: as seen in Huawei's guide, we need to set the CPU model to host-passthrough or else FusionCompute won't work properly.
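If you prefer the command line over the virt-manager dialog, the same setting can be applied by editing the domain XML (a sketch, assuming the VM is already named CNA01):
$ sudo virsh edit CNA01
<!-- replace the existing <cpu> element with: -->
<cpu mode='host-passthrough'/>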
We need to check that the memory is >= 7168 MB for CNA to work properly.
In a nutshell, VirtIO is a paravirtualized I/O framework between the guest and the host; virtio-net, for example, acts as a liaison for network data between the host and the guest.
So I set the disk bus to VirtIO for better performance.
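In the domain XML this corresponds to the bus attribute of the disk's target element; roughly (the image path below is hypothetical):
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/CNA01.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>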
To have better performance and no errors, we're advised to use the following (see the XML sketch after this list):
- Network Source: Bridge brx (the bridge NIC that we configured earlier)
- Device model: e1000
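In the domain XML, that combination looks roughly like this (brx being the bridge we created on the host earlier):
<interface type='bridge'>
  <source bridge='brx'/>
  <model type='e1000'/>
</interface>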
After verifying all the settings and requirements we're ready to start the installation.
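For reference, the whole VM creation can also be scripted with virt-install instead of clicking through virt-manager; this is only a sketch with placeholder paths and ISO name, using 8192 MB to satisfy the >= 7168 MB memory requirement mentioned above:
$ sudo virt-install \
    --name CNA01 \
    --memory 8192 \
    --vcpus 3 \
    --cpu host-passthrough \
    --cdrom /home/isetcom07/isos/FusionCompute_CNA.iso \
    --disk path=/var/lib/libvirt/images/CNA01.qcow2,size=200,bus=virtio \
    --network bridge=brx,model=e1000 \
    --os-variant generic \
    --graphics vnc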
The first thing that needs to be configured is:
- Network:
We need to set the IP to a non-leased IP, and for that I chose 172.16.16.11 because the other machines on the network use the 172.16.16.2xx range. The default gateway also needs to be set: 172.16.16.1
- Hostname:
We're going to use CNA01 as the hostname for that machine.
We're going to set up a machine password to protect it and its resources.
Now, after making sure that all of the above is valid and configured, we can start the installation by pressing F12.
After the installation ends, we should be able to login and check the configuration.
- USER: root
- Pass: isetcom07!
Remember to always use a secure password!
Because this is a testing lab I used a simple one.
We proceed to ping the default gateway that we configured earlier.
$ ping 172.16.16.1
64 bytes from 172.16.16.1: icmp_seq=1 ttl=64 time=0.68ms
64 bytes from 172.16.16.1: icmp_seq=2 ttl=64 time=1.26ms
64 bytes from 172.16.16.1: icmp_seq=3 ttl=64 time=1.16ms
^C
- VRM IP: 172.16.16.12
- VRM Gateway: 172.16.16.1
We just need to repeat all the previous steps, but using the FusionCompute VRM ISO file.
This part is optional; the steps are:
- Installing Ubuntu.
- Installing and Configuring KVM.
- Installing CNA02 (same way as before).
- Connecting PC1 and PC2 on the same network.
Now after verifying connectivity, we can proceed to install the NFS server on our host PC1. NFS (Network File System) allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files.
We just need to run:
$ sudo apt install nfs-kernel-server
$ sudo apt install vim
Now we need to make a shared directory:
~$ mkdir nfs_vm
~$ cd nfs_vm
~/nfs_vm$ pwd
Now we need to edit the /etc/exports to configure the NFS configuration and we just need to append the following line:
/home/isetcom07/nfs_vm *(rw,sync,no_subtree_check,no_root_squash)
For the configuration change to take effect, we need to restart the nfs-kernel-server:
$ sudo systemctl restart nfs-kernel-server
$ sudo systemctl enable nfs-kernel-server
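To confirm that the directory is actually exported (optional sanity check):
$ sudo exportfs -v
$ showmount -e localhost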
In our case the IP of the VRM is: 172.16.16.12
If everything runs perfectly, we should be able to access the VRM via https://172.16.16.12 .
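Before opening a browser, a quick reachability check from the host (a sketch; -k skips certificate validation, which is usually necessary in a lab setup with a self-signed certificate):
$ curl -kI https://172.16.16.12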
We can use the following creds:
- Login: admin
- Password: IaaS@PORTAL-CLOUD8!
We just need to follow the same steps we used earlier to configure the storage and bridge networks.