- Description
- Structure and conventions
- Pre-requisites
- Sandbox provisioning
- Retrieving Bastion VM credentials
- Verifying results
This repository provisions a nested Virtuozzo Infrastructure (VHI) sandbox for the Virtuozzo Infrastructure Operations Professional courses. One codebase supports multiple curricula via the Terraform variable `lab_track` in `00_vars_lab_track.tf`.
Do not change lab_track on an existing workspace without destroying and recreating the environment: network layout and instance user_data differ by track.
The operations track deploys five VMs by default (a bastion plus node1.lab–node4.lab, with one worker node as set by the profile). The S3 track deploys four VMs (a bastion plus node1.lab–node3.lab only). Virtual networks match the selected `lab_track`. Reference diagram:
If the diagram shows more cluster nodes, treat extras as operations context (node4 / optional node5.lab exercise); S3 uses three main nodes only.
The repository contains:
- Terraform plan files, ending with the `.tf` extension.
- Cloud-init scripts under `cloud-init/`: `node.sh` and `bastion.sh`, with behavior gated by `lab_track` (Terraform also prepends `_lab_log.sh` for shared logging).
- `openstack-creds.sh` for sourcing cloud credentials.
- Auxiliary files for students, including `WonderSI_Logos.zip`.
Terraform plan files follow this naming scheme:
- `00_vars_lab_track.tf` — student/operator variables in file order: `lab_track`; VHI image, flavors, storage; `external_network-name`; bastion image, flavor, storage; `ssh_key`; then `locals.lab_track_profiles` at the bottom of the file.
- `10_data_*.tf` files contain runtime data collection modules.
- `20_res_*.tf` files contain resource definitions.
To use this automation, your environment must meet the requirements described below.
- The OpenStack or Virtuozzo Infrastructure cloud must support nested virtualization.
How to test whether nested virtualization is enabled: deploy a test VM and run the following in the guest (Intel exposes `vmx`, AMD exposes `svm`):

```shell
grep -E 'vmx|svm' /proc/cpuinfo
```

If this prints matching lines, the VM likely exposes hardware virtualization flags to the guest (nested virtualization may be available; the hypervisor and cloud policy still apply).
You need one floating IP for the bastion and one address on the cloud external network for the lab router (SNAT). The lab deploys three main nodes plus `worker_node_count` workers from `lab_track_profiles`; each node's fourth NIC attaches to the `VM_Public` network. Router SNAT and the bastion floating IP use `external_network-name` (default `public`).
Recommended minimums, including the extra worker students add as node5.lab (8 vCPU, 16 GiB RAM, 150 + 2×100 GiB volumes):
- vCPU: 68 cores.
- RAM: 132 GiB.
- Disk space: ~1760 GiB.
These figures exceed what the first `terraform apply` consumes; they reserve headroom for the lab exercise.
The following minimums align with the default flavors after a single `terraform apply`: a bastion (2 vCPU / 4 GiB) plus three main nodes (16 vCPU / 32 GiB each), with no worker VMs.
- vCPU: 50 cores.
- RAM: 100 GiB.
- Disk space: ~1060 GiB (bastion 10 GiB + three nodes × (150 + 2×100) GiB volumes).
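The single-apply minimums above can be cross-checked with plain shell arithmetic, using the figures already listed in this section:

```shell
# Cross-check the single-apply minimums: bastion (2 vCPU / 4 GiB RAM / 10 GiB disk)
# plus three main nodes (16 vCPU / 32 GiB RAM / 150 + 2x100 GiB volumes each).
vcpu=$((2 + 3 * 16))
ram=$((4 + 3 * 32))
disk=$((10 + 3 * (150 + 2 * 100)))
echo "vCPU=${vcpu} RAM=${ram}GiB Disk=${disk}GiB"
```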
The project you are working with must have the following images:
- Virtuozzo Infrastructure ISO image
- Virtuozzo Infrastructure QCOW2 image
- Ubuntu 20.04 QCOW2 image (for the bastion VM)
Please do not use other versions of Virtuozzo Infrastructure or Ubuntu images, as the deployment script will likely fail to configure them.
To provision a sandbox, you will need to complete five steps:
- Clone this repository to your workstation.
- Install Terraform on your workstation.
- Adjust Terraform variables.
- Adjust and source the OpenStack credentials file.
- Apply Terraform configuration.
```shell
git clone https://github.com/virtuozzo/vi-sandbox
cd vi-sandbox
```
Download and install Terraform for your operating system from the Terraform website.
Use Terraform 0.14.0 or newer (see `required_version` in the root module). This configuration pins the OpenStack provider to `~> 1.48` and `random` to `~> 3.5`; run `terraform init` in the repository root so the correct provider versions are installed.
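For orientation, the pins in the root module look roughly like this (a sketch; check the `terraform` block in the repository for the authoritative values):

```hcl
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.48"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}
```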
You will need to review and usually adjust the variables in `00_vars_lab_track.tf`. They appear in this order in that file:

- `lab_track` — `operations` or `s3`, depending on the course you are completing.
- Virtuozzo Infrastructure nodes: `vhi-image`, `vhi-image_isUUID`, `vhi-flavor_main`, `vhi-flavor_worker`, `vhi-storage_policy`.
- Networking: `external_network-name` (the Neutron external network for the lab router SNAT and the bastion floating IP when a bastion is deployed). Internal lab network names and CIDRs are fixed in `20_res_network.tf`.
- Bastion: `bastion-image`, `bastion-flavor`, `bastion-storage_policy` (ignored when the selected profile sets `deploy_bastion = false`).
- `ssh_key` — path to your public SSH key for the bastion and cluster nodes.
- `lab_track_profiles` — at the bottom of the file, inside the `locals` block: per-track `mn_count`, `worker_node_count`, `deploy_bastion`, `enable_cluster_compute`, `default_cluster_name`. Normally you only set `lab_track`; change the profile map only if you know what you are doing.
The subsections below follow the same order (VHI → networking → bastion → SSH). `lab_track` and `lab_track_profiles` are summarized in the list above.
Adjust VHI node variables in 00_vars_lab_track.tf, in file order:
- Virtuozzo Infrastructure image name (`vhi-image`, `vhi-image_isUUID`).
- Main node flavor.
- Worker node flavor.
- Virtuozzo Infrastructure node storage policy.
You need to set the vhi-image variable to the name (or UUID—see below) of the Virtuozzo Infrastructure image in your project.
For example, if in your cloud, the Virtuozzo Infrastructure image is named VHI-latest.qcow2, the variable should look like this:
```hcl
## VHI image name
variable "vhi-image" {
  type    = string
  default = "VHI-latest.qcow2" # If required, replace the image name with the one you have in the cloud
}
```
Name vs UUID: the variable `vhi-image_isUUID` defaults to `false`. In that mode, Terraform looks up `vhi-image` by image name in Glance. If your cloud has images of different versions under the same name (e.g. `VHI-latest.qcow2`), set `vhi-image_isUUID` to `true` and set `vhi-image` to the UUID string (the name lookup is skipped).
```hcl
## Set to true when vhi-image is a Glance image UUID, not a name
variable "vhi-image_isUUID" {
  type    = bool
  default = false
}
```
You need to set the vhi-flavor_main variable to the flavor name that provides at least 16 CPU cores and 32 GiB RAM.
For example, if such a flavor in your cloud is named `va-16-32`, the variable should look like this:

```hcl
## Main node flavor name
variable "vhi-flavor_main" {
  type    = string
  default = "va-16-32" # If required, replace the flavor name with the one you have in the cloud
}
```
For `lab_track = "operations"`, set the `vhi-flavor_worker` variable to a flavor name that provides at least 8 CPU cores and 16 GiB RAM. The S3 track does not deploy workers, so this variable is unused there.
For example, if such a flavor in your cloud is named `va-8-16`, the variable should look like this:

```hcl
## Worker node flavor name
variable "vhi-flavor_worker" {
  type    = string
  default = "va-8-16" # If required, replace the flavor name with the one you have in the cloud
}
```
You need to set the `vhi-storage_policy` variable to a storage policy with at least 1750 GiB of storage available in the project's quota.

For example, if such a policy in your cloud is named `default`, the variable should look like this:

```hcl
## VHI node storage policy
variable "vhi-storage_policy" {
  type    = string
  default = "default" # If required, replace the storage policy with the one you have in the cloud
}
```
Set external_network-name in 00_vars_lab_track.tf to match your cloud.
For example, if your physical network is called `public`, the variable should look like this:

```hcl
## External network
variable "external_network-name" {
  type    = string
  default = "public" # If required, replace the network name with the one you have in the cloud
}
```
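If you are unsure which networks in your cloud are external, the OpenStack CLI can list them (assumes the `openstack` client is installed and your credentials are sourced):

```shell
# List networks flagged as external; use one of these names
openstack network list --external -f value -c Name
```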
Adjust bastion variables in 00_vars_lab_track.tf:
- Bastion image name.
- Bastion flavor.
- Bastion storage policy.
You need to set the bastion-image variable to the name of the Bastion image in your project.
For example, if the bastion image in your cloud is named `Ubuntu-20.04`, the variable should look like this:

```hcl
## Bastion image
variable "bastion-image" {
  type    = string
  default = "Ubuntu-20.04" # If required, replace the image name with the one you have in the cloud
}
```
You need to set the bastion-flavor variable to the flavor name that provides at least 2 CPU cores and 4 GiB RAM.
For example, if such a flavor in your cloud is named `va-2-4`, the variable should look like this:

```hcl
## Bastion flavor
variable "bastion-flavor" {
  type    = string
  default = "va-2-4" # If required, replace the flavor name with the one you have in the cloud
}
```
You need to set the `bastion-storage_policy` variable to a storage policy with at least 10 GiB of storage available in the project's quota.

For example, if such a policy in your cloud is named `default`, the variable should look like this:

```hcl
## Bastion storage policy
variable "bastion-storage_policy" {
  type    = string
  default = "default" # If required, replace the storage policy with the one you have in the cloud
}
```
Set the ssh_key variable in 00_vars_lab_track.tf to point to your public SSH key.
For example, if your SSH key is located at `~/.ssh/student.pub`, the variable should look like this:

```hcl
## Bastion/Node access SSH key
variable "ssh_key" {
  type    = string
  default = "~/.ssh/student.pub" # Replace with the path to your public SSH key
}
```
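If you do not have a key pair yet, you can generate one; the path below is an example, so point `ssh_key` at whatever `.pub` file you create:

```shell
# Generate an ed25519 key pair with no passphrase (example path in the current directory)
ssh-keygen -t ed25519 -N '' -f ./student -C "student@lab"

# The public half is what the ssh_key variable should point to
cat ./student.pub
```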
This repository contains an openstack-creds.sh file you can adjust to get a usable OpenStack credentials file.
In it, you will need to change some environment variables related to your OpenStack credentials.
Follow the instructions in the file to get a usable OpenStack credentials file:
```shell
export OS_PROJECT_DOMAIN_NAME=vhi-ops # replace "vhi-ops" with your domain name
export OS_USER_DOMAIN_NAME=vhi-ops # replace "vhi-ops" with your domain name
export OS_PROJECT_NAME=student1 # replace "student1" with your project name
export OS_USERNAME=user.name # replace "user.name" with your user name
export OS_PASSWORD=********** # replace "**********" with password of your user
export OS_AUTH_URL=https://mycloud.com:5000/v3 # replace "mycloud.com" with the base URL of your cloud panel (do not replace the ":5000/v3" part)
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_TYPE=password
export OS_INSECURE=true
export PYTHONWARNINGS="ignore:Unverified HTTPS request is being made"
export NOVACLIENT_INSECURE=true
export NEUTRONCLIENT_INSECURE=true
export CINDERCLIENT_INSECURE=true
export OS_PLACEMENT_API_VERSION=1.22
export CLIFF_FIT_WIDTH=1
```
After you adjust the openstack-creds.sh file, source it in your terminal:
```shell
source openstack-creds.sh
```
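A quick sanity check that the sourced credentials work (assumes the `openstack` client is installed) is to request a token:

```shell
# Succeeds and prints the expiry time if authentication works
openstack token issue -f value -c expires
```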
Initialize Terraform in the directory and apply the plan:
```shell
terraform init && terraform apply
```
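If you prefer to review the changes before applying, the standard plan-file workflow works here too (the plan file name is an example):

```shell
terraform init
terraform plan -out=lab.tfplan   # review the planned changes first
terraform apply lab.tfplan       # apply exactly what was planned
```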
Changing lab_track after the fact on the same state is not supported! Destroy the stack (or use a fresh project/state) before switching tracks.
Wait at least 20 minutes before proceeding! Terraform will configure all VMs at first boot, which can take some time depending on cloud performance and internet connection speed.
The Bastion VM student user password is automatically generated by Terraform during deployment.
After terraform apply completes, the connection details are displayed in the output:
```hcl
bastion_connection_info = {
  "password" = "xK#9mPq!2wLnR$vT"
  "rdp_address" = "203.0.113.45:3390"
  "username" = "student"
}
```
To retrieve the credentials at any time, use one of the following commands:
Display all connection info:

```shell
terraform output bastion_connection_info
```

Get JSON output (useful for scripting):

```shell
terraform output -json bastion_connection_info
```

Extract specific values with jq:

```shell
terraform output -json bastion_connection_info | jq -r '.password'
terraform output -json bastion_connection_info | jq -r '.rdp_address'
```
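The jq filters above can be sanity-checked against a sample payload shaped like the output shown earlier; in practice, pipe the real `terraform output -json bastion_connection_info` command instead:

```shell
# Sample payload mirroring the structure of the Terraform output (values are examples)
info='{"password":"example-password","rdp_address":"203.0.113.45:3390","username":"student"}'

# Extract individual fields the same way as from the real output
addr=$(printf '%s' "$info" | jq -r '.rdp_address')
user=$(printf '%s' "$info" | jq -r '.username')
echo "$user@$addr"
```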
After applying the Terraform plan and waiting for the scripts to finish configuring the environment, you can verify access.
Connect to the Bastion VM using the remote console. If the Bastion VM is still being configured, you will see the following prompt:
Once the configuration of Bastion is complete, you should see the graphical login prompt:
Students typically use an RDP connection to the Bastion VM.
To verify that the nested Virtuozzo Infrastructure cluster is ready, do the following:
- Connect to the Bastion VM using an RDP client. Use the address and credentials from `terraform output bastion_connection_info`.
- Access the nested Virtuozzo Infrastructure Admin Panel using the desktop shortcut and log in as `admin`.
Operations track (lab_track = "operations"): After the compute cluster starts deploying, expect about an hour before initialization finishes; wait until the Admin Panel shows that process complete before treating the environment as ready.
S3 track (lab_track = "s3"): Terraform and first-boot automation deploy storage and HA only—there is no automated compute cluster. Initialization is effectively done once the storage cluster is configured, which usually takes about 20 minutes. Confirm storage and HA in the Admin Panel.