There is a new blog post on the corporate Tintri site about automating VM scale-out migration recommendations. Along with that script, I wrote another script to set VM affinity for VM scale-out migration rules. These rules allow you to exclude VMs from being considered for migration.
Let’s look at a use case. Suppose a VMstore pool contains one Flash VMstore and two hybrid VMstores. VDI is running on your Flash VMstore, and you don’t want the VDI VMs to migrate to the hybrid VMstores, so the VDI VMs are put into a Service Group by name pattern. Unfortunately, because affinity is set at the VM level, the script has to be executed periodically: whenever a VM is added to the Service Group, the script needs to run again.
VMs are excluded by setting --affinity to 'never'. VMs are specified by a service group (--sg), by a list of VM names (--vms), or by pattern matching on the name (--name). To clear the affinity, set --affinity to 'clear'.
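The post doesn’t show the argument parsing, but the flags above could be wired up with argparse roughly like this. This is a hypothetical sketch; the actual parser in set_reco_vm_affinity.py may differ.

```python
import argparse

def parse_args(argv):
    # Hypothetical argument parser for the flags described above.
    parser = argparse.ArgumentParser(
        description="Set VM affinity for scale-out migration rules")
    parser.add_argument("--affinity", choices=["never", "clear"], required=True,
                        help="'never' excludes VMs from migration; 'clear' removes the rule")
    parser.add_argument("--sg", default="",
                        help="service group name")
    parser.add_argument("--vms", nargs="*", default=[],
                        help="list of VM names")
    parser.add_argument("--name", default="",
                        help="substring to match VM names against")
    return parser.parse_args(argv)

args = parse_args(["--affinity", "never", "--sg", "VDI"])
print(args.affinity)  # never
```

The mutually exclusive 'never'/'clear' choice is enforced up front by argparse, so the later "Bad affinity rule" check would only trip if the script is driven some other way.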
So let’s look at the script, set_reco_vm_affinity.py.
```python
# Let's get to work
try:
    if (len(vms) > 0):
        print("Collecting VMs from list")
        vm_uuids += get_vms_in_list(server_name, session_id, vms)

    if (service_group != ""):
        print("Collecting VMs from service group")
        sg_uuid = get_sg_by_name(server_name, session_id, service_group)
        if (sg_uuid == ""):
            raise tintri.TintriRequestsException("Can't find service group " + service_group)
        vm_uuids += get_vms_by_sg(server_name, session_id, sg_uuid)

    if (vm_contains_name != ""):
        print("Collecting VMs from name")
        vm_uuids += get_vms_by_name(server_name, session_id, vm_contains_name)

    if (len(vm_uuids) == 0):
        raise tintri.TintriRequestsException("No VMs to set rules")

    if debug_mode:
        count = 1
        for uuid in vm_uuids:
            print(str(count) + ": " + uuid)
            count += 1

    # Process according to affinity
    if (affinity == "never"):
        set_vm_affinity_never(server_name, session_id, vm_uuids)
    elif (affinity == "clear"):
        clear_vm_affinity(server_name, session_id, vm_uuids)
    else:
        raise tintri.TintriRequestsException("Bad affinity rule: " + affinity)

except tintri.TintriRequestsException as tre:
    print_error(tre.__str__())
    sys.exit(4)
except tintri.TintriApiException as tae:
    print_error(tae.__str__())
    sys.exit(5)
```
In the snippet above, the code builds the list of VM UUIDs and then processes them according to the specified affinity.
The crux of this script is how the affinity is set:
```python
# A helper function that sets the VM affinity rule for migration recommendations.
def set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule):
    url = "/v310/vm/"

    for vm_uuid in vm_uuids:
        rule_url = url + vm_uuid + "/affinity"
        r = tintri.api_put(server_name, rule_url, affinity_rule, session_id)
        if r.status_code != 204:
            tintri.api_logout(server_name, session_id)
            message = "The HTTP response for put affinity rule to the server is not 204."
            raise tintri.TintriApiException(message, r.status_code,
                                            rule_url, str(affinity_rule), r.text)
        sys.stdout.write(".")
    print("")
```
```python
# Set the VM affinity rule to never for a list of VMs.
def set_vm_affinity_never(server_name, session_id, vm_uuids):
    print("Setting " + str(len(vm_uuids)) + " VMs to never migrate")
    affinity_rule = \
        {"typeId" : beans + "vm.VirtualMachineAffinityRule",
         "ruleType" : "NEVER"
        }
    set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule)


# Clear the VM affinity rule for a list of VMs.
def clear_vm_affinity(server_name, session_id, vm_uuids):
    print("Clearing " + str(len(vm_uuids)) + " VMs affinity rules")
    affinity_rule = \
        {"typeId" : beans + "vm.VirtualMachineAffinityRule",
        }
    set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule)
```
In the function set_vm_affinity(), the API /v310/vm/{vm_uuid}/affinity is invoked once for each VM UUID in the input list. The two helper functions, set_vm_affinity_never() and clear_vm_affinity(), build the appropriate rule payload and pass it along.
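To make the difference between the two payloads concrete, here is a small standalone sketch that builds the same rule dictionaries. The value of the shared beans type-ID prefix is an assumption here; in the script it is defined as a common constant.

```python
import json

# Assumed value of the shared "beans" type-ID prefix used in the script.
beans = "com.tintri.api.rest.v310.dto.domain.beans."

def make_affinity_rule(rule_type=None):
    # Build the payload that is PUT to /v310/vm/{vm_uuid}/affinity.
    # Including "ruleType": "NEVER" excludes the VM from migration;
    # omitting "ruleType" clears the rule.
    rule = {"typeId": beans + "vm.VirtualMachineAffinityRule"}
    if rule_type is not None:
        rule["ruleType"] = rule_type
    return rule

print(json.dumps(make_affinity_rule("NEVER"), indent=2))
```

So "clear" is not a separate API call: it is the same PUT with the ruleType field left out.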
Although it has been discussed before, you can review how to get the VMs in a service group with this snippet of code.
```python
# Return a list of VM UUIDs based on a service group UUID.
def get_vms_by_sg(server_name, session_id, sg_uuid):
    # Filter the VM list down to the specified service group,
    # returning only the fields we need.
    vm_filter = {"includeFields" : ["uuid", "vmware"],
                 "serviceGroupIds" : sg_uuid
                }

    vm_uuids = get_vms(server_name, session_id, vm_filter)
    return vm_uuids
```
```python
# Return VM UUIDs constrained by a filter.
def get_vms(server_name, session_id, vm_filter):
    vm_uuids = []

    # Get the list of VMs, one page at a time.
    get_vm_url = "/v310/vm"
    vm_paginated_result = {'next' : "offset=0&limit=" + str(page_size)}

    # While there are more pages, fetch them and collect the VM UUIDs.
    while 'next' in vm_paginated_result:
        url = get_vm_url + "?" + vm_paginated_result['next']

        r = tintri.api_get_query(server_name, url, vm_filter, session_id)
        print_debug("The JSON response of the VM get invoke to the server " +
                    server_name + " is: " + r.text)

        vm_paginated_result = r.json()
        print_debug("VMs:\n" + format_json(vm_paginated_result))

        # Collect the UUID of each VM in the page.
        items = vm_paginated_result["items"]
        for vm in items:
            vm_uuids.append(vm["uuid"]["uuid"])

    return vm_uuids
```
Before I close this post, I would like to mention that you can post API questions on Tintri’s Hub. Until next time,
– Rick –