Month: August 2016

VM Scale-out Code Examples

There is a new blog post on the corporate Tintri site about automating VM scale-out migration recommendations. Along with that script, I wrote another script that sets VM affinity for VM scale-out migration rules. These rules allow you to exclude VMs from being considered for migration.

Let’s look at a use case. Suppose a VMstore pool contains one flash VMstore and two hybrid VMstores. VDI runs on the flash VMstore, and you don’t want the VDI VMs to migrate to the hybrid VMstores, so the VDI VMs are put into a Service Group by name pattern. Unfortunately, because affinity is set at the VM level, the script has to be executed periodically: whenever a VM is added to the Service Group, the script needs to run again.

VMs are excluded by setting --affinity to 'never'. VMs are specified by a service group (--sg), by a list of VM names (--vms), or by name pattern matching (--name). To clear the affinity, set --affinity to 'clear'.
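These options could be wired up with argparse along these lines. This is only a sketch: the option names come from the description above, while the defaults, help strings, and the function name parse_selection_args are my assumptions, not necessarily what set_reco_vm_affinity.py actually does.

```python
import argparse

# A sketch of option parsing for the affinity script; defaults and help
# strings are assumptions for illustration.
def parse_selection_args(argv):
    parser = argparse.ArgumentParser(
        description="Set VM affinity for scale-out migration rules")
    parser.add_argument("--affinity", choices=["never", "clear"], required=True,
                        help="'never' excludes VMs from migration; 'clear' removes the rule")
    parser.add_argument("--sg", default="",
                        help="service group name")
    parser.add_argument("--vms", default=[], nargs="*",
                        help="list of VM names")
    parser.add_argument("--name", default="",
                        help="select VMs whose name contains this pattern")
    return parser.parse_args(argv)

args = parse_selection_args(["--affinity", "never", "--sg", "VDI"])
print(args.affinity, args.sg)  # → never VDI
```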

So let’s look at the script, set_reco_vm_affinity.py.

# Let's get to work
try:
    if (len(vms) > 0):
        print("Collecting VMs from list")
        vm_uuids += get_vms_in_list(server_name, session_id, vms)

    if (service_group != ""):
        print("Collecting VMs from service group")
        sg_uuid = get_sg_by_name(server_name, session_id, service_group)
        if (sg_uuid == ""):
            raise tintri.TintriRequestsException("Can't find service group " + service_group)

        vm_uuids += get_vms_by_sg(server_name, session_id, sg_uuid)

    if (vm_contains_name != ""):
        print("Collecting VMs from name")
        vm_uuids += get_vms_by_name(server_name, session_id, vm_contains_name)

    if (len(vm_uuids) == 0):
        raise tintri.TintriRequestsException("No VMs to set rules")

    if debug_mode:
        count = 1
        for uuid in vm_uuids:
            print(str(count) + ": " + uuid)
            count += 1

    # Process according to affinity
    if (affinity == "never"):
        set_vm_affinity_never(server_name, session_id, vm_uuids)
    elif (affinity == "clear"):
        clear_vm_affinity(server_name, session_id, vm_uuids)
    else:
        raise tintri.TintriRequestsException("Bad affinity rule: " + affinity)

except tintri.TintriRequestsException as tre:
    print_error(tre.__str__())
    sys.exit(4)
except tintri.TintriApiException as tae:
    print_error(tae.__str__())
    sys.exit(5)

In the above snippet, the code figures out the list of VMs and processes for the specified affinity.

The crux of this script is how to set the affinity:

# A helper function that sets the VM affinity rule for migration recommendations.
def set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule):
    url = "/v310/vm/"

    for vm_uuid in vm_uuids:
        rule_url = url + vm_uuid + "/affinity"

        r = tintri.api_put(server_name, rule_url, affinity_rule, session_id)
        if r.status_code != 204:
            tintri.api_logout(server_name, session_id)
            message = "The HTTP response for put affinity rule to the server is not 204."
            raise tintri.TintriApiException(message, r.status_code,
                                            rule_url, str(affinity_rule), r.text)
        sys.stdout.write(".")
    print("")

# Set the VM affinity rule to never for a list of VMs.
def set_vm_affinity_never(server_name, session_id, vm_uuids):
    print("Setting " + str(len(vm_uuids)) + " VMs to never migrate")

    affinity_rule = \
        {"typeId" : beans + "vm.VirtualMachineAffinityRule",
         "ruleType" : "NEVER"
        }

    set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule)

# Clear the VM affinity rule for a list of VMs
def clear_vm_affinity(server_name, session_id, vm_uuids):
    print("Clearing " + str(len(vm_uuids)) + " VMs affinity rules")

    affinity_rule = \
        {"typeId" : beans + "vm.VirtualMachineAffinityRule",
        }

    set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule)

The function set_vm_affinity() invokes the API /v310/vm/{vm_uuid}/affinity once for each VM UUID in the input list. The two wrappers, set_vm_affinity_never() and clear_vm_affinity(), build the appropriate affinity rule and call it.

Although it has been discussed before, here is a reminder of how to get the VMs in a service group:

# Return a list of VM UUIDs for a service group.
def get_vms_by_sg(server_name, session_id, sg_uuid):
    # Constrain the VM GET to the service group, returning only the fields needed.
    vm_filter = {"includeFields" : ["uuid", "vmware"],
                 "serviceGroupIds" : sg_uuid
                }
    vm_uuids = get_vms(server_name, session_id, vm_filter)

    return vm_uuids

# Return VM UUIDs constrained by a filter.
def get_vms(server_name, session_id, vm_filter):
    vm_uuids = []

    # Get a list of VMs, but return a page size at a time.
    get_vm_url = "/v310/vm"
    vm_paginated_result = {'next' : "offset=0&limit=" + str(page_size)}

    # While there are more VMs, keep fetching pages and collecting UUIDs.
    while 'next' in vm_paginated_result:
        url = get_vm_url + "?" + vm_paginated_result['next']

        r = tintri.api_get_query(server_name, url, vm_filter, session_id)
        print_debug("The JSON response of the VM get invoke to the server " +
                    server_name + " is: " + r.text)

        vm_paginated_result = r.json()
        print_debug("VMs:\n" + format_json(vm_paginated_result))

        # Collect the UUID of each VM in the page.
        items = vm_paginated_result["items"]
        for vm in items:
            vm_uuids.append(vm["uuid"]["uuid"])

    return vm_uuids

Before I close this post, I would like to mention that you can post API questions on Tintri’s Hub. Until next time,

– Rick –


Tintri On-line Resources

Greetings,

I wanted to point you to two Tintri resources that are available on-line. First is Tintri Resources, which contains videos, white papers, webinars, analyst reports, and data sheets:

Tintri_Resources.

You can refine the results by selecting a topic. Current topics for selection are:

  • Backup and Recovery
  • Flash
  • Hyper-V
  • Private Cloud
  • Quality of Service (QoS)
  • Server Virtualization
  • Software Development
  • Tintri Global Center (TGC)
  • Tintri VMstore
  • VMstack
  • VVol

For example, in checking out the Backup and Recovery topic, there are white papers on Best Practices with VMware vCenter, Symantec NetBackup, CommVault Simpana, and Veeam.

Another on-line resource is the Tintricity Hub discussions. There are public and private forums; you need to sign up to the Hub to access the private ones. In the private forums, you can ask questions and discuss the Tintri REST APIs and the Tintri Automation Toolkit. There is a general discussion group for everything else. Currently, the only public forum is on Data Protection.

The Tintricity Hub also has Challenges, Referrals, and Rewards. By answering challenges and reading content, you collect points for rewards. Rewards include gift cards, charity donations, one hour of free consulting, and Tintri SWAG.

When you have some time, please check out our Tintri resources,

– Rick –


New Python API Examples

Greetings,

There are three updates to the Python API examples on GitHub. One is about updating data IP addresses on a VMstore. The other two examples cover VMstore appliance information collection.

Updating Data IP Addresses

I wrote a Python example, set_data_ip.py, that displays and/or updates the data IP addresses for a VMstore. The script always displays the current IP addresses; depending on the options, one data IP address is added or removed. The crux of the code is similar to the code that sets DNS, which is described in more detail in this week’s “Ask Rick” blog.
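As a sketch of the add/remove step, here is how the script might rebuild the list of IP configurations before writing the result back to the appliance. The helper update_data_ips and the field names ("serviceType", "ip") are assumptions for illustration only; see set_data_ip.py on GitHub for the real logic and payload format.

```python
# Hypothetical helper illustrating the add/remove step; the field names are
# assumptions, not the exact Tintri appliance DTO.
def update_data_ips(ip_configs, add_ip=None, remove_ip=None):
    """Return a new list of IP configs with a data IP added or removed."""
    # Drop the entry being removed (if any), keeping everything else.
    configs = [c for c in ip_configs if c.get("ip") != remove_ip]
    if add_ip is not None:
        configs.append({"serviceType": "data", "ip": add_ip})
    return configs

current = [{"serviceType": "admin", "ip": "10.0.0.10"},
           {"serviceType": "data",  "ip": "10.0.0.20"}]

# Add a data IP, then remove one.
print(update_data_ips(current, add_ip="10.0.0.21"))
print(update_data_ips(current, remove_ip="10.0.0.20"))
```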

Collecting VMstore Information

I had previously added an example, appliance_info.py, which displays the following:

+-------------+--------------------------+
| Info        | Value                    |
+-------------+--------------------------+
| Product     | Tintri VMstore           |
| Model       | T445                     |
| All Flash   | False                    |
| OS version  | 4.2.0.1-7524.41058.18773 |
| API version | v310.51                  |
+-------------+--------------------------+

I have moved this code to appliance_status.py, because it displays a short, concise VMstore or TGC status.

Now that we have appliance_status.py, I re-created appliance_info.py to display the following information:

  • VMstore status
  • Appliance components
  • Failed components if any
  • IP addresses: admin, data, and replication
  • Controller status
  • Disk information

This script imports the prettytable module.

Until next time,

– Rick –