V310.61 API Documentation Available


New Tintri API documentation is now available for version v310.61. Version v310.61 maps to VMstore’s TXOS 4.3 and TGC 3.5, which contain additional APIs to support Synchronous Replication and disk encryption.

There is a new GitHub repository, tintri-rest-api, which contains the documentation and code examples. The repository tintri-api-examples is now deprecated.

Documentation for older API versions is still available on Tintri’s support site.

Until next time,

– Rick –

Private Cloud Automation

I want to point out a set of 4 postings by Matthew Geddes, a Hyper-V architect, on Automation and Private Cloud.

  1. Part 1: sets the stage: a number of production SQL Server instances where some nightly reporting jobs are run. These reporting jobs are quite heavyweight and impact other users during their large processing window. What we want to do is to have the reporting jobs run on non-production VMs with that day’s production data.
  2. Part 2: shows how to use syncVM from Tintri’s PowerShell Toolkit. Matt breaks down usage of the PowerShell cmdlet, Sync-TintriVDisk, and then shows how to integrate it into a small script.
  3. Part 3: adds more code to the small script and wraps it into a function so that it can be used for multiple reports.
  4. Part 4: adds more automation by showing how to parameterize the reports. Matt summarizes and gives you some homework.

I like the way Matt presents the use case and then goes through the code step by step.

Until next time,

– Rick –

Tintri vSphere Web Client Plugin


Just a quick note about the Tintri vSphere Web Client Plugin, which has been released as GA. It’s now available for download on the Support Portal. This is a maintenance release that includes support for the following key features:

  • SyncRepl functionality: support is available for VMstores with synchronous replication.
  • vSphere Plugin version: displays the plugin version number.
  • Snapshot display: snapshots older than seven days are displayed.
  • IP/FQDN settings: change the vCenter server IP/FQDN from the vCenter settings.
  • VM protection: provides a Snapshot and replicate every minute option.
  • vCenter support: supports vCenter 6.5.

Here is a list of bugs that have been fixed:

Bug # Description
52631 Customer may be unable to use SyncVM refresh
50475 Customer may be unable to use the ESXi Best Practice if the host is in a non-responding state
45710 Web Client Plugin may not load after upgrading vCenter to vCenter 6.0u2
45617 Snapshot clone behavior in the Web Client Plugin may be different from the VMstore UI when multiple datastores are involved
45049 VMstore may flood vCenter with alarm definitions
43929 Tintri IOPS, Tintri Throughput and Tintri Latency graphs may not display in the datastore Summary tab
43703 The delete snapshot permission may not be available for specific user roles within the Web Client
42901 The Web Client Plugin may hang when customers attempt to refresh a VM disk

After going to the support download page, search for “Tintri vSphere” for the new download.


– Rick –

Synchronous and 1-to-many Replication


I usually blog about APIs, but today I want to point to 2 excellent videos showcasing the Tintri Global Center (TGC) GUI. One is on Synchronous Replication and Failover, and the second is on 1-to-many replication.

Tomer Hagay first reviews the Synchronous Replication configuration, which uses Tintri’s Service Groups. Next, he shows all the statistics that are available, for example the VM latency components: host, network, contention, flash or disk, and mirror. Finally, Tomer shows how easy it is to do transparent failover.

Bill Roth first demonstrates how to find the VMstores’ replication information. Next, he shows how to get protection information for VMs, and then how to add a new replication link to a VM. A VM can replicate to up to 4 different destinations. Of course, Service Groups can have 1-to-many replication too. As you know, one of the cool things about Tintri is its statistics; in this case, Bill shows the two replication statistics that are available: MBytes Remaining to be replicated, and Replication Rate, both logical and network.

So check out these informative videos and enjoy Winter Solstice,

– Rick –


Tintribot Is Here to Help


You might recall my post, Using Tintri APIs to Build Tintri Anywhere, which discussed using Slack as a management interface. We now have 2 videos on Tintribot as an example. Please note that this is a possible future.

The first video, Manage storage with ChatOps from Tintri between bites, features an admin, Krystle, approving a recommendation from our VM Scale-out feature.

In the above video, Tintribot has code that can read VM Scale-out recommendations and approve them which is covered in this Tintri post. More VM Scale-out code can be found in this post.

The second video, Manage storage from anywhere with Tintri ChatOps, has admin Krystle bringing up VMs and putting production data on them.

First, Tintribot brings up 500 VMs using Tintri cloning. Then the bot uses SyncVM to sync the 500 newly created VMs to the latest recovery point of the production SQL data.

So how do you manage your storage?

–  Rick –

Changing QoS during Veeam Backups

Tintri’s QoS is a great tool to limit storage bandwidth per VM, but what happens when a backup occurs?  Well, QoS will also limit your backup storage IOPS. If this is an issue in your environment, you might consider clearing the QoS during backup.

As an example of clearing QoS when a backup occurs, a PowerShell script is now available that interacts with Veeam Backup and Replication. The script, Veeam_Backup_Tintri_QoS.ps1, is available on GitHub. The file README.md has the information needed to run the script.

Basically, the script polls the Veeam backup server every 10 seconds. When a Veeam backup job has started, the script finds the associated storage. If the storage is on a VMstore and QoS is set, then the QoS is cleared. After the job is done, the script restores the QoS settings. A TGC server is required since the script connects only to a TGC.
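The core bookkeeping on each poll is simple enough to sketch. Here is a rough Python outline of the clear-and-restore decision logic; the function and variable names are mine, not from the script, and the actual Veeam and TGC calls are left out:

```python
# Sketch of the clear/restore bookkeeping that a script like
# Veeam_Backup_Tintri_QoS.ps1 needs. Names here are illustrative; the
# real script uses the Veeam PowerShell plug-in and the Tintri Toolkit.
def plan_qos_changes(backing_up_vms, current_qos, saved_qos):
    """Given the set of VMs currently being backed up, the current QoS
    settings (vm -> setting, only for VMs with QoS set), and the settings
    saved on earlier polls, decide what to clear and what to restore."""
    to_clear = {}
    to_restore = {}

    # A job just started on a VM with QoS set: remember and clear it.
    for vm in backing_up_vms:
        if vm in current_qos and vm not in saved_qos:
            to_clear[vm] = current_qos[vm]

    # A job finished on a VM we cleared earlier: restore its setting.
    for vm, setting in saved_qos.items():
        if vm not in backing_up_vms:
            to_restore[vm] = setting

    return to_clear, to_restore
```

Each poll, the caller would clear QoS for the VMs in `to_clear` and merge them into its saved settings, then restore the VMs in `to_restore` and drop them from the saved settings.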

Below is a diagram of Veeam_Backup_Tintri_QoS.ps1 (Backup Script) running in the Veeam backup server.


However the backup script could be in a separate VM.

This script could possibly be used as a template for other backup vendors. Results may vary. Veeam made it straightforward with a PowerShell plug-in.

Until next time,

– Rick –


PowerShell VM CSV Report


A customer, via a colleague, Satinder Sharma, wanted to have CSV output from Get-TintriVM. Their attempt was:

Get-TintriVM  | Export-Csv WS_inventory.csv

This does nothing.  After some experimentation and assistance from Dhruv Vermula, we have this:

Get-TintriVM | Select-Object {$_.vmware.name}, {$_.Uuid.UuId} | Export-csv vm2.csv

This works, but the columns are named “$_.vmware.name” and “$_.Uuid.UuId”, which is not very friendly. So I created a small script, VmCsvReport, that is similar to a blog post, “Obtain a VM CSV Report“. This script creates nice column names, for example “VM Name” instead of “$_.vmware.name”, and creates a CSV file.

The script uses @{Expression} to create alias columns.

 $ex = @{Expression={$_.vmware.name};label="VM Name"},
       @{Expression={$_.stat.sortedstats.LatencyTotalMs};label="Total Latency"},
       @{Expression={$_.stat.sortedstats.LatencyNetworkMs};label="Network Latency"},
       @{Expression={$_.stat.sortedstats.LatencyStorageMs};label="Storage Latency"},
       @{Expression={$_.stat.sortedstats.LatencyDiskMs};label="Disk Latency"}

The output of Get-TintriVM is piped into Select-Object using the aliases in $ex. Then $result is piped to the Export-Csv cmdlet to create a CSV file. This could be done in one line, but I broke it up so that I could easily examine the output while debugging.

$result = Get-TintriVM -TintriServer $conn | Select-Object $ex
$result | Export-Csv $csv_file

The specified VM information is in the CSV file which Excel can read quite easily.

Unfortunately, the columns are hard-coded in the script, because the PowerShell Toolkit does not have a cmdlet that maps to the CSV download API. I couldn’t suss out a way to have configurable columns similar to “Obtain a VM CSV Report“. If you know a way, please let me know.
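For what it’s worth, one way to make columns configurable is to drive them from a list of (label, attribute path) pairs. Here is a rough Python sketch of the idea; the labels and field paths are modeled on the $ex table above, but the helper names and JSON field spellings are my assumptions:

```python
import csv
import io

# Columns as (label, attribute path) pairs, mirroring the $ex table above.
# The nested field names are assumptions about the VM JSON shape.
COLUMNS = [
    ("VM Name",         ["vmware", "name"]),
    ("Total Latency",   ["stat", "sortedstats", "latencyTotalMs"]),
    ("Network Latency", ["stat", "sortedstats", "latencyNetworkMs"]),
]

def pluck(obj, path):
    # Walk a nested dictionary, returning "" for any missing field.
    for key in path:
        if not isinstance(obj, dict) or key not in obj:
            return ""
        obj = obj[key]
    return obj

def vms_to_csv(vms, columns=COLUMNS):
    # Write a header row of labels, then one row per VM.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([label for label, _ in columns])
    for vm in vms:
        writer.writerow([pluck(vm, path) for _, path in columns])
    return buf.getvalue()
```

Adding a column then only means adding a pair to the list, rather than editing the report code.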

Until next time,

– Rick –

Automation, Cloud, Containers, and PySDK


November has started with a bang of announcements and publications:

  1. “Tintri Raises the Bar for Enterprise Cloud Built on Web Services Architecture and RESTful APIs” discusses Tintri’s architecture, automation, container support, analytics, and public cloud integration. There will be a webinar on November 10th at 11 am Pacific with Kieran Harty and Steve Herrod.
  2. The Register’s article on Tintri’s chatbot, automation, cloud, and container support talks about Tintri’s Web Services approach, which makes everything API-accessible and in turn makes automation, container support, public cloud integration, and the chatbot possible.
  3. ComputerWeekly.com published the article “Tintri adding VMware and Flocker persistent container storage”, which discusses Tintri’s Flocker container support in more detail.
  4. Tintri announced the release of the Python SDK (PySDK). This PySDK was used in the above-mentioned chatbot. With PySDK, Tintri will be able to integrate more easily with automation platforms like OpenStack.
  5. Tintri published a white paper on Tintri’s vRealize Orchestrator Plugin. Now VMware’s vRealize Orchestrator can manage Tintri storage. This allows scripts in vRealize to invoke Tintri management APIs.

This is just the beginning of Tintri’s cloud announcements.  More will be coming in the next months.  Until next time,

– Rick –

Using Tintri APIs to Build Tintri Anywhere


We have a customer in Japan, Adways, that used Tintri APIs to build a Slack bot that communicates with a Tintri VMstore. A Slack client sends a command message to the Slack server, just like messages are sent between people. The Slack server sends a message to the Tintri bot, which invokes the appropriate Tintri API. See the picture below:


This allows the VMstore to be managed from a cell phone. Here is an example:

@tintribot:tintri tintri-001 show appliance_info

The result output is:

| Info        | Value                    |
| Product     | Tintri VMstore           |
| Model       | T540                     |
| OS version  |                          |
| API version | v310.21                  |
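A bot like this is mostly command parsing plus API dispatch. As a rough illustration (not Adways’ actual code), the example command above could be parsed like this:

```python
# Hypothetical parser for chat commands of the form used above:
#   @tintribot:tintri <vmstore-name> <verb> <object>
# The grammar is inferred from the single example; the real bot's
# command set may differ.
def parse_command(text):
    parts = text.split()
    if len(parts) < 4 or parts[0] != "@tintribot:tintri":
        raise ValueError("not a tintribot command: " + text)
    server, verb, obj = parts[1], parts[2], parts[3]
    return server, verb, obj
```

The returned (server, verb, object) triple is then all the bot needs to pick a VMstore and invoke the matching Tintri API.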

If this interests you, please check out Masaaki Hatori’s blog. I had to use Google Chrome to translate it from Japanese to English.


– Rick –

VM Scale-out Code Examples

There is a new blog post about automating VM scale-out migration recommendations on the corporate Tintri site. Along with that script, I wrote another script to set VM affinity for VM scale-out migration rules. These rules allow you to exclude VMs from being considered for migration.

Let’s look at a use case. Suppose there is one Flash VMstore and 2 hybrid VMstores in a VMstore pool. VDI is running on your Flash VMstore, and you don’t want the VDI VMs to migrate to the hybrid VMstores; therefore, the VDI VMs are put into a Service Group by name pattern. Unfortunately, the script has to be executed periodically because affinity is set at the VM level. When a VM is added to the Service Group, the script needs to be executed again.

VMs are excluded with --affinity set to ‘never‘. VMs are specified by a service group (--sg), by a list of VM names (--vms), or by pattern matching with --name. To clear the affinity, set --affinity to ‘clear‘.
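The options are easiest to see in how the script might parse them. This is an illustrative argparse sketch built from the flags above, not the script’s actual parsing code; the defaults and help text are my assumptions:

```python
import argparse

# Illustrative option parsing for set_reco_vm_affinity.py; the flag
# names come from the post, but defaults and help text are assumptions.
parser = argparse.ArgumentParser(
    description="Set VM affinity for scale-out migration rules")
parser.add_argument("--affinity", choices=["never", "clear"], required=True,
                    help="'never' excludes the VMs; 'clear' removes the rule")
parser.add_argument("--sg", default="", help="service group name")
parser.add_argument("--vms", default="", help="comma-separated list of VM names")
parser.add_argument("--name", default="", help="pattern that VM names must contain")

# Example: exclude every VM in the 'VDI' service group.
args = parser.parse_args(["--affinity", "never", "--sg", "VDI"])
```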

So let’s look at the script, set_reco_vm_affinity.py.

    # Let's get to work
    if (len(vms) > 0):
        print("Collecting VMs from list")
        vm_uuids += get_vms_in_list(server_name, session_id, vms)

    if (service_group != ""):
        print("Collecting VMs from service group")
        sg_uuid = get_sg_by_name(server_name, session_id, service_group)
        if (sg_uuid == ""):
            raise tintri.TintriRequestsException("Can't find service group " + service_group)

        vm_uuids += get_vms_by_sg(server_name, session_id, sg_uuid)

    if (vm_contains_name != ""):
        print("Collecting VMs from name")
        vm_uuids += get_vms_by_name(server_name, session_id, vm_contains_name)

    if (len(vm_uuids) == 0):
        raise tintri.TintriRequestsException("No VMs to set rules")

    if debug_mode:
        count = 1
        for uuid in vm_uuids:
            print(str(count) + ": " + uuid)
            count += 1

    # Process according to affinity
    if (affinity == "never"):
        set_vm_affinity_never(server_name, session_id, vm_uuids)
    elif (affinity == "clear"):
        clear_vm_affinity(server_name, session_id, vm_uuids)
    else:
        raise tintri.TintriRequestsException("Bad affinity rule: " + affinity)

except tintri.TintriRequestsException as tre:
    print("Error: " + tre.__str__())
except tintri.TintriApiException as tae:
    print("Error: " + tae.__str__())

In the above snippet, the code builds the list of VM UUIDs and then processes them according to the specified affinity.

The crux of this script is how to set the affinity:

# A helper function  that sets the VM affinity rule for migration recommendations.
def set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule):
    url = "/v310/vm/"

    for vm_uuid in vm_uuids:
        rule_url = url + vm_uuid + "/affinity"

        r = tintri.api_put(server_name, rule_url, affinity_rule, session_id)
        if r.status_code != 204:
            tintri.api_logout(server_name, session_id)
            message = "The HTTP response for put affinity rule to the server is not 204."
            raise tintri.TintriApiException(message, r.status_code,
                                            rule_url, str(affinity_rule), r.text)

# Set the VM affinity rule to never for a list of VMs.
def set_vm_affinity_never(server_name, session_id, vm_uuids):
    print("Setting " + str(len(vm_uuids)) + " VMs to never migrate")

    affinity_rule = \
        {"typeId" : beans + "vm.VirtualMachineAffinityRule",
         "ruleType" : "NEVER"

    set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule)

# Clear the VM affinity rule for a list of VMs
def clear_vm_affinity(server_name, session_id, vm_uuids):
    print("Clearing " + str(len(vm_uuids)) + " VMs affinity rules")

    affinity_rule = \
        {"typeId" : beans + "vm.VirtualMachineAffinityRule",

    set_vm_affinity(server_name, session_id, vm_uuids, affinity_rule)

In the function set_vm_affinity(), we see the API endpoint /v310/vm/{vm_uuid}/affinity. The API is invoked for each VM UUID in the input list. The two helper functions, set_vm_affinity_never() and clear_vm_affinity(), build the appropriate rule and call it.

Although it has been discussed before, you can review how to get the VMs in a service group with this snippet of code.

# Return a list of VM UUIDs based on a service group name.
def get_vms_by_sg(server_name, session_id, sg_uuid):
    # Get a list of VMs filtered by service group, a page size at a time.
    vm_filter = {"includeFields" : ["uuid", "vmware"],
                 "serviceGroupIds" : sg_uuid}
    vm_uuids = get_vms(server_name, session_id, vm_filter)

    return vm_uuids

# Return VM items constrained by a filter.
def get_vms(server_name, session_id, vm_filter):
    vm_uuids = []

    # Get a list of VMs, but return a page size at a time.
    get_vm_url = "/v310/vm"
    vm_paginated_result = {'next' : "offset=0&limit=" + str(page_size)}

    # While there are more VMs, go get them, and collect their UUIDs.
    while 'next' in vm_paginated_result:
        url = get_vm_url + "?" + vm_paginated_result['next']

        r = tintri.api_get_query(server_name, url, vm_filter, session_id)
        print_debug("The JSON response of the VM get invoke to the server " +
                    server_name + " is: " + r.text)

        vm_paginated_result = r.json()
        print_debug("VMs:\n" + format_json(vm_paginated_result))

        # Collect the UUID of each VM in the page.
        items = vm_paginated_result["items"]
        for vm in items:
            vm_uuids.append(vm["uuid"]["uuid"])

    return vm_uuids

Before I close this post, I would like to mention that you can post API questions on Tintri’s Hub. Until next time,

– Rick –