Proxy Minion interface module for managing VMware ESXi hosts.
New in version 2015.8.4.
Special Note: SaltStack thanks Adobe Corporation for their support in creating this Proxy Minion integration.
This proxy minion enables VMware ESXi (hereafter referred to as simply 'ESXi') hosts to be treated individually like a Salt Minion.
Since the ESXi host may not necessarily run on an OS capable of hosting a Python stack, the ESXi host can't run a Salt Minion directly. Salt's "Proxy Minion" functionality enables you to designate another machine to host a minion process that "proxies" communication from the Salt Master. The master does not know nor care that the target is not a "real" Salt Minion.
More in-depth conceptual reading on Proxy Minions can be found in the Proxy Minion section of Salt's documentation.
PyVmomi can be installed via pip:
pip install pyVmomi
Version 6.0 of pyVmomi has some problems with SSL error handling on certain versions of Python. If using version 6.0 of pyVmomi, Python 2.6, Python 2.7.9, or newer must be present. This is due to an upstream dependency in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the version of Python is not in the supported range, you will need to install an earlier version of pyVmomi. See Issue #29537 for more information.
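If you are not sure which range your interpreter falls into, a quick check such as the following can help (a sketch only; any equivalent version check works):
python -c "import sys; print('unsupported for pyVmomi 6.0' if (2, 7, 0) <= sys.version_info < (2, 7, 9) else 'ok')"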
Based on the note above, to install an earlier version of pyVmomi than the version currently listed in PyPi, run the following:
pip install pyVmomi==5.5.0.2014.1.1
The 5.5.0.2014.1.1 version of pyVmomi is a known stable version that this original ESXi State Module was developed against.
Currently, about a third of the functions used in the vSphere Execution Module require the ESXCLI package be installed on the machine running the Proxy Minion process.
The ESXCLI package is also referred to as the VMware vSphere CLI, or vCLI. Once all of the required dependencies are in place and the vCLI package is installed, you can check to see if you can connect to your ESXi host or vCenter server by running the following command:
esxcli -s <host-location> -u <username> -p <password> system syslog config get
If the connection was successful, ESXCLI was successfully installed on your system. You should see output related to the ESXi host's syslog configuration.
To use this integration proxy module, please configure the following:
Proxy minions get their configuration from Salt's Pillar. Every proxy must have a stanza in Pillar and a reference in the Pillar top-file that matches the ID. At a minimum for communication with the ESXi host, the pillar should look like this:
proxy:
  proxytype: esxi
  host: <ip or dns name of esxi host>
  username: <ESXi username>
  passwords:
    - first_password
    - second_password
    - third_password
  credstore: <path to credential store>
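The matching reference in the Pillar top file might look like the following, assuming the proxy's ID is esxi-proxy and the stanza above is saved as /srv/pillar/esxi-proxy.sls (both names are placeholders):
base:
  'esxi-proxy':
    - esxi-proxy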
proxytype
The proxytype key and value pair is critical, as it tells Salt which interface to load from the proxy directory in Salt's install hierarchy, or from /srv/salt/_proxy on the Salt Master (if you have created your own proxy module, for example). To use this ESXi Proxy Module, set this to esxi.
host
The location, or ip/dns, of the ESXi host. Required.
username
The username used to login to the ESXi host, such as root. Required.
passwords
A list of passwords to be used to try and login to the ESXi host. At least one password in this list is required.
The proxy integration will try the passwords listed in order. It is
configured this way so you can have a regular password and the password you
may be updating for an ESXi host either via the vsphere.update_host_password
execution module function or via the esxi.password_present state
function. This way, after the password is changed, you should not need to
restart the proxy minion--it should just pick up the new password
provided in the list. You can then change pillar at will to move that
password to the front and retire the unused ones.
This also allows you to use any number of potential fallback passwords.
When a password is changed on the host to one in the list of possible passwords, the further down on the list the password is, the longer individual commands will take to return. This is due to the nature of pyVmomi's login system. We have to wait for the first attempt to fail before trying the next password on the list.
This scenario is especially true, and even slower, when the proxy
minion first starts. If the correct password is not the first password
on the list, it may take up to a minute for
test.ping to respond with a
True result. Once the initial authorization is complete, the
responses for commands will be a little faster.
To avoid these longer waiting periods, SaltStack recommends moving the correct password to the top of the list and restarting the proxy minion at your earliest convenience.
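For example, once the host has been switched to the new password, the pillar list can be reordered so the current password is tried first (the password names here are placeholders):
passwords:
  - new_password
  - old_password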
protocol
If the ESXi host is not using the default protocol, set this value to an
alternate protocol. Default is https.
port
If the ESXi host is not using the default port, set this value to an
alternate port. Default is 443.
credstore
If the ESXi host is using an untrusted SSL certificate, set this value to
the file path where the credential store is located. This file is passed to
esxcli. Default is
<HOME>/.vmware/credstore/vicredentials.xml on Linux and
<APPDATA>/VMware/credstore/vicredentials.xml on Windows.
The HOME variable is sometimes not set for processes running as system
services. If you want to rely on the default credential store location,
make sure HOME is set for the proxy process.
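If the proxy process runs under systemd, one way to do this is a small service drop-in along these lines (a sketch only; the unit name, drop-in path, and home directory are assumptions for your environment):
# /etc/systemd/system/salt-proxy@<id>.service.d/home.conf (hypothetical path)
[Service]
Environment=HOME=/root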
After your pillar is in place, you can test the proxy. The proxy can run on any machine that has network connectivity to your Salt Master and to the ESXi host in question. SaltStack recommends that the machine running the salt-proxy process also run a regular minion, though it is not strictly necessary.
On the machine that will run the proxy, make sure there is an /etc/salt/proxy
file with at least the following in it:
master: <ip or hostname of salt-master>
You can then start the salt-proxy process with:
salt-proxy --proxyid <id you want to give the host>
You may want to add
-l debug to run the above in the foreground in
debug mode just to make sure everything is OK.
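For example, to start the proxy in the foreground with debug logging:
salt-proxy --proxyid <id you want to give the host> -l debug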
Next, accept the key for the proxy on your salt-master, just like you would for a regular minion:
salt-key -a <id you gave the esxi host>
You can confirm that the pillar data is in place for the proxy:
salt <id> pillar.items
And now you should be able to ping the ESXi host to make sure it is responding:
salt <id> test.ping
At this point you can execute one-off commands against the host. For example, you can get the ESXi host's system information:
salt <id> esxi.cmd system_info
Note that you don't need to provide credentials or an ip/hostname. Salt knows to use the credentials you stored in Pillar.
It's important to understand how this particular proxy works.
salt.modules.vsphere is a
standard Salt execution module. If you pull up the docs for it you'll see
that almost every function in the module takes credentials and a target
host. When credentials and a host aren't passed, Salt runs commands
through pyVmomi against the local machine. If you wanted, you could run
functions from this module on any host where an appropriate version of
pyVmomi is installed, and that host would reach out over the network
and communicate with the ESXi host.
esxi.cmd acts as a "shim" between the execution module and the proxy. Its
first parameter is always the function from salt.modules.vsphere. If the
function takes more positional or keyword arguments you can append them to the
call. It's this shim that speaks to the ESXi host through the proxy, arranging
for the credentials and hostname to be pulled from the Pillar section for this Proxy Minion.
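To illustrate appending extra arguments, assuming the vsphere.enable_firewall_ruleset function and its ruleset_enable and ruleset_name parameters (adjust to whichever vsphere function you need), a call through the shim might look like:
salt <id> esxi.cmd enable_firewall_ruleset ruleset_enable=True ruleset_name=syslog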
Associated states are thoroughly documented in
salt.states.esxi. Look there
to find an example structure for Pillar as well as an example .sls file
for standing up an ESXi host from scratch.
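As a small illustration, a state that relies on this proxy might look like the following sketch, assuming the esxi.ntp_configured state and its parameters from salt.states.esxi (the NTP server addresses are placeholders):
configure-host-ntp:
  esxi.ntp_configured:
    - service_running: True
    - ntp_servers:
      - 192.168.0.10
      - 192.168.0.11
    - service_restart: True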
ch_config(cmd, *args, **kwargs)
This function is called by the salt.modules.esxi.cmd shim.
It then calls whatever is passed in cmd inside the salt.modules.vsphere module.
Passes the return through from the vsphere module.
find_credentials(host)
Cycle through all the possible credentials and return the first one that works.
grains()
Get the grains from the proxy device.
grains_refresh()
Refresh the grains from the proxy device.
init(opts)
This function gets called when the proxy starts up. For ESXi devices, the host, login credentials, and, if configured, the protocol and port are cached.
ping()
Check to see if the host is responding. Returns False if the host didn't respond, True otherwise.
CLI Example:
salt esxi-host test.ping
shutdown()
Shutdown the connection to the proxy device. For this proxy, shutdown is a no-op.