Network Automation

Network automation is a continuous process of automating the configuration, management, and operations of a computer network. Although the abstraction is comparable to operations on the server side, there are many particular challenges, the most important being that a network device is traditionally closed hardware able to run proprietary software only. In other words, the user is not able to install the salt-minion package directly on a traditional network device. For these reasons, most network devices can be controlled only remotely via proxy minions or using Salt SSH. However, there are also vendors producing whitebox equipment (e.g. Arista, Cumulus) or others that have moved the operating system into a container (e.g. Cisco NX-OS, Cisco IOS-XR), allowing the salt-minion to be installed directly on the platform.

New in Carbon (2016.11)

The methodologies for network automation were introduced in 2016.11.0. Network automation support is based on proxy minions.

NAPALM

NAPALM (Network Automation and Programmability Abstraction Layer with Multivendor support) is an open-source Python library that implements a set of functions to interact with network devices from different vendors using a unified API. Being vendor-agnostic simplifies operations, as the configuration of and interaction with the network device do not rely on a particular vendor.


Beginning with 2017.7.0, the NAPALM modules have been transformed so they can run in both proxy and regular minions. That means, if the operating system allows, the salt-minion package can be installed directly on the network gear. The interface between the network operating system and Salt in that case would be the corresponding NAPALM sub-package.

For example, if the user installs the salt-minion on an Arista switch, the only requirement is napalm-eos.
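
In such a setup, the driver sub-package can be installed alongside the minion (a minimal sketch; the package name assumes the per-driver NAPALM packages published on PyPI):

pip install napalm-eos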

The following modules are available in 2017.7.0:

Getting started

Install NAPALM - follow the notes and check the platform-specific dependencies.
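
On most systems this reduces to installing the library from PyPI (a minimal sketch; the platform-specific dependencies from the install notes may still be required):

pip install napalm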

Salt's Pillar system is ideally suited for configuring proxy-minions (though they can be configured in /etc/salt/proxy as well). Proxies can either be designated via a pillar file in pillar_roots, or through an external pillar. External pillars afford the opportunity for interfacing with a configuration management system, database, or other knowledgeable system that may already contain all the details of proxy targets. To use static files in pillar_roots, pattern your files after the following examples:

/etc/salt/pillar/top.sls

base:
  router1:
    - router1
  router2:
    - router2
  switch1:
    - switch1
  switch2:
    - switch2
  cpe1:
    - cpe1

/etc/salt/pillar/router1.sls

proxy:
  proxytype: napalm
  driver: junos
  host: r1.bbone.as1234.net
  username: my_username
  password: my_password

/etc/salt/pillar/router2.sls

proxy:
  proxytype: napalm
  driver: iosxr
  host: r2.bbone.as1234.net
  username: my_username
  password: my_password
  optional_args:
    port: 22022

/etc/salt/pillar/switch1.sls

proxy:
  proxytype: napalm
  driver: eos
  host: sw1.bbone.as1234.net
  username: my_username
  password: my_password
  optional_args:
    enable_password: my_secret

/etc/salt/pillar/switch2.sls

proxy:
  proxytype: napalm
  driver: nxos
  host: sw2.bbone.as1234.net
  username: my_username
  password: my_password

/etc/salt/pillar/cpe1.sls

proxy:
  proxytype: napalm
  driver: ios
  host: cpe1.edge.as1234.net
  username: ''
  password: ''
  optional_args:
    use_keys: True
    auto_rollback_on_error: True

CLI examples

Display the complete running configuration on router1:

$ sudo salt 'router1' net.config source='running'

Retrieve the NTP servers configured on all devices:

$ sudo salt '*' ntp.servers
router1:
  ----------
  comment:
  out:
      - 1.2.3.4
  result:
      True
cpe1:
  ----------
  comment:
  out:
      - 1.2.3.4
  result:
      True
switch2:
  ----------
  comment:
  out:
      - 1.2.3.4
  result:
      True
router2:
  ----------
  comment:
  out:
      - 1.2.3.4
  result:
      True
switch1:
  ----------
  comment:
  out:
      - 1.2.3.4
  result:
      True

Display the ARP tables on all Cisco devices running IOS-XR 5.3.3:

$ sudo salt -C 'G@os:iosxr and G@version:5.3.3' net.arp

Return operational details for interfaces from Arista switches:

$ sudo salt -C 'sw* and G@os:eos' net.interfaces

Execute traceroute from the edge of the network:

$ sudo salt 'router*' net.traceroute 8.8.8.8 vrf='CUSTOMER1-VRF'

Verbatim display from the CLI of Juniper routers:

$ sudo salt -C 'router* and G@os:junos' net.cli 'show version and haiku'

Retrieve the results of the RPM probes configured on Juniper MX960 routers:

$ sudo salt -C 'router* and G@os:junos and G@model:MX960' probes.results

Return the list of configured users on the CPEs:

$ sudo salt 'cpe*' users.config

Using the BGP finder, return the list of BGP neighbors that are down:

$ sudo salt-run bgp.neighbors up=False

Using the NET finder, determine the devices containing the pattern "PX-1234-LHR" in their interface description:

$ sudo salt-run net.find PX-1234-LHR

Cross-platform configuration management example: NTP

Assuming that the user adds the following two lines under file_roots:

file_roots:
  base:
    - /etc/salt/templates/
    - /etc/salt/states/

Define the list of NTP peers and servers wanted:

/etc/salt/pillar/ntp.sls

ntp.servers:
  - 1.2.3.4
  - 5.6.7.8
ntp.peers:
   - 10.11.12.13
   - 14.15.16.17

Include the new file: for example, if we want to have the same NTP servers on all network devices, we can add the following lines inside the top.sls file:

'*':
  - ntp

/etc/salt/pillar/top.sls

base:
  '*':
    - ntp
  router1:
    - router1
  router2:
    - router2
  switch1:
    - switch1
  switch2:
    - switch2
  cpe1:
    - cpe1

Or include only where needed:

/etc/salt/pillar/top.sls

base:
  router1:
    - router1
    - ntp
  router2:
    - router2
    - ntp
  switch1:
    - switch1
  switch2:
    - switch2
  cpe1:
    - cpe1

Define the cross-vendor template:

/etc/salt/templates/ntp.jinja

{%- if grains.vendor|lower == 'cisco' %}
  no ntp
  {%- for server in servers %}
  ntp server {{ server }}
  {%- endfor %}
  {%- for peer in peers %}
  ntp peer {{ peer }}
  {%- endfor %}
{%- elif grains.os|lower == 'junos' %}
  system {
    replace:
    ntp {
      {%- for server in servers %}
      server {{ server }};
      {%- endfor %}
      {%- for peer in peers %}
      peer {{ peer }};
      {%- endfor %}
    }
  }
{%- endif %}

Define the SLS state file, making use of the Netconfig state module:

/etc/salt/states/router/ntp.sls

ntp_config_example:
  netconfig.managed:
    - template_name: salt://ntp.jinja
    - peers: {{ pillar.get('ntp.peers', []) | json }}
    - servers: {{ pillar.get('ntp.servers', []) | json }}

Run the state and ensure NTP configuration consistency across your multi-vendor network:

$ sudo salt 'router*' state.sls router.ntp

Besides being run from the CLI, the state can also be scheduled or executed when triggered by a certain event.
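
For example, the state can be re-applied periodically via the Salt scheduler (a minimal sketch to place in the proxy configuration or the corresponding pillar; the job name and interval are illustrative):

schedule:
  ntp_consistency:
    function: state.sls
    args:
      - router.ntp
    hours: 24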

JUNOS

Juniper has developed a Junos-specific proxy infrastructure which allows remote execution and configuration management of Junos devices without having to install SaltStack on the device. The infrastructure includes the Junos proxy minion, the Junos execution and state modules, and the Junos syslog engine.

The execution and state modules are implemented using junos-eznc (PyEZ). Junos PyEZ is a microframework for Python that enables you to remotely manage and automate devices running the Junos operating system.

Getting started

Install PyEZ on the system which will run the Junos proxy minion. It is required in order to run the Junos-specific modules.

pip install junos-eznc

Next, set the master of the proxy minions.

/etc/salt/proxy

master: <master_ip>

Add the details of the Junos device. Device details are usually stored in Salt pillars. If you do not wish to store credentials in the pillar, you can set up passwordless SSH instead.

/srv/pillar/vmx_details.sls

proxy:
  proxytype: junos
  host: <hostip>
  username: user
  passwd: secret123

Map the pillar file to the proxy minion. This is done in the top file.

/srv/pillar/top.sls

base:
  vmx:
    - vmx_details

Note

Before starting the Junos proxy, make sure that NETCONF is enabled on the Junos device. This can be done by adding the following configuration on the Junos device.

set system services netconf ssh

Start the salt master.

salt-master -l debug

Then start the salt proxy.

salt-proxy --proxyid=vmx -l debug

Once the master and the Junos proxy minion have started, we can run execution and state modules on the proxy minion. Below are a few examples.

CLI examples

For detailed documentation of all the Junos execution modules, refer to: Junos execution module

Display device facts.

$ sudo salt 'vmx' junos.facts

Refresh the Junos facts. This function will also refresh the facts stored in the Salt grains (the Junos proxy stores Junos facts in the Salt grains).

$ sudo salt 'vmx' junos.facts_refresh

Call an RPC.

$ sudo salt 'vmx' junos.rpc 'get-interface-information' '/var/log/interface-info.txt' terse=True

Install config on the device.

$ sudo salt 'vmx' junos.install_config 'salt://my_config.set'

Shut down the Junos device.

$ sudo salt 'vmx' junos.shutdown shutdown=True in_min=10

State file examples

For detailed documentation of all the Junos state modules, refer to: Junos state module

Execute an RPC on the Junos device and store the output in a file.

/srv/salt/rpc.sls

get-interface-information:
    junos:
      - rpc
      - dest: /home/user/rpc.log
      - interface_name: lo0
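
The state can then be applied from the master like any other SLS file (a minimal sketch, assuming the default file_roots base directory /srv/salt/, so the file above maps to the state name rpc):

$ sudo salt 'vmx' state.apply rpc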

Lock the Junos device, load the configuration, commit it, and unlock the device.

/srv/salt/load.sls

lock the config:
  junos.lock

salt://configs/my_config.set:
  junos:
    - install_config
    - timeout: 100
    - diffs_file: '/var/log/diff'

commit the changes:
  junos:
    - commit

unlock the config:
  junos.unlock

Install the appropriate image on the device, according to the device personality.

/srv/salt/image_install.sls

{% if grains['junos_facts']['personality'] == 'MX' %}
salt://images/mx_junos_image.tgz:
  junos:
    - install_os
    - timeout: 100
    - reboot: True
{% elif grains['junos_facts']['personality'] == 'EX' %}
salt://images/ex_junos_image.tgz:
  junos:
    - install_os
    - timeout: 150
{% elif grains['junos_facts']['personality'] == 'SRX' %}
salt://images/srx_junos_image.tgz:
  junos:
    - install_os
    - timeout: 150
{% endif %}

Junos Syslog Engine

Junos Syslog Engine is a Salt engine which receives data from various Junos devices, extracts event information and forwards it on the master/minion event bus. To start the engine on the salt master, add the following configuration in the master config file. The engine can also run on the salt minion.

/etc/salt/master

engines:
  - junos_syslog:
      port: xxx

For the junos_syslog engine to receive events, syslog must be set on the Junos device. This can be done via the following configuration:

set system syslog host <ip-of-the-salt-device> port xxx any any
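
Once the engine is running, the events it forwards can be observed on the master event bus, for example for debugging purposes (a minimal sketch using the standard event runner):

$ sudo salt-run state.event pretty=True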