salt.states.glusterfs

Manage a GlusterFS pool.

salt.states.glusterfs.add_volume_bricks(name, bricks)

Add brick(s) to an existing volume

name
Volume name
bricks
List of bricks to add to the volume
myvolume:
  glusterfs.add_volume_bricks:
    - bricks:
        - host1:/srv/gluster/drive1
        - host2:/srv/gluster/drive2

Replicated Volume:
  glusterfs.add_volume_bricks:
    - name: volume2
    - bricks:
      - host1:/srv/gluster/drive2
      - host2:/srv/gluster/drive3
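
Bricks can only be added to a volume that already exists, so this state is typically ordered after the state that creates the volume. A minimal sketch, assuming volume2 is managed elsewhere by a glusterfs.volume_present state with the ID volume2; the state ID, the new brick path and the require requisite are illustrative:

extend-volume2:
  glusterfs.add_volume_bricks:
    - name: volume2
    - bricks:
      - host3:/srv/gluster/drive4
    - require:
      - glusterfs: volume2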
salt.states.glusterfs.max_op_version(name)

New in version Fluorine.

Set the cluster.op-version to the most recent version supported by the cluster

name
Volume name
myvolume:
  glusterfs.max_op_version:
    - name: volume1
salt.states.glusterfs.op_version(name, version)

New in version Fluorine.

Set the cluster.op-version to the specified version

name
Volume name
version
Version to which the cluster.op-version should be set
myvolume:
  glusterfs.op_version:
    - name: volume1
    - version: 30707
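
The version argument is the numeric encoding GlusterFS uses for a release (for example, 30707 corresponds to release 3.7.7). A minimal sketch that raises the op-version only once the volume exists, assuming volume1 is managed elsewhere by a glusterfs.volume_present state with the ID volume1; the state ID and the require requisite are illustrative:

bump-op-version:
  glusterfs.op_version:
    - name: volume1
    - version: 30707
    - require:
      - glusterfs: volume1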
salt.states.glusterfs.peered(name)

Verify that the named host has been peered; if it has not, attempt to peer with it.

name
The remote host with which to peer.
peer-cluster:
  glusterfs.peered:
    - name: two

peer-clusters:
  glusterfs.peered:
    - names:
      - one
      - two
      - three
      - four
salt.states.glusterfs.started(name)

Ensure that the named volume has been started

name
name of the volume
myvolume:
  glusterfs.started: []
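
When the volume is defined in the same run, ordering the start after the volume definition avoids applying this state before the volume exists. A minimal sketch, assuming volume1 is managed elsewhere by a glusterfs.volume_present state with the ID volume1; the state ID and the require requisite are illustrative:

start-volume1:
  glusterfs.started:
    - name: volume1
    - require:
      - glusterfs: volume1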
salt.states.glusterfs.volume_present(name, bricks, stripe=False, replica=False, device_vg=False, transport='tcp', start=False, force=False, arbiter=False)

Ensure that the volume exists

name
name of the volume
bricks
list of brick paths
replica
replica count for volume
arbiter

use every third brick as arbiter (metadata only)

New in version Fluorine.

start
ensure that the volume is also started
myvolume:
  glusterfs.volume_present:
    - bricks:
        - host1:/srv/gluster/drive1
        - host2:/srv/gluster/drive2

Replicated Volume:
  glusterfs.volume_present:
    - name: volume2
    - bricks:
      - host1:/srv/gluster/drive2
      - host2:/srv/gluster/drive3
    - replica: 2
    - start: True

Replicated Volume with arbiter brick:
  glusterfs.volume_present:
    - name: volume3
    - bricks:
      - host1:/srv/gluster/drive2
      - host2:/srv/gluster/drive3
      - host3:/srv/gluster/drive4
    - replica: 3
    - arbiter: True
    - start: True
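
Taken together, these states can describe a small pool end to end: peer the remote node, then create and start a replicated volume once peering has succeeded. A minimal sketch run from host1, assuming host2 is reachable; the state IDs, brick paths and the require requisite are illustrative:

peer-host2:
  glusterfs.peered:
    - name: host2

data-volume:
  glusterfs.volume_present:
    - name: volume1
    - bricks:
      - host1:/srv/gluster/drive1
      - host2:/srv/gluster/drive2
    - replica: 2
    - start: True
    - require:
      - glusterfs: peer-host2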