2018-09-01 21:58:33,211 [salt.utils.decorators:613 ][WARNING ][3930] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 21:58:42,355 [salt.utils.decorators:613 ][WARNING ][5342] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 21:58:51,747 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6749] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2018-09-01 21:58:51,778 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6749] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'salt-minion.service'] in directory '/root'
2018-09-01 21:58:51,796 [salt.utils.parsers:1051][WARNING ][2001] Minion received a SIGTERM. Exiting.
2018-09-01 21:58:52,624 [salt.cli.daemons :293 ][INFO    ][6880] Setting up the Salt Minion "cmp002.mcp-ovs-ha.local"
2018-09-01 21:58:52,708 [salt.cli.daemons :82  ][INFO    ][6880] Starting up the Salt Minion
2018-09-01 21:58:52,708 [salt.utils.event :1017][INFO    ][6880] Starting pull socket on /var/run/salt/minion/minion_event_56b6a5185f_pull.ipc
2018-09-01 21:58:53,435 [salt.minion      :976 ][INFO    ][6880] Creating minion process manager
2018-09-01 21:58:54,651 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][6880] Executing command ['date', '+%z'] in directory '/root'
2018-09-01 21:58:54,670 [salt.utils.schedule:568 ][INFO    ][6880] Updating job settings for scheduled job: __mine_interval
2018-09-01 21:58:54,673 [salt.minion      :1107][INFO    ][6880] Added mine.update to scheduler
2018-09-01 21:58:54,687 [salt.minion      :1965][INFO    ][6880] Minion is starting as user 'root'
2018-09-01 21:58:54,698 [salt.minion      :2324][INFO    ][6880] Minion is ready to receive requests!
2018-09-01 21:58:59,042 [salt.minion      :1307][INFO    ][6880] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901215859027758
2018-09-01 21:58:59,062 [salt.minion      :1431][INFO    ][7848] Starting a new job with PID 7848
2018-09-01 21:58:59,078 [salt.minion      :1708][INFO    ][7848] Returning information for job: 20180901215859027758
2018-09-01 21:59:02,720 [salt.utils.decorators:613 ][WARNING ][5342] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 21:59:09,232 [salt.minion      :1307][INFO    ][6880] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901215909216587
2018-09-01 21:59:09,242 [salt.minion      :1431][INFO    ][9330] Starting a new job with PID 9330
2018-09-01 21:59:09,253 [salt.minion      :1708][INFO    ][9330] Returning information for job: 20180901215909216587
2018-09-01 21:59:12,471 [salt.loaded.int.module.cmdmod:722 ][ERROR   ][5342] Command '['ovs-vsctl', 'br-exists', 'br-floating']' failed with return code: 2
2018-09-01 21:59:12,472 [salt.loaded.int.module.cmdmod:728 ][ERROR   ][5342] retcode: 2
2018-09-01 21:59:16,546 [salt.minion      :1307][INFO    ][6880] User sudo_ubuntu Executing command test.ping with jid 20180901215916530233
2018-09-01 21:59:16,555 [salt.minion      :1431][INFO    ][10538] Starting a new job with PID 10538
2018-09-01 21:59:16,569 [salt.minion      :1708][INFO    ][10538] Returning information for job: 20180901215916530233
2018-09-01 21:59:17,275 [salt.minion      :1307][INFO    ][6880] User sudo_ubuntu Executing command system.reboot with jid 20180901215917259171
2018-09-01 21:59:17,284 [salt.minion      :1431][INFO    ][10544] Starting a new job with PID 10544
2018-09-01 21:59:17,288 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][10544] Executing command ['shutdown', '-r', 'now'] in directory '/root'
2018-09-01 21:59:17,649 [salt.utils.parsers:1051][WARNING ][6880] Minion received a SIGTERM. Exiting.
2018-09-01 21:59:17,663 [salt.cli.daemons :82  ][INFO    ][6880] Shutting down the Salt Minion
2018-09-01 21:59:19,365 [salt.minion      :1708][INFO    ][10544] Returning information for job: 20180901215917259171
2018-09-01 22:01:26,552 [salt.cli.daemons :293 ][INFO    ][3467] Setting up the Salt Minion "cmp002.mcp-ovs-ha.local"
2018-09-01 22:01:27,196 [salt.cli.daemons :82  ][INFO    ][3467] Starting up the Salt Minion
2018-09-01 22:01:27,196 [salt.utils.event :1017][INFO    ][3467] Starting pull socket on /var/run/salt/minion/minion_event_56b6a5185f_pull.ipc
2018-09-01 22:01:28,143 [salt.minion      :976 ][INFO    ][3467] Creating minion process manager
2018-09-01 22:01:29,192 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][3467] Executing command ['date', '+%z'] in directory '/root'
2018-09-01 22:01:29,228 [salt.utils.schedule:568 ][INFO    ][3467] Updating job settings for scheduled job: __mine_interval
2018-09-01 22:01:29,239 [salt.minion      :1107][INFO    ][3467] Added mine.update to scheduler
2018-09-01 22:01:29,253 [salt.minion      :1965][INFO    ][3467] Minion is starting as user 'root'
2018-09-01 22:01:29,264 [salt.minion      :2324][INFO    ][3467] Minion is ready to receive requests!
2018-09-01 22:01:41,365 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command test.ping with jid 20180901220141350343
2018-09-01 22:01:41,374 [salt.minion      :1431][INFO    ][3645] Starting a new job with PID 3645
2018-09-01 22:01:41,416 [salt.minion      :1708][INFO    ][3645] Returning information for job: 20180901220141350343
2018-09-01 22:01:42,122 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.apply with jid 20180901220142106967
2018-09-01 22:01:42,130 [salt.minion      :1431][INFO    ][3651] Starting a new job with PID 3651
2018-09-01 22:01:45,728 [salt.state       :905 ][INFO    ][3651] Loading fresh modules for state activity
2018-09-01 22:01:46,724 [salt.fileclient  :1215][INFO    ][3651] Fetching file from saltenv 'base', ** done ** 'linux/init.sls'
2018-09-01 22:01:47,195 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901220147178736
2018-09-01 22:01:47,204 [salt.minion      :1431][INFO    ][3674] Starting a new job with PID 3674
2018-09-01 22:01:47,215 [salt.minion      :1708][INFO    ][3674] Returning information for job: 20180901220147178736
2018-09-01 22:01:48,560 [salt.fileclient  :1215][INFO    ][3651] Fetching file from saltenv 'base', ** done ** 'linux/storage/init.sls'
2018-09-01 22:01:48,623 [salt.fileclient  :1215][INFO    ][3651] Fetching file from saltenv 'base', ** done ** 'linux/storage/lvm.sls'
2018-09-01 22:01:48,697 [salt.fileclient  :1215][INFO    ][3651] Fetching file from saltenv 'base', ** done ** 'ntp/init.sls'
2018-09-01 22:01:48,712 [salt.fileclient  :1215][INFO    ][3651] Fetching file from saltenv 'base', ** done ** 'ntp/client.sls'
2018-09-01 22:01:48,741 [salt.fileclient  :1215][INFO    ][3651] Fetching file from saltenv 'base', ** done ** 'ntp/server.sls'
2018-09-01 22:01:48,775 [salt.state       :1770][INFO    ][3651] Running state [/etc/environment] at time 22:01:48.775334
2018-09-01 22:01:48,775 [salt.state       :1803][INFO    ][3651] Executing state file.blockreplace for [/etc/environment]
2018-09-01 22:01:48,775 [salt.state       :290 ][INFO    ][3651] No changes needed to be made
2018-09-01 22:01:48,776 [salt.state       :1941][INFO    ][3651] Completed state [/etc/environment] at time 22:01:48.776076 duration_in_ms=0.742
2018-09-01 22:01:48,776 [salt.state       :1770][INFO    ][3651] Running state [/etc/profile.d] at time 22:01:48.776224
2018-09-01 22:01:48,776 [salt.state       :1803][INFO    ][3651] Executing state file.directory for [/etc/profile.d]
2018-09-01 22:01:48,780 [salt.state       :290 ][INFO    ][3651] Directory /etc/profile.d is in the correct state
Directory /etc/profile.d updated
2018-09-01 22:01:48,780 [salt.state       :1941][INFO    ][3651] Completed state [/etc/profile.d] at time 22:01:48.780524 duration_in_ms=4.301
2018-09-01 22:01:49,804 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/apt.conf.d/99prefer_ipv4-salt] at time 22:01:49.804306
2018-09-01 22:01:49,804 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/apt/apt.conf.d/99prefer_ipv4-salt]
2018-09-01 22:01:49,826 [salt.state       :290 ][INFO    ][3651] File /etc/apt/apt.conf.d/99prefer_ipv4-salt is in the correct state
2018-09-01 22:01:49,826 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/apt.conf.d/99prefer_ipv4-salt] at time 22:01:49.826795 duration_in_ms=22.49
2018-09-01 22:01:49,826 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/apt.conf.d/99allow_downgrades-salt] at time 22:01:49.826947
2018-09-01 22:01:49,827 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/apt/apt.conf.d/99allow_downgrades-salt]
2018-09-01 22:01:49,841 [salt.state       :290 ][INFO    ][3651] File /etc/apt/apt.conf.d/99allow_downgrades-salt is in the correct state
2018-09-01 22:01:49,841 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/apt.conf.d/99allow_downgrades-salt] at time 22:01:49.841148 duration_in_ms=14.201
2018-09-01 22:01:49,842 [salt.state       :1770][INFO    ][3651] Running state [linux_repo_prereq_pkgs] at time 22:01:49.842097
2018-09-01 22:01:49,842 [salt.state       :1803][INFO    ][3651] Executing state pkg.installed for [linux_repo_prereq_pkgs]
2018-09-01 22:01:49,842 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:01:50,460 [salt.state       :290 ][INFO    ][3651] All specified packages are already installed
2018-09-01 22:01:50,461 [salt.state       :1941][INFO    ][3651] Completed state [linux_repo_prereq_pkgs] at time 22:01:50.461158 duration_in_ms=619.06
2018-09-01 22:01:50,461 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/apt.conf.d/99proxies-salt] at time 22:01:50.461343
2018-09-01 22:01:50,461 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/apt/apt.conf.d/99proxies-salt]
2018-09-01 22:01:50,497 [salt.state       :290 ][INFO    ][3651] File /etc/apt/apt.conf.d/99proxies-salt is in the correct state
2018-09-01 22:01:50,497 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/apt.conf.d/99proxies-salt] at time 22:01:50.497722 duration_in_ms=36.378
2018-09-01 22:01:50,497 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/apt.conf.d/99proxies-salt-glusterfs-ppa] at time 22:01:50.497869
2018-09-01 22:01:50,498 [salt.state       :1803][INFO    ][3651] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-glusterfs-ppa]
2018-09-01 22:01:50,498 [salt.state       :290 ][INFO    ][3651] File /etc/apt/apt.conf.d/99proxies-salt-glusterfs-ppa is not present
2018-09-01 22:01:50,498 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/apt.conf.d/99proxies-salt-glusterfs-ppa] at time 22:01:50.498344 duration_in_ms=0.474
2018-09-01 22:01:50,498 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/preferences.d/glusterfs-ppa] at time 22:01:50.498481
2018-09-01 22:01:50,498 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/apt/preferences.d/glusterfs-ppa]
2018-09-01 22:01:50,556 [salt.state       :290 ][INFO    ][3651] File /etc/apt/preferences.d/glusterfs-ppa is in the correct state
2018-09-01 22:01:50,557 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/preferences.d/glusterfs-ppa] at time 22:01:50.557008 duration_in_ms=58.527
2018-09-01 22:01:50,591 [salt.state       :1770][INFO    ][3651] Running state [apt-key adv --keyserver keyserver.ubuntu.com --recv 3FE869A9] at time 22:01:50.591534
2018-09-01 22:01:50,591 [salt.state       :1803][INFO    ][3651] Executing state cmd.run for [apt-key adv --keyserver keyserver.ubuntu.com --recv 3FE869A9]
2018-09-01 22:01:50,592 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'test -e /etc/apt/sources.list.d/glusterfs-ppa.list' in directory '/root'
2018-09-01 22:01:50,598 [salt.state       :290 ][INFO    ][3651] unless execution succeeded
2018-09-01 22:01:50,598 [salt.state       :1941][INFO    ][3651] Completed state [apt-key adv --keyserver keyserver.ubuntu.com --recv 3FE869A9] at time 22:01:50.598870 duration_in_ms=7.335
2018-09-01 22:01:50,600 [salt.state       :1770][INFO    ][3651] Running state [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main] at time 22:01:50.600709
2018-09-01 22:01:50,600 [salt.state       :1803][INFO    ][3651] Executing state pkgrepo.managed for [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main]
2018-09-01 22:01:50,652 [salt.state       :290 ][INFO    ][3651] Package repo 'deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main' already configured
2018-09-01 22:01:50,652 [salt.state       :1941][INFO    ][3651] Completed state [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main] at time 22:01:50.652588 duration_in_ms=51.879
2018-09-01 22:01:50,652 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack] at time 22:01:50.652786
2018-09-01 22:01:50,652 [salt.state       :1803][INFO    ][3651] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack]
2018-09-01 22:01:50,653 [salt.state       :290 ][INFO    ][3651] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack is not present
2018-09-01 22:01:50,653 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack] at time 22:01:50.653401 duration_in_ms=0.614
2018-09-01 22:01:50,653 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/preferences.d/mirantis_openstack] at time 22:01:50.653576
2018-09-01 22:01:50,653 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/apt/preferences.d/mirantis_openstack]
2018-09-01 22:01:50,716 [salt.state       :290 ][INFO    ][3651] File /etc/apt/preferences.d/mirantis_openstack is in the correct state
2018-09-01 22:01:50,716 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/preferences.d/mirantis_openstack] at time 22:01:50.716930 duration_in_ms=63.354
2018-09-01 22:01:50,718 [salt.state       :1770][INFO    ][3651] Running state [deb http://mirror.mirantis.com/nightly/openstack-queens/xenial xenial main] at time 22:01:50.718352
2018-09-01 22:01:50,718 [salt.state       :1803][INFO    ][3651] Executing state pkgrepo.managed for [deb http://mirror.mirantis.com/nightly/openstack-queens/xenial xenial main]
2018-09-01 22:01:50,738 [salt.state       :290 ][INFO    ][3651] Package repo 'deb http://mirror.mirantis.com/nightly/openstack-queens/xenial xenial main' already configured
2018-09-01 22:01:50,738 [salt.state       :1941][INFO    ][3651] Completed state [deb http://mirror.mirantis.com/nightly/openstack-queens/xenial xenial main] at time 22:01:50.738915 duration_in_ms=20.563
2018-09-01 22:01:50,739 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/apt.conf.d/99proxies-salt-uca] at time 22:01:50.739071
2018-09-01 22:01:50,739 [salt.state       :1803][INFO    ][3651] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-uca]
2018-09-01 22:01:50,739 [salt.state       :290 ][INFO    ][3651] File /etc/apt/apt.conf.d/99proxies-salt-uca is not present
2018-09-01 22:01:50,739 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/apt.conf.d/99proxies-salt-uca] at time 22:01:50.739556 duration_in_ms=0.485
2018-09-01 22:01:50,739 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/preferences.d/uca] at time 22:01:50.739700
2018-09-01 22:01:50,739 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/apt/preferences.d/uca]
2018-09-01 22:01:50,798 [salt.state       :290 ][INFO    ][3651] File /etc/apt/preferences.d/uca is in the correct state
2018-09-01 22:01:50,798 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/preferences.d/uca] at time 22:01:50.798276 duration_in_ms=58.575
2018-09-01 22:01:50,801 [salt.state       :1770][INFO    ][3651] Running state [apt-key adv --keyserver keyserver.ubuntu.com --recv EC4926EA] at time 22:01:50.801226
2018-09-01 22:01:50,801 [salt.state       :1803][INFO    ][3651] Executing state cmd.run for [apt-key adv --keyserver keyserver.ubuntu.com --recv EC4926EA]
2018-09-01 22:01:50,801 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'test -e /etc/apt/sources.list.d/uca.list' in directory '/root'
2018-09-01 22:01:50,808 [salt.state       :290 ][INFO    ][3651] unless execution succeeded
2018-09-01 22:01:50,808 [salt.state       :1941][INFO    ][3651] Completed state [apt-key adv --keyserver keyserver.ubuntu.com --recv EC4926EA] at time 22:01:50.808794 duration_in_ms=7.567
2018-09-01 22:01:50,810 [salt.state       :1770][INFO    ][3651] Running state [deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/queens main] at time 22:01:50.810594
2018-09-01 22:01:50,810 [salt.state       :1803][INFO    ][3651] Executing state pkgrepo.managed for [deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/queens main]
2018-09-01 22:01:50,833 [salt.state       :290 ][INFO    ][3651] Package repo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/queens main' already configured
2018-09-01 22:01:50,833 [salt.state       :1941][INFO    ][3651] Completed state [deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/queens main] at time 22:01:50.833222 duration_in_ms=22.629
2018-09-01 22:01:50,841 [salt.state       :1770][INFO    ][3651] Running state [pkg.refresh_db] at time 22:01:50.841088
2018-09-01 22:01:50,841 [salt.state       :1803][INFO    ][3651] Executing state module.run for [pkg.refresh_db]
2018-09-01 22:01:50,841 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:50,841 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-09-01 22:01:53,070 [salt.state       :290 ][INFO    ][3651] {'ret': {'http://archive.ubuntu.com/ubuntu xenial-backports InRelease': None, 'http://archive.ubuntu.com/ubuntu xenial-updates InRelease': True, 'http://archive.ubuntu.com/ubuntu xenial-security InRelease': True, 'http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/queens InRelease': False, 'http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2017.7 xenial InRelease': None, 'http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/queens Release': None, 'http://archive.ubuntu.com/ubuntu xenial InRelease': None, 'http://mirror.mirantis.com/nightly/openstack-queens/xenial xenial InRelease': None, 'http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial InRelease': None}}
2018-09-01 22:01:53,070 [salt.state       :1941][INFO    ][3651] Completed state [pkg.refresh_db] at time 22:01:53.070407 duration_in_ms=2229.318
2018-09-01 22:01:53,070 [salt.state       :1770][INFO    ][3651] Running state [linux_extra_packages_latest] at time 22:01:53.070685
2018-09-01 22:01:53,070 [salt.state       :1803][INFO    ][3651] Executing state pkg.latest for [linux_extra_packages_latest]
2018-09-01 22:01:53,081 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['apt-cache', '-q', 'policy', 'python-tornado'] in directory '/root'
2018-09-01 22:01:53,152 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['apt-cache', '-q', 'policy', 'python-pymysql'] in directory '/root'
2018-09-01 22:01:53,215 [salt.state       :290 ][INFO    ][3651] All packages are up-to-date (python-pymysql, python-tornado).
2018-09-01 22:01:53,215 [salt.state       :1941][INFO    ][3651] Completed state [linux_extra_packages_latest] at time 22:01:53.215414 duration_in_ms=144.728
2018-09-01 22:01:53,220 [salt.state       :1770][INFO    ][3651] Running state [UTC] at time 22:01:53.220186
2018-09-01 22:01:53,220 [salt.state       :1803][INFO    ][3651] Executing state timezone.system for [UTC]
2018-09-01 22:01:53,221 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['timedatectl'] in directory '/root'
2018-09-01 22:01:53,269 [salt.state       :290 ][INFO    ][3651] Timezone UTC already set, UTC already set to UTC
2018-09-01 22:01:53,270 [salt.state       :1941][INFO    ][3651] Completed state [UTC] at time 22:01:53.270002 duration_in_ms=49.816
2018-09-01 22:01:53,270 [salt.state       :1770][INFO    ][3651] Running state [/etc/default/grub.d] at time 22:01:53.270204
2018-09-01 22:01:53,270 [salt.state       :1803][INFO    ][3651] Executing state file.directory for [/etc/default/grub.d]
2018-09-01 22:01:53,271 [salt.state       :290 ][INFO    ][3651] Directory /etc/default/grub.d is in the correct state
Directory /etc/default/grub.d updated
2018-09-01 22:01:53,271 [salt.state       :1941][INFO    ][3651] Completed state [/etc/default/grub.d] at time 22:01:53.271155 duration_in_ms=0.952
2018-09-01 22:01:53,274 [salt.state       :1770][INFO    ][3651] Running state [/etc/default/grub.d/99-custom-settings.cfg] at time 22:01:53.273987
2018-09-01 22:01:53,274 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/default/grub.d/99-custom-settings.cfg]
2018-09-01 22:01:53,298 [salt.state       :290 ][INFO    ][3651] File /etc/default/grub.d/99-custom-settings.cfg is in the correct state
2018-09-01 22:01:53,298 [salt.state       :1941][INFO    ][3651] Completed state [/etc/default/grub.d/99-custom-settings.cfg] at time 22:01:53.298180 duration_in_ms=24.193
2018-09-01 22:01:53,298 [salt.state       :1770][INFO    ][3651] Running state [/etc/default/grub.d/90-hugepages.cfg] at time 22:01:53.298861
2018-09-01 22:01:53,299 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/default/grub.d/90-hugepages.cfg]
2018-09-01 22:01:53,362 [salt.state       :290 ][INFO    ][3651] File /etc/default/grub.d/90-hugepages.cfg is in the correct state
2018-09-01 22:01:53,362 [salt.state       :1941][INFO    ][3651] Completed state [/etc/default/grub.d/90-hugepages.cfg] at time 22:01:53.362503 duration_in_ms=63.641
2018-09-01 22:01:53,363 [salt.state       :1770][INFO    ][3651] Running state [update-grub] at time 22:01:53.363467
2018-09-01 22:01:53,363 [salt.state       :1803][INFO    ][3651] Executing state cmd.wait for [update-grub]
2018-09-01 22:01:53,363 [salt.state       :290 ][INFO    ][3651] No changes made for update-grub
2018-09-01 22:01:53,363 [salt.state       :1941][INFO    ][3651] Completed state [update-grub] at time 22:01:53.363962 duration_in_ms=0.495
2018-09-01 22:01:53,370 [salt.state       :1770][INFO    ][3651] Running state [nf_conntrack] at time 22:01:53.370276
2018-09-01 22:01:53,370 [salt.state       :1803][INFO    ][3651] Executing state kmod.present for [nf_conntrack]
2018-09-01 22:01:53,370 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'lsmod' in directory '/root'
2018-09-01 22:01:53,379 [salt.state       :290 ][INFO    ][3651] Kernel module nf_conntrack is already present
2018-09-01 22:01:53,379 [salt.state       :1941][INFO    ][3651] Completed state [nf_conntrack] at time 22:01:53.379215 duration_in_ms=8.939
2018-09-01 22:01:53,384 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_keepalive_probes] at time 22:01:53.384230
2018-09-01 22:01:53,384 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_keepalive_probes]
2018-09-01 22:01:53,390 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_keepalive_probes' in directory '/root'
2018-09-01 22:01:53,398 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_keepalive_probes = 8 is already set
2018-09-01 22:01:53,398 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_keepalive_probes] at time 22:01:53.398530 duration_in_ms=14.299
2018-09-01 22:01:53,398 [salt.state       :1770][INFO    ][3651] Running state [fs.file-max] at time 22:01:53.398759
2018-09-01 22:01:53,398 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [fs.file-max]
2018-09-01 22:01:53,399 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n fs.file-max' in directory '/root'
2018-09-01 22:01:53,404 [salt.state       :290 ][INFO    ][3651] Sysctl value fs.file-max = 124165 is already set
2018-09-01 22:01:53,404 [salt.state       :1941][INFO    ][3651] Completed state [fs.file-max] at time 22:01:53.404608 duration_in_ms=5.848
2018-09-01 22:01:53,404 [salt.state       :1770][INFO    ][3651] Running state [net.core.somaxconn] at time 22:01:53.404798
2018-09-01 22:01:53,404 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.core.somaxconn]
2018-09-01 22:01:53,405 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.core.somaxconn' in directory '/root'
2018-09-01 22:01:53,410 [salt.state       :290 ][INFO    ][3651] Sysctl value net.core.somaxconn = 4096 is already set
2018-09-01 22:01:53,410 [salt.state       :1941][INFO    ][3651] Completed state [net.core.somaxconn] at time 22:01:53.410679 duration_in_ms=5.88
2018-09-01 22:01:53,410 [salt.state       :1770][INFO    ][3651] Running state [vm.dirty_ratio] at time 22:01:53.410913
2018-09-01 22:01:53,411 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [vm.dirty_ratio]
2018-09-01 22:01:53,411 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n vm.dirty_ratio' in directory '/root'
2018-09-01 22:01:53,417 [salt.state       :290 ][INFO    ][3651] Sysctl value vm.dirty_ratio = 10 is already set
2018-09-01 22:01:53,417 [salt.state       :1941][INFO    ][3651] Completed state [vm.dirty_ratio] at time 22:01:53.417168 duration_in_ms=6.256
2018-09-01 22:01:53,417 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.neigh.default.gc_thresh3] at time 22:01:53.417356
2018-09-01 22:01:53,417 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh3]
2018-09-01 22:01:53,417 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh3' in directory '/root'
2018-09-01 22:01:53,423 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.neigh.default.gc_thresh3 = 16384 is already set
2018-09-01 22:01:53,423 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.neigh.default.gc_thresh3] at time 22:01:53.423232 duration_in_ms=5.876
2018-09-01 22:01:53,423 [salt.state       :1770][INFO    ][3651] Running state [vm.dirty_background_ratio] at time 22:01:53.423460
2018-09-01 22:01:53,423 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [vm.dirty_background_ratio]
2018-09-01 22:01:53,424 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n vm.dirty_background_ratio' in directory '/root'
2018-09-01 22:01:53,429 [salt.state       :290 ][INFO    ][3651] Sysctl value vm.dirty_background_ratio = 5 is already set
2018-09-01 22:01:53,429 [salt.state       :1941][INFO    ][3651] Completed state [vm.dirty_background_ratio] at time 22:01:53.429461 duration_in_ms=6.001
2018-09-01 22:01:53,429 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_congestion_control] at time 22:01:53.429648
2018-09-01 22:01:53,429 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_congestion_control]
2018-09-01 22:01:53,430 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_congestion_control' in directory '/root'
2018-09-01 22:01:53,435 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_congestion_control = yeah is already set
2018-09-01 22:01:53,435 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_congestion_control] at time 22:01:53.435442 duration_in_ms=5.793
2018-09-01 22:01:53,435 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_max_syn_backlog] at time 22:01:53.435673
2018-09-01 22:01:53,435 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_max_syn_backlog]
2018-09-01 22:01:53,436 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_max_syn_backlog' in directory '/root'
2018-09-01 22:01:53,441 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_max_syn_backlog = 8192 is already set
2018-09-01 22:01:53,441 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_max_syn_backlog] at time 22:01:53.441572 duration_in_ms=5.899
2018-09-01 22:01:53,441 [salt.state       :1770][INFO    ][3651] Running state [net.nf_conntrack_max] at time 22:01:53.441760
2018-09-01 22:01:53,441 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.nf_conntrack_max]
2018-09-01 22:01:53,442 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.nf_conntrack_max' in directory '/root'
2018-09-01 22:01:53,447 [salt.state       :290 ][INFO    ][3651] Sysctl value net.nf_conntrack_max = 1048576 is already set
2018-09-01 22:01:53,447 [salt.state       :1941][INFO    ][3651] Completed state [net.nf_conntrack_max] at time 22:01:53.447910 duration_in_ms=6.149
2018-09-01 22:01:53,448 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_retries2] at time 22:01:53.448146
2018-09-01 22:01:53,448 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_retries2]
2018-09-01 22:01:53,448 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_retries2' in directory '/root'
2018-09-01 22:01:53,454 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_retries2 = 5 is already set
2018-09-01 22:01:53,454 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_retries2] at time 22:01:53.454530 duration_in_ms=6.383
2018-09-01 22:01:53,454 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_fin_timeout] at time 22:01:53.454739
2018-09-01 22:01:53,454 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_fin_timeout]
2018-09-01 22:01:53,455 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_fin_timeout' in directory '/root'
2018-09-01 22:01:53,460 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_fin_timeout = 30 is already set
2018-09-01 22:01:53,460 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_fin_timeout] at time 22:01:53.460903 duration_in_ms=6.164
2018-09-01 22:01:53,461 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_slow_start_after_idle] at time 22:01:53.461131
2018-09-01 22:01:53,461 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_slow_start_after_idle]
2018-09-01 22:01:53,461 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_slow_start_after_idle' in directory '/root'
2018-09-01 22:01:53,467 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_slow_start_after_idle = 0 is already set
2018-09-01 22:01:53,467 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_slow_start_after_idle] at time 22:01:53.467204 duration_in_ms=6.072
2018-09-01 22:01:53,467 [salt.state       :1770][INFO    ][3651] Running state [vm.swappiness] at time 22:01:53.467406
2018-09-01 22:01:53,467 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [vm.swappiness]
2018-09-01 22:01:53,468 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n vm.swappiness' in directory '/root'
2018-09-01 22:01:53,473 [salt.state       :290 ][INFO    ][3651] Sysctl value vm.swappiness = 10 is already set
2018-09-01 22:01:53,473 [salt.state       :1941][INFO    ][3651] Completed state [vm.swappiness] at time 22:01:53.473612 duration_in_ms=6.206
2018-09-01 22:01:53,473 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_keepalive_intvl] at time 22:01:53.473853
2018-09-01 22:01:53,474 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_keepalive_intvl]
2018-09-01 22:01:53,474 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_keepalive_intvl' in directory '/root'
2018-09-01 22:01:53,479 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_keepalive_intvl = 3 is already set
2018-09-01 22:01:53,480 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_keepalive_intvl] at time 22:01:53.480105 duration_in_ms=6.252
2018-09-01 22:01:53,480 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.neigh.default.gc_thresh1] at time 22:01:53.480304
2018-09-01 22:01:53,480 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh1]
2018-09-01 22:01:53,480 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh1' in directory '/root'
2018-09-01 22:01:53,485 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.neigh.default.gc_thresh1 = 4096 is already set
2018-09-01 22:01:53,486 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.neigh.default.gc_thresh1] at time 22:01:53.486124 duration_in_ms=5.82
2018-09-01 22:01:53,486 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.neigh.default.gc_thresh2] at time 22:01:53.486358
2018-09-01 22:01:53,486 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh2]
2018-09-01 22:01:53,487 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh2' in directory '/root'
2018-09-01 22:01:53,492 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.neigh.default.gc_thresh2 = 8192 is already set
2018-09-01 22:01:53,492 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.neigh.default.gc_thresh2] at time 22:01:53.492675 duration_in_ms=6.316
2018-09-01 22:01:53,492 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_tw_reuse] at time 22:01:53.492885
2018-09-01 22:01:53,493 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_tw_reuse]
2018-09-01 22:01:53,493 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_tw_reuse' in directory '/root'
2018-09-01 22:01:53,498 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_tw_reuse = 1 is already set
2018-09-01 22:01:53,499 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_tw_reuse] at time 22:01:53.498966 duration_in_ms=6.082
2018-09-01 22:01:53,499 [salt.state       :1770][INFO    ][3651] Running state [net.core.netdev_max_backlog] at time 22:01:53.499194
2018-09-01 22:01:53,499 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.core.netdev_max_backlog]
2018-09-01 22:01:53,499 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.core.netdev_max_backlog' in directory '/root'
2018-09-01 22:01:53,505 [salt.state       :290 ][INFO    ][3651] Sysctl value net.core.netdev_max_backlog = 261144 is already set
2018-09-01 22:01:53,505 [salt.state       :1941][INFO    ][3651] Completed state [net.core.netdev_max_backlog] at time 22:01:53.505445 duration_in_ms=6.251
2018-09-01 22:01:53,505 [salt.state       :1770][INFO    ][3651] Running state [net.ipv4.tcp_keepalive_time] at time 22:01:53.505645
2018-09-01 22:01:53,505 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [net.ipv4.tcp_keepalive_time]
2018-09-01 22:01:53,506 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n net.ipv4.tcp_keepalive_time' in directory '/root'
2018-09-01 22:01:53,511 [salt.state       :290 ][INFO    ][3651] Sysctl value net.ipv4.tcp_keepalive_time = 30 is already set
2018-09-01 22:01:53,511 [salt.state       :1941][INFO    ][3651] Completed state [net.ipv4.tcp_keepalive_time] at time 22:01:53.511866 duration_in_ms=6.222
2018-09-01 22:01:53,512 [salt.state       :1770][INFO    ][3651] Running state [kernel.panic] at time 22:01:53.512110
2018-09-01 22:01:53,512 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [kernel.panic]
2018-09-01 22:01:53,512 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n kernel.panic' in directory '/root'
2018-09-01 22:01:53,518 [salt.state       :290 ][INFO    ][3651] Sysctl value kernel.panic = 60 is already set
2018-09-01 22:01:53,518 [salt.state       :1941][INFO    ][3651] Completed state [kernel.panic] at time 22:01:53.518348 duration_in_ms=6.238
2018-09-01 22:01:53,518 [salt.state       :1770][INFO    ][3651] Running state [fs.inotify.max_user_instances] at time 22:01:53.518550
2018-09-01 22:01:53,518 [salt.state       :1803][INFO    ][3651] Executing state sysctl.present for [fs.inotify.max_user_instances]
2018-09-01 22:01:53,519 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl -n fs.inotify.max_user_instances' in directory '/root'
2018-09-01 22:01:53,524 [salt.state       :290 ][INFO    ][3651] Sysctl value fs.inotify.max_user_instances = 4096 is already set
2018-09-01 22:01:53,524 [salt.state       :1941][INFO    ][3651] Completed state [fs.inotify.max_user_instances] at time 22:01:53.524730 duration_in_ms=6.18
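The `sysctl.present` entries above (L214–L288 of this run) each follow the same pattern: read the current value with `sysctl -n`, then report "already set". A state block of roughly this shape would produce that sequence; the values are taken from the log, but the SLS layout itself is an assumed reconstruction, not the actual source:

```yaml
# Hedged sketch of the sysctl states this run appears to apply.
# Values are copied from the log output; IDs mirror the state names shown.
net.ipv4.tcp_max_syn_backlog:
  sysctl.present:
    - value: 8192

net.nf_conntrack_max:
  sysctl.present:
    - value: 1048576

vm.swappiness:
  sysctl.present:
    - value: 10

kernel.panic:
  sysctl.present:
    - value: 60
```

`sysctl.present` also persists the value in `/etc/sysctl.conf` (or a `config`-specified file) by default, which is why only a read-only `sysctl -n` appears in the log when the value already matches.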
2018-09-01 22:01:53,528 [salt.state       :1770][INFO    ][3651] Running state [/mnt/hugepages_1G] at time 22:01:53.528250
2018-09-01 22:01:53,528 [salt.state       :1803][INFO    ][3651] Executing state mount.mounted for [/mnt/hugepages_1G]
2018-09-01 22:01:53,528 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'mount -l' in directory '/root'
2018-09-01 22:01:53,535 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'blkid' in directory '/root'
2018-09-01 22:01:53,564 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'mount -l' in directory '/root'
2018-09-01 22:01:53,570 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'mount -o mode=775,pagesize=1G,remount -t hugetlbfs Hugetlbfs-kvm-1g /mnt/hugepages_1G' in directory '/root'
2018-09-01 22:01:53,577 [salt.state       :290 ][INFO    ][3651] {'umount': 'Forced remount because options (pagesize=1G) changed'}
2018-09-01 22:01:53,577 [salt.state       :1941][INFO    ][3651] Completed state [/mnt/hugepages_1G] at time 22:01:53.577468 duration_in_ms=49.217
2018-09-01 22:01:53,577 [salt.state       :1770][INFO    ][3651] Running state [sysctl vm.nr_hugepages=16] at time 22:01:53.577690
2018-09-01 22:01:53,577 [salt.state       :1803][INFO    ][3651] Executing state cmd.run for [sysctl vm.nr_hugepages=16]
2018-09-01 22:01:53,578 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'sysctl vm.nr_hugepages | grep -qE '16'' in directory '/root'
2018-09-01 22:01:53,584 [salt.state       :290 ][INFO    ][3651] unless execution succeeded
2018-09-01 22:01:53,584 [salt.state       :1941][INFO    ][3651] Completed state [sysctl vm.nr_hugepages=16] at time 22:01:53.584812 duration_in_ms=7.122
2018-09-01 22:01:53,585 [salt.state       :1770][INFO    ][3651] Running state [systemctl mask dev-hugepages.mount] at time 22:01:53.585008
2018-09-01 22:01:53,585 [salt.state       :1803][INFO    ][3651] Executing state cmd.run for [systemctl mask dev-hugepages.mount]
2018-09-01 22:01:53,585 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'systemctl mask dev-hugepages.mount' in directory '/root'
2018-09-01 22:01:53,644 [salt.state       :290 ][INFO    ][3651] {'pid': 4180, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-09-01 22:01:53,644 [salt.state       :1941][INFO    ][3651] Completed state [systemctl mask dev-hugepages.mount] at time 22:01:53.644450 duration_in_ms=59.441
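The hugepages block above combines three states: a `mount.mounted` that forced a remount because `pagesize=1G` changed, a `cmd.run` guarded by an `unless` check (which succeeded, so the command was skipped), and an unguarded `cmd.run` that masked `dev-hugepages.mount`. A plausible reconstruction, with the device name, options, and guard taken verbatim from the log but the overall layout assumed:

```yaml
# Hedged sketch of the hugepages states; device, opts, and the unless
# guard are copied from the executed commands in the log.
/mnt/hugepages_1G:
  mount.mounted:
    - device: Hugetlbfs-kvm-1g
    - fstype: hugetlbfs
    - opts: mode=775,pagesize=1G

sysctl vm.nr_hugepages=16:
  cmd.run:
    - unless: sysctl vm.nr_hugepages | grep -qE '16'

systemctl mask dev-hugepages.mount:
  cmd.run: []
```

Note that `mount.mounted` remounts whenever the recorded options differ from the desired ones, which matches the `{'umount': 'Forced remount because options (pagesize=1G) changed'}` result at 22:01:53,577.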
2018-09-01 22:01:53,644 [salt.state       :1770][INFO    ][3651] Running state [linux_sysfs_package] at time 22:01:53.644655
2018-09-01 22:01:53,644 [salt.state       :1803][INFO    ][3651] Executing state pkg.installed for [linux_sysfs_package]
2018-09-01 22:01:53,649 [salt.state       :290 ][INFO    ][3651] All specified packages are already installed
2018-09-01 22:01:53,649 [salt.state       :1941][INFO    ][3651] Completed state [linux_sysfs_package] at time 22:01:53.649928 duration_in_ms=5.272
2018-09-01 22:01:53,650 [salt.state       :1770][INFO    ][3651] Running state [/etc/sysfs.d] at time 22:01:53.650952
2018-09-01 22:01:53,651 [salt.state       :1803][INFO    ][3651] Executing state file.directory for [/etc/sysfs.d]
2018-09-01 22:01:53,651 [salt.state       :290 ][INFO    ][3651] Directory /etc/sysfs.d is in the correct state
Directory /etc/sysfs.d updated
2018-09-01 22:01:53,651 [salt.state       :1941][INFO    ][3651] Completed state [/etc/sysfs.d] at time 22:01:53.651624 duration_in_ms=0.671
2018-09-01 22:01:53,652 [salt.state       :1770][INFO    ][3651] Running state [ondemand] at time 22:01:53.652487
2018-09-01 22:01:53,652 [salt.state       :1803][INFO    ][3651] Executing state service.dead for [ondemand]
2018-09-01 22:01:53,653 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'status', 'ondemand.service', '-n', '0'] in directory '/root'
2018-09-01 22:01:53,661 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-active', 'ondemand.service'] in directory '/root'
2018-09-01 22:01:53,667 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'ondemand.service'] in directory '/root'
2018-09-01 22:01:53,698 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'runlevel' in directory '/root'
2018-09-01 22:01:53,705 [salt.state       :290 ][INFO    ][3651] The service ondemand is already dead
2018-09-01 22:01:53,705 [salt.state       :1941][INFO    ][3651] Completed state [ondemand] at time 22:01:53.705643 duration_in_ms=53.156
2018-09-01 22:01:53,706 [salt.state       :1770][INFO    ][3651] Running state [/etc/sysfs.d/governor.conf] at time 22:01:53.706732
2018-09-01 22:01:53,707 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/sysfs.d/governor.conf]
2018-09-01 22:01:53,724 [salt.state       :290 ][INFO    ][3651] File /etc/sysfs.d/governor.conf is in the correct state
2018-09-01 22:01:53,724 [salt.state       :1941][INFO    ][3651] Completed state [/etc/sysfs.d/governor.conf] at time 22:01:53.724784 duration_in_ms=18.052
2018-09-01 22:01:53,725 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.724979
2018-09-01 22:01:53,725 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,725 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,725 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,725 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.725961 duration_in_ms=0.981
2018-09-01 22:01:53,726 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.726129
2018-09-01 22:01:53,726 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,726 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,726 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,727 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.726977 duration_in_ms=0.848
2018-09-01 22:01:53,727 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.727141
2018-09-01 22:01:53,727 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,727 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,727 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,728 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.728016 duration_in_ms=0.875
2018-09-01 22:01:53,728 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.728179
2018-09-01 22:01:53,728 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,728 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,728 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,729 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.729061 duration_in_ms=0.882
2018-09-01 22:01:53,729 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.729244
2018-09-01 22:01:53,729 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,729 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,729 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,730 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.730105 duration_in_ms=0.861
2018-09-01 22:01:53,730 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.730266
2018-09-01 22:01:53,730 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,730 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,730 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,731 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.731105 duration_in_ms=0.84
2018-09-01 22:01:53,731 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.731272
2018-09-01 22:01:53,731 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,731 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,732 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,732 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.732139 duration_in_ms=0.867
2018-09-01 22:01:53,732 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.732301
2018-09-01 22:01:53,732 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,732 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,733 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,733 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.733163 duration_in_ms=0.862
2018-09-01 22:01:53,733 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.733326
2018-09-01 22:01:53,733 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,733 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,734 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,734 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.734183 duration_in_ms=0.857
2018-09-01 22:01:53,734 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.734344
2018-09-01 22:01:53,734 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,734 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,735 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,735 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.735203 duration_in_ms=0.858
2018-09-01 22:01:53,735 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.735364
2018-09-01 22:01:53,735 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,735 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,736 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,736 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.736241 duration_in_ms=0.877
2018-09-01 22:01:53,736 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.736403
2018-09-01 22:01:53,736 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,736 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,737 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,737 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.737269 duration_in_ms=0.866
2018-09-01 22:01:53,737 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.737429
2018-09-01 22:01:53,737 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,737 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,738 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,738 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.738295 duration_in_ms=0.867
2018-09-01 22:01:53,738 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.738457
2018-09-01 22:01:53,738 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,738 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,739 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,739 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.739329 duration_in_ms=0.872
2018-09-01 22:01:53,739 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.739487
2018-09-01 22:01:53,739 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,739 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,740 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,740 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.740358 duration_in_ms=0.872
2018-09-01 22:01:53,740 [salt.state       :1770][INFO    ][3651] Running state [sysfs.write] at time 22:01:53.740515
2018-09-01 22:01:53,740 [salt.state       :1803][INFO    ][3651] Executing state module.run for [sysfs.write]
2018-09-01 22:01:53,740 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:53,741 [salt.state       :290 ][INFO    ][3651] {'ret': True}
2018-09-01 22:01:53,741 [salt.state       :1941][INFO    ][3651] Completed state [sysfs.write] at time 22:01:53.741348 duration_in_ms=0.833
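Each of the sixteen `sysfs.write` states above triggers the same WARNING: `module.run` is being invoked in its deprecated form, slated for removal in the "Sodium" release. The log does not show which sysfs keys are written, so the key and value below are purely illustrative placeholders; the two syntaxes, however, are the real deprecated and superseded forms of `module.run`:

```yaml
# Deprecated form (what this minion is using, hence the WARNINGs):
example_sysfs_write:
  module.run:
    - name: sysfs.write
    - key: devices/system/cpu/cpu0/cpufreq/scaling_governor  # placeholder
    - value: performance                                     # placeholder

# New-style form, enabled by adding to the minion config:
#   use_superseded:
#     - module.run
example_sysfs_write_new:
  module.run:
    - sysfs.write:
      - key: devices/system/cpu/cpu0/cpufreq/scaling_governor  # placeholder
      - value: performance                                     # placeholder
```

Migrating the SLS to the new-style syntax (and setting `use_superseded` on the minion) silences these warnings and survives the Sodium removal.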
2018-09-01 22:01:53,749 [salt.state       :1770][INFO    ][3651] Running state [cs_CZ.UTF-8] at time 22:01:53.749189
2018-09-01 22:01:53,749 [salt.state       :1803][INFO    ][3651] Executing state locale.present for [cs_CZ.UTF-8]
2018-09-01 22:01:53,749 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'locale -a' in directory '/root'
2018-09-01 22:01:53,768 [salt.state       :290 ][INFO    ][3651] Locale cs_CZ.UTF-8 is already present
2018-09-01 22:01:53,768 [salt.state       :1941][INFO    ][3651] Completed state [cs_CZ.UTF-8] at time 22:01:53.768294 duration_in_ms=19.104
2018-09-01 22:01:53,768 [salt.state       :1770][INFO    ][3651] Running state [en_US.UTF-8] at time 22:01:53.768507
2018-09-01 22:01:53,768 [salt.state       :1803][INFO    ][3651] Executing state locale.present for [en_US.UTF-8]
2018-09-01 22:01:53,769 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'locale -a' in directory '/root'
2018-09-01 22:01:53,773 [salt.state       :290 ][INFO    ][3651] Locale en_US.UTF-8 is already present
2018-09-01 22:01:53,773 [salt.state       :1941][INFO    ][3651] Completed state [en_US.UTF-8] at time 22:01:53.773763 duration_in_ms=5.257
2018-09-01 22:01:53,774 [salt.state       :1770][INFO    ][3651] Running state [en_US.UTF-8] at time 22:01:53.774896
2018-09-01 22:01:53,775 [salt.state       :1803][INFO    ][3651] Executing state locale.system for [en_US.UTF-8]
2018-09-01 22:01:53,775 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'localectl' in directory '/root'
2018-09-01 22:01:53,809 [salt.state       :290 ][INFO    ][3651] System locale en_US.UTF-8 already set
2018-09-01 22:01:53,809 [salt.state       :1941][INFO    ][3651] Completed state [en_US.UTF-8] at time 22:01:53.809601 duration_in_ms=34.705
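The three locale states above are two `locale.present` checks (via `locale -a`) and one `locale.system` check (via `localectl`). Since `en_US.UTF-8` appears under both functions, the real SLS must use distinct state IDs with explicit `name` arguments; this layout is an assumption, with the locale names taken from the log:

```yaml
# Hedged sketch; state IDs are invented, locale names come from the log.
linux_locale_present_cs_CZ:
  locale.present:
    - name: cs_CZ.UTF-8

linux_locale_present_en_US:
  locale.present:
    - name: en_US.UTF-8

linux_locale_system:
  locale.system:
    - name: en_US.UTF-8
```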
2018-09-01 22:01:53,816 [salt.state       :1770][INFO    ][3651] Running state [root] at time 22:01:53.816387
2018-09-01 22:01:53,816 [salt.state       :1803][INFO    ][3651] Executing state group.present for [root]
2018-09-01 22:01:53,816 [salt.state       :290 ][INFO    ][3651] Group root is present and up to date
2018-09-01 22:01:53,816 [salt.state       :1941][INFO    ][3651] Completed state [root] at time 22:01:53.816969 duration_in_ms=0.582
2018-09-01 22:01:53,821 [salt.state       :1770][INFO    ][3651] Running state [root] at time 22:01:53.821472
2018-09-01 22:01:53,821 [salt.state       :1803][INFO    ][3651] Executing state user.present for [root]
2018-09-01 22:01:53,822 [salt.state       :290 ][INFO    ][3651] User root is present and up to date
2018-09-01 22:01:53,822 [salt.state       :1941][INFO    ][3651] Completed state [root] at time 22:01:53.822505 duration_in_ms=1.033
2018-09-01 22:01:53,823 [salt.state       :1770][INFO    ][3651] Running state [/root] at time 22:01:53.823284
2018-09-01 22:01:53,823 [salt.state       :1803][INFO    ][3651] Executing state file.directory for [/root]
2018-09-01 22:01:53,823 [salt.state       :290 ][INFO    ][3651] Directory /root is in the correct state
Directory /root updated
2018-09-01 22:01:53,824 [salt.state       :1941][INFO    ][3651] Completed state [/root] at time 22:01:53.824001 duration_in_ms=0.718
2018-09-01 22:01:53,824 [salt.state       :1770][INFO    ][3651] Running state [/etc/sudoers.d/90-salt-user-root] at time 22:01:53.824148
2018-09-01 22:01:53,824 [salt.state       :1803][INFO    ][3651] Executing state file.absent for [/etc/sudoers.d/90-salt-user-root]
2018-09-01 22:01:53,826 [salt.state       :290 ][INFO    ][3651] File /etc/sudoers.d/90-salt-user-root is not present
2018-09-01 22:01:53,826 [salt.state       :1941][INFO    ][3651] Completed state [/etc/sudoers.d/90-salt-user-root] at time 22:01:53.826466 duration_in_ms=2.318
2018-09-01 22:01:53,826 [salt.state       :1770][INFO    ][3651] Running state [ubuntu] at time 22:01:53.826605
2018-09-01 22:01:53,826 [salt.state       :1803][INFO    ][3651] Executing state group.present for [ubuntu]
2018-09-01 22:01:53,826 [salt.state       :290 ][INFO    ][3651] Group ubuntu is present and up to date
2018-09-01 22:01:53,827 [salt.state       :1941][INFO    ][3651] Completed state [ubuntu] at time 22:01:53.827068 duration_in_ms=0.463
2018-09-01 22:01:53,827 [salt.state       :1770][INFO    ][3651] Running state [ubuntu] at time 22:01:53.827765
2018-09-01 22:01:53,827 [salt.state       :1803][INFO    ][3651] Executing state user.present for [ubuntu]
2018-09-01 22:01:53,832 [salt.state       :290 ][INFO    ][3651] User ubuntu is present and up to date
2018-09-01 22:01:53,833 [salt.state       :1941][INFO    ][3651] Completed state [ubuntu] at time 22:01:53.833049 duration_in_ms=5.284
2018-09-01 22:01:53,833 [salt.state       :1770][INFO    ][3651] Running state [/home/ubuntu] at time 22:01:53.833831
2018-09-01 22:01:53,833 [salt.state       :1803][INFO    ][3651] Executing state file.directory for [/home/ubuntu]
2018-09-01 22:01:53,834 [salt.state       :290 ][INFO    ][3651] Directory /home/ubuntu is in the correct state
Directory /home/ubuntu updated
2018-09-01 22:01:53,834 [salt.state       :1941][INFO    ][3651] Completed state [/home/ubuntu] at time 22:01:53.834512 duration_in_ms=0.681
2018-09-01 22:01:53,835 [salt.state       :1770][INFO    ][3651] Running state [/etc/sudoers.d/90-salt-user-ubuntu] at time 22:01:53.835201
2018-09-01 22:01:53,835 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/sudoers.d/90-salt-user-ubuntu]
2018-09-01 22:01:53,866 [salt.state       :290 ][INFO    ][3651] File /etc/sudoers.d/90-salt-user-ubuntu is in the correct state
2018-09-01 22:01:53,866 [salt.state       :1941][INFO    ][3651] Completed state [/etc/sudoers.d/90-salt-user-ubuntu] at time 22:01:53.866124 duration_in_ms=30.922
2018-09-01 22:01:53,866 [salt.state       :1770][INFO    ][3651] Running state [/etc/security/limits.d/90-salt-default.conf] at time 22:01:53.866277
2018-09-01 22:01:53,866 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/security/limits.d/90-salt-default.conf]
2018-09-01 22:01:53,932 [salt.state       :290 ][INFO    ][3651] File /etc/security/limits.d/90-salt-default.conf is in the correct state
2018-09-01 22:01:53,932 [salt.state       :1941][INFO    ][3651] Completed state [/etc/security/limits.d/90-salt-default.conf] at time 22:01:53.932512 duration_in_ms=66.235
2018-09-01 22:01:53,932 [salt.state       :1770][INFO    ][3651] Running state [/etc/systemd/system.conf.d/90-salt.conf] at time 22:01:53.932651
2018-09-01 22:01:53,932 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/systemd/system.conf.d/90-salt.conf]
2018-09-01 22:01:53,992 [salt.state       :290 ][INFO    ][3651] File /etc/systemd/system.conf.d/90-salt.conf is in the correct state
2018-09-01 22:01:53,992 [salt.state       :1941][INFO    ][3651] Completed state [/etc/systemd/system.conf.d/90-salt.conf] at time 22:01:53.992810 duration_in_ms=60.158
2018-09-01 22:01:53,993 [salt.state       :1770][INFO    ][3651] Running state [service.systemctl_reload] at time 22:01:53.993548
2018-09-01 22:01:53,993 [salt.state       :1803][INFO    ][3651] Executing state module.wait for [service.systemctl_reload]
2018-09-01 22:01:53,993 [salt.state       :290 ][INFO    ][3651] No changes made for service.systemctl_reload
2018-09-01 22:01:53,994 [salt.state       :1941][INFO    ][3651] Completed state [service.systemctl_reload] at time 22:01:53.993975 duration_in_ms=0.427
2018-09-01 22:01:53,994 [salt.state       :1770][INFO    ][3651] Running state [/etc/issue] at time 22:01:53.994117
2018-09-01 22:01:53,994 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/issue]
2018-09-01 22:01:53,998 [salt.state       :290 ][INFO    ][3651] File /etc/issue is in the correct state
2018-09-01 22:01:53,998 [salt.state       :1941][INFO    ][3651] Completed state [/etc/issue] at time 22:01:53.998304 duration_in_ms=4.187
2018-09-01 22:01:53,998 [salt.state       :1770][INFO    ][3651] Running state [/etc/hostname] at time 22:01:53.998441
2018-09-01 22:01:53,998 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/hostname]
2018-09-01 22:01:54,013 [salt.state       :290 ][INFO    ][3651] File /etc/hostname is in the correct state
2018-09-01 22:01:54,013 [salt.state       :1941][INFO    ][3651] Completed state [/etc/hostname] at time 22:01:54.013734 duration_in_ms=15.293
2018-09-01 22:01:54,014 [salt.state       :1770][INFO    ][3651] Running state [hostname cmp002] at time 22:01:54.014740
2018-09-01 22:01:54,014 [salt.state       :1803][INFO    ][3651] Executing state cmd.run for [hostname cmp002]
2018-09-01 22:01:54,015 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'test "$(hostname)" = "cmp002"' in directory '/root'
2018-09-01 22:01:54,021 [salt.state       :290 ][INFO    ][3651] unless execution succeeded
2018-09-01 22:01:54,022 [salt.state       :1941][INFO    ][3651] Completed state [hostname cmp002] at time 22:01:54.022147 duration_in_ms=7.407
2018-09-01 22:01:54,023 [salt.state       :1770][INFO    ][3651] Running state [mdb02] at time 22:01:54.023640
2018-09-01 22:01:54,023 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mdb02]
2018-09-01 22:01:54,024 [salt.state       :290 ][INFO    ][3651] Host mdb02 (10.167.4.33) already present
2018-09-01 22:01:54,024 [salt.state       :1941][INFO    ][3651] Completed state [mdb02] at time 22:01:54.024829 duration_in_ms=1.189
2018-09-01 22:01:54,025 [salt.state       :1770][INFO    ][3651] Running state [mdb02.mcp-ovs-ha.local] at time 22:01:54.025144
2018-09-01 22:01:54,025 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mdb02.mcp-ovs-ha.local]
2018-09-01 22:01:54,025 [salt.state       :290 ][INFO    ][3651] Host mdb02.mcp-ovs-ha.local (10.167.4.33) already present
2018-09-01 22:01:54,026 [salt.state       :1941][INFO    ][3651] Completed state [mdb02.mcp-ovs-ha.local] at time 22:01:54.026038 duration_in_ms=0.894
2018-09-01 22:01:54,026 [salt.state       :1770][INFO    ][3651] Running state [mdb03] at time 22:01:54.026358
2018-09-01 22:01:54,026 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mdb03]
2018-09-01 22:01:54,027 [salt.state       :290 ][INFO    ][3651] Host mdb03 (10.167.4.34) already present
2018-09-01 22:01:54,027 [salt.state       :1941][INFO    ][3651] Completed state [mdb03] at time 22:01:54.027221 duration_in_ms=0.863
2018-09-01 22:01:54,027 [salt.state       :1770][INFO    ][3651] Running state [mdb03.mcp-ovs-ha.local] at time 22:01:54.027508
2018-09-01 22:01:54,027 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mdb03.mcp-ovs-ha.local]
2018-09-01 22:01:54,028 [salt.state       :290 ][INFO    ][3651] Host mdb03.mcp-ovs-ha.local (10.167.4.34) already present
2018-09-01 22:01:54,028 [salt.state       :1941][INFO    ][3651] Completed state [mdb03.mcp-ovs-ha.local] at time 22:01:54.028395 duration_in_ms=0.888
2018-09-01 22:01:54,028 [salt.state       :1770][INFO    ][3651] Running state [mdb01] at time 22:01:54.028697
2018-09-01 22:01:54,028 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mdb01]
2018-09-01 22:01:54,029 [salt.state       :290 ][INFO    ][3651] Host mdb01 (10.167.4.32) already present
2018-09-01 22:01:54,029 [salt.state       :1941][INFO    ][3651] Completed state [mdb01] at time 22:01:54.029515 duration_in_ms=0.818
2018-09-01 22:01:54,029 [salt.state       :1770][INFO    ][3651] Running state [mdb01.mcp-ovs-ha.local] at time 22:01:54.029798
2018-09-01 22:01:54,030 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mdb01.mcp-ovs-ha.local]
2018-09-01 22:01:54,031 [salt.state       :290 ][INFO    ][3651] Host mdb01.mcp-ovs-ha.local (10.167.4.32) already present
2018-09-01 22:01:54,031 [salt.state       :1941][INFO    ][3651] Completed state [mdb01.mcp-ovs-ha.local] at time 22:01:54.031891 duration_in_ms=2.093
2018-09-01 22:01:54,032 [salt.state       :1770][INFO    ][3651] Running state [mdb] at time 22:01:54.032176
2018-09-01 22:01:54,032 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mdb]
2018-09-01 22:01:54,032 [salt.state       :290 ][INFO    ][3651] Host mdb (10.167.4.31) already present
2018-09-01 22:01:54,033 [salt.state       :1941][INFO    ][3651] Completed state [mdb] at time 22:01:54.032991 duration_in_ms=0.815
2018-09-01 22:01:54,033 [salt.state       :1770][INFO    ][3651] Running state [mdb.mcp-ovs-ha.local] at time 22:01:54.033279
2018-09-01 22:01:54,033 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mdb.mcp-ovs-ha.local]
2018-09-01 22:01:54,033 [salt.state       :290 ][INFO    ][3651] Host mdb.mcp-ovs-ha.local (10.167.4.31) already present
2018-09-01 22:01:54,034 [salt.state       :1941][INFO    ][3651] Completed state [mdb.mcp-ovs-ha.local] at time 22:01:54.034110 duration_in_ms=0.831
2018-09-01 22:01:54,034 [salt.state       :1770][INFO    ][3651] Running state [cfg01] at time 22:01:54.034396
2018-09-01 22:01:54,034 [salt.state       :1803][INFO    ][3651] Executing state host.present for [cfg01]
2018-09-01 22:01:54,035 [salt.state       :290 ][INFO    ][3651] Host cfg01 (10.167.4.11) already present
2018-09-01 22:01:54,035 [salt.state       :1941][INFO    ][3651] Completed state [cfg01] at time 22:01:54.035199 duration_in_ms=0.803
2018-09-01 22:01:54,035 [salt.state       :1770][INFO    ][3651] Running state [cfg01.mcp-ovs-ha.local] at time 22:01:54.035480
2018-09-01 22:01:54,035 [salt.state       :1803][INFO    ][3651] Executing state host.present for [cfg01.mcp-ovs-ha.local]
2018-09-01 22:01:54,036 [salt.state       :290 ][INFO    ][3651] Host cfg01.mcp-ovs-ha.local (10.167.4.11) already present
2018-09-01 22:01:54,036 [salt.state       :1941][INFO    ][3651] Completed state [cfg01.mcp-ovs-ha.local] at time 22:01:54.036300 duration_in_ms=0.82
2018-09-01 22:01:54,036 [salt.state       :1770][INFO    ][3651] Running state [prx01] at time 22:01:54.036582
2018-09-01 22:01:54,036 [salt.state       :1803][INFO    ][3651] Executing state host.present for [prx01]
2018-09-01 22:01:54,037 [salt.state       :290 ][INFO    ][3651] Host prx01 (10.167.4.14) already present
2018-09-01 22:01:54,037 [salt.state       :1941][INFO    ][3651] Completed state [prx01] at time 22:01:54.037386 duration_in_ms=0.804
2018-09-01 22:01:54,037 [salt.state       :1770][INFO    ][3651] Running state [prx01.mcp-ovs-ha.local] at time 22:01:54.037675
2018-09-01 22:01:54,037 [salt.state       :1803][INFO    ][3651] Executing state host.present for [prx01.mcp-ovs-ha.local]
2018-09-01 22:01:54,038 [salt.state       :290 ][INFO    ][3651] Host prx01.mcp-ovs-ha.local (10.167.4.14) already present
2018-09-01 22:01:54,038 [salt.state       :1941][INFO    ][3651] Completed state [prx01.mcp-ovs-ha.local] at time 22:01:54.038498 duration_in_ms=0.824
2018-09-01 22:01:54,038 [salt.state       :1770][INFO    ][3651] Running state [kvm01] at time 22:01:54.038774
2018-09-01 22:01:54,039 [salt.state       :1803][INFO    ][3651] Executing state host.present for [kvm01]
2018-09-01 22:01:54,039 [salt.state       :290 ][INFO    ][3651] Host kvm01 (10.167.4.20) already present
2018-09-01 22:01:54,039 [salt.state       :1941][INFO    ][3651] Completed state [kvm01] at time 22:01:54.039555 duration_in_ms=0.782
2018-09-01 22:01:54,039 [salt.state       :1770][INFO    ][3651] Running state [kvm01.mcp-ovs-ha.local] at time 22:01:54.039844
2018-09-01 22:01:54,040 [salt.state       :1803][INFO    ][3651] Executing state host.present for [kvm01.mcp-ovs-ha.local]
2018-09-01 22:01:54,040 [salt.state       :290 ][INFO    ][3651] Host kvm01.mcp-ovs-ha.local (10.167.4.20) already present
2018-09-01 22:01:54,040 [salt.state       :1941][INFO    ][3651] Completed state [kvm01.mcp-ovs-ha.local] at time 22:01:54.040606 duration_in_ms=0.762
2018-09-01 22:01:54,040 [salt.state       :1770][INFO    ][3651] Running state [kvm03] at time 22:01:54.040884
2018-09-01 22:01:54,041 [salt.state       :1803][INFO    ][3651] Executing state host.present for [kvm03]
2018-09-01 22:01:54,041 [salt.state       :290 ][INFO    ][3651] Host kvm03 (10.167.4.22) already present
2018-09-01 22:01:54,041 [salt.state       :1941][INFO    ][3651] Completed state [kvm03] at time 22:01:54.041668 duration_in_ms=0.784
2018-09-01 22:01:54,041 [salt.state       :1770][INFO    ][3651] Running state [kvm03.mcp-ovs-ha.local] at time 22:01:54.041946
2018-09-01 22:01:54,042 [salt.state       :1803][INFO    ][3651] Executing state host.present for [kvm03.mcp-ovs-ha.local]
2018-09-01 22:01:54,042 [salt.state       :290 ][INFO    ][3651] Host kvm03.mcp-ovs-ha.local (10.167.4.22) already present
2018-09-01 22:01:54,042 [salt.state       :1941][INFO    ][3651] Completed state [kvm03.mcp-ovs-ha.local] at time 22:01:54.042728 duration_in_ms=0.782
2018-09-01 22:01:54,043 [salt.state       :1770][INFO    ][3651] Running state [kvm02] at time 22:01:54.043010
2018-09-01 22:01:54,043 [salt.state       :1803][INFO    ][3651] Executing state host.present for [kvm02]
2018-09-01 22:01:54,043 [salt.state       :290 ][INFO    ][3651] Host kvm02 (10.167.4.21) already present
2018-09-01 22:01:54,043 [salt.state       :1941][INFO    ][3651] Completed state [kvm02] at time 22:01:54.043805 duration_in_ms=0.794
2018-09-01 22:01:54,044 [salt.state       :1770][INFO    ][3651] Running state [kvm02.mcp-ovs-ha.local] at time 22:01:54.044105
2018-09-01 22:01:54,044 [salt.state       :1803][INFO    ][3651] Executing state host.present for [kvm02.mcp-ovs-ha.local]
2018-09-01 22:01:54,044 [salt.state       :290 ][INFO    ][3651] Host kvm02.mcp-ovs-ha.local (10.167.4.21) already present
2018-09-01 22:01:54,044 [salt.state       :1941][INFO    ][3651] Completed state [kvm02.mcp-ovs-ha.local] at time 22:01:54.044902 duration_in_ms=0.797
2018-09-01 22:01:54,045 [salt.state       :1770][INFO    ][3651] Running state [dbs] at time 22:01:54.045185
2018-09-01 22:01:54,045 [salt.state       :1803][INFO    ][3651] Executing state host.present for [dbs]
2018-09-01 22:01:54,045 [salt.state       :290 ][INFO    ][3651] Host dbs (10.167.4.23) already present
2018-09-01 22:01:54,046 [salt.state       :1941][INFO    ][3651] Completed state [dbs] at time 22:01:54.045968 duration_in_ms=0.784
2018-09-01 22:01:54,046 [salt.state       :1770][INFO    ][3651] Running state [dbs.mcp-ovs-ha.local] at time 22:01:54.046251
2018-09-01 22:01:54,046 [salt.state       :1803][INFO    ][3651] Executing state host.present for [dbs.mcp-ovs-ha.local]
2018-09-01 22:01:54,046 [salt.state       :290 ][INFO    ][3651] Host dbs.mcp-ovs-ha.local (10.167.4.23) already present
2018-09-01 22:01:54,047 [salt.state       :1941][INFO    ][3651] Completed state [dbs.mcp-ovs-ha.local] at time 22:01:54.047060 duration_in_ms=0.809
2018-09-01 22:01:54,047 [salt.state       :1770][INFO    ][3651] Running state [prx] at time 22:01:54.047341
2018-09-01 22:01:54,047 [salt.state       :1803][INFO    ][3651] Executing state host.present for [prx]
2018-09-01 22:01:54,047 [salt.state       :290 ][INFO    ][3651] Host prx (10.167.4.13) already present
2018-09-01 22:01:54,048 [salt.state       :1941][INFO    ][3651] Completed state [prx] at time 22:01:54.048156 duration_in_ms=0.816
2018-09-01 22:01:54,048 [salt.state       :1770][INFO    ][3651] Running state [prx.mcp-ovs-ha.local] at time 22:01:54.048440
2018-09-01 22:01:54,048 [salt.state       :1803][INFO    ][3651] Executing state host.present for [prx.mcp-ovs-ha.local]
2018-09-01 22:01:54,049 [salt.state       :290 ][INFO    ][3651] Host prx.mcp-ovs-ha.local (10.167.4.13) already present
2018-09-01 22:01:54,049 [salt.state       :1941][INFO    ][3651] Completed state [prx.mcp-ovs-ha.local] at time 22:01:54.049236 duration_in_ms=0.795
2018-09-01 22:01:54,049 [salt.state       :1770][INFO    ][3651] Running state [prx02] at time 22:01:54.049513
2018-09-01 22:01:54,049 [salt.state       :1803][INFO    ][3651] Executing state host.present for [prx02]
2018-09-01 22:01:54,050 [salt.state       :290 ][INFO    ][3651] Host prx02 (10.167.4.15) already present
2018-09-01 22:01:54,050 [salt.state       :1941][INFO    ][3651] Completed state [prx02] at time 22:01:54.050278 duration_in_ms=0.765
2018-09-01 22:01:54,050 [salt.state       :1770][INFO    ][3651] Running state [prx02.mcp-ovs-ha.local] at time 22:01:54.050555
2018-09-01 22:01:54,050 [salt.state       :1803][INFO    ][3651] Executing state host.present for [prx02.mcp-ovs-ha.local]
2018-09-01 22:01:54,051 [salt.state       :290 ][INFO    ][3651] Host prx02.mcp-ovs-ha.local (10.167.4.15) already present
2018-09-01 22:01:54,051 [salt.state       :1941][INFO    ][3651] Completed state [prx02.mcp-ovs-ha.local] at time 22:01:54.051325 duration_in_ms=0.771
2018-09-01 22:01:54,051 [salt.state       :1770][INFO    ][3651] Running state [msg02] at time 22:01:54.051609
2018-09-01 22:01:54,051 [salt.state       :1803][INFO    ][3651] Executing state host.present for [msg02]
2018-09-01 22:01:54,052 [salt.state       :290 ][INFO    ][3651] Host msg02 (10.167.4.29) already present
2018-09-01 22:01:54,052 [salt.state       :1941][INFO    ][3651] Completed state [msg02] at time 22:01:54.052381 duration_in_ms=0.772
2018-09-01 22:01:54,052 [salt.state       :1770][INFO    ][3651] Running state [msg02.mcp-ovs-ha.local] at time 22:01:54.052664
2018-09-01 22:01:54,052 [salt.state       :1803][INFO    ][3651] Executing state host.present for [msg02.mcp-ovs-ha.local]
2018-09-01 22:01:54,053 [salt.state       :290 ][INFO    ][3651] Host msg02.mcp-ovs-ha.local (10.167.4.29) already present
2018-09-01 22:01:54,053 [salt.state       :1941][INFO    ][3651] Completed state [msg02.mcp-ovs-ha.local] at time 22:01:54.053432 duration_in_ms=0.769
2018-09-01 22:01:54,053 [salt.state       :1770][INFO    ][3651] Running state [msg03] at time 22:01:54.053714
2018-09-01 22:01:54,053 [salt.state       :1803][INFO    ][3651] Executing state host.present for [msg03]
2018-09-01 22:01:54,054 [salt.state       :290 ][INFO    ][3651] Host msg03 (10.167.4.30) already present
2018-09-01 22:01:54,054 [salt.state       :1941][INFO    ][3651] Completed state [msg03] at time 22:01:54.054583 duration_in_ms=0.87
2018-09-01 22:01:54,054 [salt.state       :1770][INFO    ][3651] Running state [msg03.mcp-ovs-ha.local] at time 22:01:54.054866
2018-09-01 22:01:54,055 [salt.state       :1803][INFO    ][3651] Executing state host.present for [msg03.mcp-ovs-ha.local]
2018-09-01 22:01:54,055 [salt.state       :290 ][INFO    ][3651] Host msg03.mcp-ovs-ha.local (10.167.4.30) already present
2018-09-01 22:01:54,055 [salt.state       :1941][INFO    ][3651] Completed state [msg03.mcp-ovs-ha.local] at time 22:01:54.055640 duration_in_ms=0.774
2018-09-01 22:01:54,055 [salt.state       :1770][INFO    ][3651] Running state [msg01] at time 22:01:54.055933
2018-09-01 22:01:54,056 [salt.state       :1803][INFO    ][3651] Executing state host.present for [msg01]
2018-09-01 22:01:54,056 [salt.state       :290 ][INFO    ][3651] Host msg01 (10.167.4.28) already present
2018-09-01 22:01:54,056 [salt.state       :1941][INFO    ][3651] Completed state [msg01] at time 22:01:54.056699 duration_in_ms=0.766
2018-09-01 22:01:54,057 [salt.state       :1770][INFO    ][3651] Running state [msg01.mcp-ovs-ha.local] at time 22:01:54.056984
2018-09-01 22:01:54,057 [salt.state       :1803][INFO    ][3651] Executing state host.present for [msg01.mcp-ovs-ha.local]
2018-09-01 22:01:54,057 [salt.state       :290 ][INFO    ][3651] Host msg01.mcp-ovs-ha.local (10.167.4.28) already present
2018-09-01 22:01:54,057 [salt.state       :1941][INFO    ][3651] Completed state [msg01.mcp-ovs-ha.local] at time 22:01:54.057750 duration_in_ms=0.766
2018-09-01 22:01:54,058 [salt.state       :1770][INFO    ][3651] Running state [msg] at time 22:01:54.058036
2018-09-01 22:01:54,058 [salt.state       :1803][INFO    ][3651] Executing state host.present for [msg]
2018-09-01 22:01:54,058 [salt.state       :290 ][INFO    ][3651] Host msg (10.167.4.27) already present
2018-09-01 22:01:54,058 [salt.state       :1941][INFO    ][3651] Completed state [msg] at time 22:01:54.058809 duration_in_ms=0.773
2018-09-01 22:01:54,059 [salt.state       :1770][INFO    ][3651] Running state [msg.mcp-ovs-ha.local] at time 22:01:54.059083
2018-09-01 22:01:54,059 [salt.state       :1803][INFO    ][3651] Executing state host.present for [msg.mcp-ovs-ha.local]
2018-09-01 22:01:54,059 [salt.state       :290 ][INFO    ][3651] Host msg.mcp-ovs-ha.local (10.167.4.27) already present
2018-09-01 22:01:54,059 [salt.state       :1941][INFO    ][3651] Completed state [msg.mcp-ovs-ha.local] at time 22:01:54.059841 duration_in_ms=0.757
2018-09-01 22:01:54,060 [salt.state       :1770][INFO    ][3651] Running state [cfg01] at time 22:01:54.060118
2018-09-01 22:01:54,060 [salt.state       :1803][INFO    ][3651] Executing state host.present for [cfg01]
2018-09-01 22:01:54,060 [salt.state       :290 ][INFO    ][3651] Host cfg01 (10.167.4.11) already present
2018-09-01 22:01:54,060 [salt.state       :1941][INFO    ][3651] Completed state [cfg01] at time 22:01:54.060864 duration_in_ms=0.747
2018-09-01 22:01:54,061 [salt.state       :1770][INFO    ][3651] Running state [cfg01.mcp-ovs-ha.local] at time 22:01:54.061145
2018-09-01 22:01:54,061 [salt.state       :1803][INFO    ][3651] Executing state host.present for [cfg01.mcp-ovs-ha.local]
2018-09-01 22:01:54,061 [salt.state       :290 ][INFO    ][3651] Host cfg01.mcp-ovs-ha.local (10.167.4.11) already present
2018-09-01 22:01:54,061 [salt.state       :1941][INFO    ][3651] Completed state [cfg01.mcp-ovs-ha.local] at time 22:01:54.061893 duration_in_ms=0.748
2018-09-01 22:01:54,062 [salt.state       :1770][INFO    ][3651] Running state [cmp002] at time 22:01:54.062187
2018-09-01 22:01:54,062 [salt.state       :1803][INFO    ][3651] Executing state host.present for [cmp002]
2018-09-01 22:01:54,062 [salt.state       :290 ][INFO    ][3651] Host cmp002 (10.167.4.53) already present
2018-09-01 22:01:54,062 [salt.state       :1941][INFO    ][3651] Completed state [cmp002] at time 22:01:54.062946 duration_in_ms=0.759
2018-09-01 22:01:54,063 [salt.state       :1770][INFO    ][3651] Running state [cmp002.mcp-ovs-ha.local] at time 22:01:54.063231
2018-09-01 22:01:54,063 [salt.state       :1803][INFO    ][3651] Executing state host.present for [cmp002.mcp-ovs-ha.local]
2018-09-01 22:01:54,063 [salt.state       :290 ][INFO    ][3651] Host cmp002.mcp-ovs-ha.local (10.167.4.53) already present
2018-09-01 22:01:54,064 [salt.state       :1941][INFO    ][3651] Completed state [cmp002.mcp-ovs-ha.local] at time 22:01:54.063968 duration_in_ms=0.737
2018-09-01 22:01:54,065 [salt.state       :1770][INFO    ][3651] Running state [file.replace] at time 22:01:54.065070
2018-09-01 22:01:54,065 [salt.state       :1803][INFO    ][3651] Executing state module.run for [file.replace]
2018-09-01 22:01:54,294 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['git', '--version'] in directory '/root'
2018-09-01 22:01:54,461 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'grep -q "cmp002 cmp002.mcp-ovs-ha.local" /etc/hosts' in directory '/root'
2018-09-01 22:01:54,469 [salt.utils.decorators:613 ][WARNING ][3651] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:01:54,472 [salt.state       :290 ][INFO    ][3651] {'ret': '--- \n+++ \n@@ -22,7 +22,7 @@\n 10.167.4.30\t\tmsg03 msg03.mcp-ovs-ha.local\n 10.167.4.28\t\tmsg01 msg01.mcp-ovs-ha.local\n 10.167.4.27\t\tmsg msg.mcp-ovs-ha.local\n-10.167.4.53\t\tcmp002 cmp002.mcp-ovs-ha.local\n+10.167.4.53\t\tcmp002.mcp-ovs-ha.local cmp002\n 10.167.4.52\t\tcmp001 cmp001.mcp-ovs-ha.local\n 10.167.4.24\t\tdbs01 dbs01.mcp-ovs-ha.local\n 10.167.4.25\t\tdbs02 dbs02.mcp-ovs-ha.local\n'}
2018-09-01 22:01:54,472 [salt.state       :1941][INFO    ][3651] Completed state [file.replace] at time 22:01:54.472297 duration_in_ms=407.226
2018-09-01 22:01:54,472 [salt.state       :1770][INFO    ][3651] Running state [cmp001] at time 22:01:54.472788
2018-09-01 22:01:54,473 [salt.state       :1803][INFO    ][3651] Executing state host.present for [cmp001]
2018-09-01 22:01:54,473 [salt.state       :290 ][INFO    ][3651] Host cmp001 (10.167.4.52) already present
2018-09-01 22:01:54,473 [salt.state       :1941][INFO    ][3651] Completed state [cmp001] at time 22:01:54.473691 duration_in_ms=0.904
2018-09-01 22:01:54,474 [salt.state       :1770][INFO    ][3651] Running state [cmp001.mcp-ovs-ha.local] at time 22:01:54.474026
2018-09-01 22:01:54,474 [salt.state       :1803][INFO    ][3651] Executing state host.present for [cmp001.mcp-ovs-ha.local]
2018-09-01 22:01:54,474 [salt.state       :290 ][INFO    ][3651] Host cmp001.mcp-ovs-ha.local (10.167.4.52) already present
2018-09-01 22:01:54,474 [salt.state       :1941][INFO    ][3651] Completed state [cmp001.mcp-ovs-ha.local] at time 22:01:54.474881 duration_in_ms=0.855
2018-09-01 22:01:54,475 [salt.state       :1770][INFO    ][3651] Running state [dbs01] at time 22:01:54.475212
2018-09-01 22:01:54,475 [salt.state       :1803][INFO    ][3651] Executing state host.present for [dbs01]
2018-09-01 22:01:54,475 [salt.state       :290 ][INFO    ][3651] Host dbs01 (10.167.4.24) already present
2018-09-01 22:01:54,476 [salt.state       :1941][INFO    ][3651] Completed state [dbs01] at time 22:01:54.476082 duration_in_ms=0.87
2018-09-01 22:01:54,476 [salt.state       :1770][INFO    ][3651] Running state [dbs01.mcp-ovs-ha.local] at time 22:01:54.476405
2018-09-01 22:01:54,476 [salt.state       :1803][INFO    ][3651] Executing state host.present for [dbs01.mcp-ovs-ha.local]
2018-09-01 22:01:54,477 [salt.state       :290 ][INFO    ][3651] Host dbs01.mcp-ovs-ha.local (10.167.4.24) already present
2018-09-01 22:01:54,477 [salt.state       :1941][INFO    ][3651] Completed state [dbs01.mcp-ovs-ha.local] at time 22:01:54.477254 duration_in_ms=0.849
2018-09-01 22:01:54,477 [salt.state       :1770][INFO    ][3651] Running state [dbs02] at time 22:01:54.477574
2018-09-01 22:01:54,477 [salt.state       :1803][INFO    ][3651] Executing state host.present for [dbs02]
2018-09-01 22:01:54,478 [salt.state       :290 ][INFO    ][3651] Host dbs02 (10.167.4.25) already present
2018-09-01 22:01:54,478 [salt.state       :1941][INFO    ][3651] Completed state [dbs02] at time 22:01:54.478383 duration_in_ms=0.809
2018-09-01 22:01:54,478 [salt.state       :1770][INFO    ][3651] Running state [dbs02.mcp-ovs-ha.local] at time 22:01:54.478698
2018-09-01 22:01:54,478 [salt.state       :1803][INFO    ][3651] Executing state host.present for [dbs02.mcp-ovs-ha.local]
2018-09-01 22:01:54,479 [salt.state       :290 ][INFO    ][3651] Host dbs02.mcp-ovs-ha.local (10.167.4.25) already present
2018-09-01 22:01:54,479 [salt.state       :1941][INFO    ][3651] Completed state [dbs02.mcp-ovs-ha.local] at time 22:01:54.479520 duration_in_ms=0.822
2018-09-01 22:01:54,479 [salt.state       :1770][INFO    ][3651] Running state [dbs03] at time 22:01:54.479848
2018-09-01 22:01:54,480 [salt.state       :1803][INFO    ][3651] Executing state host.present for [dbs03]
2018-09-01 22:01:54,480 [salt.state       :290 ][INFO    ][3651] Host dbs03 (10.167.4.26) already present
2018-09-01 22:01:54,480 [salt.state       :1941][INFO    ][3651] Completed state [dbs03] at time 22:01:54.480639 duration_in_ms=0.792
2018-09-01 22:01:54,480 [salt.state       :1770][INFO    ][3651] Running state [dbs03.mcp-ovs-ha.local] at time 22:01:54.480947
2018-09-01 22:01:54,481 [salt.state       :1803][INFO    ][3651] Executing state host.present for [dbs03.mcp-ovs-ha.local]
2018-09-01 22:01:54,481 [salt.state       :290 ][INFO    ][3651] Host dbs03.mcp-ovs-ha.local (10.167.4.26) already present
2018-09-01 22:01:54,481 [salt.state       :1941][INFO    ][3651] Completed state [dbs03.mcp-ovs-ha.local] at time 22:01:54.481770 duration_in_ms=0.823
2018-09-01 22:01:54,482 [salt.state       :1770][INFO    ][3651] Running state [mas01] at time 22:01:54.482212
2018-09-01 22:01:54,482 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mas01]
2018-09-01 22:01:54,482 [salt.state       :290 ][INFO    ][3651] Host mas01 (10.167.4.12) already present
2018-09-01 22:01:54,483 [salt.state       :1941][INFO    ][3651] Completed state [mas01] at time 22:01:54.483006 duration_in_ms=0.795
2018-09-01 22:01:54,483 [salt.state       :1770][INFO    ][3651] Running state [mas01.mcp-ovs-ha.local] at time 22:01:54.483262
2018-09-01 22:01:54,483 [salt.state       :1803][INFO    ][3651] Executing state host.present for [mas01.mcp-ovs-ha.local]
2018-09-01 22:01:54,483 [salt.state       :290 ][INFO    ][3651] Host mas01.mcp-ovs-ha.local (10.167.4.12) already present
2018-09-01 22:01:54,483 [salt.state       :1941][INFO    ][3651] Completed state [mas01.mcp-ovs-ha.local] at time 22:01:54.483963 duration_in_ms=0.701
2018-09-01 22:01:54,484 [salt.state       :1770][INFO    ][3651] Running state [ctl02] at time 22:01:54.484202
2018-09-01 22:01:54,484 [salt.state       :1803][INFO    ][3651] Executing state host.present for [ctl02]
2018-09-01 22:01:54,484 [salt.state       :290 ][INFO    ][3651] Host ctl02 (10.167.4.37) already present
2018-09-01 22:01:54,484 [salt.state       :1941][INFO    ][3651] Completed state [ctl02] at time 22:01:54.484762 duration_in_ms=0.561
2018-09-01 22:01:54,485 [salt.state       :1770][INFO    ][3651] Running state [ctl02.mcp-ovs-ha.local] at time 22:01:54.485000
2018-09-01 22:01:54,485 [salt.state       :1803][INFO    ][3651] Executing state host.present for [ctl02.mcp-ovs-ha.local]
2018-09-01 22:01:54,485 [salt.state       :290 ][INFO    ][3651] Host ctl02.mcp-ovs-ha.local (10.167.4.37) already present
2018-09-01 22:01:54,485 [salt.state       :1941][INFO    ][3651] Completed state [ctl02.mcp-ovs-ha.local] at time 22:01:54.485548 duration_in_ms=0.548
2018-09-01 22:01:54,485 [salt.state       :1770][INFO    ][3651] Running state [ctl03] at time 22:01:54.485785
2018-09-01 22:01:54,485 [salt.state       :1803][INFO    ][3651] Executing state host.present for [ctl03]
2018-09-01 22:01:54,486 [salt.state       :290 ][INFO    ][3651] Host ctl03 (10.167.4.38) already present
2018-09-01 22:01:54,486 [salt.state       :1941][INFO    ][3651] Completed state [ctl03] at time 22:01:54.486331 duration_in_ms=0.546
2018-09-01 22:01:54,486 [salt.state       :1770][INFO    ][3651] Running state [ctl03.mcp-ovs-ha.local] at time 22:01:54.486566
2018-09-01 22:01:54,486 [salt.state       :1803][INFO    ][3651] Executing state host.present for [ctl03.mcp-ovs-ha.local]
2018-09-01 22:01:54,487 [salt.state       :290 ][INFO    ][3651] Host ctl03.mcp-ovs-ha.local (10.167.4.38) already present
2018-09-01 22:01:54,487 [salt.state       :1941][INFO    ][3651] Completed state [ctl03.mcp-ovs-ha.local] at time 22:01:54.487112 duration_in_ms=0.545
2018-09-01 22:01:54,487 [salt.state       :1770][INFO    ][3651] Running state [ctl01] at time 22:01:54.487357
2018-09-01 22:01:54,487 [salt.state       :1803][INFO    ][3651] Executing state host.present for [ctl01]
2018-09-01 22:01:54,487 [salt.state       :290 ][INFO    ][3651] Host ctl01 (10.167.4.36) already present
2018-09-01 22:01:54,487 [salt.state       :1941][INFO    ][3651] Completed state [ctl01] at time 22:01:54.487917 duration_in_ms=0.561
2018-09-01 22:01:54,488 [salt.state       :1770][INFO    ][3651] Running state [ctl01.mcp-ovs-ha.local] at time 22:01:54.488156
2018-09-01 22:01:54,488 [salt.state       :1803][INFO    ][3651] Executing state host.present for [ctl01.mcp-ovs-ha.local]
2018-09-01 22:01:54,488 [salt.state       :290 ][INFO    ][3651] Host ctl01.mcp-ovs-ha.local (10.167.4.36) already present
2018-09-01 22:01:54,488 [salt.state       :1941][INFO    ][3651] Completed state [ctl01.mcp-ovs-ha.local] at time 22:01:54.488706 duration_in_ms=0.551
2018-09-01 22:01:54,488 [salt.state       :1770][INFO    ][3651] Running state [ctl] at time 22:01:54.488947
2018-09-01 22:01:54,489 [salt.state       :1803][INFO    ][3651] Executing state host.present for [ctl]
2018-09-01 22:01:54,489 [salt.state       :290 ][INFO    ][3651] Host ctl (10.167.4.35) already present
2018-09-01 22:01:54,489 [salt.state       :1941][INFO    ][3651] Completed state [ctl] at time 22:01:54.489499 duration_in_ms=0.552
2018-09-01 22:01:54,489 [salt.state       :1770][INFO    ][3651] Running state [ctl.mcp-ovs-ha.local] at time 22:01:54.489740
2018-09-01 22:01:54,489 [salt.state       :1803][INFO    ][3651] Executing state host.present for [ctl.mcp-ovs-ha.local]
2018-09-01 22:01:54,490 [salt.state       :290 ][INFO    ][3651] Host ctl.mcp-ovs-ha.local (10.167.4.35) already present
2018-09-01 22:01:54,490 [salt.state       :1941][INFO    ][3651] Completed state [ctl.mcp-ovs-ha.local] at time 22:01:54.490304 duration_in_ms=0.564
2018-09-01 22:01:54,490 [salt.state       :1770][INFO    ][3651] Running state [linux_network_bridge_pkgs] at time 22:01:54.490455
2018-09-01 22:01:54,490 [salt.state       :1803][INFO    ][3651] Executing state pkg.installed for [linux_network_bridge_pkgs]
2018-09-01 22:01:54,495 [salt.state       :290 ][INFO    ][3651] All specified packages are already installed
2018-09-01 22:01:54,495 [salt.state       :1941][INFO    ][3651] Completed state [linux_network_bridge_pkgs] at time 22:01:54.495680 duration_in_ms=5.225
2018-09-01 22:01:54,495 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 22:01:54.495829
2018-09-01 22:01:54,495 [salt.state       :1803][INFO    ][3651] Executing state file.absent for [/etc/network/interfaces.d/50-cloud-init.cfg]
2018-09-01 22:01:54,496 [salt.state       :290 ][INFO    ][3651] File /etc/network/interfaces.d/50-cloud-init.cfg is not present
2018-09-01 22:01:54,496 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 22:01:54.496275 duration_in_ms=0.446
2018-09-01 22:01:54,497 [salt.state       :1770][INFO    ][3651] Running state [enp6s0.300] at time 22:01:54.497676
2018-09-01 22:01:54,497 [salt.state       :1803][INFO    ][3651] Executing state network.managed for [enp6s0.300]
2018-09-01 22:01:54,588 [salt.state       :290 ][INFO    ][3651] Interface enp6s0.300 is up to date.
2018-09-01 22:01:54,588 [salt.state       :1941][INFO    ][3651] Completed state [enp6s0.300] at time 22:01:54.588728 duration_in_ms=91.05
2018-09-01 22:01:54,589 [salt.state       :1770][INFO    ][3651] Running state [br-ctl] at time 22:01:54.589675
2018-09-01 22:01:54,589 [salt.state       :1803][INFO    ][3651] Executing state network.managed for [br-ctl]
2018-09-01 22:01:54,595 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 22:01:54,608 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'bridge-utils'] in directory '/root'
2018-09-01 22:01:54,890 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:01:54,927 [salt.state       :290 ][INFO    ][3651] Interface br-ctl is up to date.
2018-09-01 22:01:54,927 [salt.state       :1941][INFO    ][3651] Completed state [br-ctl] at time 22:01:54.927623 duration_in_ms=337.948
2018-09-01 22:01:54,927 [salt.state       :1770][INFO    ][3651] Running state [enp6s0] at time 22:01:54.927867
2018-09-01 22:01:54,928 [salt.state       :1803][INFO    ][3651] Executing state network.managed for [enp6s0]
2018-09-01 22:01:54,943 [salt.state       :290 ][INFO    ][3651] Interface enp6s0 is up to date.
2018-09-01 22:01:54,944 [salt.state       :1941][INFO    ][3651] Completed state [enp6s0] at time 22:01:54.943987 duration_in_ms=16.12
2018-09-01 22:01:54,944 [salt.state       :1770][INFO    ][3651] Running state [br-floating] at time 22:01:54.944238
2018-09-01 22:01:54,944 [salt.state       :1803][INFO    ][3651] Executing state openvswitch_bridge.present for [br-floating]
2018-09-01 22:01:54,944 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'ovs-vsctl br-exists br-floating' in directory '/root'
2018-09-01 22:01:54,951 [salt.state       :290 ][INFO    ][3651] Bridge br-floating already exists.
2018-09-01 22:01:54,951 [salt.state       :1941][INFO    ][3651] Completed state [br-floating] at time 22:01:54.951462 duration_in_ms=7.224
2018-09-01 22:01:54,951 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces.u/ifcfg-br-floating] at time 22:01:54.951679
2018-09-01 22:01:54,951 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/network/interfaces.u/ifcfg-br-floating]
2018-09-01 22:01:54,974 [salt.state       :290 ][INFO    ][3651] File /etc/network/interfaces.u/ifcfg-br-floating is in the correct state
2018-09-01 22:01:54,974 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces.u/ifcfg-br-floating] at time 22:01:54.974547 duration_in_ms=22.868
2018-09-01 22:01:54,974 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces] at time 22:01:54.974708
2018-09-01 22:01:54,974 [salt.state       :1803][INFO    ][3651] Executing state file.prepend for [/etc/network/interfaces]
2018-09-01 22:01:54,976 [salt.state       :290 ][INFO    ][3651] File changed:
--- 
+++ 
@@ -1,3 +1,6 @@
+source /etc/network/interfaces.d/*
+# Workaround for Upstream-Bug: https://github.com/saltstack/salt/issues/40262
+source /etc/network/interfaces.u/*
 auto lo
 iface lo inet loopback
 auto enp6s0.300

2018-09-01 22:01:54,976 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces] at time 22:01:54.976618 duration_in_ms=1.91
2018-09-01 22:01:54,979 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces] at time 22:01:54.979772
2018-09-01 22:01:54,979 [salt.state       :1803][INFO    ][3651] Executing state file.prepend for [/etc/network/interfaces]
2018-09-01 22:01:54,980 [salt.state       :290 ][INFO    ][3651] File /etc/network/interfaces is in correct state
2018-09-01 22:01:54,980 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces] at time 22:01:54.980806 duration_in_ms=1.035
2018-09-01 22:01:54,982 [salt.state       :1770][INFO    ][3651] Running state [ifup --ignore-errors br-floating] at time 22:01:54.982498
2018-09-01 22:01:54,982 [salt.state       :1803][INFO    ][3651] Executing state cmd.run for [ifup --ignore-errors br-floating]
2018-09-01 22:01:54,982 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'ip link show br-floating | grep -q '\<UP\>'' in directory '/root'
2018-09-01 22:01:54,988 [salt.state       :290 ][INFO    ][3651] unless execution succeeded
2018-09-01 22:01:54,989 [salt.state       :1941][INFO    ][3651] Completed state [ifup --ignore-errors br-floating] at time 22:01:54.989052 duration_in_ms=6.553
2018-09-01 22:01:54,989 [salt.state       :1770][INFO    ][3651] Running state [ovs-vsctl add-port br-floating enp8s0] at time 22:01:54.989273
2018-09-01 22:01:54,989 [salt.state       :1803][INFO    ][3651] Executing state cmd.run for [ovs-vsctl add-port br-floating enp8s0]
2018-09-01 22:01:54,989 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'ovs-vsctl show | grep enp8s0' in directory '/root'
2018-09-01 22:01:54,996 [salt.state       :290 ][INFO    ][3651] unless execution succeeded
2018-09-01 22:01:54,996 [salt.state       :1941][INFO    ][3651] Completed state [ovs-vsctl add-port br-floating enp8s0] at time 22:01:54.996934 duration_in_ms=7.661
2018-09-01 22:01:54,997 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces] at time 22:01:54.997124
2018-09-01 22:01:54,997 [salt.state       :1803][INFO    ][3651] Executing state file.prepend for [/etc/network/interfaces]
2018-09-01 22:01:54,998 [salt.state       :290 ][INFO    ][3651] File /etc/network/interfaces is in correct state
2018-09-01 22:01:54,998 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces] at time 22:01:54.998353 duration_in_ms=1.229
2018-09-01 22:01:54,998 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 22:01:54.998496
2018-09-01 22:01:54,998 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/network/interfaces.u/ifcfg-enp8s0]
2018-09-01 22:01:55,015 [salt.state       :290 ][INFO    ][3651] File /etc/network/interfaces.u/ifcfg-enp8s0 is in the correct state
2018-09-01 22:01:55,015 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 22:01:55.015887 duration_in_ms=17.391
2018-09-01 22:01:55,016 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces] at time 22:01:55.016032
2018-09-01 22:01:55,016 [salt.state       :1803][INFO    ][3651] Executing state file.replace for [/etc/network/interfaces]
2018-09-01 22:01:55,016 [salt.state       :290 ][INFO    ][3651] No changes needed to be made
2018-09-01 22:01:55,017 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces] at time 22:01:55.017011 duration_in_ms=0.98
2018-09-01 22:01:55,017 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces] at time 22:01:55.017147
2018-09-01 22:01:55,017 [salt.state       :1803][INFO    ][3651] Executing state file.replace for [/etc/network/interfaces]
2018-09-01 22:01:55,017 [salt.state       :290 ][INFO    ][3651] No changes needed to be made
2018-09-01 22:01:55,018 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces] at time 22:01:55.018100 duration_in_ms=0.953
2018-09-01 22:01:55,021 [salt.state       :1770][INFO    ][3651] Running state [ifup enp8s0] at time 22:01:55.021203
2018-09-01 22:01:55,021 [salt.state       :1803][INFO    ][3651] Executing state cmd.run for [ifup enp8s0]
2018-09-01 22:01:55,021 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command 'ifup enp8s0' in directory '/root'
2018-09-01 22:01:55,026 [salt.state       :290 ][INFO    ][3651] {'pid': 4286, 'retcode': 0, 'stderr': 'ifup: interface enp8s0 already configured', 'stdout': ''}
2018-09-01 22:01:55,026 [salt.state       :1941][INFO    ][3651] Completed state [ifup enp8s0] at time 22:01:55.026339 duration_in_ms=5.137
2018-09-01 22:01:55,027 [salt.state       :1770][INFO    ][3651] Running state [enp7s0.1000] at time 22:01:55.027748
2018-09-01 22:01:55,027 [salt.state       :1803][INFO    ][3651] Executing state network.managed for [enp7s0.1000]
2018-09-01 22:01:55,041 [salt.state       :290 ][INFO    ][3651] Interface enp7s0.1000 is up to date.
2018-09-01 22:01:55,041 [salt.state       :1941][INFO    ][3651] Completed state [enp7s0.1000] at time 22:01:55.041547 duration_in_ms=13.798
2018-09-01 22:01:55,042 [salt.state       :1770][INFO    ][3651] Running state [br-mesh] at time 22:01:55.042434
2018-09-01 22:01:55,042 [salt.state       :1803][INFO    ][3651] Executing state network.managed for [br-mesh]
2018-09-01 22:01:55,048 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 22:01:55,058 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'bridge-utils'] in directory '/root'
2018-09-01 22:01:55,327 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:01:55,363 [salt.state       :290 ][INFO    ][3651] Interface br-mesh is up to date.
2018-09-01 22:01:55,363 [salt.state       :1941][INFO    ][3651] Completed state [br-mesh] at time 22:01:55.363694 duration_in_ms=321.259
2018-09-01 22:01:55,363 [salt.state       :1770][INFO    ][3651] Running state [enp7s0] at time 22:01:55.363915
2018-09-01 22:01:55,364 [salt.state       :1803][INFO    ][3651] Executing state network.managed for [enp7s0]
2018-09-01 22:01:55,380 [salt.state       :290 ][INFO    ][3651] Interface enp7s0 is up to date.
2018-09-01 22:01:55,380 [salt.state       :1941][INFO    ][3651] Completed state [enp7s0] at time 22:01:55.380514 duration_in_ms=16.599
2018-09-01 22:01:55,380 [salt.state       :1770][INFO    ][3651] Running state [/etc/network/interfaces] at time 22:01:55.380696
2018-09-01 22:01:55,380 [salt.state       :1803][INFO    ][3651] Executing state file.prepend for [/etc/network/interfaces]
2018-09-01 22:01:55,382 [salt.state       :290 ][INFO    ][3651] File changed:
--- 
+++ 
@@ -1,3 +1,6 @@
+source /etc/network/interfaces.d/*
+# Workaround for Upstream-Bug: https://github.com/saltstack/salt/issues/40262
+source /etc/network/interfaces.u/*
 auto lo
 iface lo inet loopback
 auto enp6s0.300

2018-09-01 22:01:55,382 [salt.state       :1941][INFO    ][3651] Completed state [/etc/network/interfaces] at time 22:01:55.382117 duration_in_ms=1.42
2018-09-01 22:01:55,382 [salt.state       :1770][INFO    ][3651] Running state [/etc/profile.d/proxy.sh] at time 22:01:55.382270
2018-09-01 22:01:55,382 [salt.state       :1803][INFO    ][3651] Executing state file.absent for [/etc/profile.d/proxy.sh]
2018-09-01 22:01:55,382 [salt.state       :290 ][INFO    ][3651] File /etc/profile.d/proxy.sh is not present
2018-09-01 22:01:55,382 [salt.state       :1941][INFO    ][3651] Completed state [/etc/profile.d/proxy.sh] at time 22:01:55.382702 duration_in_ms=0.432
2018-09-01 22:01:55,382 [salt.state       :1770][INFO    ][3651] Running state [/etc/apt/apt.conf.d/95proxies] at time 22:01:55.382832
2018-09-01 22:01:55,382 [salt.state       :1803][INFO    ][3651] Executing state file.absent for [/etc/apt/apt.conf.d/95proxies]
2018-09-01 22:01:55,383 [salt.state       :290 ][INFO    ][3651] File /etc/apt/apt.conf.d/95proxies is not present
2018-09-01 22:01:55,383 [salt.state       :1941][INFO    ][3651] Completed state [/etc/apt/apt.conf.d/95proxies] at time 22:01:55.383247 duration_in_ms=0.415
2018-09-01 22:01:55,383 [salt.state       :1770][INFO    ][3651] Running state [linux_lvm_pkgs] at time 22:01:55.383379
2018-09-01 22:01:55,383 [salt.state       :1803][INFO    ][3651] Executing state pkg.installed for [linux_lvm_pkgs]
2018-09-01 22:01:55,389 [salt.state       :290 ][INFO    ][3651] All specified packages are already installed
2018-09-01 22:01:55,389 [salt.state       :1941][INFO    ][3651] Completed state [linux_lvm_pkgs] at time 22:01:55.389167 duration_in_ms=5.788
2018-09-01 22:01:55,390 [salt.state       :1770][INFO    ][3651] Running state [/etc/lvm/lvm.conf] at time 22:01:55.390531
2018-09-01 22:01:55,390 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/lvm/lvm.conf]
2018-09-01 22:01:55,410 [salt.fileclient  :1215][INFO    ][3651] Fetching file from saltenv 'base', ** done ** 'linux/files/lvm.conf'
2018-09-01 22:01:55,488 [salt.state       :290 ][INFO    ][3651] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 # This is an example configuration file for the LVM2 system.
 # It contains the default settings that would be used if there was no
 # /etc/lvm/lvm.conf file.
@@ -26,506 +27,509 @@
 # How LVM configuration settings are handled.
 config {
 
-	# Configuration option config/checks.
-	# If enabled, any LVM configuration mismatch is reported.
-	# This implies checking that the configuration key is understood by
-	# LVM and that the value of the key is the proper type. If disabled,
-	# any configuration mismatch is ignored and the default value is used
-	# without any warning (a message about the configuration key not being
-	# found is issued in verbose mode only).
-	checks = 1
-
-	# Configuration option config/abort_on_errors.
-	# Abort the LVM process if a configuration mismatch is found.
-	abort_on_errors = 0
-
-	# Configuration option config/profile_dir.
-	# Directory where LVM looks for configuration profiles.
-	profile_dir = "/etc/lvm/profile"
+        # Configuration option config/checks.
+        # If enabled, any LVM configuration mismatch is reported.
+        # This implies checking that the configuration key is understood by
+        # LVM and that the value of the key is the proper type. If disabled,
+        # any configuration mismatch is ignored and the default value is used
+        # without any warning (a message about the configuration key not being
+        # found is issued in verbose mode only).
+        checks = 1
+
+        # Configuration option config/abort_on_errors.
+        # Abort the LVM process if a configuration mismatch is found.
+        abort_on_errors = 0
+
+        # Configuration option config/profile_dir.
+        # Directory where LVM looks for configuration profiles.
+        profile_dir = "/etc/lvm/profile"
 }
 
 # Configuration section devices.
 # How LVM uses block devices.
 devices {
 
-	# Configuration option devices/dir.
-	# Directory in which to create volume group device nodes.
-	# Commands also accept this as a prefix on volume group names.
-	# This configuration option is advanced.
-	dir = "/dev"
-
-	# Configuration option devices/scan.
-	# Directories containing device nodes to use with LVM.
-	# This configuration option is advanced.
-	scan = [ "/dev" ]
-
-	# Configuration option devices/obtain_device_list_from_udev.
-	# Obtain the list of available devices from udev.
-	# This avoids opening or using any inapplicable non-block devices or
-	# subdirectories found in the udev directory. Any device node or
-	# symlink not managed by udev in the udev directory is ignored. This
-	# setting applies only to the udev-managed device directory; other
-	# directories will be scanned fully. LVM needs to be compiled with
-	# udev support for this setting to apply.
-	obtain_device_list_from_udev = 1
-
-	# Configuration option devices/external_device_info_source.
-	# Select an external device information source.
-	# Some information may already be available in the system and LVM can
-	# use this information to determine the exact type or use of devices it
-	# processes. Using an existing external device information source can
-	# speed up device processing as LVM does not need to run its own native
-	# routines to acquire this information. For example, this information
-	# is used to drive LVM filtering like MD component detection, multipath
-	# component detection, partition detection and others.
-	# 
-	# Accepted values:
-	#   none
-	#     No external device information source is used.
-	#   udev
-	#     Reuse existing udev database records. Applicable only if LVM is
-	#     compiled with udev support.
-	# 
-	external_device_info_source = "none"
-
-	# Configuration option devices/preferred_names.
-	# Select which path name to display for a block device.
-	# If multiple path names exist for a block device, and LVM needs to
-	# display a name for the device, the path names are matched against
-	# each item in this list of regular expressions. The first match is
-	# used. Try to avoid using undescriptive /dev/dm-N names, if present.
-	# If no preferred name matches, or if preferred_names are not defined,
-	# the following built-in preferences are applied in order until one
-	# produces a preferred name:
-	# Prefer names with path prefixes in the order of:
-	# /dev/mapper, /dev/disk, /dev/dm-*, /dev/block.
-	# Prefer the name with the least number of slashes.
-	# Prefer a name that is a symlink.
-	# Prefer the path with least value in lexicographical order.
-	# 
-	# Example
-	# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option devices/filter.
-	# Limit the block devices that are used by LVM commands.
-	# This is a list of regular expressions used to accept or reject block
-	# device path names. Each regex is delimited by a vertical bar '|'
-	# (or any character) and is preceded by 'a' to accept the path, or
-	# by 'r' to reject the path. The first regex in the list to match the
-	# path is used, producing the 'a' or 'r' result for the device.
-	# When multiple path names exist for a block device, if any path name
-	# matches an 'a' pattern before an 'r' pattern, then the device is
-	# accepted. If all the path names match an 'r' pattern first, then the
-	# device is rejected. Unmatching path names do not affect the accept
-	# or reject decision. If no path names for a device match a pattern,
-	# then the device is accepted. Be careful mixing 'a' and 'r' patterns,
-	# as the combination might produce unexpected results (test changes.)
-	# Run vgscan after changing the filter to regenerate the cache.
-	# See the use_lvmetad comment for a special case regarding filters.
-	# 
-	# Example
-	# Accept every block device:
-	# filter = [ "a|.*/|" ]
-	# Reject the cdrom drive:
-	# filter = [ "r|/dev/cdrom|" ]
-	# Work with just loopback devices, e.g. for testing:
-	# filter = [ "a|loop|", "r|.*|" ]
-	# Accept all loop devices and ide drives except hdc:
-	# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
-	# Use anchors to be very specific:
-	# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
-	# 
-	# This configuration option has an automatic default value.
-	# filter = [ "a|.*/|" ]
-
-	# Configuration option devices/global_filter.
-	# Limit the block devices that are used by LVM system components.
-	# Because devices/filter may be overridden from the command line, it is
-	# not suitable for system-wide device filtering, e.g. udev and lvmetad.
-	# Use global_filter to hide devices from these LVM system components.
-	# The syntax is the same as devices/filter. Devices rejected by
-	# global_filter are not opened by LVM.
-	# This configuration option has an automatic default value.
-	# global_filter = [ "a|.*/|" ]
-
-	# Configuration option devices/cache_dir.
-	# Directory in which to store the device cache file.
-	# The results of filtering are cached on disk to avoid rescanning dud
-	# devices (which can take a very long time). By default this cache is
-	# stored in a file named .cache. It is safe to delete this file; the
-	# tools regenerate it. If obtain_device_list_from_udev is enabled, the
-	# list of devices is obtained from udev and any existing .cache file
-	# is removed.
-	cache_dir = "/run/lvm"
-
-	# Configuration option devices/cache_file_prefix.
-	# A prefix used before the .cache file name. See devices/cache_dir.
-	cache_file_prefix = ""
-
-	# Configuration option devices/write_cache_state.
-	# Enable/disable writing the cache file. See devices/cache_dir.
-	write_cache_state = 1
-
-	# Configuration option devices/types.
-	# List of additional acceptable block device types.
-	# These are of device type names from /proc/devices, followed by the
-	# maximum number of partitions.
-	# 
-	# Example
-	# types = [ "fd", 16 ]
-	# 
-	# This configuration option is advanced.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option devices/sysfs_scan.
-	# Restrict device scanning to block devices appearing in sysfs.
-	# This is a quick way of filtering out block devices that are not
-	# present on the system. sysfs must be part of the kernel and mounted.)
-	sysfs_scan = 1
-
-	# Configuration option devices/multipath_component_detection.
-	# Ignore devices that are components of DM multipath devices.
-	multipath_component_detection = 1
-
-	# Configuration option devices/md_component_detection.
-	# Ignore devices that are components of software RAID (md) devices.
-	md_component_detection = 1
-
-	# Configuration option devices/fw_raid_component_detection.
-	# Ignore devices that are components of firmware RAID devices.
-	# LVM must use an external_device_info_source other than none for this
-	# detection to execute.
-	fw_raid_component_detection = 0
-
-	# Configuration option devices/md_chunk_alignment.
-	# Align PV data blocks with md device's stripe-width.
-	# This applies if a PV is placed directly on an md device.
-	md_chunk_alignment = 1
-
-	# Configuration option devices/default_data_alignment.
-	# Default alignment of the start of a PV data area in MB.
-	# If set to 0, a value of 64KiB will be used.
-	# Set to 1 for 1MiB, 2 for 2MiB, etc.
-	# This configuration option has an automatic default value.
-	# default_data_alignment = 1
-
-	# Configuration option devices/data_alignment_detection.
-	# Detect PV data alignment based on sysfs device information.
-	# The start of a PV data area will be a multiple of minimum_io_size or
-	# optimal_io_size exposed in sysfs. minimum_io_size is the smallest
-	# request the device can perform without incurring a read-modify-write
-	# penalty, e.g. MD chunk size. optimal_io_size is the device's
-	# preferred unit of receiving I/O, e.g. MD stripe width.
-	# minimum_io_size is used if optimal_io_size is undefined (0).
-	# If md_chunk_alignment is enabled, that detects the optimal_io_size.
-	# This setting takes precedence over md_chunk_alignment.
-	data_alignment_detection = 1
-
-	# Configuration option devices/data_alignment.
-	# Alignment of the start of a PV data area in KiB.
-	# If a PV is placed directly on an md device and md_chunk_alignment or
-	# data_alignment_detection are enabled, then this setting is ignored.
-	# Otherwise, md_chunk_alignment and data_alignment_detection are
-	# disabled if this is set. Set to 0 to use the default alignment or the
-	# page size, if larger.
-	data_alignment = 0
-
-	# Configuration option devices/data_alignment_offset_detection.
-	# Detect PV data alignment offset based on sysfs device information.
-	# The start of a PV aligned data area will be shifted by the
-	# alignment_offset exposed in sysfs. This offset is often 0, but may
-	# be non-zero. Certain 4KiB sector drives that compensate for windows
-	# partitioning will have an alignment_offset of 3584 bytes (sector 7
-	# is the lowest aligned logical block, the 4KiB sectors start at
-	# LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).
-	# pvcreate --dataalignmentoffset will skip this detection.
-	data_alignment_offset_detection = 1
-
-	# Configuration option devices/ignore_suspended_devices.
-	# Ignore DM devices that have I/O suspended while scanning devices.
-	# Otherwise, LVM waits for a suspended device to become accessible.
-	# This should only be needed in recovery situations.
-	ignore_suspended_devices = 0
-
-	# Configuration option devices/ignore_lvm_mirrors.
-	# Do not scan 'mirror' LVs to avoid possible deadlocks.
-	# This avoids possible deadlocks when using the 'mirror' segment type.
-	# This setting determines whether LVs using the 'mirror' segment type
-	# are scanned for LVM labels. This affects the ability of mirrors to
-	# be used as physical volumes. If this setting is enabled, it is
-	# impossible to create VGs on top of mirror LVs, i.e. to stack VGs on
-	# mirror LVs. If this setting is disabled, allowing mirror LVs to be
-	# scanned, it may cause LVM processes and I/O to the mirror to become
-	# blocked. This is due to the way that the mirror segment type handles
-	# failures. In order for the hang to occur, an LVM command must be run
-	# just after a failure and before the automatic LVM repair process
-	# takes place, or there must be failures in multiple mirrors in the
-	# same VG at the same time with write failures occurring moments before
-	# a scan of the mirror's labels. The 'mirror' scanning problems do not
-	# apply to LVM RAID types like 'raid1' which handle failures in a
-	# different way, making them a better choice for VG stacking.
-	ignore_lvm_mirrors = 1
-
-	# Configuration option devices/disable_after_error_count.
-	# Number of I/O errors after which a device is skipped.
-	# During each LVM operation, errors received from each device are
-	# counted. If the counter of a device exceeds the limit set here,
-	# no further I/O is sent to that device for the remainder of the
-	# operation. Setting this to 0 disables the counters altogether.
-	disable_after_error_count = 0
-
-	# Configuration option devices/require_restorefile_with_uuid.
-	# Allow use of pvcreate --uuid without requiring --restorefile.
-	require_restorefile_with_uuid = 1
-
-	# Configuration option devices/pv_min_size.
-	# Minimum size in KiB of block devices which can be used as PVs.
-	# In a clustered environment all nodes must use the same value.
-	# Any value smaller than 512KiB is ignored. The previous built-in
-	# value was 512.
-	pv_min_size = 2048
-
-	# Configuration option devices/issue_discards.
-	# Issue discards to PVs that are no longer used by an LV.
-	# Discards are sent to an LV's underlying physical volumes when the LV
-	# is no longer using the physical volumes' space, e.g. lvremove,
-	# lvreduce. Discards inform the storage that a region is no longer
-	# used. Storage that supports discards advertise the protocol-specific
-	# way discards should be issued by the kernel (TRIM, UNMAP, or
-	# WRITE SAME with UNMAP bit set). Not all storage will support or
-	# benefit from discards, but SSDs and thinly provisioned LUNs
-	# generally do. If enabled, discards will only be issued if both the
-	# storage and kernel provide support.
-	issue_discards = 1
+        # Configuration option devices/dir.
+        # Directory in which to create volume group device nodes.
+        # Commands also accept this as a prefix on volume group names.
+        # This configuration option is advanced.
+        dir = "/dev"
+
+        # Configuration option devices/scan.
+        # Directories containing device nodes to use with LVM.
+        # This configuration option is advanced.
+        scan = [ "/dev" ]
+
+        # Configuration option devices/obtain_device_list_from_udev.
+        # Obtain the list of available devices from udev.
+        # This avoids opening or using any inapplicable non-block devices or
+        # subdirectories found in the udev directory. Any device node or
+        # symlink not managed by udev in the udev directory is ignored. This
+        # setting applies only to the udev-managed device directory; other
+        # directories will be scanned fully. LVM needs to be compiled with
+        # udev support for this setting to apply.
+        obtain_device_list_from_udev = 1
+
+        # Configuration option devices/external_device_info_source.
+        # Select an external device information source.
+        # Some information may already be available in the system and LVM can
+        # use this information to determine the exact type or use of devices it
+        # processes. Using an existing external device information source can
+        # speed up device processing as LVM does not need to run its own native
+        # routines to acquire this information. For example, this information
+        # is used to drive LVM filtering like MD component detection, multipath
+        # component detection, partition detection and others.
+        # 
+        # Accepted values:
+        #   none
+        #     No external device information source is used.
+        #   udev
+        #     Reuse existing udev database records. Applicable only if LVM is
+        #     compiled with udev support.
+        # 
+        external_device_info_source = "none"
+
+        # Configuration option devices/preferred_names.
+        # Select which path name to display for a block device.
+        # If multiple path names exist for a block device, and LVM needs to
+        # display a name for the device, the path names are matched against
+        # each item in this list of regular expressions. The first match is
+        # used. Try to avoid using undescriptive /dev/dm-N names, if present.
+        # If no preferred name matches, or if preferred_names are not defined,
+        # the following built-in preferences are applied in order until one
+        # produces a preferred name:
+        # Prefer names with path prefixes in the order of:
+        # /dev/mapper, /dev/disk, /dev/dm-*, /dev/block.
+        # Prefer the name with the least number of slashes.
+        # Prefer a name that is a symlink.
+        # Prefer the path with least value in lexicographical order.
+        # 
+        # Example
+        # preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option devices/filter.
+        # Limit the block devices that are used by LVM commands.
+        # This is a list of regular expressions used to accept or reject block
+        # device path names. Each regex is delimited by a vertical bar '|'
+        # (or any character) and is preceded by 'a' to accept the path, or
+        # by 'r' to reject the path. The first regex in the list to match the
+        # path is used, producing the 'a' or 'r' result for the device.
+        # When multiple path names exist for a block device, if any path name
+        # matches an 'a' pattern before an 'r' pattern, then the device is
+        # accepted. If all the path names match an 'r' pattern first, then the
+        # device is rejected. Unmatching path names do not affect the accept
+        # or reject decision. If no path names for a device match a pattern,
+        # then the device is accepted. Be careful mixing 'a' and 'r' patterns,
+        # as the combination might produce unexpected results (test changes.)
+        # Run vgscan after changing the filter to regenerate the cache.
+        # See the use_lvmetad comment for a special case regarding filters.
+        # 
+        # Example
+        # Accept every block device:
+
+        filter = ["a|/dev/sda2*|", "r|.*|" ]
+
+        # filter = [ "a|.*/|" ]
+        # Reject the cdrom drive:
+        # filter = [ "r|/dev/cdrom|" ]
+        # Work with just loopback devices, e.g. for testing:
+        # filter = [ "a|loop|", "r|.*|" ]
+        # Accept all loop devices and ide drives except hdc:
+        # filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
+        # Use anchors to be very specific:
+        # filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
+        # 
+        # This configuration option has an automatic default value.
+        # filter = [ "a|.*/|" ]
+
+        # Configuration option devices/global_filter.
+        # Limit the block devices that are used by LVM system components.
+        # Because devices/filter may be overridden from the command line, it is
+        # not suitable for system-wide device filtering, e.g. udev and lvmetad.
+        # Use global_filter to hide devices from these LVM system components.
+        # The syntax is the same as devices/filter. Devices rejected by
+        # global_filter are not opened by LVM.
+        # This configuration option has an automatic default value.
+        # global_filter = [ "a|.*/|" ]
+
+        # Configuration option devices/cache_dir.
+        # Directory in which to store the device cache file.
+        # The results of filtering are cached on disk to avoid rescanning dud
+        # devices (which can take a very long time). By default this cache is
+        # stored in a file named .cache. It is safe to delete this file; the
+        # tools regenerate it. If obtain_device_list_from_udev is enabled, the
+        # list of devices is obtained from udev and any existing .cache file
+        # is removed.
+        cache_dir = "/run/lvm"
+
+        # Configuration option devices/cache_file_prefix.
+        # A prefix used before the .cache file name. See devices/cache_dir.
+        cache_file_prefix = ""
+
+        # Configuration option devices/write_cache_state.
+        # Enable/disable writing the cache file. See devices/cache_dir.
+        write_cache_state = 1
+
+        # Configuration option devices/types.
+        # List of additional acceptable block device types.
+        # These are of device type names from /proc/devices, followed by the
+        # maximum number of partitions.
+        # 
+        # Example
+        # types = [ "fd", 16 ]
+        # 
+        # This configuration option is advanced.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option devices/sysfs_scan.
+        # Restrict device scanning to block devices appearing in sysfs.
+        # This is a quick way of filtering out block devices that are not
+        # present on the system. sysfs must be part of the kernel and mounted.)
+        sysfs_scan = 1
+
+        # Configuration option devices/multipath_component_detection.
+        # Ignore devices that are components of DM multipath devices.
+        multipath_component_detection = 1
+
+        # Configuration option devices/md_component_detection.
+        # Ignore devices that are components of software RAID (md) devices.
+        md_component_detection = 1
+
+        # Configuration option devices/fw_raid_component_detection.
+        # Ignore devices that are components of firmware RAID devices.
+        # LVM must use an external_device_info_source other than none for this
+        # detection to execute.
+        fw_raid_component_detection = 0
+
+        # Configuration option devices/md_chunk_alignment.
+        # Align PV data blocks with md device's stripe-width.
+        # This applies if a PV is placed directly on an md device.
+        md_chunk_alignment = 1
+
+        # Configuration option devices/default_data_alignment.
+        # Default alignment of the start of a PV data area in MB.
+        # If set to 0, a value of 64KiB will be used.
+        # Set to 1 for 1MiB, 2 for 2MiB, etc.
+        # This configuration option has an automatic default value.
+        # default_data_alignment = 1
+
+        # Configuration option devices/data_alignment_detection.
+        # Detect PV data alignment based on sysfs device information.
+        # The start of a PV data area will be a multiple of minimum_io_size or
+        # optimal_io_size exposed in sysfs. minimum_io_size is the smallest
+        # request the device can perform without incurring a read-modify-write
+        # penalty, e.g. MD chunk size. optimal_io_size is the device's
+        # preferred unit of receiving I/O, e.g. MD stripe width.
+        # minimum_io_size is used if optimal_io_size is undefined (0).
+        # If md_chunk_alignment is enabled, that detects the optimal_io_size.
+        # This setting takes precedence over md_chunk_alignment.
+        data_alignment_detection = 1
+
+        # Configuration option devices/data_alignment.
+        # Alignment of the start of a PV data area in KiB.
+        # If a PV is placed directly on an md device and md_chunk_alignment or
+        # data_alignment_detection are enabled, then this setting is ignored.
+        # Otherwise, md_chunk_alignment and data_alignment_detection are
+        # disabled if this is set. Set to 0 to use the default alignment or the
+        # page size, if larger.
+        data_alignment = 0
+
+        # Configuration option devices/data_alignment_offset_detection.
+        # Detect PV data alignment offset based on sysfs device information.
+        # The start of a PV aligned data area will be shifted by the
+        # alignment_offset exposed in sysfs. This offset is often 0, but may
+        # be non-zero. Certain 4KiB sector drives that compensate for windows
+        # partitioning will have an alignment_offset of 3584 bytes (sector 7
+        # is the lowest aligned logical block, the 4KiB sectors start at
+        # LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).
+        # pvcreate --dataalignmentoffset will skip this detection.
+        data_alignment_offset_detection = 1
+
+        # Configuration option devices/ignore_suspended_devices.
+        # Ignore DM devices that have I/O suspended while scanning devices.
+        # Otherwise, LVM waits for a suspended device to become accessible.
+        # This should only be needed in recovery situations.
+        ignore_suspended_devices = 0
+
+        # Configuration option devices/ignore_lvm_mirrors.
+        # Do not scan 'mirror' LVs to avoid possible deadlocks.
+        # This avoids possible deadlocks when using the 'mirror' segment type.
+        # This setting determines whether LVs using the 'mirror' segment type
+        # are scanned for LVM labels. This affects the ability of mirrors to
+        # be used as physical volumes. If this setting is enabled, it is
+        # impossible to create VGs on top of mirror LVs, i.e. to stack VGs on
+        # mirror LVs. If this setting is disabled, allowing mirror LVs to be
+        # scanned, it may cause LVM processes and I/O to the mirror to become
+        # blocked. This is due to the way that the mirror segment type handles
+        # failures. In order for the hang to occur, an LVM command must be run
+        # just after a failure and before the automatic LVM repair process
+        # takes place, or there must be failures in multiple mirrors in the
+        # same VG at the same time with write failures occurring moments before
+        # a scan of the mirror's labels. The 'mirror' scanning problems do not
+        # apply to LVM RAID types like 'raid1' which handle failures in a
+        # different way, making them a better choice for VG stacking.
+        ignore_lvm_mirrors = 1
+
+        # Configuration option devices/disable_after_error_count.
+        # Number of I/O errors after which a device is skipped.
+        # During each LVM operation, errors received from each device are
+        # counted. If the counter of a device exceeds the limit set here,
+        # no further I/O is sent to that device for the remainder of the
+        # operation. Setting this to 0 disables the counters altogether.
+        disable_after_error_count = 0
+
+        # Configuration option devices/require_restorefile_with_uuid.
+        # Allow use of pvcreate --uuid without requiring --restorefile.
+        require_restorefile_with_uuid = 1
+
+        # Configuration option devices/pv_min_size.
+        # Minimum size in KiB of block devices which can be used as PVs.
+        # In a clustered environment all nodes must use the same value.
+        # Any value smaller than 512KiB is ignored. The previous built-in
+        # value was 512.
+        pv_min_size = 2048
+
+        # Configuration option devices/issue_discards.
+        # Issue discards to PVs that are no longer used by an LV.
+        # Discards are sent to an LV's underlying physical volumes when the LV
+        # is no longer using the physical volumes' space, e.g. lvremove,
+        # lvreduce. Discards inform the storage that a region is no longer
+        # used. Storage that supports discards advertise the protocol-specific
+        # way discards should be issued by the kernel (TRIM, UNMAP, or
+        # WRITE SAME with UNMAP bit set). Not all storage will support or
+        # benefit from discards, but SSDs and thinly provisioned LUNs
+        # generally do. If enabled, discards will only be issued if both the
+        # storage and kernel provide support.
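+        # 
+        # Example
+        # To check whether a device reports discard support, inspect
+        # the DISC-GRAN/DISC-MAX columns of 'lsblk --discard' (non-zero
+        # values indicate support).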
+        issue_discards = 1
 }
 
 # Configuration section allocation.
 # How LVM selects space and applies properties to LVs.
 allocation {
 
-	# Configuration option allocation/cling_tag_list.
-	# Advise LVM which PVs to use when searching for new space.
-	# When searching for free space to extend an LV, the 'cling' allocation
-	# policy will choose space on the same PVs as the last segment of the
-	# existing LV. If there is insufficient space and a list of tags is
-	# defined here, it will check whether any of them are attached to the
-	# PVs concerned and then seek to match those PV tags between existing
-	# extents and new extents.
-	# 
-	# Example
-	# Use the special tag "@*" as a wildcard to match any PV tag:
-	# cling_tag_list = [ "@*" ]
-	# LVs are mirrored between two sites within a single VG, and
-	# PVs are tagged with either @site1 or @site2 to indicate where
-	# they are situated:
-	# cling_tag_list = [ "@site1", "@site2" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option allocation/maximise_cling.
-	# Use a previous allocation algorithm.
-	# Changes made in version 2.02.85 extended the reach of the 'cling'
-	# policies to detect more situations where data can be grouped onto
-	# the same disks. This setting can be used to disable the changes
-	# and revert to the previous algorithm.
-	maximise_cling = 1
-
-	# Configuration option allocation/use_blkid_wiping.
-	# Use blkid to detect existing signatures on new PVs and LVs.
-	# The blkid library can detect more signatures than the native LVM
-	# detection code, but may take longer. LVM needs to be compiled with
-	# blkid wiping support for this setting to apply. LVM native detection
-	# code is currently able to recognize: MD device signatures,
-	# swap signature, and LUKS signatures. To see the list of signatures
-	# recognized by blkid, check the output of the 'blkid -k' command.
-	use_blkid_wiping = 1
-
-	# Configuration option allocation/wipe_signatures_when_zeroing_new_lvs.
-	# Look for and erase any signatures while zeroing a new LV.
-	# The --wipesignatures option overrides this setting.
-	# Zeroing is controlled by the -Z/--zero option, and if not specified,
-	# zeroing is used by default if possible. Zeroing simply overwrites the
-	# first 4KiB of a new LV with zeroes and does no signature detection or
-	# wiping. Signature wiping goes beyond zeroing and detects exact types
-	# and positions of signatures within the whole LV. It provides a
-	# cleaner LV after creation as all known signatures are wiped. The LV
-	# is not claimed incorrectly by other tools because of old signatures
-	# from previous use. The number of signatures that LVM can detect
-	# depends on the detection code that is selected (see
-	# use_blkid_wiping.) Wiping each detected signature must be confirmed.
-	# When this setting is disabled, signatures on new LVs are not detected
-	# or erased unless the --wipesignatures option is used directly.
-	wipe_signatures_when_zeroing_new_lvs = 1
-
-	# Configuration option allocation/mirror_logs_require_separate_pvs.
-	# Mirror logs and images will always use different PVs.
-	# The default setting changed in version 2.02.85.
-	mirror_logs_require_separate_pvs = 0
-
-	# Configuration option allocation/cache_pool_metadata_require_separate_pvs.
-	# Cache pool metadata and data will always use different PVs.
-	cache_pool_metadata_require_separate_pvs = 0
-
-	# Configuration option allocation/cache_mode.
-	# The default cache mode used for new cache.
-	# 
-	# Accepted values:
-	#   writethrough
-	#     Data blocks are immediately written from the cache to disk.
-	#   writeback
-	#     Data blocks are written from the cache back to disk after some
-	#     delay to improve performance.
-	# 
-	# This setting replaces allocation/cache_pool_cachemode.
-	# This configuration option has an automatic default value.
-	# cache_mode = "writethrough"
-
-	# Configuration option allocation/cache_policy.
-	# The default cache policy used for new cache volume.
-	# Since kernel 4.2 the default policy is smq (Stochastic multique),
-	# otherwise the older mq (Multiqueue) policy is selected.
-	# This configuration option does not have a default value defined.
-
-	# Configuration section allocation/cache_settings.
-	# Settings for the cache policy.
-	# See documentation for individual cache policies for more info.
-	# This configuration section has an automatic default value.
-	# cache_settings {
-	# }
-
-	# Configuration option allocation/cache_pool_chunk_size.
-	# The minimal chunk size in KiB for cache pool volumes.
-	# Using a chunk_size that is too large can result in wasteful use of
-	# the cache, where small reads and writes can cause large sections of
-	# an LV to be mapped into the cache. However, choosing a chunk_size
-	# that is too small can result in more overhead trying to manage the
-	# numerous chunks that become mapped into the cache. The former is
-	# more of a problem than the latter in most cases, so the default is
-	# on the smaller end of the spectrum. Supported values range from
-	# 32KiB to 1GiB in multiples of 32.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option allocation/thin_pool_metadata_require_separate_pvs.
-	# Thin pool metdata and data will always use different PVs.
-	thin_pool_metadata_require_separate_pvs = 0
-
-	# Configuration option allocation/thin_pool_zero.
-	# Thin pool data chunks are zeroed before they are first used.
-	# Zeroing with a larger thin pool chunk size reduces performance.
-	# This configuration option has an automatic default value.
-	# thin_pool_zero = 1
-
-	# Configuration option allocation/thin_pool_discards.
-	# The discards behaviour of thin pool volumes.
-	# 
-	# Accepted values:
-	#   ignore
-	#   nopassdown
-	#   passdown
-	# 
-	# This configuration option has an automatic default value.
-	# thin_pool_discards = "passdown"
-
-	# Configuration option allocation/thin_pool_chunk_size_policy.
-	# The chunk size calculation policy for thin pool volumes.
-	# 
-	# Accepted values:
-	#   generic
-	#     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
-	#     the chunk size based on estimation and device hints exposed in
-	#     sysfs - the minimum_io_size. The chunk size is always at least
-	#     64KiB.
-	#   performance
-	#     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
-	#     the chunk size for performance based on device hints exposed in
-	#     sysfs - the optimal_io_size. The chunk size is always at least
-	#     512KiB.
-	# 
-	# This configuration option has an automatic default value.
-	# thin_pool_chunk_size_policy = "generic"
-
-	# Configuration option allocation/thin_pool_chunk_size.
-	# The minimal chunk size in KiB for thin pool volumes.
-	# Larger chunk sizes may improve performance for plain thin volumes,
-	# however using them for snapshot volumes is less efficient, as it
-	# consumes more space and takes extra time for copying. When unset,
-	# lvm tries to estimate chunk size starting from 64KiB. Supported
-	# values are in the range 64KiB to 1GiB.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option allocation/physical_extent_size.
-	# Default physical extent size in KiB to use for new VGs.
-	# This configuration option has an automatic default value.
-	# physical_extent_size = 4096
+        # Configuration option allocation/cling_tag_list.
+        # Advise LVM which PVs to use when searching for new space.
+        # When searching for free space to extend an LV, the 'cling' allocation
+        # policy will choose space on the same PVs as the last segment of the
+        # existing LV. If there is insufficient space and a list of tags is
+        # defined here, it will check whether any of them are attached to the
+        # PVs concerned and then seek to match those PV tags between existing
+        # extents and new extents.
+        # 
+        # Example
+        # Use the special tag "@*" as a wildcard to match any PV tag:
+        # cling_tag_list = [ "@*" ]
+        # LVs are mirrored between two sites within a single VG, and
+        # PVs are tagged with either @site1 or @site2 to indicate where
+        # they are situated:
+        # cling_tag_list = [ "@site1", "@site2" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option allocation/maximise_cling.
+        # Use a previous allocation algorithm.
+        # Changes made in version 2.02.85 extended the reach of the 'cling'
+        # policies to detect more situations where data can be grouped onto
+        # the same disks. This setting can be used to disable the changes
+        # and revert to the previous algorithm.
+        maximise_cling = 1
+
+        # Configuration option allocation/use_blkid_wiping.
+        # Use blkid to detect existing signatures on new PVs and LVs.
+        # The blkid library can detect more signatures than the native LVM
+        # detection code, but may take longer. LVM needs to be compiled with
+        # blkid wiping support for this setting to apply. LVM native detection
+        # code is currently able to recognize: MD device signatures,
+        # swap signature, and LUKS signatures. To see the list of signatures
+        # recognized by blkid, check the output of the 'blkid -k' command.
+        use_blkid_wiping = 1
+
+        # Configuration option allocation/wipe_signatures_when_zeroing_new_lvs.
+        # Look for and erase any signatures while zeroing a new LV.
+        # The --wipesignatures option overrides this setting.
+        # Zeroing is controlled by the -Z/--zero option, and if not specified,
+        # zeroing is used by default if possible. Zeroing simply overwrites the
+        # first 4KiB of a new LV with zeroes and does no signature detection or
+        # wiping. Signature wiping goes beyond zeroing and detects exact types
+        # and positions of signatures within the whole LV. It provides a
+        # cleaner LV after creation as all known signatures are wiped. The LV
+        # is not claimed incorrectly by other tools because of old signatures
+        # from previous use. The number of signatures that LVM can detect
+        # depends on the detection code that is selected (see
+        # use_blkid_wiping.) Wiping each detected signature must be confirmed.
+        # When this setting is disabled, signatures on new LVs are not detected
+        # or erased unless the --wipesignatures option is used directly.
+        wipe_signatures_when_zeroing_new_lvs = 1
+
+        # Configuration option allocation/mirror_logs_require_separate_pvs.
+        # Mirror logs and images will always use different PVs.
+        # The default setting changed in version 2.02.85.
+        mirror_logs_require_separate_pvs = 0
+
+        # Configuration option allocation/cache_pool_metadata_require_separate_pvs.
+        # Cache pool metadata and data will always use different PVs.
+        cache_pool_metadata_require_separate_pvs = 0
+
+        # Configuration option allocation/cache_mode.
+        # The default cache mode used for new cache.
+        # 
+        # Accepted values:
+        #   writethrough
+        #     Data blocks are immediately written from the cache to disk.
+        #   writeback
+        #     Data blocks are written from the cache back to disk after some
+        #     delay to improve performance.
+        # 
+        # This setting replaces allocation/cache_pool_cachemode.
+        # This configuration option has an automatic default value.
+        # cache_mode = "writethrough"
+
+        # Configuration option allocation/cache_policy.
+        # The default cache policy used for new cache volume.
+        # Since kernel 4.2 the default policy is smq (Stochastic multiqueue),
+        # otherwise the older mq (Multiqueue) policy is selected.
+        # This configuration option does not have a default value defined.
+
+        # Configuration section allocation/cache_settings.
+        # Settings for the cache policy.
+        # See documentation for individual cache policies for more info.
+        # This configuration section has an automatic default value.
+        # cache_settings {
+        # }
+
+        # Configuration option allocation/cache_pool_chunk_size.
+        # The minimal chunk size in KiB for cache pool volumes.
+        # Using a chunk_size that is too large can result in wasteful use of
+        # the cache, where small reads and writes can cause large sections of
+        # an LV to be mapped into the cache. However, choosing a chunk_size
+        # that is too small can result in more overhead trying to manage the
+        # numerous chunks that become mapped into the cache. The former is
+        # more of a problem than the latter in most cases, so the default is
+        # on the smaller end of the spectrum. Supported values range from
+        # 32KiB to 1GiB in multiples of 32.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option allocation/thin_pool_metadata_require_separate_pvs.
+        # Thin pool metadata and data will always use different PVs.
+        thin_pool_metadata_require_separate_pvs = 0
+
+        # Configuration option allocation/thin_pool_zero.
+        # Thin pool data chunks are zeroed before they are first used.
+        # Zeroing with a larger thin pool chunk size reduces performance.
+        # This configuration option has an automatic default value.
+        # thin_pool_zero = 1
+
+        # Configuration option allocation/thin_pool_discards.
+        # The discards behaviour of thin pool volumes.
+        # 
+        # Accepted values:
+        #   ignore
+        #   nopassdown
+        #   passdown
+        # 
+        # This configuration option has an automatic default value.
+        # thin_pool_discards = "passdown"
+
+        # Configuration option allocation/thin_pool_chunk_size_policy.
+        # The chunk size calculation policy for thin pool volumes.
+        # 
+        # Accepted values:
+        #   generic
+        #     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
+        #     the chunk size based on estimation and device hints exposed in
+        #     sysfs - the minimum_io_size. The chunk size is always at least
+        #     64KiB.
+        #   performance
+        #     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
+        #     the chunk size for performance based on device hints exposed in
+        #     sysfs - the optimal_io_size. The chunk size is always at least
+        #     512KiB.
+        # 
+        # This configuration option has an automatic default value.
+        # thin_pool_chunk_size_policy = "generic"
+
+        # Configuration option allocation/thin_pool_chunk_size.
+        # The minimal chunk size in KiB for thin pool volumes.
+        # Larger chunk sizes may improve performance for plain thin volumes,
+        # however using them for snapshot volumes is less efficient, as it
+        # consumes more space and takes extra time for copying. When unset,
+        # lvm tries to estimate chunk size starting from 64KiB. Supported
+        # values are in the range 64KiB to 1GiB.
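+        # 
+        # Example
+        # An illustrative explicit setting (512KiB chunks):
+        # thin_pool_chunk_size = 512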
+        # This configuration option does not have a default value defined.
+
+        # Configuration option allocation/physical_extent_size.
+        # Default physical extent size in KiB to use for new VGs.
+        # This configuration option has an automatic default value.
+        # physical_extent_size = 4096
 }
 
 # Configuration section log.
 # How LVM log information is reported.
 log {
 
-	# Configuration option log/verbose.
-	# Controls the messages sent to stdout or stderr.
-	verbose = 0
-
-	# Configuration option log/silent.
-	# Suppress all non-essential messages from stdout.
-	# This has the same effect as -qq. When enabled, the following commands
-	# still produce output: dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck,
-	# pvdisplay, pvs, version, vgcfgrestore -l, vgdisplay, vgs.
-	# Non-essential messages are shifted from log level 4 to log level 5
-	# for syslog and lvm2_log_fn purposes.
-	# Any 'yes' or 'no' questions not overridden by other arguments are
-	# suppressed and default to 'no'.
-	silent = 0
-
-	# Configuration option log/syslog.
-	# Send log messages through syslog.
-	syslog = 1
-
-	# Configuration option log/file.
-	# Write error and debug log messages to a file specified here.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option log/overwrite.
-	# Overwrite the log file each time the program is run.
-	overwrite = 0
-
-	# Configuration option log/level.
-	# The level of log messages that are sent to the log file or syslog.
-	# There are 6 syslog-like log levels currently in use: 2 to 7 inclusive.
-	# 7 is the most verbose (LOG_DEBUG).
-	level = 0
-
-	# Configuration option log/indent.
-	# Indent messages according to their severity.
-	indent = 1
-
-	# Configuration option log/command_names.
-	# Display the command name on each line of output.
-	command_names = 0
-
-	# Configuration option log/prefix.
-	# A prefix to use before the log message text.
-	# (After the command name, if selected).
-	# Two spaces allows you to see/grep the severity of each message.
-	# To make the messages look similar to the original LVM tools use:
-	# indent = 0, command_names = 1, prefix = " -- "
-	prefix = "  "
-
-	# Configuration option log/activation.
-	# Log messages during activation.
-	# Don't use this in low memory situations (can deadlock).
-	activation = 0
-
-	# Configuration option log/debug_classes.
-	# Select log messages by class.
-	# Some debugging messages are assigned to a class and only appear in
-	# debug output if the class is listed here. Classes currently
-	# available: memory, devices, activation, allocation, lvmetad,
-	# metadata, cache, locking, lvmpolld. Use "all" to see everything.
-	debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld" ]
+        # Configuration option log/verbose.
+        # Controls the messages sent to stdout or stderr.
+        verbose = 0
+
+        # Configuration option log/silent.
+        # Suppress all non-essential messages from stdout.
+        # This has the same effect as -qq. When enabled, the following commands
+        # still produce output: dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck,
+        # pvdisplay, pvs, version, vgcfgrestore -l, vgdisplay, vgs.
+        # Non-essential messages are shifted from log level 4 to log level 5
+        # for syslog and lvm2_log_fn purposes.
+        # Any 'yes' or 'no' questions not overridden by other arguments are
+        # suppressed and default to 'no'.
+        silent = 0
+
+        # Configuration option log/syslog.
+        # Send log messages through syslog.
+        syslog = 1
+
+        # Configuration option log/file.
+        # Write error and debug log messages to a file specified here.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option log/overwrite.
+        # Overwrite the log file each time the program is run.
+        overwrite = 0
+
+        # Configuration option log/level.
+        # The level of log messages that are sent to the log file or syslog.
+        # There are 6 syslog-like log levels currently in use: 2 to 7 inclusive.
+        # 7 is the most verbose (LOG_DEBUG).
+        level = 0
+
+        # Configuration option log/indent.
+        # Indent messages according to their severity.
+        indent = 1
+
+        # Configuration option log/command_names.
+        # Display the command name on each line of output.
+        command_names = 0
+
+        # Configuration option log/prefix.
+        # A prefix to use before the log message text.
+        # (After the command name, if selected).
+        # Two spaces allows you to see/grep the severity of each message.
+        # To make the messages look similar to the original LVM tools use:
+        # indent = 0, command_names = 1, prefix = " -- "
+        prefix = "  "
+
+        # Configuration option log/activation.
+        # Log messages during activation.
+        # Don't use this in low memory situations (can deadlock).
+        activation = 0
+
+        # Configuration option log/debug_classes.
+        # Select log messages by class.
+        # Some debugging messages are assigned to a class and only appear in
+        # debug output if the class is listed here. Classes currently
+        # available: memory, devices, activation, allocation, lvmetad,
+        # metadata, cache, locking, lvmpolld. Use "all" to see everything.
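+        # 
+        # Example
+        # To restrict debug output to locking and metadata activity:
+        # debug_classes = [ "locking", "metadata" ]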
+        debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld" ]
 }
 
 # Configuration section backup.
@@ -535,957 +539,957 @@
 # stored in a human readable text format.
 backup {
 
-	# Configuration option backup/backup.
-	# Maintain a backup of the current metadata configuration.
-	# Think very hard before turning this off!
-	backup = 1
-
-	# Configuration option backup/backup_dir.
-	# Location of the metadata backup files.
-	# Remember to back up this directory regularly!
-	backup_dir = "/etc/lvm/backup"
-
-	# Configuration option backup/archive.
-	# Maintain an archive of old metadata configurations.
-	# Think very hard before turning this off.
-	archive = 1
-
-	# Configuration option backup/archive_dir.
-	# Location of the metdata archive files.
-	# Remember to back up this directory regularly!
-	archive_dir = "/etc/lvm/archive"
-
-	# Configuration option backup/retain_min.
-	# Minimum number of archives to keep.
-	retain_min = 10
-
-	# Configuration option backup/retain_days.
-	# Minimum number of days to keep archive files.
-	retain_days = 30
+        # Configuration option backup/backup.
+        # Maintain a backup of the current metadata configuration.
+        # Think very hard before turning this off!
+        backup = 1
+
+        # Configuration option backup/backup_dir.
+        # Location of the metadata backup files.
+        # Remember to back up this directory regularly!
+        backup_dir = "/etc/lvm/backup"
+
+        # Configuration option backup/archive.
+        # Maintain an archive of old metadata configurations.
+        # Think very hard before turning this off.
+        archive = 1
+
+        # Configuration option backup/archive_dir.
+        # Location of the metadata archive files.
+        # Remember to back up this directory regularly!
+        archive_dir = "/etc/lvm/archive"
+
+        # Configuration option backup/retain_min.
+        # Minimum number of archives to keep.
+        retain_min = 10
+
+        # Configuration option backup/retain_days.
+        # Minimum number of days to keep archive files.
+        retain_days = 30
 }
 
 # Configuration section shell.
 # Settings for running LVM in shell (readline) mode.
 shell {
 
-	# Configuration option shell/history_size.
-	# Number of lines of history to store in ~/.lvm_history.
-	history_size = 100
+        # Configuration option shell/history_size.
+        # Number of lines of history to store in ~/.lvm_history.
+        history_size = 100
 }
 
 # Configuration section global.
 # Miscellaneous global LVM settings.
 global {
 
-	# Configuration option global/umask.
-	# The file creation mask for any files and directories created.
-	# Interpreted as octal if the first digit is zero.
-	umask = 077
-
-	# Configuration option global/test.
-	# No on-disk metadata changes will be made in test mode.
-	# Equivalent to having the -t option on every command.
-	test = 0
-
-	# Configuration option global/units.
-	# Default value for --units argument.
-	units = "h"
-
-	# Configuration option global/si_unit_consistency.
-	# Distinguish between powers of 1024 and 1000 bytes.
-	# The LVM commands distinguish between powers of 1024 bytes,
-	# e.g. KiB, MiB, GiB, and powers of 1000 bytes, e.g. KB, MB, GB.
-	# If scripts depend on the old behaviour, disable this setting
-	# temporarily until they are updated.
-	si_unit_consistency = 1
-
-	# Configuration option global/suffix.
-	# Display unit suffix for sizes.
-	# This setting has no effect if the units are in human-readable form
-	# (global/units = "h") in which case the suffix is always displayed.
-	suffix = 1
-
-	# Configuration option global/activation.
-	# Enable/disable communication with the kernel device-mapper.
-	# Disable to use the tools to manipulate LVM metadata without
-	# activating any logical volumes. If the device-mapper driver
-	# is not present in the kernel, disabling this should suppress
-	# the error messages.
-	activation = 1
-
-	# Configuration option global/fallback_to_lvm1.
-	# Try running LVM1 tools if LVM cannot communicate with DM.
-	# This option only applies to 2.4 kernels and is provided to help
-	# switch between device-mapper kernels and LVM1 kernels. The LVM1
-	# tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1.
-	# They will stop working once the lvm2 on-disk metadata format is used.
-	# This configuration option has an automatic default value.
-	# fallback_to_lvm1 = 0
-
-	# Configuration option global/format.
-	# The default metadata format that commands should use.
-	# The -M 1|2 option overrides this setting.
-	# 
-	# Accepted values:
-	#   lvm1
-	#   lvm2
-	# 
-	# This configuration option has an automatic default value.
-	# format = "lvm2"
-
-	# Configuration option global/format_libraries.
-	# Shared libraries that process different metadata formats.
-	# If support for LVM1 metadata was compiled as a shared library use
-	# format_libraries = "liblvm2format1.so"
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/segment_libraries.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/proc.
-	# Location of proc filesystem.
-	# This configuration option is advanced.
-	proc = "/proc"
-
-	# Configuration option global/etc.
-	# Location of /etc system configuration directory.
-	etc = "/etc"
-
-	# Configuration option global/locking_type.
-	# Type of locking to use.
-	# 
-	# Accepted values:
-	#   0
-	#     Turns off locking. Warning: this risks metadata corruption if
-	#     commands run concurrently.
-	#   1
-	#     LVM uses local file-based locking, the standard mode.
-	#   2
-	#     LVM uses the external shared library locking_library.
-	#   3
-	#     LVM uses built-in clustered locking with clvmd.
-	#     This is incompatible with lvmetad. If use_lvmetad is enabled,
-	#     LVM prints a warning and disables lvmetad use.
-	#   4
-	#     LVM uses read-only locking which forbids any operations that
-	#     might change metadata.
-	#   5
-	#     Offers dummy locking for tools that do not need any locks.
-	#     You should not need to set this directly; the tools will select
-	#     when to use it instead of the configured locking_type.
-	#     Do not use lvmetad or the kernel device-mapper driver with this
-	#     locking type. It is used by the --readonly option that offers
-	#     read-only access to Volume Group metadata that cannot be locked
-	#     safely because it belongs to an inaccessible domain and might be
-	#     in use, for example a virtual machine image or a disk that is
-	#     shared by a clustered machine.
-	# 
-	locking_type = 1
-
-	# Configuration option global/wait_for_locks.
-	# When disabled, fail if a lock request would block.
-	wait_for_locks = 1
-
-	# Configuration option global/fallback_to_clustered_locking.
-	# Attempt to use built-in cluster locking if locking_type 2 fails.
-	# If using external locking (type 2) and initialisation fails, with
-	# this enabled, an attempt will be made to use the built-in clustered
-	# locking. Disable this if using a customised locking_library.
-	fallback_to_clustered_locking = 1
-
-	# Configuration option global/fallback_to_local_locking.
-	# Use locking_type 1 (local) if locking_type 2 or 3 fail.
-	# If an attempt to initialise type 2 or type 3 locking failed, perhaps
-	# because cluster components such as clvmd are not running, with this
-	# enabled, an attempt will be made to use local file-based locking
-	# (type 1). If this succeeds, only commands against local VGs will
-	# proceed. VGs marked as clustered will be ignored.
-	fallback_to_local_locking = 1
-
-	# Configuration option global/locking_dir.
-	# Directory to use for LVM command file locks.
-	# Local non-LV directory that holds file-based locks while commands are
-	# in progress. A directory like /tmp that may get wiped on reboot is OK.
-	locking_dir = "/run/lock/lvm"
-
-	# Configuration option global/prioritise_write_locks.
-	# Allow quicker VG write access during high volume read access.
-	# When there are competing read-only and read-write access requests for
-	# a volume group's metadata, instead of always granting the read-only
-	# requests immediately, delay them to allow the read-write requests to
-	# be serviced. Without this setting, write access may be stalled by a
-	# high volume of read-only requests. This option only affects
-	# locking_type 1 viz. local file-based locking.
-	prioritise_write_locks = 1
-
-	# Configuration option global/library_dir.
-	# Search this directory first for shared libraries.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/locking_library.
-	# The external locking library to use for locking_type 2.
-	# This configuration option has an automatic default value.
-	# locking_library = "liblvm2clusterlock.so"
-
-	# Configuration option global/abort_on_internal_errors.
-	# Abort a command that encounters an internal error.
-	# Treat any internal errors as fatal errors, aborting the process that
-	# encountered the internal error. Please only enable for debugging.
-	abort_on_internal_errors = 0
-
-	# Configuration option global/detect_internal_vg_cache_corruption.
-	# Internal verification of VG structures.
-	# Check if CRC matches when a parsed VG is used multiple times. This
-	# is useful to catch unexpected changes to cached VG structures.
-	# Please only enable for debugging.
-	detect_internal_vg_cache_corruption = 0
-
-	# Configuration option global/metadata_read_only.
-	# No operations that change on-disk metadata are permitted.
-	# Additionally, read-only commands that encounter metadata in need of
-	# repair will still be allowed to proceed exactly as if the repair had
-	# been performed (except for the unchanged vg_seqno). Inappropriate
-	# use could mess up your system, so seek advice first!
-	metadata_read_only = 0
-
-	# Configuration option global/mirror_segtype_default.
-	# The segment type used by the short mirroring option -m.
-	# The --type mirror|raid1 option overrides this setting.
-	# 
-	# Accepted values:
-	#   mirror
-	#     The original RAID1 implementation from LVM/DM. It is
-	#     characterized by a flexible log solution (core, disk, mirrored),
-	#     and by the necessity to block I/O while handling a failure.
-	#     There is an inherent race in the dmeventd failure handling logic
-	#     with snapshots of devices using this type of RAID1 that in the
-	#     worst case could cause a deadlock. (Also see
-	#     devices/ignore_lvm_mirrors.)
-	#   raid1
-	#     This is a newer RAID1 implementation using the MD RAID1
-	#     personality through device-mapper. It is characterized by a
-	#     lack of log options. (A log is always allocated for every
-	#     device and they are placed on the same device as the image,
-	#     so no separate devices are required.) This mirror
-	#     implementation does not require I/O to be blocked while
-	#     handling a failure. This mirror implementation is not
-	#     cluster-aware and cannot be used in a shared (active/active)
-	#     fashion in a cluster.
-	# 
-	mirror_segtype_default = "raid1"
-
-	# Configuration option global/raid10_segtype_default.
-	# The segment type used by the -i -m combination.
-	# The --type raid10|mirror option overrides this setting.
-	# The --stripes/-i and --mirrors/-m options can both be specified
-	# during the creation of a logical volume to use both striping and
-	# mirroring for the LV. There are two different implementations.
-	# 
-	# Accepted values:
-	#   raid10
-	#     LVM uses MD's RAID10 personality through DM. This is the
-	#     preferred option.
-	#   mirror
-	#     LVM layers the 'mirror' and 'stripe' segment types. The layering
-	#     is done by creating a mirror LV on top of striped sub-LVs,
-	#     effectively creating a RAID 0+1 array. The layering is suboptimal
-	#     in terms of providing redundancy and performance.
-	# 
-	raid10_segtype_default = "raid10"
-
-	# Configuration option global/sparse_segtype_default.
-	# The segment type used by the -V -L combination.
-	# The --type snapshot|thin option overrides this setting.
-	# The combination of -V and -L options creates a sparse LV. There are
-	# two different implementations.
-	# 
-	# Accepted values:
-	#   snapshot
-	#     The original snapshot implementation from LVM/DM. It uses an old
-	#     snapshot that mixes data and metadata within a single COW
-	#     storage volume and performs poorly when the size of stored data
-	#     passes hundreds of MB.
-	#   thin
-	#     A newer implementation that uses thin provisioning. It has a
-	#     bigger minimal chunk size (64KiB) and uses a separate volume for
-	#     metadata. It has better performance, especially when more data
-	#     is used. It also supports full snapshots.
-	# 
-	sparse_segtype_default = "thin"
-
-	# Configuration option global/lvdisplay_shows_full_device_path.
-	# Enable this to reinstate the previous lvdisplay name format.
-	# The default format for displaying LV names in lvdisplay was changed
-	# in version 2.02.89 to show the LV name and path separately.
-	# Previously this was always shown as /dev/vgname/lvname even when that
-	# was never a valid path in the /dev filesystem.
-	# This configuration option has an automatic default value.
-	# lvdisplay_shows_full_device_path = 0
-
-	# Configuration option global/use_lvmetad.
-	# Use lvmetad to cache metadata and reduce disk scanning.
-	# When enabled (and running), lvmetad provides LVM commands with VG
-	# metadata and PV state. LVM commands then avoid reading this
-	# information from disks which can be slow. When disabled (or not
-	# running), LVM commands fall back to scanning disks to obtain VG
-	# metadata. lvmetad is kept updated via udev rules which must be set
-	# up for LVM to work correctly. (The udev rules should be installed
-	# by default.) Without a proper udev setup, changes in the system's
-	# block device configuration will be unknown to LVM, and ignored
-	# until a manual 'pvscan --cache' is run. If lvmetad was running
-	# while use_lvmetad was disabled, it must be stopped, use_lvmetad
-	# enabled, and then started. When using lvmetad, LV activation is
-	# switched to an automatic, event-based mode. In this mode, LVs are
-	# activated based on incoming udev events that inform lvmetad when
-	# PVs appear on the system. When a VG is complete (all PVs present),
-	# it is auto-activated. The auto_activation_volume_list setting
-	# controls which LVs are auto-activated (all by default.)
-	# When lvmetad is updated (automatically by udev events, or directly
-	# by pvscan --cache), devices/filter is ignored and all devices are
-	# scanned by default. lvmetad always keeps unfiltered information
-	# which is provided to LVM commands. Each LVM command then filters
-	# based on devices/filter. This does not apply to other, non-regexp,
-	# filtering settings: component filters such as multipath and MD
-	# are checked during pvscan --cache. To filter a device and prevent
-	# scanning from the LVM system entirely, including lvmetad, use
-	# devices/global_filter.
-	use_lvmetad = 1
-
-	# Configuration option global/use_lvmlockd.
-	# Use lvmlockd for locking among hosts using LVM on shared storage.
-	# See lvmlockd(8) for more information.
-	use_lvmlockd = 0
-
-	# Configuration option global/lvmlockd_lock_retries.
-	# Retry lvmlockd lock requests this many times.
-	# This configuration option has an automatic default value.
-	# lvmlockd_lock_retries = 3
-
-	# Configuration option global/sanlock_lv_extend.
-	# Size in MiB to extend the internal LV holding sanlock locks.
-	# The internal LV holds locks for each LV in the VG, and after enough
-	# LVs have been created, the internal LV needs to be extended. lvcreate
-	# will automatically extend the internal LV when needed by the amount
-	# specified here. Setting this to 0 disables the automatic extension
-	# and can cause lvcreate to fail.
-	# This configuration option has an automatic default value.
-	# sanlock_lv_extend = 256
-
-	# Configuration option global/thin_check_executable.
-	# The full path to the thin_check command.
-	# LVM uses this command to check that a thin metadata device is in a
-	# usable state. When a thin pool is activated and after it is
-	# deactivated, this command is run. Activation will only proceed if
-	# the command has an exit status of 0. Set to "" to skip this check.
-	# (Not recommended.) Also see thin_check_options.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# thin_check_executable = "/usr/sbin/thin_check"
-
-	# Configuration option global/thin_dump_executable.
-	# The full path to the thin_dump command.
-	# LVM uses this command to dump thin pool metadata.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# thin_dump_executable = "/usr/sbin/thin_dump"
-
-	# Configuration option global/thin_repair_executable.
-	# The full path to the thin_repair command.
-	# LVM uses this command to repair a thin metadata device if it is in
-	# an unusable state. Also see thin_repair_options.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# thin_repair_executable = "/usr/sbin/thin_repair"
-
-	# Configuration option global/thin_check_options.
-	# List of options passed to the thin_check command.
-	# With thin_check version 2.1 or newer you can add the option
-	# --ignore-non-fatal-errors to let it pass through ignorable errors
-	# and fix them later. With thin_check version 3.2 or newer you should
-	# include the option --clear-needs-check-flag.
-	# This configuration option has an automatic default value.
-	# thin_check_options = [ "-q", "--clear-needs-check-flag" ]
-
-	# Configuration option global/thin_repair_options.
-	# List of options passed to the thin_repair command.
-	# This configuration option has an automatic default value.
-	# thin_repair_options = [ "" ]
-
-	# Configuration option global/thin_disabled_features.
-	# Features to not use in the thin driver.
-	# This can be helpful for testing, or to avoid using a feature that is
-	# causing problems. Features include: block_size, discards,
-	# discards_non_power_2, external_origin, metadata_resize,
-	# external_origin_extend, error_if_no_space.
-	# 
-	# Example
-	# thin_disabled_features = [ "discards", "block_size" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/cache_disabled_features.
-	# Features to not use in the cache driver.
-	# This can be helpful for testing, or to avoid using a feature that is
-	# causing problems. Features include: policy_mq, policy_smq.
-	# 
-	# Example
-	# cache_disabled_features = [ "policy_smq" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/cache_check_executable.
-	# The full path to the cache_check command.
-	# LVM uses this command to check that a cache metadata device is in a
-	# usable state. When a cached LV is activated and after it is
-	# deactivated, this command is run. Activation will only proceed if the
-	# command has an exit status of 0. Set to "" to skip this check.
-	# (Not recommended.) Also see cache_check_options.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# cache_check_executable = "/usr/sbin/cache_check"
-
-	# Configuration option global/cache_dump_executable.
-	# The full path to the cache_dump command.
-	# LVM uses this command to dump cache pool metadata.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# cache_dump_executable = "/usr/sbin/cache_dump"
-
-	# Configuration option global/cache_repair_executable.
-	# The full path to the cache_repair command.
-	# LVM uses this command to repair a cache metadata device if it is in
-	# an unusable state. Also see cache_repair_options.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# cache_repair_executable = "/usr/sbin/cache_repair"
-
-	# Configuration option global/cache_check_options.
-	# List of options passed to the cache_check command.
-	# With cache_check version 5.0 or newer you should include the option
-	# --clear-needs-check-flag.
-	# This configuration option has an automatic default value.
-	# cache_check_options = [ "-q", "--clear-needs-check-flag" ]
-
-	# Configuration option global/cache_repair_options.
-	# List of options passed to the cache_repair command.
-	# This configuration option has an automatic default value.
-	# cache_repair_options = [ "" ]
-
-	# Configuration option global/system_id_source.
-	# The method LVM uses to set the local system ID.
-	# Volume Groups can also be given a system ID (by vgcreate, vgchange,
-	# or vgimport.) A VG on shared storage devices is accessible only to
-	# the host with a matching system ID. See 'man lvmsystemid' for
-	# information on limitations and correct usage.
-	# 
-	# Accepted values:
-	#   none
-	#     The host has no system ID.
-	#   lvmlocal
-	#     Obtain the system ID from the system_id setting in the 'local'
-	#     section of an lvm configuration file, e.g. lvmlocal.conf.
-	#   uname
-	#     Set the system ID from the hostname (uname) of the system.
-	#     System IDs beginning localhost are not permitted.
-	#   machineid
-	#     Use the contents of the machine-id file to set the system ID.
-	#     Some systems create this file at installation time.
-	#     See 'man machine-id' and global/etc.
-	#   file
-	#     Use the contents of another file (system_id_file) to set the
-	#     system ID.
-	# 
-	system_id_source = "none"
-
-	# Configuration option global/system_id_file.
-	# The full path to the file containing a system ID.
-	# This is used when system_id_source is set to 'file'.
-	# Comments starting with the character # are ignored.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/use_lvmpolld.
-	# Use lvmpolld to supervise long running LVM commands.
-	# When enabled, control of long running LVM commands is transferred
-	# from the original LVM command to the lvmpolld daemon. This allows
-	# the operation to continue independent of the original LVM command.
-	# After lvmpolld takes over, the LVM command displays the progress
-	# of the ongoing operation. lvmpolld itself runs LVM commands to
-	# manage the progress of ongoing operations. lvmpolld can be used as
-	# a native systemd service, which allows it to be started on demand,
-	# and to use its own control group. When this option is disabled, LVM
-	# commands will supervise long running operations by forking themselves.
-	use_lvmpolld = 1
+        # Configuration option global/umask.
+        # The file creation mask for any files and directories created.
+        # Interpreted as octal if the first digit is zero.
+        umask = 077
+
+        # Configuration option global/test.
+        # No on-disk metadata changes will be made in test mode.
+        # Equivalent to having the -t option on every command.
+        test = 0
+
+        # Configuration option global/units.
+        # Default value for --units argument.
+        units = "h"
+
+        # Configuration option global/si_unit_consistency.
+        # Distinguish between powers of 1024 and 1000 bytes.
+        # The LVM commands distinguish between powers of 1024 bytes,
+        # e.g. KiB, MiB, GiB, and powers of 1000 bytes, e.g. KB, MB, GB.
+        # If scripts depend on the old behaviour, disable this setting
+        # temporarily until they are updated.
+        si_unit_consistency = 1
+
+        # Configuration option global/suffix.
+        # Display unit suffix for sizes.
+        # This setting has no effect if the units are in human-readable form
+        # (global/units = "h") in which case the suffix is always displayed.
+        suffix = 1
+
+        # Configuration option global/activation.
+        # Enable/disable communication with the kernel device-mapper.
+        # Disable to use the tools to manipulate LVM metadata without
+        # activating any logical volumes. If the device-mapper driver
+        # is not present in the kernel, disabling this should suppress
+        # the error messages.
+        activation = 1
+
+        # Configuration option global/fallback_to_lvm1.
+        # Try running LVM1 tools if LVM cannot communicate with DM.
+        # This option only applies to 2.4 kernels and is provided to help
+        # switch between device-mapper kernels and LVM1 kernels. The LVM1
+        # tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1.
+        # They will stop working once the lvm2 on-disk metadata format is used.
+        # This configuration option has an automatic default value.
+        # fallback_to_lvm1 = 0
+
+        # Configuration option global/format.
+        # The default metadata format that commands should use.
+        # The -M 1|2 option overrides this setting.
+        # 
+        # Accepted values:
+        #   lvm1
+        #   lvm2
+        # 
+        # This configuration option has an automatic default value.
+        # format = "lvm2"
+
+        # Configuration option global/format_libraries.
+        # Shared libraries that process different metadata formats.
+        # If support for LVM1 metadata was compiled as a shared library use
+        # format_libraries = "liblvm2format1.so"
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/segment_libraries.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/proc.
+        # Location of proc filesystem.
+        # This configuration option is advanced.
+        proc = "/proc"
+
+        # Configuration option global/etc.
+        # Location of /etc system configuration directory.
+        etc = "/etc"
+
+        # Configuration option global/locking_type.
+        # Type of locking to use.
+        # 
+        # Accepted values:
+        #   0
+        #     Turns off locking. Warning: this risks metadata corruption if
+        #     commands run concurrently.
+        #   1
+        #     LVM uses local file-based locking, the standard mode.
+        #   2
+        #     LVM uses the external shared library locking_library.
+        #   3
+        #     LVM uses built-in clustered locking with clvmd.
+        #     This is incompatible with lvmetad. If use_lvmetad is enabled,
+        #     LVM prints a warning and disables lvmetad use.
+        #   4
+        #     LVM uses read-only locking which forbids any operations that
+        #     might change metadata.
+        #   5
+        #     Offers dummy locking for tools that do not need any locks.
+        #     You should not need to set this directly; the tools will select
+        #     when to use it instead of the configured locking_type.
+        #     Do not use lvmetad or the kernel device-mapper driver with this
+        #     locking type. It is used by the --readonly option that offers
+        #     read-only access to Volume Group metadata that cannot be locked
+        #     safely because it belongs to an inaccessible domain and might be
+        #     in use, for example a virtual machine image or a disk that is
+        #     shared by a clustered machine.
+        # 
+        locking_type = 1
+
+        # Configuration option global/wait_for_locks.
+        # When disabled, fail if a lock request would block.
+        wait_for_locks = 1
+
+        # Configuration option global/fallback_to_clustered_locking.
+        # Attempt to use built-in cluster locking if locking_type 2 fails.
+        # If using external locking (type 2) and initialisation fails, with
+        # this enabled, an attempt will be made to use the built-in clustered
+        # locking. Disable this if using a customised locking_library.
+        fallback_to_clustered_locking = 1
+
+        # Configuration option global/fallback_to_local_locking.
+        # Use locking_type 1 (local) if locking_type 2 or 3 fail.
+        # If an attempt to initialise type 2 or type 3 locking failed, perhaps
+        # because cluster components such as clvmd are not running, with this
+        # enabled, an attempt will be made to use local file-based locking
+        # (type 1). If this succeeds, only commands against local VGs will
+        # proceed. VGs marked as clustered will be ignored.
+        fallback_to_local_locking = 1
+
+        # Configuration option global/locking_dir.
+        # Directory to use for LVM command file locks.
+        # Local non-LV directory that holds file-based locks while commands are
+        # in progress. A directory like /tmp that may get wiped on reboot is OK.
+        locking_dir = "/run/lock/lvm"
+
+        # Configuration option global/prioritise_write_locks.
+        # Allow quicker VG write access during high volume read access.
+        # When there are competing read-only and read-write access requests for
+        # a volume group's metadata, instead of always granting the read-only
+        # requests immediately, delay them to allow the read-write requests to
+        # be serviced. Without this setting, write access may be stalled by a
+        # high volume of read-only requests. This option only affects
+        # locking_type 1 viz. local file-based locking.
+        prioritise_write_locks = 1
+
+        # Configuration option global/library_dir.
+        # Search this directory first for shared libraries.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/locking_library.
+        # The external locking library to use for locking_type 2.
+        # This configuration option has an automatic default value.
+        # locking_library = "liblvm2clusterlock.so"
+
+        # Configuration option global/abort_on_internal_errors.
+        # Abort a command that encounters an internal error.
+        # Treat any internal errors as fatal errors, aborting the process that
+        # encountered the internal error. Please only enable for debugging.
+        abort_on_internal_errors = 0
+
+        # Configuration option global/detect_internal_vg_cache_corruption.
+        # Internal verification of VG structures.
+        # Check if CRC matches when a parsed VG is used multiple times. This
+        # is useful to catch unexpected changes to cached VG structures.
+        # Please only enable for debugging.
+        detect_internal_vg_cache_corruption = 0
+
+        # Configuration option global/metadata_read_only.
+        # No operations that change on-disk metadata are permitted.
+        # Additionally, read-only commands that encounter metadata in need of
+        # repair will still be allowed to proceed exactly as if the repair had
+        # been performed (except for the unchanged vg_seqno). Inappropriate
+        # use could mess up your system, so seek advice first!
+        metadata_read_only = 0
+
+        # Configuration option global/mirror_segtype_default.
+        # The segment type used by the short mirroring option -m.
+        # The --type mirror|raid1 option overrides this setting.
+        # 
+        # Accepted values:
+        #   mirror
+        #     The original RAID1 implementation from LVM/DM. It is
+        #     characterized by a flexible log solution (core, disk, mirrored),
+        #     and by the necessity to block I/O while handling a failure.
+        #     There is an inherent race in the dmeventd failure handling logic
+        #     with snapshots of devices using this type of RAID1 that in the
+        #     worst case could cause a deadlock. (Also see
+        #     devices/ignore_lvm_mirrors.)
+        #   raid1
+        #     This is a newer RAID1 implementation using the MD RAID1
+        #     personality through device-mapper. It is characterized by a
+        #     lack of log options. (A log is always allocated for every
+        #     device and they are placed on the same device as the image,
+        #     so no separate devices are required.) This mirror
+        #     implementation does not require I/O to be blocked while
+        #     handling a failure. This mirror implementation is not
+        #     cluster-aware and cannot be used in a shared (active/active)
+        #     fashion in a cluster.
+        # 
+        mirror_segtype_default = "raid1"
+
+        # Configuration option global/raid10_segtype_default.
+        # The segment type used by the -i -m combination.
+        # The --type raid10|mirror option overrides this setting.
+        # The --stripes/-i and --mirrors/-m options can both be specified
+        # during the creation of a logical volume to use both striping and
+        # mirroring for the LV. There are two different implementations.
+        # 
+        # Accepted values:
+        #   raid10
+        #     LVM uses MD's RAID10 personality through DM. This is the
+        #     preferred option.
+        #   mirror
+        #     LVM layers the 'mirror' and 'stripe' segment types. The layering
+        #     is done by creating a mirror LV on top of striped sub-LVs,
+        #     effectively creating a RAID 0+1 array. The layering is suboptimal
+        #     in terms of providing redundancy and performance.
+        # 
+        raid10_segtype_default = "raid10"
+
+        # Configuration option global/sparse_segtype_default.
+        # The segment type used by the -V -L combination.
+        # The --type snapshot|thin option overrides this setting.
+        # The combination of -V and -L options creates a sparse LV. There are
+        # two different implementations.
+        # 
+        # Accepted values:
+        #   snapshot
+        #     The original snapshot implementation from LVM/DM. It uses an old
+        #     snapshot that mixes data and metadata within a single COW
+        #     storage volume and performs poorly when the size of stored data
+        #     passes hundreds of MB.
+        #   thin
+        #     A newer implementation that uses thin provisioning. It has a
+        #     bigger minimal chunk size (64KiB) and uses a separate volume for
+        #     metadata. It has better performance, especially when more data
+        #     is used. It also supports full snapshots.
+        # 
+        sparse_segtype_default = "thin"
+
+        # Configuration option global/lvdisplay_shows_full_device_path.
+        # Enable this to reinstate the previous lvdisplay name format.
+        # The default format for displaying LV names in lvdisplay was changed
+        # in version 2.02.89 to show the LV name and path separately.
+        # Previously this was always shown as /dev/vgname/lvname even when that
+        # was never a valid path in the /dev filesystem.
+        # This configuration option has an automatic default value.
+        # lvdisplay_shows_full_device_path = 0
+
+        # Configuration option global/use_lvmetad.
+        # Use lvmetad to cache metadata and reduce disk scanning.
+        # When enabled (and running), lvmetad provides LVM commands with VG
+        # metadata and PV state. LVM commands then avoid reading this
+        # information from disks which can be slow. When disabled (or not
+        # running), LVM commands fall back to scanning disks to obtain VG
+        # metadata. lvmetad is kept updated via udev rules which must be set
+        # up for LVM to work correctly. (The udev rules should be installed
+        # by default.) Without a proper udev setup, changes in the system's
+        # block device configuration will be unknown to LVM, and ignored
+        # until a manual 'pvscan --cache' is run. If lvmetad was running
+        # while use_lvmetad was disabled, it must be stopped, use_lvmetad
+        # enabled, and then started. When using lvmetad, LV activation is
+        # switched to an automatic, event-based mode. In this mode, LVs are
+        # activated based on incoming udev events that inform lvmetad when
+        # PVs appear on the system. When a VG is complete (all PVs present),
+        # it is auto-activated. The auto_activation_volume_list setting
+        # controls which LVs are auto-activated (all by default.)
+        # When lvmetad is updated (automatically by udev events, or directly
+        # by pvscan --cache), devices/filter is ignored and all devices are
+        # scanned by default. lvmetad always keeps unfiltered information
+        # which is provided to LVM commands. Each LVM command then filters
+        # based on devices/filter. This does not apply to other, non-regexp,
+        # filtering settings: component filters such as multipath and MD
+        # are checked during pvscan --cache. To filter a device and prevent
+        # scanning from the LVM system entirely, including lvmetad, use
+        # devices/global_filter.
+        use_lvmetad = 1
+
+        # Configuration option global/use_lvmlockd.
+        # Use lvmlockd for locking among hosts using LVM on shared storage.
+        # See lvmlockd(8) for more information.
+        use_lvmlockd = 0
+
+        # Configuration option global/lvmlockd_lock_retries.
+        # Retry lvmlockd lock requests this many times.
+        # This configuration option has an automatic default value.
+        # lvmlockd_lock_retries = 3
+
+        # Configuration option global/sanlock_lv_extend.
+        # Size in MiB to extend the internal LV holding sanlock locks.
+        # The internal LV holds locks for each LV in the VG, and after enough
+        # LVs have been created, the internal LV needs to be extended. lvcreate
+        # will automatically extend the internal LV when needed by the amount
+        # specified here. Setting this to 0 disables the automatic extension
+        # and can cause lvcreate to fail.
+        # This configuration option has an automatic default value.
+        # sanlock_lv_extend = 256
+
+        # Configuration option global/thin_check_executable.
+        # The full path to the thin_check command.
+        # LVM uses this command to check that a thin metadata device is in a
+        # usable state. When a thin pool is activated and after it is
+        # deactivated, this command is run. Activation will only proceed if
+        # the command has an exit status of 0. Set to "" to skip this check.
+        # (Not recommended.) Also see thin_check_options.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # thin_check_executable = "/usr/sbin/thin_check"
+
+        # Configuration option global/thin_dump_executable.
+        # The full path to the thin_dump command.
+        # LVM uses this command to dump thin pool metadata.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # thin_dump_executable = "/usr/sbin/thin_dump"
+
+        # Configuration option global/thin_repair_executable.
+        # The full path to the thin_repair command.
+        # LVM uses this command to repair a thin metadata device if it is in
+        # an unusable state. Also see thin_repair_options.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # thin_repair_executable = "/usr/sbin/thin_repair"
+
+        # Configuration option global/thin_check_options.
+        # List of options passed to the thin_check command.
+        # With thin_check version 2.1 or newer you can add the option
+        # --ignore-non-fatal-errors to let it pass through ignorable errors
+        # and fix them later. With thin_check version 3.2 or newer you should
+        # include the option --clear-needs-check-flag.
+        # This configuration option has an automatic default value.
+        # thin_check_options = [ "-q", "--clear-needs-check-flag" ]
+
+        # Configuration option global/thin_repair_options.
+        # List of options passed to the thin_repair command.
+        # This configuration option has an automatic default value.
+        # thin_repair_options = [ "" ]
+
+        # Configuration option global/thin_disabled_features.
+        # Features to not use in the thin driver.
+        # This can be helpful for testing, or to avoid using a feature that is
+        # causing problems. Features include: block_size, discards,
+        # discards_non_power_2, external_origin, metadata_resize,
+        # external_origin_extend, error_if_no_space.
+        # 
+        # Example
+        # thin_disabled_features = [ "discards", "block_size" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/cache_disabled_features.
+        # Features to not use in the cache driver.
+        # This can be helpful for testing, or to avoid using a feature that is
+        # causing problems. Features include: policy_mq, policy_smq.
+        # 
+        # Example
+        # cache_disabled_features = [ "policy_smq" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/cache_check_executable.
+        # The full path to the cache_check command.
+        # LVM uses this command to check that a cache metadata device is in a
+        # usable state. When a cached LV is activated and after it is
+        # deactivated, this command is run. Activation will only proceed if the
+        # command has an exit status of 0. Set to "" to skip this check.
+        # (Not recommended.) Also see cache_check_options.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # cache_check_executable = "/usr/sbin/cache_check"
+
+        # Configuration option global/cache_dump_executable.
+        # The full path to the cache_dump command.
+        # LVM uses this command to dump cache pool metadata.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # cache_dump_executable = "/usr/sbin/cache_dump"
+
+        # Configuration option global/cache_repair_executable.
+        # The full path to the cache_repair command.
+        # LVM uses this command to repair a cache metadata device if it is in
+        # an unusable state. Also see cache_repair_options.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # cache_repair_executable = "/usr/sbin/cache_repair"
+
+        # Configuration option global/cache_check_options.
+        # List of options passed to the cache_check command.
+        # With cache_check version 5.0 or newer you should include the option
+        # --clear-needs-check-flag.
+        # This configuration option has an automatic default value.
+        # cache_check_options = [ "-q", "--clear-needs-check-flag" ]
+
+        # Configuration option global/cache_repair_options.
+        # List of options passed to the cache_repair command.
+        # This configuration option has an automatic default value.
+        # cache_repair_options = [ "" ]
+
+        # Configuration option global/system_id_source.
+        # The method LVM uses to set the local system ID.
+        # Volume Groups can also be given a system ID (by vgcreate, vgchange,
+        # or vgimport.) A VG on shared storage devices is accessible only to
+        # the host with a matching system ID. See 'man lvmsystemid' for
+        # information on limitations and correct usage.
+        # 
+        # Accepted values:
+        #   none
+        #     The host has no system ID.
+        #   lvmlocal
+        #     Obtain the system ID from the system_id setting in the 'local'
+        #     section of an lvm configuration file, e.g. lvmlocal.conf.
+        #   uname
+        #     Set the system ID from the hostname (uname) of the system.
+        #     System IDs beginning with localhost are not permitted.
+        #   machineid
+        #     Use the contents of the machine-id file to set the system ID.
+        #     Some systems create this file at installation time.
+        #     See 'man machine-id' and global/etc.
+        #   file
+        #     Use the contents of another file (system_id_file) to set the
+        #     system ID.
+        # 
+        system_id_source = "none"
+
+        # Configuration option global/system_id_file.
+        # The full path to the file containing a system ID.
+        # This is used when system_id_source is set to 'file'.
+        # Comments starting with the character # are ignored.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/use_lvmpolld.
+        # Use lvmpolld to supervise long running LVM commands.
+        # When enabled, control of long running LVM commands is transferred
+        # from the original LVM command to the lvmpolld daemon. This allows
+        # the operation to continue independent of the original LVM command.
+        # After lvmpolld takes over, the LVM command displays the progress
+        # of the ongoing operation. lvmpolld itself runs LVM commands to
+        # manage the progress of ongoing operations. lvmpolld can be used as
+        # a native systemd service, which allows it to be started on demand,
+        # and to use its own control group. When this option is disabled, LVM
+        # commands will supervise long running operations by forking themselves.
+        use_lvmpolld = 1
 }
 
 # Configuration section activation.
 activation {
 
-	# Configuration option activation/checks.
-	# Perform internal checks of libdevmapper operations.
-	# Useful for debugging problems with activation. Some of the checks may
-	# be expensive, so it's best to use this only when there seems to be a
-	# problem.
-	checks = 0
-
-	# Configuration option activation/udev_sync.
-	# Use udev notifications to synchronize udev and LVM.
-	# The --nodevsync option overrides this setting.
-	# When disabled, LVM commands will not wait for notifications from
-	# udev, but continue irrespective of any possible udev processing in
-	# the background. Only use this if udev is not running or has rules
-	# that ignore the devices LVM creates. If enabled when udev is not
-	# running, and LVM processes are waiting for udev, run the command
-	# 'dmsetup udevcomplete_all' to wake them up.
-	udev_sync = 1
-
-	# Configuration option activation/udev_rules.
-	# Use udev rules to manage LV device nodes and symlinks.
-	# When disabled, LVM will manage the device nodes and symlinks for
-	# active LVs itself. Manual intervention may be required if this
-	# setting is changed while LVs are active.
-	udev_rules = 1
-
-	# Configuration option activation/verify_udev_operations.
-	# Use extra checks in LVM to verify udev operations.
-	# This enables additional checks (and if necessary, repairs) on entries
-	# in the device directory after udev has completed processing its
-	# events. Useful for diagnosing problems with LVM/udev interactions.
-	verify_udev_operations = 0
-
-	# Configuration option activation/retry_deactivation.
-	# Retry failed LV deactivation.
-	# If LV deactivation fails, LVM will retry for a few seconds before
-	# failing. This may happen because a process run from a quick udev rule
-	# temporarily opened the device.
-	retry_deactivation = 1
-
-	# Configuration option activation/missing_stripe_filler.
-	# Method to fill missing stripes when activating an incomplete LV.
-	# Using 'error' will make inaccessible parts of the device return I/O
-	# errors on access. You can instead use a device path, in which case,
-	# that device will be used in place of missing stripes. Using anything
-	# other than 'error' with mirrored or snapshotted volumes is likely to
-	# result in data corruption.
-	# This configuration option is advanced.
-	missing_stripe_filler = "error"
-
-	# Configuration option activation/use_linear_target.
-	# Use the linear target to optimize single stripe LVs.
-	# When disabled, the striped target is used. The linear target is an
-	# optimised version of the striped target that only handles a single
-	# stripe.
-	use_linear_target = 1
-
-	# Configuration option activation/reserved_stack.
-	# Stack size in KiB to reserve for use while devices are suspended.
-	# Insufficent reserve risks I/O deadlock during device suspension.
-	reserved_stack = 64
-
-	# Configuration option activation/reserved_memory.
-	# Memory size in KiB to reserve for use while devices are suspended.
-	# Insufficent reserve risks I/O deadlock during device suspension.
-	reserved_memory = 8192
-
-	# Configuration option activation/process_priority.
-	# Nice value used while devices are suspended.
-	# Use a high priority so that LVs are suspended
-	# for the shortest possible time.
-	process_priority = -18
-
-	# Configuration option activation/volume_list.
-	# Only LVs selected by this list are activated.
-	# If this list is defined, an LV is only activated if it matches an
-	# entry in this list. If this list is undefined, it imposes no limits
-	# on LV activation (all are allowed).
-	# 
-	# Accepted values:
-	#   vgname
-	#     The VG name is matched exactly and selects all LVs in the VG.
-	#   vgname/lvname
-	#     The VG name and LV name are matched exactly and selects the LV.
-	#   @tag
-	#     Selects an LV if the specified tag matches a tag set on the LV
-	#     or VG.
-	#   @*
-	#     Selects an LV if a tag defined on the host is also set on the LV
-	#     or VG. See tags/hosttags. If any host tags exist but volume_list
-	#     is not defined, a default single-entry list containing '@*' is
-	#     assumed.
-	# 
-	# Example
-	# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/auto_activation_volume_list.
-	# Only LVs selected by this list are auto-activated.
-	# This list works like volume_list, but it is used only by
-	# auto-activation commands. It does not apply to direct activation
-	# commands. If this list is defined, an LV is only auto-activated
-	# if it matches an entry in this list. If this list is undefined, it
-	# imposes no limits on LV auto-activation (all are allowed.) If this
-	# list is defined and empty, i.e. "[]", then no LVs are selected for
-	# auto-activation. An LV that is selected by this list for
-	# auto-activation, must also be selected by volume_list (if defined)
-	# before it is activated. Auto-activation is an activation command that
-	# includes the 'a' argument: --activate ay or -a ay. The 'a' (auto)
-	# argument for auto-activation is meant to be used by activation
-	# commands that are run automatically by the system, as opposed to LVM
-	# commands run directly by a user. A user may also use the 'a' flag
-	# directly to perform auto-activation. Also see pvscan(8) for more
-	# information about auto-activation.
-	# 
-	# Accepted values:
-	#   vgname
-	#     The VG name is matched exactly and selects all LVs in the VG.
-	#   vgname/lvname
-	#     The VG name and LV name are matched exactly and selects the LV.
-	#   @tag
-	#     Selects an LV if the specified tag matches a tag set on the LV
-	#     or VG.
-	#   @*
-	#     Selects an LV if a tag defined on the host is also set on the LV
-	#     or VG. See tags/hosttags. If any host tags exist but volume_list
-	#     is not defined, a default single-entry list containing '@*' is
-	#     assumed.
-	# 
-	# Example
-	# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/read_only_volume_list.
-	# LVs in this list are activated in read-only mode.
-	# If this list is defined, each LV that is to be activated is checked
-	# against this list, and if it matches, it is activated in read-only
-	# mode. This overrides the permission setting stored in the metadata,
-	# e.g. from --permission rw.
-	# 
-	# Accepted values:
-	#   vgname
-	#     The VG name is matched exactly and selects all LVs in the VG.
-	#   vgname/lvname
-	#     The VG name and LV name are matched exactly and selects the LV.
-	#   @tag
-	#     Selects an LV if the specified tag matches a tag set on the LV
-	#     or VG.
-	#   @*
-	#     Selects an LV if a tag defined on the host is also set on the LV
-	#     or VG. See tags/hosttags. If any host tags exist but volume_list
-	#     is not defined, a default single-entry list containing '@*' is
-	#     assumed.
-	# 
-	# Example
-	# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/raid_region_size.
-	# Size in KiB of each raid or mirror synchronization region.
-	# For raid or mirror segment types, this is the amount of data that is
-	# copied at once when initializing, or moved at once by pvmove.
-	raid_region_size = 512
-
-	# Configuration option activation/error_when_full.
-	# Return errors if a thin pool runs out of space.
-	# The --errorwhenfull option overrides this setting.
-	# When enabled, writes to thin LVs immediately return an error if the
-	# thin pool is out of data space. When disabled, writes to thin LVs
-	# are queued if the thin pool is out of space, and processed when the
-	# thin pool data space is extended. New thin pools are assigned the
-	# behavior defined here.
-	# This configuration option has an automatic default value.
-	# error_when_full = 0
-
-	# Configuration option activation/readahead.
-	# Setting to use when there is no readahead setting in metadata.
-	# 
-	# Accepted values:
-	#   none
-	#     Disable readahead.
-	#   auto
-	#     Use default value chosen by kernel.
-	# 
-	readahead = "auto"
-
-	# Configuration option activation/raid_fault_policy.
-	# Defines how a device failure in a RAID LV is handled.
-	# This includes LVs that have the following segment types:
-	# raid1, raid4, raid5*, and raid6*.
-	# If a device in the LV fails, the policy determines the steps
-	# performed by dmeventd automatically, and the steps perfomed by the
-	# manual command lvconvert --repair --use-policies.
-	# Automatic handling requires dmeventd to be monitoring the LV.
-	# 
-	# Accepted values:
-	#   warn
-	#     Use the system log to warn the user that a device in the RAID LV
-	#     has failed. It is left to the user to run lvconvert --repair
-	#     manually to remove or replace the failed device. As long as the
-	#     number of failed devices does not exceed the redundancy of the LV
-	#     (1 device for raid4/5, 2 for raid6), the LV will remain usable.
-	#   allocate
-	#     Attempt to use any extra physical volumes in the VG as spares and
-	#     replace faulty devices.
-	# 
-	raid_fault_policy = "warn"
-
-	# Configuration option activation/mirror_image_fault_policy.
-	# Defines how a device failure in a 'mirror' LV is handled.
-	# An LV with the 'mirror' segment type is composed of mirror images
-	# (copies) and a mirror log. A disk log ensures that a mirror LV does
-	# not need to be re-synced (all copies made the same) every time a
-	# machine reboots or crashes. If a device in the LV fails, this policy
-	# determines the steps perfomed by dmeventd automatically, and the steps
-	# performed by the manual command lvconvert --repair --use-policies.
-	# Automatic handling requires dmeventd to be monitoring the LV.
-	# 
-	# Accepted values:
-	#   remove
-	#     Simply remove the faulty device and run without it. If the log
-	#     device fails, the mirror would convert to using an in-memory log.
-	#     This means the mirror will not remember its sync status across
-	#     crashes/reboots and the entire mirror will be re-synced. If a
-	#     mirror image fails, the mirror will convert to a non-mirrored
-	#     device if there is only one remaining good copy.
-	#   allocate
-	#     Remove the faulty device and try to allocate space on a new
-	#     device to be a replacement for the failed device. Using this
-	#     policy for the log is fast and maintains the ability to remember
-	#     sync state through crashes/reboots. Using this policy for a
-	#     mirror device is slow, as it requires the mirror to resynchronize
-	#     the devices, but it will preserve the mirror characteristic of
-	#     the device. This policy acts like 'remove' if no suitable device
-	#     and space can be allocated for the replacement.
-	#   allocate_anywhere
-	#     Not yet implemented. Useful to place the log device temporarily
-	#     on the same physical volume as one of the mirror images. This
-	#     policy is not recommended for mirror devices since it would break
-	#     the redundant nature of the mirror. This policy acts like
-	#     'remove' if no suitable device and space can be allocated for the
-	#     replacement.
-	# 
-	mirror_image_fault_policy = "remove"
-
-	# Configuration option activation/mirror_log_fault_policy.
-	# Defines how a device failure in a 'mirror' log LV is handled.
-	# The mirror_image_fault_policy description for mirrored LVs also
-	# applies to mirrored log LVs.
-	mirror_log_fault_policy = "allocate"
-
-	# Configuration option activation/snapshot_autoextend_threshold.
-	# Auto-extend a snapshot when its usage exceeds this percent.
-	# Setting this to 100 disables automatic extension.
-	# The minimum value is 50 (a smaller value is treated as 50.)
-	# Also see snapshot_autoextend_percent.
-	# Automatic extension requires dmeventd to be monitoring the LV.
-	# 
-	# Example
-	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
-	# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
-	# 840M, it is extended to 1.44G:
-	# snapshot_autoextend_threshold = 70
-	# 
-	snapshot_autoextend_threshold = 100
-
-	# Configuration option activation/snapshot_autoextend_percent.
-	# Auto-extending a snapshot adds this percent extra space.
-	# The amount of additional space added to a snapshot is this
-	# percent of its current size.
-	# 
-	# Example
-	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
-	# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
-	# 840M, it is extended to 1.44G:
-	# snapshot_autoextend_percent = 20
-	# 
-	snapshot_autoextend_percent = 20
-
-	# Configuration option activation/thin_pool_autoextend_threshold.
-	# Auto-extend a thin pool when its usage exceeds this percent.
-	# Setting this to 100 disables automatic extension.
-	# The minimum value is 50 (a smaller value is treated as 50.)
-	# Also see thin_pool_autoextend_percent.
-	# Automatic extension requires dmeventd to be monitoring the LV.
-	# 
-	# Example
-	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
-	# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
-	# 840M, it is extended to 1.44G:
-	# thin_pool_autoextend_threshold = 70
-	# 
-	thin_pool_autoextend_threshold = 100
-
-	# Configuration option activation/thin_pool_autoextend_percent.
-	# Auto-extending a thin pool adds this percent extra space.
-	# The amount of additional space added to a thin pool is this
-	# percent of its current size.
-	# 
-	# Example
-	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
-	# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
-	# 840M, it is extended to 1.44G:
-	# thin_pool_autoextend_percent = 20
-	# 
-	thin_pool_autoextend_percent = 20
-
-	# Configuration option activation/mlock_filter.
-	# Do not mlock these memory areas.
-	# While activating devices, I/O to devices being (re)configured is
-	# suspended. As a precaution against deadlocks, LVM pins memory it is
-	# using so it is not paged out, and will not require I/O to reread.
-	# Groups of pages that are known not to be accessed during activation
-	# do not need to be pinned into memory. Each string listed in this
-	# setting is compared against each line in /proc/self/maps, and the
-	# pages corresponding to lines that match are not pinned. On some
-	# systems, locale-archive was found to make up over 80% of the memory
-	# used by the process.
-	# 
-	# Example
-	# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
-	# 
-	# This configuration option is advanced.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/use_mlockall.
-	# Use the old behavior of mlockall to pin all memory.
-	# Prior to version 2.02.62, LVM used mlockall() to pin the whole
-	# process's memory while activating devices.
-	use_mlockall = 0
-
-	# Configuration option activation/monitoring.
-	# Monitor LVs that are activated.
-	# The --ignoremonitoring option overrides this setting.
-	# When enabled, LVM will ask dmeventd to monitor activated LVs.
-	monitoring = 1
-
-	# Configuration option activation/polling_interval.
-	# Check pvmove or lvconvert progress at this interval (seconds).
-	# When pvmove or lvconvert must wait for the kernel to finish
-	# synchronising or merging data, they check and report progress at
-	# intervals of this number of seconds. If this is set to 0 and there
-	# is only one thing to wait for, there are no progress reports, but
-	# the process is awoken immediately once the operation is complete.
-	polling_interval = 15
-
-	# Configuration option activation/auto_set_activation_skip.
-	# Set the activation skip flag on new thin snapshot LVs.
-	# The --setactivationskip option overrides this setting.
-	# An LV can have a persistent 'activation skip' flag. The flag causes
-	# the LV to be skipped during normal activation. The lvchange/vgchange
-	# -K option is required to activate LVs that have the activation skip
-	# flag set. When this setting is enabled, the activation skip flag is
-	# set on new thin snapshot LVs.
-	# This configuration option has an automatic default value.
-	# auto_set_activation_skip = 1
-
-	# Configuration option activation/activation_mode.
-	# How LVs with missing devices are activated.
-	# The --activationmode option overrides this setting.
-	# 
-	# Accepted values:
-	#   complete
-	#     Only allow activation of an LV if all of the Physical Volumes it
-	#     uses are present. Other PVs in the Volume Group may be missing.
-	#   degraded
-	#     Like complete, but additionally RAID LVs of segment type raid1,
-	#     raid4, raid5, radid6 and raid10 will be activated if there is no
-	#     data loss, i.e. they have sufficient redundancy to present the
-	#     entire addressable range of the Logical Volume.
-	#   partial
-	#     Allows the activation of any LV even if a missing or failed PV
-	#     could cause data loss with a portion of the LV inaccessible.
-	#     This setting should not normally be used, but may sometimes
-	#     assist with data recovery.
-	# 
-	activation_mode = "degraded"
-
-	# Configuration option activation/lock_start_list.
-	# Locking is started only for VGs selected by this list.
-	# The rules are the same as those for volume_list.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/auto_lock_start_list.
-	# Locking is auto-started only for VGs selected by this list.
-	# The rules are the same as those for auto_activation_volume_list.
-	# This configuration option does not have a default value defined.
+        # Configuration option activation/checks.
+        # Perform internal checks of libdevmapper operations.
+        # Useful for debugging problems with activation. Some of the checks may
+        # be expensive, so it's best to use this only when there seems to be a
+        # problem.
+        checks = 0
+
+        # Configuration option activation/udev_sync.
+        # Use udev notifications to synchronize udev and LVM.
+        # The --nodevsync option overrides this setting.
+        # When disabled, LVM commands will not wait for notifications from
+        # udev, but continue irrespective of any possible udev processing in
+        # the background. Only use this if udev is not running or has rules
+        # that ignore the devices LVM creates. If enabled when udev is not
+        # running, and LVM processes are waiting for udev, run the command
+        # 'dmsetup udevcomplete_all' to wake them up.
+        udev_sync = 1
+
+        # Configuration option activation/udev_rules.
+        # Use udev rules to manage LV device nodes and symlinks.
+        # When disabled, LVM will manage the device nodes and symlinks for
+        # active LVs itself. Manual intervention may be required if this
+        # setting is changed while LVs are active.
+        udev_rules = 1
+
+        # Configuration option activation/verify_udev_operations.
+        # Use extra checks in LVM to verify udev operations.
+        # This enables additional checks (and if necessary, repairs) on entries
+        # in the device directory after udev has completed processing its
+        # events. Useful for diagnosing problems with LVM/udev interactions.
+        verify_udev_operations = 0
+
+        # Configuration option activation/retry_deactivation.
+        # Retry failed LV deactivation.
+        # If LV deactivation fails, LVM will retry for a few seconds before
+        # failing. This may happen because a process run from a quick udev rule
+        # temporarily opened the device.
+        retry_deactivation = 1
+
+        # Configuration option activation/missing_stripe_filler.
+        # Method to fill missing stripes when activating an incomplete LV.
+        # Using 'error' will make inaccessible parts of the device return I/O
+        # errors on access. You can instead use a device path, in which case,
+        # that device will be used in place of missing stripes. Using anything
+        # other than 'error' with mirrored or snapshotted volumes is likely to
+        # result in data corruption.
+        # This configuration option is advanced.
+        missing_stripe_filler = "error"
+
+        # Configuration option activation/use_linear_target.
+        # Use the linear target to optimize single stripe LVs.
+        # When disabled, the striped target is used. The linear target is an
+        # optimised version of the striped target that only handles a single
+        # stripe.
+        use_linear_target = 1
+
+        # Configuration option activation/reserved_stack.
+        # Stack size in KiB to reserve for use while devices are suspended.
+        # Insufficient reserve risks I/O deadlock during device suspension.
+        reserved_stack = 64
+
+        # Configuration option activation/reserved_memory.
+        # Memory size in KiB to reserve for use while devices are suspended.
+        # Insufficient reserve risks I/O deadlock during device suspension.
+        reserved_memory = 8192
+
+        # Configuration option activation/process_priority.
+        # Nice value used while devices are suspended.
+        # Use a high priority so that LVs are suspended
+        # for the shortest possible time.
+        process_priority = -18
+
+        # Configuration option activation/volume_list.
+        # Only LVs selected by this list are activated.
+        # If this list is defined, an LV is only activated if it matches an
+        # entry in this list. If this list is undefined, it imposes no limits
+        # on LV activation (all are allowed).
+        # 
+        # Accepted values:
+        #   vgname
+        #     The VG name is matched exactly and selects all LVs in the VG.
+        #   vgname/lvname
+        #     The VG name and LV name are matched exactly and selects the LV.
+        #   @tag
+        #     Selects an LV if the specified tag matches a tag set on the LV
+        #     or VG.
+        #   @*
+        #     Selects an LV if a tag defined on the host is also set on the LV
+        #     or VG. See tags/hosttags. If any host tags exist but volume_list
+        #     is not defined, a default single-entry list containing '@*' is
+        #     assumed.
+        # 
+        # Example
+        # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option activation/auto_activation_volume_list.
+        # Only LVs selected by this list are auto-activated.
+        # This list works like volume_list, but it is used only by
+        # auto-activation commands. It does not apply to direct activation
+        # commands. If this list is defined, an LV is only auto-activated
+        # if it matches an entry in this list. If this list is undefined, it
+        # imposes no limits on LV auto-activation (all are allowed.) If this
+        # list is defined and empty, i.e. "[]", then no LVs are selected for
+        # auto-activation. An LV that is selected by this list for
+        # auto-activation, must also be selected by volume_list (if defined)
+        # before it is activated. Auto-activation is an activation command that
+        # includes the 'a' argument: --activate ay or -a ay. The 'a' (auto)
+        # argument for auto-activation is meant to be used by activation
+        # commands that are run automatically by the system, as opposed to LVM
+        # commands run directly by a user. A user may also use the 'a' flag
+        # directly to perform auto-activation. Also see pvscan(8) for more
+        # information about auto-activation.
+        # 
+        # Accepted values:
+        #   vgname
+        #     The VG name is matched exactly and selects all LVs in the VG.
+        #   vgname/lvname
+        #     The VG name and LV name are matched exactly and selects the LV.
+        #   @tag
+        #     Selects an LV if the specified tag matches a tag set on the LV
+        #     or VG.
+        #   @*
+        #     Selects an LV if a tag defined on the host is also set on the LV
+        #     or VG. See tags/hosttags. If any host tags exist but volume_list
+        #     is not defined, a default single-entry list containing '@*' is
+        #     assumed.
+        # 
+        # Example
+        # auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option activation/read_only_volume_list.
+        # LVs in this list are activated in read-only mode.
+        # If this list is defined, each LV that is to be activated is checked
+        # against this list, and if it matches, it is activated in read-only
+        # mode. This overrides the permission setting stored in the metadata,
+        # e.g. from --permission rw.
+        # 
+        # Accepted values:
+        #   vgname
+        #     The VG name is matched exactly and selects all LVs in the VG.
+        #   vgname/lvname
+        #     The VG name and LV name are matched exactly and selects the LV.
+        #   @tag
+        #     Selects an LV if the specified tag matches a tag set on the LV
+        #     or VG.
+        #   @*
+        #     Selects an LV if a tag defined on the host is also set on the LV
+        #     or VG. See tags/hosttags. If any host tags exist but volume_list
+        #     is not defined, a default single-entry list containing '@*' is
+        #     assumed.
+        # 
+        # Example
+        # read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
+        # 
+        # This configuration option does not have a default value defined.
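+
+        # For instance (hypothetical VG/LV names, shown for illustration
+        # only), to activate everything in vg1 while forcing one snapshot
+        # LV read-only:
+        # volume_list = [ "vg1" ]
+        # read_only_volume_list = [ "vg1/snap0" ]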
+
+        # Configuration option activation/raid_region_size.
+        # Size in KiB of each raid or mirror synchronization region.
+        # For raid or mirror segment types, this is the amount of data that is
+        # copied at once when initializing, or moved at once by pvmove.
+        raid_region_size = 512
+
+        # Configuration option activation/error_when_full.
+        # Return errors if a thin pool runs out of space.
+        # The --errorwhenfull option overrides this setting.
+        # When enabled, writes to thin LVs immediately return an error if the
+        # thin pool is out of data space. When disabled, writes to thin LVs
+        # are queued if the thin pool is out of space, and processed when the
+        # thin pool data space is extended. New thin pools are assigned the
+        # behavior defined here.
+        # This configuration option has an automatic default value.
+        # error_when_full = 0
+
+        # Configuration option activation/readahead.
+        # Setting to use when there is no readahead setting in metadata.
+        # 
+        # Accepted values:
+        #   none
+        #     Disable readahead.
+        #   auto
+        #     Use default value chosen by kernel.
+        # 
+        readahead = "auto"
+
+        # Configuration option activation/raid_fault_policy.
+        # Defines how a device failure in a RAID LV is handled.
+        # This includes LVs that have the following segment types:
+        # raid1, raid4, raid5*, and raid6*.
+        # If a device in the LV fails, the policy determines the steps
+        # performed by dmeventd automatically, and the steps performed by the
+        # manual command lvconvert --repair --use-policies.
+        # Automatic handling requires dmeventd to be monitoring the LV.
+        # 
+        # Accepted values:
+        #   warn
+        #     Use the system log to warn the user that a device in the RAID LV
+        #     has failed. It is left to the user to run lvconvert --repair
+        #     manually to remove or replace the failed device. As long as the
+        #     number of failed devices does not exceed the redundancy of the LV
+        #     (1 device for raid4/5, 2 for raid6), the LV will remain usable.
+        #   allocate
+        #     Attempt to use any extra physical volumes in the VG as spares and
+        #     replace faulty devices.
+        # 
+        raid_fault_policy = "warn"
+
+        # Configuration option activation/mirror_image_fault_policy.
+        # Defines how a device failure in a 'mirror' LV is handled.
+        # An LV with the 'mirror' segment type is composed of mirror images
+        # (copies) and a mirror log. A disk log ensures that a mirror LV does
+        # not need to be re-synced (all copies made the same) every time a
+        # machine reboots or crashes. If a device in the LV fails, this policy
+        # determines the steps performed by dmeventd automatically, and the steps
+        # performed by the manual command lvconvert --repair --use-policies.
+        # Automatic handling requires dmeventd to be monitoring the LV.
+        # 
+        # Accepted values:
+        #   remove
+        #     Simply remove the faulty device and run without it. If the log
+        #     device fails, the mirror would convert to using an in-memory log.
+        #     This means the mirror will not remember its sync status across
+        #     crashes/reboots and the entire mirror will be re-synced. If a
+        #     mirror image fails, the mirror will convert to a non-mirrored
+        #     device if there is only one remaining good copy.
+        #   allocate
+        #     Remove the faulty device and try to allocate space on a new
+        #     device to be a replacement for the failed device. Using this
+        #     policy for the log is fast and maintains the ability to remember
+        #     sync state through crashes/reboots. Using this policy for a
+        #     mirror device is slow, as it requires the mirror to resynchronize
+        #     the devices, but it will preserve the mirror characteristic of
+        #     the device. This policy acts like 'remove' if no suitable device
+        #     and space can be allocated for the replacement.
+        #   allocate_anywhere
+        #     Not yet implemented. Useful to place the log device temporarily
+        #     on the same physical volume as one of the mirror images. This
+        #     policy is not recommended for mirror devices since it would break
+        #     the redundant nature of the mirror. This policy acts like
+        #     'remove' if no suitable device and space can be allocated for the
+        #     replacement.
+        # 
+        mirror_image_fault_policy = "remove"
+
+        # Configuration option activation/mirror_log_fault_policy.
+        # Defines how a device failure in a 'mirror' log LV is handled.
+        # The mirror_image_fault_policy description for mirrored LVs also
+        # applies to mirrored log LVs.
+        mirror_log_fault_policy = "allocate"
+
+        # Configuration option activation/snapshot_autoextend_threshold.
+        # Auto-extend a snapshot when its usage exceeds this percent.
+        # Setting this to 100 disables automatic extension.
+        # The minimum value is 50 (a smaller value is treated as 50.)
+        # Also see snapshot_autoextend_percent.
+        # Automatic extension requires dmeventd to be monitoring the LV.
+        # 
+        # Example
+        # Using 70% autoextend threshold and 20% autoextend size, when a 1G
+        # snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
+        # 840M, it is extended to 1.44G:
+        # snapshot_autoextend_threshold = 70
+        # 
+        snapshot_autoextend_threshold = 100
+
+        # Configuration option activation/snapshot_autoextend_percent.
+        # Auto-extending a snapshot adds this percent extra space.
+        # The amount of additional space added to a snapshot is this
+        # percent of its current size.
+        # 
+        # Example
+        # Using 70% autoextend threshold and 20% autoextend size, when a 1G
+        # snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
+        # 840M, it is extended to 1.44G:
+        # snapshot_autoextend_percent = 20
+        # 
+        snapshot_autoextend_percent = 20
+
+        # Configuration option activation/thin_pool_autoextend_threshold.
+        # Auto-extend a thin pool when its usage exceeds this percent.
+        # Setting this to 100 disables automatic extension.
+        # The minimum value is 50 (a smaller value is treated as 50.)
+        # Also see thin_pool_autoextend_percent.
+        # Automatic extension requires dmeventd to be monitoring the LV.
+        # 
+        # Example
+        # Using 70% autoextend threshold and 20% autoextend size, when a 1G
+        # thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
+        # 840M, it is extended to 1.44G:
+        # thin_pool_autoextend_threshold = 70
+        # 
+        thin_pool_autoextend_threshold = 100
+
+        # Configuration option activation/thin_pool_autoextend_percent.
+        # Auto-extending a thin pool adds this percent extra space.
+        # The amount of additional space added to a thin pool is this
+        # percent of its current size.
+        # 
+        # Example
+        # Using 70% autoextend threshold and 20% autoextend size, when a 1G
+        # thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
+        # 840M, it is extended to 1.44G:
+        # thin_pool_autoextend_percent = 20
+        # 
+        thin_pool_autoextend_percent = 20
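+
+        # Combined example: with thin_pool_autoextend_threshold = 100 as set
+        # above, automatic extension is disabled regardless of the percent.
+        # To enable it, lower the threshold alongside the percent, e.g.:
+        # thin_pool_autoextend_threshold = 70
+        # thin_pool_autoextend_percent = 20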
+
+        # Configuration option activation/mlock_filter.
+        # Do not mlock these memory areas.
+        # While activating devices, I/O to devices being (re)configured is
+        # suspended. As a precaution against deadlocks, LVM pins memory it is
+        # using so it is not paged out, and will not require I/O to reread.
+        # Groups of pages that are known not to be accessed during activation
+        # do not need to be pinned into memory. Each string listed in this
+        # setting is compared against each line in /proc/self/maps, and the
+        # pages corresponding to lines that match are not pinned. On some
+        # systems, locale-archive was found to make up over 80% of the memory
+        # used by the process.
+        # 
+        # Example
+        # mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
+        # 
+        # This configuration option is advanced.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option activation/use_mlockall.
+        # Use the old behavior of mlockall to pin all memory.
+        # Prior to version 2.02.62, LVM used mlockall() to pin the whole
+        # process's memory while activating devices.
+        use_mlockall = 0
+
+        # Configuration option activation/monitoring.
+        # Monitor LVs that are activated.
+        # The --ignoremonitoring option overrides this setting.
+        # When enabled, LVM will ask dmeventd to monitor activated LVs.
+        monitoring = 1
+
+        # Configuration option activation/polling_interval.
+        # Check pvmove or lvconvert progress at this interval (seconds).
+        # When pvmove or lvconvert must wait for the kernel to finish
+        # synchronising or merging data, they check and report progress at
+        # intervals of this number of seconds. If this is set to 0 and there
+        # is only one thing to wait for, there are no progress reports, but
+        # the process is awoken immediately once the operation is complete.
+        polling_interval = 15
+
+        # Configuration option activation/auto_set_activation_skip.
+        # Set the activation skip flag on new thin snapshot LVs.
+        # The --setactivationskip option overrides this setting.
+        # An LV can have a persistent 'activation skip' flag. The flag causes
+        # the LV to be skipped during normal activation. The lvchange/vgchange
+        # -K option is required to activate LVs that have the activation skip
+        # flag set. When this setting is enabled, the activation skip flag is
+        # set on new thin snapshot LVs.
+        # This configuration option has an automatic default value.
+        # auto_set_activation_skip = 1
+
+        # Configuration option activation/activation_mode.
+        # How LVs with missing devices are activated.
+        # The --activationmode option overrides this setting.
+        # 
+        # Accepted values:
+        #   complete
+        #     Only allow activation of an LV if all of the Physical Volumes it
+        #     uses are present. Other PVs in the Volume Group may be missing.
+        #   degraded
+        #     Like complete, but additionally RAID LVs of segment type raid1,
+        #     raid4, raid5, raid6 and raid10 will be activated if there is no
+        #     data loss, i.e. they have sufficient redundancy to present the
+        #     entire addressable range of the Logical Volume.
+        #   partial
+        #     Allows the activation of any LV even if a missing or failed PV
+        #     could cause data loss with a portion of the LV inaccessible.
+        #     This setting should not normally be used, but may sometimes
+        #     assist with data recovery.
+        # 
+        activation_mode = "degraded"
+
+        # Configuration option activation/lock_start_list.
+        # Locking is started only for VGs selected by this list.
+        # The rules are the same as those for volume_list.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option activation/auto_lock_start_list.
+        # Locking is auto-started only for VGs selected by this list.
+        # The rules are the same as those for auto_activation_volume_list.
+        # This configuration option does not have a default value defined.
 }
 
 # Configuration section metadata.
 # This configuration section has an automatic default value.
 # metadata {
 
-	# Configuration option metadata/pvmetadatacopies.
-	# Number of copies of metadata to store on each PV.
-	# The --pvmetadatacopies option overrides this setting.
-	# 
-	# Accepted values:
-	#   2
-	#     Two copies of the VG metadata are stored on the PV, one at the
-	#     front of the PV, and one at the end.
-	#   1
-	#     One copy of VG metadata is stored at the front of the PV.
-	#   0
-	#     No copies of VG metadata are stored on the PV. This may be
-	#     useful for VGs containing large numbers of PVs.
-	# 
-	# This configuration option is advanced.
-	# This configuration option has an automatic default value.
-	# pvmetadatacopies = 1
-
-	# Configuration option metadata/vgmetadatacopies.
-	# Number of copies of metadata to maintain for each VG.
-	# The --vgmetadatacopies option overrides this setting.
-	# If set to a non-zero value, LVM automatically chooses which of the
-	# available metadata areas to use to achieve the requested number of
-	# copies of the VG metadata. If you set a value larger than the the
-	# total number of metadata areas available, then metadata is stored in
-	# them all. The value 0 (unmanaged) disables this automatic management
-	# and allows you to control which metadata areas are used at the
-	# individual PV level using pvchange --metadataignore y|n.
-	# This configuration option has an automatic default value.
-	# vgmetadatacopies = 0
-
-	# Configuration option metadata/pvmetadatasize.
-	# Approximate number of sectors to use for each metadata copy.
-	# VGs with large numbers of PVs or LVs, or VGs containing complex LV
-	# structures, may need additional space for VG metadata. The metadata
-	# areas are treated as circular buffers, so unused space becomes filled
-	# with an archive of the most recent previous versions of the metadata.
-	# This configuration option has an automatic default value.
-	# pvmetadatasize = 255
-
-	# Configuration option metadata/pvmetadataignore.
-	# Ignore metadata areas on a new PV.
-	# The --metadataignore option overrides this setting.
-	# If metadata areas on a PV are ignored, LVM will not store metadata
-	# in them.
-	# This configuration option is advanced.
-	# This configuration option has an automatic default value.
-	# pvmetadataignore = 0
-
-	# Configuration option metadata/stripesize.
-	# This configuration option is advanced.
-	# This configuration option has an automatic default value.
-	# stripesize = 64
-
-	# Configuration option metadata/dirs.
-	# Directories holding live copies of text format metadata.
-	# These directories must not be on logical volumes!
-	# It's possible to use LVM with a couple of directories here,
-	# preferably on different (non-LV) filesystems, and with no other
-	# on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
-	# to on-disk metadata areas. The feature was originally added to
-	# simplify testing and is not supported under low memory situations -
-	# the machine could lock up. Never edit any files in these directories
-	# by hand unless you are absolutely sure you know what you are doing!
-	# Use the supplied toolset to make changes (e.g. vgcfgrestore).
-	# 
-	# Example
-	# dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
-	# 
-	# This configuration option is advanced.
-	# This configuration option does not have a default value defined.
+        # Configuration option metadata/pvmetadatacopies.
+        # Number of copies of metadata to store on each PV.
+        # The --pvmetadatacopies option overrides this setting.
+        # 
+        # Accepted values:
+        #   2
+        #     Two copies of the VG metadata are stored on the PV, one at the
+        #     front of the PV, and one at the end.
+        #   1
+        #     One copy of VG metadata is stored at the front of the PV.
+        #   0
+        #     No copies of VG metadata are stored on the PV. This may be
+        #     useful for VGs containing large numbers of PVs.
+        # 
+        # This configuration option is advanced.
+        # This configuration option has an automatic default value.
+        # pvmetadatacopies = 1
+
+        # Configuration option metadata/vgmetadatacopies.
+        # Number of copies of metadata to maintain for each VG.
+        # The --vgmetadatacopies option overrides this setting.
+        # If set to a non-zero value, LVM automatically chooses which of the
+        # available metadata areas to use to achieve the requested number of
+        # copies of the VG metadata. If you set a value larger than the
+        # total number of metadata areas available, then metadata is stored in
+        # them all. The value 0 (unmanaged) disables this automatic management
+        # and allows you to control which metadata areas are used at the
+        # individual PV level using pvchange --metadataignore y|n.
+        # This configuration option has an automatic default value.
+        # vgmetadatacopies = 0
+
+        # Configuration option metadata/pvmetadatasize.
+        # Approximate number of sectors to use for each metadata copy.
+        # VGs with large numbers of PVs or LVs, or VGs containing complex LV
+        # structures, may need additional space for VG metadata. The metadata
+        # areas are treated as circular buffers, so unused space becomes filled
+        # with an archive of the most recent previous versions of the metadata.
+        # This configuration option has an automatic default value.
+        # pvmetadatasize = 255
+
+        # Configuration option metadata/pvmetadataignore.
+        # Ignore metadata areas on a new PV.
+        # The --metadataignore option overrides this setting.
+        # If metadata areas on a PV are ignored, LVM will not store metadata
+        # in them.
+        # This configuration option is advanced.
+        # This configuration option has an automatic default value.
+        # pvmetadataignore = 0
+
+        # Configuration option metadata/stripesize.
+        # This configuration option is advanced.
+        # This configuration option has an automatic default value.
+        # stripesize = 64
+
+        # Configuration option metadata/dirs.
+        # Directories holding live copies of text format metadata.
+        # These directories must not be on logical volumes!
+        # It's possible to use LVM with a couple of directories here,
+        # preferably on different (non-LV) filesystems, and with no other
+        # on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
+        # to on-disk metadata areas. The feature was originally added to
+        # simplify testing and is not supported under low memory situations -
+        # the machine could lock up. Never edit any files in these directories
+        # by hand unless you are absolutely sure you know what you are doing!
+        # Use the supplied toolset to make changes (e.g. vgcfgrestore).
+        # 
+        # Example
+        # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
+        # 
+        # This configuration option is advanced.
+        # This configuration option does not have a default value defined.
 # }
 
 # Configuration section report.
@@ -1493,357 +1497,357 @@
 # This configuration section has an automatic default value.
 # report {
 
-	# Configuration option report/compact_output.
-	# Do not print empty values for all report fields.
-	# If enabled, all fields that don't have a value set for any of the
-	# rows reported are skipped and not printed. Compact output is
-	# applicable only if report/buffered is enabled. If you need to
-	# compact only specified fields, use compact_output=0 and define
-	# report/compact_output_cols configuration setting instead.
-	# This configuration option has an automatic default value.
-	# compact_output = 0
-
-	# Configuration option report/compact_output_cols.
-	# Do not print empty values for specified report fields.
-	# If defined, specified fields that don't have a value set for any
-	# of the rows reported are skipped and not printed. Compact output
-	# is applicable only if report/buffered is enabled. If you need to
-	# compact all fields, use compact_output=1 instead in which case
-	# the compact_output_cols setting is then ignored.
-	# This configuration option has an automatic default value.
-	# compact_output_cols = ""
-
-	# Configuration option report/aligned.
-	# Align columns in report output.
-	# This configuration option has an automatic default value.
-	# aligned = 1
-
-	# Configuration option report/buffered.
-	# Buffer report output.
-	# When buffered reporting is used, the report's content is appended
-	# incrementally to include each object being reported until the report
-	# is flushed to output which normally happens at the end of command
-	# execution. Otherwise, if buffering is not used, each object is
-	# reported as soon as its processing is finished.
-	# This configuration option has an automatic default value.
-	# buffered = 1
-
-	# Configuration option report/headings.
-	# Show headings for columns on report.
-	# This configuration option has an automatic default value.
-	# headings = 1
-
-	# Configuration option report/separator.
-	# A separator to use on report after each field.
-	# This configuration option has an automatic default value.
-	# separator = " "
-
-	# Configuration option report/list_item_separator.
-	# A separator to use for list items when reported.
-	# This configuration option has an automatic default value.
-	# list_item_separator = ","
-
-	# Configuration option report/prefixes.
-	# Use a field name prefix for each field reported.
-	# This configuration option has an automatic default value.
-	# prefixes = 0
-
-	# Configuration option report/quoted.
-	# Quote field values when using field name prefixes.
-	# This configuration option has an automatic default value.
-	# quoted = 1
-
-	# Configuration option report/colums_as_rows.
-	# Output each column as a row.
-	# If set, this also implies report/prefixes=1.
-	# This configuration option has an automatic default value.
-	# colums_as_rows = 0
-
-	# Configuration option report/binary_values_as_numeric.
-	# Use binary values 0 or 1 instead of descriptive literal values.
-	# For columns that have exactly two valid values to report
-	# (not counting the 'unknown' value which denotes that the
-	# value could not be determined).
-	# This configuration option has an automatic default value.
-	# binary_values_as_numeric = 0
-
-	# Configuration option report/time_format.
-	# Set time format for fields reporting time values.
-	# Format specification is a string which may contain special character
-	# sequences and ordinary character sequences. Ordinary character
-	# sequences are copied verbatim. Each special character sequence is
-	# introduced by the '%' character and such sequence is then
-	# substituted with a value as described below.
-	# 
-	# Accepted values:
-	#   %a
-	#     The abbreviated name of the day of the week according to the
-	#     current locale.
-	#   %A
-	#     The full name of the day of the week according to the current
-	#     locale.
-	#   %b
-	#     The abbreviated month name according to the current locale.
-	#   %B
-	#     The full month name according to the current locale.
-	#   %c
-	#     The preferred date and time representation for the current
-	#     locale (alt E)
-	#   %C
-	#     The century number (year/100) as a 2-digit integer. (alt E)
-	#   %d
-	#     The day of the month as a decimal number (range 01 to 31).
-	#     (alt O)
-	#   %D
-	#     Equivalent to %m/%d/%y. (For Americans only. Americans should
-	#     note that in other countries %d/%m/%y is rather common. This
-	#     means that in international context this format is ambiguous and
-	#     should not be used.)
-	#   %e
-	#     Like %d, the day of the month as a decimal number, but a leading
-	#     zero is replaced by a space. (alt O)
-	#   %E
-	#     Modifier: use alternative local-dependent representation if
-	#     available.
-	#   %F
-	#     Equivalent to %Y-%m-%d (the ISO 8601 date format).
-	#   %G
-	#     The ISO 8601 week-based year with century as a decimal number.
-	#     The 4-digit year corresponding to the ISO week number (see %V).
-	#     This has the same format and value as %Y, except that if the
-	#     ISO week number belongs to the previous or next year, that year
-	#     is used instead.
-	#   %g
-	#     Like %G, but without century, that is, with a 2-digit year
-	#     (00-99).
-	#   %h
-	#     Equivalent to %b.
-	#   %H
-	#     The hour as a decimal number using a 24-hour clock
-	#     (range 00 to 23). (alt O)
-	#   %I
-	#     The hour as a decimal number using a 12-hour clock
-	#     (range 01 to 12). (alt O)
-	#   %j
-	#     The day of the year as a decimal number (range 001 to 366).
-	#   %k
-	#     The hour (24-hour clock) as a decimal number (range 0 to 23);
-	#     single digits are preceded by a blank. (See also %H.)
-	#   %l
-	#     The hour (12-hour clock) as a decimal number (range 1 to 12);
-	#     single digits are preceded by a blank. (See also %I.)
-	#   %m
-	#     The month as a decimal number (range 01 to 12). (alt O)
-	#   %M
-	#     The minute as a decimal number (range 00 to 59). (alt O)
-	#   %O
-	#     Modifier: use alternative numeric symbols.
-	#   %p
-	#     Either "AM" or "PM" according to the given time value,
-	#     or the corresponding strings for the current locale. Noon is
-	#     treated as "PM" and midnight as "AM".
-	#   %P
-	#     Like %p but in lowercase: "am" or "pm" or a corresponding
-	#     string for the current locale.
-	#   %r
-	#     The time in a.m. or p.m. notation. In the POSIX locale this is
-	#     equivalent to %I:%M:%S %p.
-	#   %R
-	#     The time in 24-hour notation (%H:%M). For a version including
-	#     the seconds, see %T below.
-	#   %s
-	#     The number of seconds since the Epoch,
-	#     1970-01-01 00:00:00 +0000 (UTC)
-	#   %S
-	#     The second as a decimal number (range 00 to 60). (The range is
-	#     up to 60 to allow for occasional leap seconds.) (alt O)
-	#   %t
-	#     A tab character.
-	#   %T
-	#     The time in 24-hour notation (%H:%M:%S).
-	#   %u
-	#     The day of the week as a decimal, range 1 to 7, Monday being 1.
-	#     See also %w. (alt O)
-	#   %U
-	#     The week number of the current year as a decimal number,
-	#     range 00 to 53, starting with the first Sunday as the first
-	#     day of week 01. See also %V and %W. (alt O)
-	#   %V
-	#     The ISO 8601 week number of the current year as a decimal number,
-	#     range 01 to 53, where week 1 is the first week that has at least
-	#     4 days in the new year. See also %U and %W. (alt O)
-	#   %w
-	#     The day of the week as a decimal, range 0 to 6, Sunday being 0.
-	#     See also %u. (alt O)
-	#   %W
-	#     The week number of the current year as a decimal number,
-	#     range 00 to 53, starting with the first Monday as the first day
-	#     of week 01. (alt O)
-	#   %x
-	#     The preferred date representation for the current locale without
-	#     the time. (alt E)
-	#   %X
-	#     The preferred time representation for the current locale without
-	#     the date. (alt E)
-	#   %y
-	#     The year as a decimal number without a century (range 00 to 99).
-	#     (alt E, alt O)
-	#   %Y
-	#     The year as a decimal number including the century. (alt E)
-	#   %z
-	#     The +hhmm or -hhmm numeric timezone (that is, the hour and minute
-	#     offset from UTC).
-	#   %Z
-	#     The timezone name or abbreviation.
-	#   %%
-	#     A literal '%' character.
-	# 
-	# This configuration option has an automatic default value.
-	# time_format = "%Y-%m-%d %T %z"
-
-	# Configuration option report/devtypes_sort.
-	# List of columns to sort by when reporting 'lvm devtypes' command.
-	# See 'lvm devtypes -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# devtypes_sort = "devtype_name"
-
-	# Configuration option report/devtypes_cols.
-	# List of columns to report for 'lvm devtypes' command.
-	# See 'lvm devtypes -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# devtypes_cols = "devtype_name,devtype_max_partitions,devtype_description"
-
-	# Configuration option report/devtypes_cols_verbose.
-	# List of columns to report for 'lvm devtypes' command in verbose mode.
-	# See 'lvm devtypes -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# devtypes_cols_verbose = "devtype_name,devtype_max_partitions,devtype_description"
-
-	# Configuration option report/lvs_sort.
-	# List of columns to sort by when reporting 'lvs' command.
-	# See 'lvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# lvs_sort = "vg_name,lv_name"
-
-	# Configuration option report/lvs_cols.
-	# List of columns to report for 'lvs' command.
-	# See 'lvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# lvs_cols = "lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"
-
-	# Configuration option report/lvs_cols_verbose.
-	# List of columns to report for 'lvs' command in verbose mode.
-	# See 'lvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# lvs_cols_verbose = "lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile"
-
-	# Configuration option report/vgs_sort.
-	# List of columns to sort by when reporting 'vgs' command.
-	# See 'vgs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# vgs_sort = "vg_name"
-
-	# Configuration option report/vgs_cols.
-	# List of columns to report for 'vgs' command.
-	# See 'vgs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# vgs_cols = "vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
-
-	# Configuration option report/vgs_cols_verbose.
-	# List of columns to report for 'vgs' command in verbose mode.
-	# See 'vgs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# vgs_cols_verbose = "vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"
-
-	# Configuration option report/pvs_sort.
-	# List of columns to sort by when reporting 'pvs' command.
-	# See 'pvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvs_sort = "pv_name"
-
-	# Configuration option report/pvs_cols.
-	# List of columns to report for 'pvs' command.
-	# See 'pvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
-
-	# Configuration option report/pvs_cols_verbose.
-	# List of columns to report for 'pvs' command in verbose mode.
-	# See 'pvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
-
-	# Configuration option report/segs_sort.
-	# List of columns to sort by when reporting 'lvs --segments' command.
-	# See 'lvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# segs_sort = "vg_name,lv_name,seg_start"
-
-	# Configuration option report/segs_cols.
-	# List of columns to report for 'lvs --segments' command.
-	# See 'lvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# segs_cols = "lv_name,vg_name,lv_attr,stripes,segtype,seg_size"
-
-	# Configuration option report/segs_cols_verbose.
-	# List of columns to report for 'lvs --segments' command in verbose mode.
-	# See 'lvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# segs_cols_verbose = "lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"
-
-	# Configuration option report/pvsegs_sort.
-	# List of columns to sort by when reporting 'pvs --segments' command.
-	# See 'pvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvsegs_sort = "pv_name,pvseg_start"
-
-	# Configuration option report/pvsegs_cols.
-	# List of columns to report for 'pvs --segments' command.
-	# See 'pvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvsegs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"
-
-	# Configuration option report/pvsegs_cols_verbose.
-	# List of columns to report for 'pvs --segments' command in verbose mode.
-	# See 'pvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvsegs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"
+        # Configuration option report/compact_output.
+        # Do not print empty values for all report fields.
+        # If enabled, all fields that don't have a value set for any of the
+        # rows reported are skipped and not printed. Compact output is
+        # applicable only if report/buffered is enabled. If you need to
+        # compact only specified fields, use compact_output=0 and define
+        # report/compact_output_cols configuration setting instead.
+        # This configuration option has an automatic default value.
+        # compact_output = 0
+
+        # Configuration option report/compact_output_cols.
+        # Do not print empty values for specified report fields.
+        # If defined, specified fields that don't have a value set for any
+        # of the rows reported are skipped and not printed. Compact output
+        # is applicable only if report/buffered is enabled. If you need to
+        # compact all fields, use compact_output=1 instead in which case
+        # the compact_output_cols setting is then ignored.
+        # This configuration option has an automatic default value.
+        # compact_output_cols = ""
+
+        # Configuration option report/aligned.
+        # Align columns in report output.
+        # This configuration option has an automatic default value.
+        # aligned = 1
+
+        # Configuration option report/buffered.
+        # Buffer report output.
+        # When buffered reporting is used, the report's content is appended
+        # incrementally to include each object being reported until the report
+        # is flushed to output which normally happens at the end of command
+        # execution. Otherwise, if buffering is not used, each object is
+        # reported as soon as its processing is finished.
+        # This configuration option has an automatic default value.
+        # buffered = 1
+
+        # Configuration option report/headings.
+        # Show headings for columns on report.
+        # This configuration option has an automatic default value.
+        # headings = 1
+
+        # Configuration option report/separator.
+        # A separator to use on report after each field.
+        # This configuration option has an automatic default value.
+        # separator = " "
+
+        # Configuration option report/list_item_separator.
+        # A separator to use for list items when reported.
+        # This configuration option has an automatic default value.
+        # list_item_separator = ","
+
+        # Configuration option report/prefixes.
+        # Use a field name prefix for each field reported.
+        # This configuration option has an automatic default value.
+        # prefixes = 0
+
+        # Configuration option report/quoted.
+        # Quote field values when using field name prefixes.
+        # This configuration option has an automatic default value.
+        # quoted = 1
+
+        # Configuration option report/colums_as_rows.
+        # Output each column as a row.
+        # If set, this also implies report/prefixes=1.
+        # This configuration option has an automatic default value.
+        # colums_as_rows = 0
+
+        # Configuration option report/binary_values_as_numeric.
+        # Use binary values 0 or 1 instead of descriptive literal values.
+        # For columns that have exactly two valid values to report
+        # (not counting the 'unknown' value which denotes that the
+        # value could not be determined).
+        # This configuration option has an automatic default value.
+        # binary_values_as_numeric = 0
+
+        # Configuration option report/time_format.
+        # Set time format for fields reporting time values.
+        # Format specification is a string which may contain special character
+        # sequences and ordinary character sequences. Ordinary character
+        # sequences are copied verbatim. Each special character sequence is
+        # introduced by the '%' character and such sequence is then
+        # substituted with a value as described below.
+        # 
+        # Accepted values:
+        #   %a
+        #     The abbreviated name of the day of the week according to the
+        #     current locale.
+        #   %A
+        #     The full name of the day of the week according to the current
+        #     locale.
+        #   %b
+        #     The abbreviated month name according to the current locale.
+        #   %B
+        #     The full month name according to the current locale.
+        #   %c
+        #     The preferred date and time representation for the current
+        #     locale. (alt E)
+        #   %C
+        #     The century number (year/100) as a 2-digit integer. (alt E)
+        #   %d
+        #     The day of the month as a decimal number (range 01 to 31).
+        #     (alt O)
+        #   %D
+        #     Equivalent to %m/%d/%y. (For Americans only. Americans should
+        #     note that in other countries %d/%m/%y is rather common. This
+        #     means that in international context this format is ambiguous and
+        #     should not be used.)
+        #   %e
+        #     Like %d, the day of the month as a decimal number, but a leading
+        #     zero is replaced by a space. (alt O)
+        #   %E
+        #     Modifier: use alternative locale-dependent representation if
+        #     available.
+        #   %F
+        #     Equivalent to %Y-%m-%d (the ISO 8601 date format).
+        #   %G
+        #     The ISO 8601 week-based year with century as a decimal number.
+        #     The 4-digit year corresponding to the ISO week number (see %V).
+        #     This has the same format and value as %Y, except that if the
+        #     ISO week number belongs to the previous or next year, that year
+        #     is used instead.
+        #   %g
+        #     Like %G, but without century, that is, with a 2-digit year
+        #     (00-99).
+        #   %h
+        #     Equivalent to %b.
+        #   %H
+        #     The hour as a decimal number using a 24-hour clock
+        #     (range 00 to 23). (alt O)
+        #   %I
+        #     The hour as a decimal number using a 12-hour clock
+        #     (range 01 to 12). (alt O)
+        #   %j
+        #     The day of the year as a decimal number (range 001 to 366).
+        #   %k
+        #     The hour (24-hour clock) as a decimal number (range 0 to 23);
+        #     single digits are preceded by a blank. (See also %H.)
+        #   %l
+        #     The hour (12-hour clock) as a decimal number (range 1 to 12);
+        #     single digits are preceded by a blank. (See also %I.)
+        #   %m
+        #     The month as a decimal number (range 01 to 12). (alt O)
+        #   %M
+        #     The minute as a decimal number (range 00 to 59). (alt O)
+        #   %O
+        #     Modifier: use alternative numeric symbols.
+        #   %p
+        #     Either "AM" or "PM" according to the given time value,
+        #     or the corresponding strings for the current locale. Noon is
+        #     treated as "PM" and midnight as "AM".
+        #   %P
+        #     Like %p but in lowercase: "am" or "pm" or a corresponding
+        #     string for the current locale.
+        #   %r
+        #     The time in a.m. or p.m. notation. In the POSIX locale this is
+        #     equivalent to %I:%M:%S %p.
+        #   %R
+        #     The time in 24-hour notation (%H:%M). For a version including
+        #     the seconds, see %T below.
+        #   %s
+        #     The number of seconds since the Epoch,
+        #     1970-01-01 00:00:00 +0000 (UTC).
+        #   %S
+        #     The second as a decimal number (range 00 to 60). (The range is
+        #     up to 60 to allow for occasional leap seconds.) (alt O)
+        #   %t
+        #     A tab character.
+        #   %T
+        #     The time in 24-hour notation (%H:%M:%S).
+        #   %u
+        #     The day of the week as a decimal, range 1 to 7, Monday being 1.
+        #     See also %w. (alt O)
+        #   %U
+        #     The week number of the current year as a decimal number,
+        #     range 00 to 53, starting with the first Sunday as the first
+        #     day of week 01. See also %V and %W. (alt O)
+        #   %V
+        #     The ISO 8601 week number of the current year as a decimal number,
+        #     range 01 to 53, where week 1 is the first week that has at least
+        #     4 days in the new year. See also %U and %W. (alt O)
+        #   %w
+        #     The day of the week as a decimal, range 0 to 6, Sunday being 0.
+        #     See also %u. (alt O)
+        #   %W
+        #     The week number of the current year as a decimal number,
+        #     range 00 to 53, starting with the first Monday as the first day
+        #     of week 01. (alt O)
+        #   %x
+        #     The preferred date representation for the current locale without
+        #     the time. (alt E)
+        #   %X
+        #     The preferred time representation for the current locale without
+        #     the date. (alt E)
+        #   %y
+        #     The year as a decimal number without a century (range 00 to 99).
+        #     (alt E, alt O)
+        #   %Y
+        #     The year as a decimal number including the century. (alt E)
+        #   %z
+        #     The +hhmm or -hhmm numeric timezone (that is, the hour and minute
+        #     offset from UTC).
+        #   %Z
+        #     The timezone name or abbreviation.
+        #   %%
+        #     A literal '%' character.
+        # 
+        # This configuration option has an automatic default value.
+        # time_format = "%Y-%m-%d %T %z"
+
+        # Configuration option report/devtypes_sort.
+        # List of columns to sort by when reporting 'lvm devtypes' command.
+        # See 'lvm devtypes -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # devtypes_sort = "devtype_name"
+
+        # Configuration option report/devtypes_cols.
+        # List of columns to report for 'lvm devtypes' command.
+        # See 'lvm devtypes -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # devtypes_cols = "devtype_name,devtype_max_partitions,devtype_description"
+
+        # Configuration option report/devtypes_cols_verbose.
+        # List of columns to report for 'lvm devtypes' command in verbose mode.
+        # See 'lvm devtypes -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # devtypes_cols_verbose = "devtype_name,devtype_max_partitions,devtype_description"
+
+        # Configuration option report/lvs_sort.
+        # List of columns to sort by when reporting 'lvs' command.
+        # See 'lvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # lvs_sort = "vg_name,lv_name"
+
+        # Configuration option report/lvs_cols.
+        # List of columns to report for 'lvs' command.
+        # See 'lvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # lvs_cols = "lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"
+
+        # Configuration option report/lvs_cols_verbose.
+        # List of columns to report for 'lvs' command in verbose mode.
+        # See 'lvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # lvs_cols_verbose = "lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile"
+
+        # Configuration option report/vgs_sort.
+        # List of columns to sort by when reporting 'vgs' command.
+        # See 'vgs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # vgs_sort = "vg_name"
+
+        # Configuration option report/vgs_cols.
+        # List of columns to report for 'vgs' command.
+        # See 'vgs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # vgs_cols = "vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
+
+        # Configuration option report/vgs_cols_verbose.
+        # List of columns to report for 'vgs' command in verbose mode.
+        # See 'vgs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # vgs_cols_verbose = "vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"
+
+        # Configuration option report/pvs_sort.
+        # List of columns to sort by when reporting 'pvs' command.
+        # See 'pvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvs_sort = "pv_name"
+
+        # Configuration option report/pvs_cols.
+        # List of columns to report for 'pvs' command.
+        # See 'pvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
+
+        # Configuration option report/pvs_cols_verbose.
+        # List of columns to report for 'pvs' command in verbose mode.
+        # See 'pvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
+
+        # Configuration option report/segs_sort.
+        # List of columns to sort by when reporting 'lvs --segments' command.
+        # See 'lvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # segs_sort = "vg_name,lv_name,seg_start"
+
+        # Configuration option report/segs_cols.
+        # List of columns to report for 'lvs --segments' command.
+        # See 'lvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # segs_cols = "lv_name,vg_name,lv_attr,stripes,segtype,seg_size"
+
+        # Configuration option report/segs_cols_verbose.
+        # List of columns to report for 'lvs --segments' command in verbose mode.
+        # See 'lvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # segs_cols_verbose = "lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"
+
+        # Configuration option report/pvsegs_sort.
+        # List of columns to sort by when reporting 'pvs --segments' command.
+        # See 'pvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvsegs_sort = "pv_name,pvseg_start"
+
+        # Configuration option report/pvsegs_cols.
+        # List of columns to report for 'pvs --segments' command.
+        # See 'pvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvsegs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"
+
+        # Configuration option report/pvsegs_cols_verbose.
+        # List of columns to report for 'pvs --segments' command in verbose mode.
+        # See 'pvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvsegs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"
 # }
 
 # Configuration section dmeventd.
 # Settings for the LVM event daemon.
 dmeventd {
 
-	# Configuration option dmeventd/mirror_library.
-	# The library dmeventd uses when monitoring a mirror device.
-	# libdevmapper-event-lvm2mirror.so attempts to recover from
-	# failures. It removes failed devices from a volume group and
-	# reconfigures a mirror as necessary. If no mirror library is
-	# provided, mirrors are not monitored through dmeventd.
-	mirror_library = "libdevmapper-event-lvm2mirror.so"
-
-	# Configuration option dmeventd/raid_library.
-	# This configuration option has an automatic default value.
-	# raid_library = "libdevmapper-event-lvm2raid.so"
-
-	# Configuration option dmeventd/snapshot_library.
-	# The library dmeventd uses when monitoring a snapshot device.
-	# libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots
-	# and emits a warning through syslog when the usage exceeds 80%. The
-	# warning is repeated when 85%, 90% and 95% of the snapshot is filled.
-	snapshot_library = "libdevmapper-event-lvm2snapshot.so"
-
-	# Configuration option dmeventd/thin_library.
-	# The library dmeventd uses when monitoring a thin device.
-	# libdevmapper-event-lvm2thin.so monitors the filling of a pool
-	# and emits a warning through syslog when the usage exceeds 80%. The
-	# warning is repeated when 85%, 90% and 95% of the pool is filled.
-	thin_library = "libdevmapper-event-lvm2thin.so"
-
-	# Configuration option dmeventd/executable.
-	# The full path to the dmeventd binary.
-	# This configuration option has an automatic default value.
-	# executable = "/sbin/dmeventd"
+        # Configuration option dmeventd/mirror_library.
+        # The library dmeventd uses when monitoring a mirror device.
+        # libdevmapper-event-lvm2mirror.so attempts to recover from
+        # failures. It removes failed devices from a volume group and
+        # reconfigures a mirror as necessary. If no mirror library is
+        # provided, mirrors are not monitored through dmeventd.
+        mirror_library = "libdevmapper-event-lvm2mirror.so"
+
+        # Configuration option dmeventd/raid_library.
+        # This configuration option has an automatic default value.
+        # raid_library = "libdevmapper-event-lvm2raid.so"
+
+        # Configuration option dmeventd/snapshot_library.
+        # The library dmeventd uses when monitoring a snapshot device.
+        # libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots
+        # and emits a warning through syslog when the usage exceeds 80%. The
+        # warning is repeated when 85%, 90% and 95% of the snapshot is filled.
+        snapshot_library = "libdevmapper-event-lvm2snapshot.so"
+
+        # Configuration option dmeventd/thin_library.
+        # The library dmeventd uses when monitoring a thin device.
+        # libdevmapper-event-lvm2thin.so monitors the filling of a pool
+        # and emits a warning through syslog when the usage exceeds 80%. The
+        # warning is repeated when 85%, 90% and 95% of the pool is filled.
+        thin_library = "libdevmapper-event-lvm2thin.so"
+
+        # Configuration option dmeventd/executable.
+        # The full path to the dmeventd binary.
+        # This configuration option has an automatic default value.
+        # executable = "/sbin/dmeventd"
 }
 
 # Configuration section tags.
@@ -1851,37 +1855,37 @@
 # This configuration section has an automatic default value.
 # tags {
 
-	# Configuration option tags/hosttags.
-	# Create a host tag using the machine name.
-	# The machine name is nodename returned by uname(2).
-	# This configuration option has an automatic default value.
-	# hosttags = 0
-
-	# Configuration section tags/<tag>.
-	# Replace this subsection name with a custom tag name.
-	# Multiple subsections like this can be created. The '@' prefix for
-	# tags is optional. This subsection can contain host_list, which is a
-	# list of machine names. If the name of the local machine is found in
-	# host_list, then the name of this subsection is used as a tag and is
-	# applied to the local machine as a 'host tag'. If this subsection is
-	# empty (has no host_list), then the subsection name is always applied
-	# as a 'host tag'.
-	# 
-	# Example
-	# The host tag foo is given to all hosts, and the host tag
-	# bar is given to the hosts named machine1 and machine2.
-	# tags { foo { } bar { host_list = [ "machine1", "machine2" ] } }
-	# 
-	# This configuration section has a variable name.
-	# This configuration section has an automatic default value.
-	# tag {
-
-		# Configuration option tags/<tag>/host_list.
-		# A list of machine names.
-		# These machine names are compared to the nodename returned
-		# by uname(2). If the local machine name matches an entry in
-		# this list, the name of the subsection is applied to the
-		# machine as a 'host tag'.
-		# This configuration option does not have a default value defined.
-	# }
+        # Configuration option tags/hosttags.
+        # Create a host tag using the machine name.
+        # The machine name is nodename returned by uname(2).
+        # This configuration option has an automatic default value.
+        # hosttags = 0
+
+        # Configuration section tags/<tag>.
+        # Replace this subsection name with a custom tag name.
+        # Multiple subsections like this can be created. The '@' prefix for
+        # tags is optional. This subsection can contain host_list, which is a
+        # list of machine names. If the name of the local machine is found in
+        # host_list, then the name of this subsection is used as a tag and is
+        # applied to the local machine as a 'host tag'. If this subsection is
+        # empty (has no host_list), then the subsection name is always applied
+        # as a 'host tag'.
+        # 
+        # Example
+        # The host tag foo is given to all hosts, and the host tag
+        # bar is given to the hosts named machine1 and machine2.
+        # tags { foo { } bar { host_list = [ "machine1", "machine2" ] } }
+        # 
+        # This configuration section has a variable name.
+        # This configuration section has an automatic default value.
+        # tag {
+
+                # Configuration option tags/<tag>/host_list.
+                # A list of machine names.
+                # These machine names are compared to the nodename returned
+                # by uname(2). If the local machine name matches an entry in
+                # this list, the name of the subsection is applied to the
+                # machine as a 'host tag'.
+                # This configuration option does not have a default value defined.
+        # }
 # }

2018-09-01 22:01:55,490 [salt.state       :1941][INFO    ][3651] Completed state [/etc/lvm/lvm.conf] at time 22:01:55.490957 duration_in_ms=100.425
2018-09-01 22:01:55,492 [salt.state       :1770][INFO    ][3651] Running state [lvm2-lvmetad] at time 22:01:55.492449
2018-09-01 22:01:55,492 [salt.state       :1803][INFO    ][3651] Executing state service.running for [lvm2-lvmetad]
2018-09-01 22:01:55,493 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'status', 'lvm2-lvmetad.service', '-n', '0'] in directory '/root'
2018-09-01 22:01:55,501 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-active', 'lvm2-lvmetad.service'] in directory '/root'
2018-09-01 22:01:55,507 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2018-09-01 22:01:55,516 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2018-09-01 22:01:55,526 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'lvm2-lvmetad.service'] in directory '/root'
2018-09-01 22:01:55,985 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2018-09-01 22:01:55,992 [salt.state       :290 ][INFO    ][3651] {'lvm2-lvmetad': True}
2018-09-01 22:01:55,993 [salt.state       :1941][INFO    ][3651] Completed state [lvm2-lvmetad] at time 22:01:55.993097 duration_in_ms=500.648
2018-09-01 22:01:55,994 [salt.state       :1770][INFO    ][3651] Running state [lvm2-lvmpolld] at time 22:01:55.994693
2018-09-01 22:01:55,994 [salt.state       :1803][INFO    ][3651] Executing state service.running for [lvm2-lvmpolld]
2018-09-01 22:01:55,995 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'status', 'lvm2-lvmpolld.service', '-n', '0'] in directory '/root'
2018-09-01 22:01:56,003 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-active', 'lvm2-lvmpolld.service'] in directory '/root'
2018-09-01 22:01:56,010 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-09-01 22:01:56,021 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'lvm2-lvmpolld.service'] in directory '/root'
2018-09-01 22:01:56,041 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-active', 'lvm2-lvmpolld.service'] in directory '/root'
2018-09-01 22:01:56,049 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-09-01 22:01:56,059 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-09-01 22:01:56,071 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'lvm2-lvmpolld.service'] in directory '/root'
2018-09-01 22:01:56,296 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-09-01 22:01:56,304 [salt.state       :290 ][INFO    ][3651] {'lvm2-lvmpolld': True}
2018-09-01 22:01:56,304 [salt.state       :1941][INFO    ][3651] Completed state [lvm2-lvmpolld] at time 22:01:56.304730 duration_in_ms=310.035
2018-09-01 22:01:56,306 [salt.state       :1770][INFO    ][3651] Running state [lvm2-monitor] at time 22:01:56.306382
2018-09-01 22:01:56,306 [salt.state       :1803][INFO    ][3651] Executing state service.running for [lvm2-monitor]
2018-09-01 22:01:56,307 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'status', 'lvm2-monitor.service', '-n', '0'] in directory '/root'
2018-09-01 22:01:56,313 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-active', 'lvm2-monitor.service'] in directory '/root'
2018-09-01 22:01:56,319 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'lvm2-monitor.service'] in directory '/root'
2018-09-01 22:01:56,324 [salt.state       :290 ][INFO    ][3651] The service lvm2-monitor is already running
2018-09-01 22:01:56,325 [salt.state       :1941][INFO    ][3651] Completed state [lvm2-monitor] at time 22:01:56.325084 duration_in_ms=18.702
2018-09-01 22:01:56,325 [salt.state       :1770][INFO    ][3651] Running state [lvm2-monitor] at time 22:01:56.325252
2018-09-01 22:01:56,325 [salt.state       :1803][INFO    ][3651] Executing state service.mod_watch for [lvm2-monitor]
2018-09-01 22:01:56,325 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-active', 'lvm2-monitor.service'] in directory '/root'
2018-09-01 22:01:56,334 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'lvm2-monitor.service'] in directory '/root'
2018-09-01 22:01:56,365 [salt.state       :290 ][INFO    ][3651] {'lvm2-monitor': True}
2018-09-01 22:01:56,365 [salt.state       :1941][INFO    ][3651] Completed state [lvm2-monitor] at time 22:01:56.365806 duration_in_ms=40.553
2018-09-01 22:01:56,369 [salt.state       :1770][INFO    ][3651] Running state [/dev/sda2] at time 22:01:56.369322
2018-09-01 22:01:56,369 [salt.state       :1803][INFO    ][3651] Executing state lvm.pv_present for [/dev/sda2]
2018-09-01 22:01:56,369 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['pvdisplay', '-c', '/dev/sda2'] in directory '/root'
2018-09-01 22:01:56,413 [salt.state       :290 ][INFO    ][3651] Physical Volume /dev/sda2 already present
2018-09-01 22:01:56,413 [salt.state       :1941][INFO    ][3651] Completed state [/dev/sda2] at time 22:01:56.413944 duration_in_ms=44.621
2018-09-01 22:01:56,415 [salt.state       :1770][INFO    ][3651] Running state [vgroot] at time 22:01:56.414994
2018-09-01 22:01:56,415 [salt.state       :1803][INFO    ][3651] Executing state lvm.vg_present for [vgroot]
2018-09-01 22:01:56,415 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['vgdisplay', '-c', 'vgroot'] in directory '/root'
2018-09-01 22:01:56,424 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['pvdisplay', '-c', '/dev/sda2'] in directory '/root'
2018-09-01 22:01:56,434 [salt.state       :290 ][INFO    ][3651] Volume Group vgroot already present
/dev/sda2 is part of Volume Group
2018-09-01 22:01:56,435 [salt.state       :1941][INFO    ][3651] Completed state [vgroot] at time 22:01:56.435114 duration_in_ms=20.12
2018-09-01 22:01:56,435 [salt.state       :1770][INFO    ][3651] Running state [ntp] at time 22:01:56.435342
2018-09-01 22:01:56,435 [salt.state       :1803][INFO    ][3651] Executing state pkg.installed for [ntp]
2018-09-01 22:01:56,440 [salt.state       :290 ][INFO    ][3651] All specified packages are already installed
2018-09-01 22:01:56,441 [salt.state       :1941][INFO    ][3651] Completed state [ntp] at time 22:01:56.441051 duration_in_ms=5.709
2018-09-01 22:01:56,442 [salt.state       :1770][INFO    ][3651] Running state [/etc/ntp.conf] at time 22:01:56.442265
2018-09-01 22:01:56,442 [salt.state       :1803][INFO    ][3651] Executing state file.managed for [/etc/ntp.conf]
2018-09-01 22:01:56,458 [salt.fileclient  :1215][INFO    ][3651] Fetching file from saltenv 'base', ** done ** 'ntp/files/ntp.conf'
2018-09-01 22:01:56,500 [salt.state       :290 ][INFO    ][3651] File changed:
--- 
+++ 
@@ -1,66 +1,25 @@
 
-# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
 
-driftfile /var/lib/ntp/ntp.drift
+# ntpd will only synchronize your clock.
 
-# Enable this if you want statistics to be logged.
-#statsdir /var/log/ntpstats/
+# For details, see:
+# - the ntp.conf man page
+# - http://support.ntp.org/bin/view/Support/GettingStarted
+# - https://wiki.archlinux.org/index.php/Network_Time_Protocol_daemon
 
-statistics loopstats peerstats clockstats
-filegen loopstats file loopstats type day enable
-filegen peerstats file peerstats type day enable
-filegen clockstats file clockstats type day enable
+# Associate to cloud NTP pool servers
+server 1.pool.ntp.org iburst
+server 0.pool.ntp.org
 
-# Specify one or more NTP servers.
+# Exchange time with everybody, but don't allow configuration.
+restrict -4 default kod nomodify notrap nopeer noquery
+restrict -6 default kod nomodify notrap nopeer noquery
 
-# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
-# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
-# more information.
-# pools
-pool ntp.ubuntu.com iburst
-
-# Use Ubuntu's ntp server as a fallback.
-# pool ntp.ubuntu.com
-
-# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
-# details.  The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
-# might also be helpful.
-#
-# Note that "restrict" applies to both servers and clients, so a configuration
-# that might be intended to block requests from certain clients could also end
-# up blocking replies from your own upstream servers.
-
-# By default, exchange time with everybody, but don't allow configuration.
-restrict -4 default kod notrap nomodify nopeer noquery limited
-restrict -6 default kod notrap nomodify nopeer noquery limited
-
-# Local users may interrogate the ntp server more closely.
+# Only allow read-only access from localhost
 restrict 127.0.0.1
 restrict ::1
 
-# Needed for adding pool entries
-restrict source notrap nomodify noquery
+# mode7 is required for collectd monitoring
 
-# Clients from this (example!) subnet have unlimited access, but only if
-# cryptographically authenticated.
-#restrict 192.168.123.0 mask 255.255.255.0 notrust
-
-
-# If you want to provide time to your local subnet, change the next line.
-# (Again, the address is an example only.)
-#broadcast 192.168.123.255
-
-# If you want to listen to time broadcasts on your local subnet, de-comment the
-# next lines.  Please do this only if you trust everybody on the network!
-#disable auth
-#broadcastclient
-
-#Changes recquired to use pps synchonisation as explained in documentation:
-#http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#AEN3918
-
-#server 127.127.8.1 mode 135 prefer    # Meinberg GPS167 with PPS
-#fudge 127.127.8.1 time1 0.0042        # relative to PPS for my hardware
-
-#server 127.127.22.1                   # ATOM(PPS)
-#fudge 127.127.22.1 flag3 1            # enable PPS API
-
+# Location of drift file
+driftfile /var/lib/ntp/ntp.drift

2018-09-01 22:01:56,501 [salt.state       :1941][INFO    ][3651] Completed state [/etc/ntp.conf] at time 22:01:56.501130 duration_in_ms=58.864
2018-09-01 22:01:56,502 [salt.state       :1770][INFO    ][3651] Running state [ntp] at time 22:01:56.502133
2018-09-01 22:01:56,502 [salt.state       :1803][INFO    ][3651] Executing state service.running for [ntp]
2018-09-01 22:01:56,502 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'status', 'ntp.service', '-n', '0'] in directory '/root'
2018-09-01 22:01:56,511 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-active', 'ntp.service'] in directory '/root'
2018-09-01 22:01:56,517 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-enabled', 'ntp.service'] in directory '/root'
2018-09-01 22:01:56,525 [salt.state       :290 ][INFO    ][3651] The service ntp is already running
2018-09-01 22:01:56,525 [salt.state       :1941][INFO    ][3651] Completed state [ntp] at time 22:01:56.525668 duration_in_ms=23.535
2018-09-01 22:01:56,525 [salt.state       :1770][INFO    ][3651] Running state [ntp] at time 22:01:56.525825
2018-09-01 22:01:56,526 [salt.state       :1803][INFO    ][3651] Executing state service.mod_watch for [ntp]
2018-09-01 22:01:56,526 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemctl', 'is-active', 'ntp.service'] in directory '/root'
2018-09-01 22:01:56,533 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3651] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'ntp.service'] in directory '/root'
2018-09-01 22:01:56,559 [salt.state       :290 ][INFO    ][3651] {'ntp': True}
2018-09-01 22:01:56,559 [salt.state       :1941][INFO    ][3651] Completed state [ntp] at time 22:01:56.559421 duration_in_ms=33.595
2018-09-01 22:01:56,563 [salt.minion      :1708][INFO    ][3651] Returning information for job: 20180901220142106967
2018-09-01 22:02:10,626 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command pkg.upgrade with jid 20180901220210614012
2018-09-01 22:02:10,635 [salt.minion      :1431][INFO    ][4640] Starting a new job with PID 4640
2018-09-01 22:02:10,647 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4640] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:02:10,931 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4640] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'dist-upgrade'] in directory '/root'
2018-09-01 22:02:15,737 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901220215724843
2018-09-01 22:02:15,747 [salt.minion      :1431][INFO    ][4670] Starting a new job with PID 4670
2018-09-01 22:02:15,757 [salt.minion      :1708][INFO    ][4670] Returning information for job: 20180901220215724843
2018-09-01 22:02:25,950 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901220225938743
2018-09-01 22:02:25,960 [salt.minion      :1431][INFO    ][4957] Starting a new job with PID 4957
2018-09-01 22:02:25,972 [salt.minion      :1708][INFO    ][4957] Returning information for job: 20180901220225938743
2018-09-01 22:02:36,170 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901220236158802
2018-09-01 22:02:36,181 [salt.minion      :1431][INFO    ][5722] Starting a new job with PID 5722
2018-09-01 22:02:36,199 [salt.minion      :1708][INFO    ][5722] Returning information for job: 20180901220236158802
2018-09-01 22:02:40,593 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4640] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:02:40,613 [salt.minion      :1708][INFO    ][4640] Returning information for job: 20180901220210614012
2018-09-01 22:03:51,176 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.apply with jid 20180901220351166378
2018-09-01 22:03:51,185 [salt.minion      :1431][INFO    ][5939] Starting a new job with PID 5939
2018-09-01 22:03:54,738 [salt.state       :905 ][INFO    ][5939] Loading fresh modules for state activity
2018-09-01 22:03:54,809 [salt.fileclient  :1215][INFO    ][5939] Fetching file from saltenv 'base', ** done ** 'salt/init.sls'
2018-09-01 22:03:55,051 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,116 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,246 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,258 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,271 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,282 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,294 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,305 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,315 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l libvirt-bin | grep libvirt-bin | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,326 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,338 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l qemu-kvm | grep qemu-kvm | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,350 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,361 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,426 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,437 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,448 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,457 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,468 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,479 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,490 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l libvirt-bin | grep libvirt-bin | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,501 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,512 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l qemu-kvm | grep qemu-kvm | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,522 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,533 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,617 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,628 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,639 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,651 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,662 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,672 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,682 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,693 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,703 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,714 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,728 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,739 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,751 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,762 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,772 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,782 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,792 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,803 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,813 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,823 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:55,857 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'salt-minion --version' in directory '/root'
2018-09-01 22:03:56,027 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'salt-minion --version' in directory '/root'
2018-09-01 22:03:56,210 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901220356200399
2018-09-01 22:03:56,219 [salt.minion      :1431][INFO    ][6138] Starting a new job with PID 6138
2018-09-01 22:03:56,229 [salt.minion      :1708][INFO    ][6138] Returning information for job: 20180901220356200399
2018-09-01 22:03:56,581 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,594 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,738 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,751 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,763 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,775 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,786 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,798 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,810 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l libvirt-bin | grep libvirt-bin | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,821 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,832 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l qemu-kvm | grep qemu-kvm | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,842 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,852 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,912 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,924 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,936 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,946 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,957 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,968 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,979 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l libvirt-bin | grep libvirt-bin | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:56,991 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,003 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l qemu-kvm | grep qemu-kvm | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,015 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,026 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,106 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,119 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,130 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,141 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,152 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,163 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,174 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,186 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,196 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,207 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,222 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,233 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,244 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,255 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,267 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,278 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,289 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,300 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,310 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,322 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2018-09-01 22:03:57,354 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'salt-minion --version' in directory '/root'
2018-09-01 22:03:57,526 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'salt-minion --version' in directory '/root'
2018-09-01 22:03:58,392 [salt.state       :1770][INFO    ][5939] Running state [salt-minion] at time 22:03:58.392192
2018-09-01 22:03:58,392 [salt.state       :1803][INFO    ][5939] Executing state pkg.installed for [salt-minion]
2018-09-01 22:03:58,393 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:03:58,654 [salt.state       :290 ][INFO    ][5939] All specified packages are already installed
2018-09-01 22:03:58,654 [salt.state       :1941][INFO    ][5939] Completed state [salt-minion] at time 22:03:58.654381 duration_in_ms=262.19
2018-09-01 22:03:58,654 [salt.state       :1770][INFO    ][5939] Running state [salt_minion_dependency_packages] at time 22:03:58.654579
2018-09-01 22:03:58,654 [salt.state       :1803][INFO    ][5939] Executing state pkg.installed for [salt_minion_dependency_packages]
2018-09-01 22:03:58,659 [salt.state       :290 ][INFO    ][5939] All specified packages are already installed
2018-09-01 22:03:58,659 [salt.state       :1941][INFO    ][5939] Completed state [salt_minion_dependency_packages] at time 22:03:58.659308 duration_in_ms=4.728
2018-09-01 22:03:58,661 [salt.state       :1770][INFO    ][5939] Running state [/etc/salt/minion.d/minion.conf] at time 22:03:58.661229
2018-09-01 22:03:58,661 [salt.state       :1803][INFO    ][5939] Executing state file.managed for [/etc/salt/minion.d/minion.conf]
2018-09-01 22:03:58,797 [salt.state       :290 ][INFO    ][5939] File /etc/salt/minion.d/minion.conf is in the correct state
2018-09-01 22:03:58,797 [salt.state       :1941][INFO    ][5939] Completed state [/etc/salt/minion.d/minion.conf] at time 22:03:58.797577 duration_in_ms=136.347
2018-09-01 22:03:58,797 [salt.state       :1770][INFO    ][5939] Running state [python-netaddr] at time 22:03:58.797725
2018-09-01 22:03:58,797 [salt.state       :1803][INFO    ][5939] Executing state pkg.installed for [python-netaddr]
2018-09-01 22:03:58,802 [salt.state       :290 ][INFO    ][5939] All specified packages are already installed
2018-09-01 22:03:58,802 [salt.state       :1941][INFO    ][5939] Completed state [python-netaddr] at time 22:03:58.802200 duration_in_ms=4.476
2018-09-01 22:03:58,802 [salt.state       :1770][INFO    ][5939] Running state [salt-minion] at time 22:03:58.802906
2018-09-01 22:03:58,803 [salt.state       :1803][INFO    ][5939] Executing state service.running for [salt-minion]
2018-09-01 22:03:58,803 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2018-09-01 22:03:58,818 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command ['systemctl', 'is-active', 'salt-minion.service'] in directory '/root'
2018-09-01 22:03:58,824 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command ['systemctl', 'is-enabled', 'salt-minion.service'] in directory '/root'
2018-09-01 22:03:58,831 [salt.state       :290 ][INFO    ][5939] The service salt-minion is already running
2018-09-01 22:03:58,831 [salt.state       :1941][INFO    ][5939] Completed state [salt-minion] at time 22:03:58.831842 duration_in_ms=28.936
2018-09-01 22:03:58,833 [salt.state       :1770][INFO    ][5939] Running state [/etc/salt/grains.d] at time 22:03:58.833582
2018-09-01 22:03:58,833 [salt.state       :1803][INFO    ][5939] Executing state file.directory for [/etc/salt/grains.d]
2018-09-01 22:03:58,834 [salt.state       :290 ][INFO    ][5939] Directory /etc/salt/grains.d is in the correct state
Directory /etc/salt/grains.d updated
2018-09-01 22:03:58,834 [salt.state       :1941][INFO    ][5939] Completed state [/etc/salt/grains.d] at time 22:03:58.834629 duration_in_ms=1.047
2018-09-01 22:03:58,835 [salt.state       :1770][INFO    ][5939] Running state [/etc/salt/grains] at time 22:03:58.835040
2018-09-01 22:03:58,835 [salt.state       :1803][INFO    ][5939] Executing state file.managed for [/etc/salt/grains]
2018-09-01 22:03:58,835 [salt.state       :290 ][INFO    ][5939] File /etc/salt/grains exists with proper permissions. No changes made.
2018-09-01 22:03:58,835 [salt.state       :1941][INFO    ][5939] Completed state [/etc/salt/grains] at time 22:03:58.835828 duration_in_ms=0.788
2018-09-01 22:03:58,836 [salt.state       :1770][INFO    ][5939] Running state [/etc/salt/grains.d/placeholder] at time 22:03:58.836132
2018-09-01 22:03:58,836 [salt.state       :1803][INFO    ][5939] Executing state file.managed for [/etc/salt/grains.d/placeholder]
2018-09-01 22:03:58,841 [salt.state       :290 ][INFO    ][5939] File /etc/salt/grains.d/placeholder exists with proper permissions. No changes made.
2018-09-01 22:03:58,841 [salt.state       :1941][INFO    ][5939] Completed state [/etc/salt/grains.d/placeholder] at time 22:03:58.841429 duration_in_ms=5.297
2018-09-01 22:03:58,841 [salt.state       :1770][INFO    ][5939] Running state [/etc/salt/grains.d/sphinx] at time 22:03:58.841712
2018-09-01 22:03:58,841 [salt.state       :1803][INFO    ][5939] Executing state file.managed for [/etc/salt/grains.d/sphinx]
2018-09-01 22:03:58,855 [salt.state       :290 ][INFO    ][5939] File changed:
--- 
+++ 
@@ -41,13 +41,13 @@
 
                 * lvm2: 2.02.133-1ubuntu10
 
-                * sysfsutils: dpkg-query: no packages found matching sysfsutils
+                * sysfsutils: 2.1.0+repack-4
 
                 * sg3-utils: dpkg-query: no packages found matching sg3-utils
 
                 * python-cinder: dpkg-query: no packages found matching python-cinder
 
-                * python-mysqldb: dpkg-query: no packages found matching python-mysqldb
+                * python-mysqldb: <none>
 
                 * p7zip: dpkg-query: no packages found matching p7zip
 
@@ -86,8 +86,11 @@
             ip:
               name: IP Addresses
               value:
+              - 10.1.0.6
+              - 10.167.4.53
               - 127.0.0.1
-              - 192.168.11.36
+              - 172.30.10.111
+              - 192.168.11.35
         system:
           name: System
           param:
@@ -135,7 +138,7 @@
 
                 * pm-utils: dpkg-query: no packages found matching pm-utils
 
-                * sysfsutils: dpkg-query: no packages found matching sysfsutils
+                * sysfsutils: 2.1.0+repack-4
 
                 * sg3-utils: dpkg-query: no packages found matching sg3-utils
 

2018-09-01 22:03:58,856 [salt.state       :1941][INFO    ][5939] Completed state [/etc/salt/grains.d/sphinx] at time 22:03:58.856136 duration_in_ms=14.424
2018-09-01 22:03:58,857 [salt.state       :1770][INFO    ][5939] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 22:03:58.857353
2018-09-01 22:03:58,857 [salt.state       :1803][INFO    ][5939] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"]
2018-09-01 22:03:58,857 [salt.state       :290 ][INFO    ][5939] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"
2018-09-01 22:03:58,858 [salt.state       :1941][INFO    ][5939] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 22:03:58.857967 duration_in_ms=0.614
2018-09-01 22:03:58,858 [salt.state       :1770][INFO    ][5939] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 22:03:58.858123
2018-09-01 22:03:58,858 [salt.state       :1803][INFO    ][5939] Executing state cmd.mod_watch for [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"]
2018-09-01 22:03:58,858 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"' in directory '/root'
2018-09-01 22:03:58,948 [salt.state       :290 ][INFO    ][5939] {'pid': 6344, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-09-01 22:03:58,948 [salt.state       :1941][INFO    ][5939] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 22:03:58.948802 duration_in_ms=90.679
2018-09-01 22:03:58,949 [salt.state       :1770][INFO    ][5939] Running state [/etc/salt/grains.d/dns_records] at time 22:03:58.949138
2018-09-01 22:03:58,949 [salt.state       :1803][INFO    ][5939] Executing state file.managed for [/etc/salt/grains.d/dns_records]
2018-09-01 22:03:58,956 [salt.state       :290 ][INFO    ][5939] File /etc/salt/grains.d/dns_records is in the correct state
2018-09-01 22:03:58,956 [salt.state       :1941][INFO    ][5939] Completed state [/etc/salt/grains.d/dns_records] at time 22:03:58.956449 duration_in_ms=7.311
2018-09-01 22:03:58,956 [salt.state       :1770][INFO    ][5939] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"] at time 22:03:58.956950
2018-09-01 22:03:58,957 [salt.state       :1803][INFO    ][5939] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"]
2018-09-01 22:03:58,957 [salt.state       :290 ][INFO    ][5939] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"
2018-09-01 22:03:58,957 [salt.state       :1941][INFO    ][5939] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"] at time 22:03:58.957426 duration_in_ms=0.476
2018-09-01 22:03:58,957 [salt.state       :1770][INFO    ][5939] Running state [/etc/salt/grains.d/salt] at time 22:03:58.957652
2018-09-01 22:03:58,957 [salt.state       :1803][INFO    ][5939] Executing state file.managed for [/etc/salt/grains.d/salt]
2018-09-01 22:03:58,962 [salt.state       :290 ][INFO    ][5939] File /etc/salt/grains.d/salt is in the correct state
2018-09-01 22:03:58,962 [salt.state       :1941][INFO    ][5939] Completed state [/etc/salt/grains.d/salt] at time 22:03:58.962413 duration_in_ms=4.761
2018-09-01 22:03:58,962 [salt.state       :1770][INFO    ][5939] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"] at time 22:03:58.962851
2018-09-01 22:03:58,963 [salt.state       :1803][INFO    ][5939] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"]
2018-09-01 22:03:58,963 [salt.state       :290 ][INFO    ][5939] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"
2018-09-01 22:03:58,963 [salt.state       :1941][INFO    ][5939] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"] at time 22:03:58.963312 duration_in_ms=0.462
2018-09-01 22:03:58,964 [salt.state       :1770][INFO    ][5939] Running state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 22:03:58.964315
2018-09-01 22:03:58,964 [salt.state       :1803][INFO    ][5939] Executing state cmd.wait for [cat /etc/salt/grains.d/* > /etc/salt/grains]
2018-09-01 22:03:58,964 [salt.state       :290 ][INFO    ][5939] No changes made for cat /etc/salt/grains.d/* > /etc/salt/grains
2018-09-01 22:03:58,964 [salt.state       :1941][INFO    ][5939] Completed state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 22:03:58.964792 duration_in_ms=0.478
2018-09-01 22:03:58,964 [salt.state       :1770][INFO    ][5939] Running state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 22:03:58.964909
2018-09-01 22:03:58,965 [salt.state       :1803][INFO    ][5939] Executing state cmd.mod_watch for [cat /etc/salt/grains.d/* > /etc/salt/grains]
2018-09-01 22:03:58,966 [salt.loaded.int.module.cmdmod:395 ][INFO    ][5939] Executing command 'cat /etc/salt/grains.d/* > /etc/salt/grains' in directory '/root'
2018-09-01 22:03:58,970 [salt.state       :290 ][INFO    ][5939] {'pid': 6346, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-09-01 22:03:58,970 [salt.state       :1941][INFO    ][5939] Completed state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 22:03:58.970652 duration_in_ms=5.744
2018-09-01 22:03:58,971 [salt.state       :1770][INFO    ][5939] Running state [mine.update] at time 22:03:58.971097
2018-09-01 22:03:58,971 [salt.state       :1803][INFO    ][5939] Executing state module.wait for [mine.update]
2018-09-01 22:03:58,971 [salt.state       :290 ][INFO    ][5939] No changes made for mine.update
2018-09-01 22:03:58,971 [salt.state       :1941][INFO    ][5939] Completed state [mine.update] at time 22:03:58.971575 duration_in_ms=0.478
2018-09-01 22:03:58,971 [salt.state       :1770][INFO    ][5939] Running state [mine.update] at time 22:03:58.971691
2018-09-01 22:03:58,971 [salt.state       :1803][INFO    ][5939] Executing state module.mod_watch for [mine.update]
2018-09-01 22:03:58,972 [salt.utils.decorators:613 ][WARNING ][5939] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 22:03:59,488 [salt.state       :290 ][INFO    ][5939] {'ret': True}
2018-09-01 22:03:59,488 [salt.state       :1941][INFO    ][5939] Completed state [mine.update] at time 22:03:59.488517 duration_in_ms=516.826
2018-09-01 22:03:59,488 [salt.state       :1770][INFO    ][5939] Running state [ca-certificates] at time 22:03:59.488693
2018-09-01 22:03:59,488 [salt.state       :1803][INFO    ][5939] Executing state pkg.installed for [ca-certificates]
2018-09-01 22:03:59,493 [salt.state       :290 ][INFO    ][5939] All specified packages are already installed
2018-09-01 22:03:59,493 [salt.state       :1941][INFO    ][5939] Completed state [ca-certificates] at time 22:03:59.493886 duration_in_ms=5.193
2018-09-01 22:03:59,494 [salt.state       :1770][INFO    ][5939] Running state [update-ca-certificates] at time 22:03:59.494287
2018-09-01 22:03:59,494 [salt.state       :1803][INFO    ][5939] Executing state cmd.wait for [update-ca-certificates]
2018-09-01 22:03:59,494 [salt.state       :290 ][INFO    ][5939] No changes made for update-ca-certificates
2018-09-01 22:03:59,494 [salt.state       :1941][INFO    ][5939] Completed state [update-ca-certificates] at time 22:03:59.494746 duration_in_ms=0.459
2018-09-01 22:03:59,496 [salt.minion      :1708][INFO    ][5939] Returning information for job: 20180901220351166378
2018-09-01 22:08:12,688 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.sync_all with jid 20180901220812681081
2018-09-01 22:08:12,697 [salt.minion      :1431][INFO    ][6358] Starting a new job with PID 6358
2018-09-01 22:08:15,995 [salt.state       :905 ][INFO    ][6358] Loading fresh modules for state activity
2018-09-01 22:08:17,105 [salt.utils.extmods:71  ][INFO    ][6358] Creating module dir '/var/cache/salt/minion/extmods/clouds'
2018-09-01 22:08:17,108 [salt.utils.extmods:82  ][INFO    ][6358] Syncing clouds for environment 'base'
2018-09-01 22:08:17,108 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_clouds, for base)
2018-09-01 22:08:17,109 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_clouds/' for environment 'base'
2018-09-01 22:08:17,149 [salt.utils.extmods:71  ][INFO    ][6358] Creating module dir '/var/cache/salt/minion/extmods/beacons'
2018-09-01 22:08:17,152 [salt.utils.extmods:82  ][INFO    ][6358] Syncing beacons for environment 'base'
2018-09-01 22:08:17,152 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_beacons, for base)
2018-09-01 22:08:17,153 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_beacons/' for environment 'base'
2018-09-01 22:08:17,193 [salt.utils.extmods:82  ][INFO    ][6358] Syncing modules for environment 'base'
2018-09-01 22:08:17,193 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_modules, for base)
2018-09-01 22:08:17,193 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_modules/' for environment 'base'
2018-09-01 22:08:17,785 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901220817782294
2018-09-01 22:08:17,794 [salt.minion      :1431][INFO    ][6369] Starting a new job with PID 6369
2018-09-01 22:08:17,805 [salt.minion      :1708][INFO    ][6369] Returning information for job: 20180901220817782294
2018-09-01 22:08:17,953 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/__init__.py' to '/var/cache/salt/minion/extmods/modules/__init__.py'
2018-09-01 22:08:17,953 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/__init__.py'
2018-09-01 22:08:17,954 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/archive_policy.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/archive_policy.py'
2018-09-01 22:08:17,955 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/common.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/common.py'
2018-09-01 22:08:17,955 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/architect.py' to '/var/cache/salt/minion/extmods/modules/architect.py'
2018-09-01 22:08:17,956 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/artifactory.py' to '/var/cache/salt/minion/extmods/modules/artifactory.py'
2018-09-01 22:08:17,956 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/auditd.py' to '/var/cache/salt/minion/extmods/modules/auditd.py'
2018-09-01 22:08:17,957 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/avinetworks.py' to '/var/cache/salt/minion/extmods/modules/avinetworks.py'
2018-09-01 22:08:17,957 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/__init__.py'
2018-09-01 22:08:17,958 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/acl.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/acl.py'
2018-09-01 22:08:17,958 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/common.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/common.py'
2018-09-01 22:08:17,963 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/secrets.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/secrets.py'
2018-09-01 22:08:17,963 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ceph_ng.py' to '/var/cache/salt/minion/extmods/modules/ceph_ng.py'
2018-09-01 22:08:17,964 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/cert_formula_helper.py' to '/var/cache/salt/minion/extmods/modules/cert_formula_helper.py'
2018-09-01 22:08:17,964 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/cfgdrive.py' to '/var/cache/salt/minion/extmods/modules/cfgdrive.py'
2018-09-01 22:08:17,965 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/chartmuseum.py' to '/var/cache/salt/minion/extmods/modules/chartmuseum.py'
2018-09-01 22:08:17,965 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/cinderng.py' to '/var/cache/salt/minion/extmods/modules/cinderng.py'
2018-09-01 22:08:17,965 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/__init__.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/__init__.py'
2018-09-01 22:08:17,966 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/common.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/common.py'
2018-09-01 22:08:17,966 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volume.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volume.py'
2018-09-01 22:08:17,966 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/configdrive.py' to '/var/cache/salt/minion/extmods/modules/configdrive.py'
2018-09-01 22:08:17,967 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/contrail.py' to '/var/cache/salt/minion/extmods/modules/contrail.py'
2018-09-01 22:08:17,968 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/creds.py' to '/var/cache/salt/minion/extmods/modules/creds.py'
2018-09-01 22:08:17,973 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/devops_utils.py' to '/var/cache/salt/minion/extmods/modules/devops_utils.py'
2018-09-01 22:08:17,973 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/dockerng_service.py' to '/var/cache/salt/minion/extmods/modules/dockerng_service.py'
2018-09-01 22:08:17,974 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/encode_json.py' to '/var/cache/salt/minion/extmods/modules/encode_json.py'
2018-09-01 22:08:17,974 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/gerrit.py' to '/var/cache/salt/minion/extmods/modules/gerrit.py'
2018-09-01 22:08:17,975 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/gitlab.py' to '/var/cache/salt/minion/extmods/modules/gitlab.py'
2018-09-01 22:08:17,975 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/glanceng.py' to '/var/cache/salt/minion/extmods/modules/glanceng.py'
2018-09-01 22:08:17,976 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/__init__.py' to '/var/cache/salt/minion/extmods/modules/glancev2/__init__.py'
2018-09-01 22:08:17,976 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/common.py' to '/var/cache/salt/minion/extmods/modules/glancev2/common.py'
2018-09-01 22:08:17,977 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/image.py' to '/var/cache/salt/minion/extmods/modules/glancev2/image.py'
2018-09-01 22:08:17,977 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/task.py' to '/var/cache/salt/minion/extmods/modules/glancev2/task.py'
2018-09-01 22:08:17,978 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/__init__.py'
2018-09-01 22:08:17,978 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/archive_policy.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/archive_policy.py'
2018-09-01 22:08:17,978 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/common.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/common.py'
2018-09-01 22:08:17,979 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/heat.py' to '/var/cache/salt/minion/extmods/modules/heat.py'
2018-09-01 22:08:17,979 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/heatv1/__init__.py'
2018-09-01 22:08:17,980 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/common.py' to '/var/cache/salt/minion/extmods/modules/heatv1/common.py'
2018-09-01 22:08:17,980 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/stack.py' to '/var/cache/salt/minion/extmods/modules/heatv1/stack.py'
2018-09-01 22:08:17,981 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/heka_alarming.py' to '/var/cache/salt/minion/extmods/modules/heka_alarming.py'
2018-09-01 22:08:17,981 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/helm.py' to '/var/cache/salt/minion/extmods/modules/helm.py'
2018-09-01 22:08:17,982 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ironicng.py' to '/var/cache/salt/minion/extmods/modules/ironicng.py'
2018-09-01 22:08:17,982 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/__init__.py'
2018-09-01 22:08:17,983 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/chassis.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/chassis.py'
2018-09-01 22:08:17,983 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/common.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/common.py'
2018-09-01 22:08:17,983 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/drivers.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/drivers.py'
2018-09-01 22:08:17,984 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/nodes.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/nodes.py'
2018-09-01 22:08:17,984 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/ports.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/ports.py'
2018-09-01 22:08:17,985 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/volumes.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/volumes.py'
2018-09-01 22:08:17,985 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/jenkins_common.py' to '/var/cache/salt/minion/extmods/modules/jenkins_common.py'
2018-09-01 22:08:17,986 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystone_policy.py' to '/var/cache/salt/minion/extmods/modules/keystone_policy.py'
2018-09-01 22:08:17,986 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystoneng.py' to '/var/cache/salt/minion/extmods/modules/keystoneng.py'
2018-09-01 22:08:17,993 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/__init__.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/__init__.py'
2018-09-01 22:08:17,994 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/arg_converter.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/arg_converter.py'
2018-09-01 22:08:17,994 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/common.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/common.py'
2018-09-01 22:08:17,994 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/domains.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/domains.py'
2018-09-01 22:08:17,995 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/endpoints.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/endpoints.py'
2018-09-01 22:08:17,995 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/groups.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/groups.py'
2018-09-01 22:08:17,996 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/lists.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/lists.py'
2018-09-01 22:08:17,996 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/projects.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/projects.py'
2018-09-01 22:08:17,996 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/regions.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/regions.py'
2018-09-01 22:08:17,997 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/roles.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/roles.py'
2018-09-01 22:08:18,000 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/services.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/services.py'
2018-09-01 22:08:18,001 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/users.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/users.py'
2018-09-01 22:08:18,001 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/linux_hosts.py' to '/var/cache/salt/minion/extmods/modules/linux_hosts.py'
2018-09-01 22:08:18,001 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/linux_netlink.py' to '/var/cache/salt/minion/extmods/modules/linux_netlink.py'
2018-09-01 22:08:18,002 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/maas.py' to '/var/cache/salt/minion/extmods/modules/maas.py'
2018-09-01 22:08:18,002 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/maas_client.py' to '/var/cache/salt/minion/extmods/modules/maas_client.py'
2018-09-01 22:08:18,003 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/maasng.py' to '/var/cache/salt/minion/extmods/modules/maasng.py'
2018-09-01 22:08:18,003 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/manilang/__init__.py' to '/var/cache/salt/minion/extmods/modules/manilang/__init__.py'
2018-09-01 22:08:18,004 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/manilang/common.py' to '/var/cache/salt/minion/extmods/modules/manilang/common.py'
2018-09-01 22:08:18,004 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/manilang/share_types.py' to '/var/cache/salt/minion/extmods/modules/manilang/share_types.py'
2018-09-01 22:08:18,004 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/manilang/shares.py' to '/var/cache/salt/minion/extmods/modules/manilang/shares.py'
2018-09-01 22:08:18,005 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/modelschema.py' to '/var/cache/salt/minion/extmods/modules/modelschema.py'
2018-09-01 22:08:18,005 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/modelutils.py' to '/var/cache/salt/minion/extmods/modules/modelutils.py'
2018-09-01 22:08:18,005 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/multipart.py' to '/var/cache/salt/minion/extmods/modules/multipart.py'
2018-09-01 22:08:18,006 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/nagios_alarming.py' to '/var/cache/salt/minion/extmods/modules/nagios_alarming.py'
2018-09-01 22:08:18,006 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/netutils.py' to '/var/cache/salt/minion/extmods/modules/netutils.py'
2018-09-01 22:08:18,006 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronng.py' to '/var/cache/salt/minion/extmods/modules/neutronng.py'
2018-09-01 22:08:18,007 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/__init__.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/__init__.py'
2018-09-01 22:08:18,007 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/agents.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/agents.py'
2018-09-01 22:08:18,008 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/arg_converter.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/arg_converter.py'
2018-09-01 22:08:18,008 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/auto_alloc.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/auto_alloc.py'
2018-09-01 22:08:18,008 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/common.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/common.py'
2018-09-01 22:08:18,009 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/lists.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/lists.py'
2018-09-01 22:08:18,009 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/networks.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/networks.py'
2018-09-01 22:08:18,010 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/routers.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/routers.py'
2018-09-01 22:08:18,010 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/subnetpools.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/subnetpools.py'
2018-09-01 22:08:18,010 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/subnets.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/subnets.py'
2018-09-01 22:08:18,011 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/novang.py' to '/var/cache/salt/minion/extmods/modules/novang.py'
2018-09-01 22:08:18,011 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/novav21/__init__.py' to '/var/cache/salt/minion/extmods/modules/novav21/__init__.py'
2018-09-01 22:08:18,012 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/novav21/aggregates.py' to '/var/cache/salt/minion/extmods/modules/novav21/aggregates.py'
2018-09-01 22:08:18,012 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/novav21/common.py' to '/var/cache/salt/minion/extmods/modules/novav21/common.py'
2018-09-01 22:08:18,013 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/novav21/flavors.py' to '/var/cache/salt/minion/extmods/modules/novav21/flavors.py'
2018-09-01 22:08:18,013 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/novav21/keypairs.py' to '/var/cache/salt/minion/extmods/modules/novav21/keypairs.py'
2018-09-01 22:08:18,013 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/novav21/quotas.py' to '/var/cache/salt/minion/extmods/modules/novav21/quotas.py'
2018-09-01 22:08:18,014 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/novav21/servers.py' to '/var/cache/salt/minion/extmods/modules/novav21/servers.py'
2018-09-01 22:08:18,015 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/reclass.py' to '/var/cache/salt/minion/extmods/modules/reclass.py'
2018-09-01 22:08:18,015 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/rsyslog_util.py' to '/var/cache/salt/minion/extmods/modules/rsyslog_util.py'
2018-09-01 22:08:18,015 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/rundeck.py' to '/var/cache/salt/minion/extmods/modules/rundeck.py'
2018-09-01 22:08:18,016 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/saltkey.py' to '/var/cache/salt/minion/extmods/modules/saltkey.py'
2018-09-01 22:08:18,016 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/saltresource.py' to '/var/cache/salt/minion/extmods/modules/saltresource.py'
2018-09-01 22:08:18,017 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/seedng.py' to '/var/cache/salt/minion/extmods/modules/seedng.py'
2018-09-01 22:08:18,018 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/testing/__init__.py' to '/var/cache/salt/minion/extmods/modules/testing/__init__.py'
2018-09-01 22:08:18,018 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/testing/credentials.py' to '/var/cache/salt/minion/extmods/modules/testing/credentials.py'
2018-09-01 22:08:18,018 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/testing/django.py' to '/var/cache/salt/minion/extmods/modules/testing/django.py'
2018-09-01 22:08:18,019 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/testing/django_client_proxy.py' to '/var/cache/salt/minion/extmods/modules/testing/django_client_proxy.py'
2018-09-01 22:08:18,019 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/ufw.py' to '/var/cache/salt/minion/extmods/modules/ufw.py'
2018-09-01 22:08:18,019 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/utils.py' to '/var/cache/salt/minion/extmods/modules/utils.py'
2018-09-01 22:08:18,020 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_modules/virtng.py' to '/var/cache/salt/minion/extmods/modules/virtng.py'
2018-09-01 22:08:18,027 [salt.utils.extmods:82  ][INFO    ][6358] Syncing states for environment 'base'
2018-09-01 22:08:18,027 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_states, for base)
2018-09-01 22:08:18,027 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_states/' for environment 'base'
2018-09-01 22:08:18,335 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/_states/gnocchiv1.py' to '/var/cache/salt/minion/extmods/states/_states/gnocchiv1.py'
2018-09-01 22:08:18,335 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/artifactory.py' to '/var/cache/salt/minion/extmods/states/artifactory.py'
2018-09-01 22:08:18,335 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/avinetworks.py' to '/var/cache/salt/minion/extmods/states/avinetworks.py'
2018-09-01 22:08:18,336 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/barbicanv1.py' to '/var/cache/salt/minion/extmods/states/barbicanv1.py'
2018-09-01 22:08:18,336 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/chartmuseum.py' to '/var/cache/salt/minion/extmods/states/chartmuseum.py'
2018-09-01 22:08:18,336 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/cinderng.py' to '/var/cache/salt/minion/extmods/states/cinderng.py'
2018-09-01 22:08:18,337 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/cinderv3.py' to '/var/cache/salt/minion/extmods/states/cinderv3.py'
2018-09-01 22:08:18,337 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/ckanext.py' to '/var/cache/salt/minion/extmods/states/ckanext.py'
2018-09-01 22:08:18,337 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/contrail.py' to '/var/cache/salt/minion/extmods/states/contrail.py'
2018-09-01 22:08:18,338 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/debmirror.py' to '/var/cache/salt/minion/extmods/states/debmirror.py'
2018-09-01 22:08:18,338 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/dockerng_service.py' to '/var/cache/salt/minion/extmods/states/dockerng_service.py'
2018-09-01 22:08:18,338 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/gerrit.py' to '/var/cache/salt/minion/extmods/states/gerrit.py'
2018-09-01 22:08:18,338 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/gitlab.py' to '/var/cache/salt/minion/extmods/states/gitlab.py'
2018-09-01 22:08:18,339 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/glanceng.py' to '/var/cache/salt/minion/extmods/states/glanceng.py'
2018-09-01 22:08:18,339 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/glancev2.py' to '/var/cache/salt/minion/extmods/states/glancev2.py'
2018-09-01 22:08:18,339 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/gnocchiv1.py' to '/var/cache/salt/minion/extmods/states/gnocchiv1.py'
2018-09-01 22:08:18,340 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/grafana3_dashboard.py' to '/var/cache/salt/minion/extmods/states/grafana3_dashboard.py'
2018-09-01 22:08:18,340 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/grafana3_datasource.py' to '/var/cache/salt/minion/extmods/states/grafana3_datasource.py'
2018-09-01 22:08:18,340 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/heat.py' to '/var/cache/salt/minion/extmods/states/heat.py'
2018-09-01 22:08:18,340 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/heatv1.py' to '/var/cache/salt/minion/extmods/states/heatv1.py'
2018-09-01 22:08:18,341 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/helm_release.py' to '/var/cache/salt/minion/extmods/states/helm_release.py'
2018-09-01 22:08:18,341 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/helm_repos.py' to '/var/cache/salt/minion/extmods/states/helm_repos.py'
2018-09-01 22:08:18,341 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/httpng.py' to '/var/cache/salt/minion/extmods/states/httpng.py'
2018-09-01 22:08:18,341 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/ironicng.py' to '/var/cache/salt/minion/extmods/states/ironicng.py'
2018-09-01 22:08:18,342 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/ironicv1.py' to '/var/cache/salt/minion/extmods/states/ironicv1.py'
2018-09-01 22:08:18,342 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_approval.py' to '/var/cache/salt/minion/extmods/states/jenkins_approval.py'
2018-09-01 22:08:18,342 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_artifactory.py' to '/var/cache/salt/minion/extmods/states/jenkins_artifactory.py'
2018-09-01 22:08:18,343 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_credential.py' to '/var/cache/salt/minion/extmods/states/jenkins_credential.py'
2018-09-01 22:08:18,343 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_gerrit.py' to '/var/cache/salt/minion/extmods/states/jenkins_gerrit.py'
2018-09-01 22:08:18,343 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_globalenvprop.py' to '/var/cache/salt/minion/extmods/states/jenkins_globalenvprop.py'
2018-09-01 22:08:18,343 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_jira.py' to '/var/cache/salt/minion/extmods/states/jenkins_jira.py'
2018-09-01 22:08:18,344 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_job.py' to '/var/cache/salt/minion/extmods/states/jenkins_job.py'
2018-09-01 22:08:18,344 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_lib.py' to '/var/cache/salt/minion/extmods/states/jenkins_lib.py'
2018-09-01 22:08:18,344 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_node.py' to '/var/cache/salt/minion/extmods/states/jenkins_node.py'
2018-09-01 22:08:18,344 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_plugin.py' to '/var/cache/salt/minion/extmods/states/jenkins_plugin.py'
2018-09-01 22:08:18,345 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_security.py' to '/var/cache/salt/minion/extmods/states/jenkins_security.py'
2018-09-01 22:08:18,345 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_slack.py' to '/var/cache/salt/minion/extmods/states/jenkins_slack.py'
2018-09-01 22:08:18,345 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_smtp.py' to '/var/cache/salt/minion/extmods/states/jenkins_smtp.py'
2018-09-01 22:08:18,345 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_theme.py' to '/var/cache/salt/minion/extmods/states/jenkins_theme.py'
2018-09-01 22:08:18,346 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_throttle_category.py' to '/var/cache/salt/minion/extmods/states/jenkins_throttle_category.py'
2018-09-01 22:08:18,346 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_user.py' to '/var/cache/salt/minion/extmods/states/jenkins_user.py'
2018-09-01 22:08:18,346 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/jenkins_view.py' to '/var/cache/salt/minion/extmods/states/jenkins_view.py'
2018-09-01 22:08:18,346 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/keystone_policy.py' to '/var/cache/salt/minion/extmods/states/keystone_policy.py'
2018-09-01 22:08:18,347 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/keystoneng.py' to '/var/cache/salt/minion/extmods/states/keystoneng.py'
2018-09-01 22:08:18,347 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/keystonev3.py' to '/var/cache/salt/minion/extmods/states/keystonev3.py'
2018-09-01 22:08:18,347 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/kibana_object.py' to '/var/cache/salt/minion/extmods/states/kibana_object.py'
2018-09-01 22:08:18,348 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/maas_cluster.py' to '/var/cache/salt/minion/extmods/states/maas_cluster.py'
2018-09-01 22:08:18,348 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/maasng.py' to '/var/cache/salt/minion/extmods/states/maasng.py'
2018-09-01 22:08:18,348 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/manilang.py' to '/var/cache/salt/minion/extmods/states/manilang.py'
2018-09-01 22:08:18,348 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/neutronng.py' to '/var/cache/salt/minion/extmods/states/neutronng.py'
2018-09-01 22:08:18,349 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/neutronv2.py' to '/var/cache/salt/minion/extmods/states/neutronv2.py'
2018-09-01 22:08:18,349 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/novang.py' to '/var/cache/salt/minion/extmods/states/novang.py'
2018-09-01 22:08:18,349 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/novav21.py' to '/var/cache/salt/minion/extmods/states/novav21.py'
2018-09-01 22:08:18,350 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/powerdns_mysql.py' to '/var/cache/salt/minion/extmods/states/powerdns_mysql.py'
2018-09-01 22:08:18,350 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/reclass.py' to '/var/cache/salt/minion/extmods/states/reclass.py'
2018-09-01 22:08:18,350 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/rundeck_project.py' to '/var/cache/salt/minion/extmods/states/rundeck_project.py'
2018-09-01 22:08:18,351 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/rundeck_scm.py' to '/var/cache/salt/minion/extmods/states/rundeck_scm.py'
2018-09-01 22:08:18,351 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/rundeck_secret.py' to '/var/cache/salt/minion/extmods/states/rundeck_secret.py'
2018-09-01 22:08:18,351 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_states/ufw.py' to '/var/cache/salt/minion/extmods/states/ufw.py'
2018-09-01 22:08:18,354 [salt.utils.extmods:71  ][INFO    ][6358] Creating module dir '/var/cache/salt/minion/extmods/sdb'
2018-09-01 22:08:18,356 [salt.utils.extmods:82  ][INFO    ][6358] Syncing sdb for environment 'base'
2018-09-01 22:08:18,357 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_sdb, for base)
2018-09-01 22:08:18,357 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_sdb/' for environment 'base'
2018-09-01 22:08:18,400 [salt.utils.extmods:82  ][INFO    ][6358] Syncing grains for environment 'base'
2018-09-01 22:08:18,400 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_grains, for base)
2018-09-01 22:08:18,401 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_grains/' for environment 'base'
2018-09-01 22:08:18,517 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/ceilometer_policy.py' to '/var/cache/salt/minion/extmods/grains/ceilometer_policy.py'
2018-09-01 22:08:18,517 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/ceph.py' to '/var/cache/salt/minion/extmods/grains/ceph.py'
2018-09-01 22:08:18,518 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/cinder_policy.py' to '/var/cache/salt/minion/extmods/grains/cinder_policy.py'
2018-09-01 22:08:18,518 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/docker_swarm.py' to '/var/cache/salt/minion/extmods/grains/docker_swarm.py'
2018-09-01 22:08:18,518 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/glance_policy.py' to '/var/cache/salt/minion/extmods/grains/glance_policy.py'
2018-09-01 22:08:18,519 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/heat_policy.py' to '/var/cache/salt/minion/extmods/grains/heat_policy.py'
2018-09-01 22:08:18,519 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/jenkins_plugins.py' to '/var/cache/salt/minion/extmods/grains/jenkins_plugins.py'
2018-09-01 22:08:18,519 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/keystone_policy.py' to '/var/cache/salt/minion/extmods/grains/keystone_policy.py'
2018-09-01 22:08:18,520 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/kubernetes.py' to '/var/cache/salt/minion/extmods/grains/kubernetes.py'
2018-09-01 22:08:18,520 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/neutron_policy.py' to '/var/cache/salt/minion/extmods/grains/neutron_policy.py'
2018-09-01 22:08:18,520 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/nova_policy.py' to '/var/cache/salt/minion/extmods/grains/nova_policy.py'
2018-09-01 22:08:18,521 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_grains/ssh_fingerprints.py' to '/var/cache/salt/minion/extmods/grains/ssh_fingerprints.py'
2018-09-01 22:08:18,522 [salt.utils.extmods:71  ][INFO    ][6358] Creating module dir '/var/cache/salt/minion/extmods/renderers'
2018-09-01 22:08:18,525 [salt.utils.extmods:82  ][INFO    ][6358] Syncing renderers for environment 'base'
2018-09-01 22:08:18,525 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_renderers, for base)
2018-09-01 22:08:18,525 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_renderers/' for environment 'base'
2018-09-01 22:08:18,565 [salt.utils.extmods:82  ][INFO    ][6358] Syncing returners for environment 'base'
2018-09-01 22:08:18,565 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_returners, for base)
2018-09-01 22:08:18,566 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_returners/' for environment 'base'
2018-09-01 22:08:18,622 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_returners/postgres_graph_db.py' to '/var/cache/salt/minion/extmods/returners/postgres_graph_db.py'
2018-09-01 22:08:18,623 [salt.utils.extmods:71  ][INFO    ][6358] Creating module dir '/var/cache/salt/minion/extmods/output'
2018-09-01 22:08:18,626 [salt.utils.extmods:82  ][INFO    ][6358] Syncing output for environment 'base'
2018-09-01 22:08:18,626 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_output, for base)
2018-09-01 22:08:18,626 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_output/' for environment 'base'
2018-09-01 22:08:18,666 [salt.utils.extmods:71  ][INFO    ][6358] Creating module dir '/var/cache/salt/minion/extmods/utils'
2018-09-01 22:08:18,669 [salt.utils.extmods:82  ][INFO    ][6358] Syncing utils for environment 'base'
2018-09-01 22:08:18,669 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_utils, for base)
2018-09-01 22:08:18,669 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_utils/' for environment 'base'
2018-09-01 22:08:18,710 [salt.utils.extmods:71  ][INFO    ][6358] Creating module dir '/var/cache/salt/minion/extmods/log_handlers'
2018-09-01 22:08:18,713 [salt.utils.extmods:82  ][INFO    ][6358] Syncing log_handlers for environment 'base'
2018-09-01 22:08:18,713 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_log_handlers, for base)
2018-09-01 22:08:18,713 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_log_handlers/' for environment 'base'
2018-09-01 22:08:18,754 [salt.utils.extmods:71  ][INFO    ][6358] Creating module dir '/var/cache/salt/minion/extmods/proxy'
2018-09-01 22:08:18,757 [salt.utils.extmods:82  ][INFO    ][6358] Syncing proxy for environment 'base'
2018-09-01 22:08:18,757 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_proxy, for base)
2018-09-01 22:08:18,757 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_proxy/' for environment 'base'
2018-09-01 22:08:18,794 [salt.utils.extmods:82  ][INFO    ][6358] Syncing engines for environment 'base'
2018-09-01 22:08:18,795 [salt.utils.extmods:86  ][INFO    ][6358] Loading cache from salt://_engines, for base)
2018-09-01 22:08:18,795 [salt.fileclient  :229 ][INFO    ][6358] Caching directory '_engines/' for environment 'base'
2018-09-01 22:08:18,852 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_engines/architect.py' to '/var/cache/salt/minion/extmods/engines/architect.py'
2018-09-01 22:08:18,853 [salt.utils.extmods:111 ][INFO    ][6358] Copying '/var/cache/salt/minion/files/base/_engines/saltgraph.py' to '/var/cache/salt/minion/extmods/engines/saltgraph.py'
2018-09-01 22:08:18,856 [salt.minion      :1708][INFO    ][6358] Returning information for job: 20180901220812681081
2018-09-01 22:25:25,797 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.sls with jid 20180901222525788292
2018-09-01 22:25:25,806 [salt.minion      :1431][INFO    ][6486] Starting a new job with PID 6486
2018-09-01 22:25:26,300 [salt.state       :905 ][INFO    ][6486] Loading fresh modules for state activity
2018-09-01 22:25:26,980 [salt.fileclient  :1215][INFO    ][6486] Fetching file from saltenv 'base', ** done ** 'glusterfs/client.sls'
2018-09-01 22:25:27,002 [salt.fileclient  :1215][INFO    ][6486] Fetching file from saltenv 'base', ** done ** 'glusterfs/map.jinja'
2018-09-01 22:25:27,018 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:25:27,307 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command 'systemd-escape -p --suffix=mount /var/lib/nova/instances' in directory '/root'
2018-09-01 22:25:27,753 [salt.state       :1770][INFO    ][6486] Running state [glusterfs-client] at time 22:25:27.753323
2018-09-01 22:25:27,753 [salt.state       :1803][INFO    ][6486] Executing state pkg.installed for [glusterfs-client]
2018-09-01 22:25:27,767 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['apt-cache', '-q', 'policy', 'glusterfs-client'] in directory '/root'
2018-09-01 22:25:27,838 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-09-01 22:25:29,518 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 22:25:29,531 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'glusterfs-client'] in directory '/root'
2018-09-01 22:25:30,869 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901222530858198
2018-09-01 22:25:30,878 [salt.minion      :1431][INFO    ][6938] Starting a new job with PID 6938
2018-09-01 22:25:30,889 [salt.minion      :1708][INFO    ][6938] Returning information for job: 20180901222530858198
2018-09-01 22:25:36,040 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:25:36,062 [salt.state       :290 ][INFO    ][6486] Made the following changes:
'python-jwt' changed from 'absent' to '1.3.0-1ubuntu0.1'
'glusterfs-client' changed from 'absent' to '3.13.2-ubuntu1~xenial2'
'libaio1' changed from 'absent' to '0.3.110-2'
'attr' changed from 'absent' to '1:2.4.47-2'
'libpython2.7' changed from 'absent' to '2.7.12-1ubuntu0~16.04.3'
'glusterfs-common' changed from 'absent' to '3.13.2-ubuntu1~xenial2'
'librdmacm1' changed from 'absent' to '1.0.21-1'
'liburcu4' changed from 'absent' to '0.9.1-3'
'libibverbs1' changed from 'absent' to '1.1.8-1.1ubuntu2'
'python-prettytable' changed from 'absent' to '0.7.2-3'

2018-09-01 22:25:36,088 [salt.state       :905 ][INFO    ][6486] Loading fresh modules for state activity
2018-09-01 22:25:36,109 [salt.state       :1941][INFO    ][6486] Completed state [glusterfs-client] at time 22:25:36.109038 duration_in_ms=8355.715
2018-09-01 22:25:36,112 [salt.state       :1770][INFO    ][6486] Running state [attr] at time 22:25:36.112830
2018-09-01 22:25:36,113 [salt.state       :1803][INFO    ][6486] Executing state pkg.installed for [attr]
2018-09-01 22:25:36,490 [salt.state       :290 ][INFO    ][6486] All specified packages are already installed
2018-09-01 22:25:36,490 [salt.state       :1941][INFO    ][6486] Completed state [attr] at time 22:25:36.490326 duration_in_ms=377.496
2018-09-01 22:25:36,491 [salt.state       :1770][INFO    ][6486] Running state [/etc/systemd/system/var-lib-nova-instances.mount] at time 22:25:36.491875
2018-09-01 22:25:36,492 [salt.state       :1803][INFO    ][6486] Executing state file.managed for [/etc/systemd/system/var-lib-nova-instances.mount]
2018-09-01 22:25:36,510 [salt.fileclient  :1215][INFO    ][6486] Fetching file from saltenv 'base', ** done ** 'glusterfs/files/glusterfs-client.mount'
2018-09-01 22:25:36,518 [salt.state       :290 ][INFO    ][6486] File changed:
New file
2018-09-01 22:25:36,518 [salt.state       :1941][INFO    ][6486] Completed state [/etc/systemd/system/var-lib-nova-instances.mount] at time 22:25:36.518745 duration_in_ms=26.87
2018-09-01 22:25:36,519 [salt.state       :1770][INFO    ][6486] Running state [var-lib-nova-instances.mount] at time 22:25:36.519419
2018-09-01 22:25:36,519 [salt.state       :1803][INFO    ][6486] Executing state service.running for [var-lib-nova-instances.mount]
2018-09-01 22:25:36,519 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemctl', 'status', 'var-lib-nova-instances.mount', '-n', '0'] in directory '/root'
2018-09-01 22:25:36,529 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemctl', 'is-active', 'var-lib-nova-instances.mount'] in directory '/root'
2018-09-01 22:25:36,534 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-09-01 22:25:36,543 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'var-lib-nova-instances.mount'] in directory '/root'
2018-09-01 22:25:36,584 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemctl', 'is-active', 'var-lib-nova-instances.mount'] in directory '/root'
2018-09-01 22:25:36,591 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-09-01 22:25:36,599 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-09-01 22:25:36,608 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'var-lib-nova-instances.mount'] in directory '/root'
2018-09-01 22:25:36,690 [salt.loaded.int.module.cmdmod:395 ][INFO    ][6486] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-09-01 22:25:36,698 [salt.state       :290 ][INFO    ][6486] {'var-lib-nova-instances.mount': True}
2018-09-01 22:25:36,698 [salt.state       :1941][INFO    ][6486] Completed state [var-lib-nova-instances.mount] at time 22:25:36.698658 duration_in_ms=179.237
2018-09-01 22:25:36,699 [salt.minion      :1708][INFO    ][6486] Returning information for job: 20180901222525788292
2018-09-01 22:58:04,213 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.sls with jid 20180901225804205230
2018-09-01 22:58:04,223 [salt.minion      :1431][INFO    ][7808] Starting a new job with PID 7808
2018-09-01 22:58:09,826 [salt.state       :905 ][INFO    ][7808] Loading fresh modules for state activity
2018-09-01 22:58:09,945 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225809941454
2018-09-01 22:58:09,954 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'cinder/init.sls'
2018-09-01 22:58:09,954 [salt.minion      :1431][INFO    ][7815] Starting a new job with PID 7815
2018-09-01 22:58:09,967 [salt.minion      :1708][INFO    ][7815] Returning information for job: 20180901225809941454
2018-09-01 22:58:10,002 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'cinder/volume.sls'
2018-09-01 22:58:10,072 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'cinder/user.sls'
2018-09-01 22:58:10,101 [salt.state       :1770][INFO    ][7808] Running state [cinder] at time 22:58:10.101710
2018-09-01 22:58:10,101 [salt.state       :1803][INFO    ][7808] Executing state group.present for [cinder]
2018-09-01 22:58:10,103 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['groupadd', '-g 304', '-r', 'cinder'] in directory '/root'
2018-09-01 22:58:10,247 [salt.state       :290 ][INFO    ][7808] {'passwd': 'x', 'gid': 304, 'name': 'cinder', 'members': []}
2018-09-01 22:58:10,247 [salt.state       :1941][INFO    ][7808] Completed state [cinder] at time 22:58:10.247247 duration_in_ms=145.538
2018-09-01 22:58:10,247 [salt.state       :1770][INFO    ][7808] Running state [cinder] at time 22:58:10.247595
2018-09-01 22:58:10,247 [salt.state       :1803][INFO    ][7808] Executing state user.present for [cinder]
2018-09-01 22:58:10,249 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['useradd', '-s', '/bin/false', '-u', '304', '-g', '304', '-m', '-d', '/var/lib/cinder', '-r', 'cinder'] in directory '/root'
2018-09-01 22:58:10,431 [salt.state       :290 ][INFO    ][7808] {'shell': '/bin/false', 'workphone': '', 'uid': 304, 'passwd': 'x', 'roomnumber': '', 'groups': ['cinder'], 'home': '/var/lib/cinder', 'name': 'cinder', 'gid': 304, 'fullname': '', 'homephone': ''}
2018-09-01 22:58:10,432 [salt.state       :1941][INFO    ][7808] Completed state [cinder] at time 22:58:10.432032 duration_in_ms=184.437
2018-09-01 22:58:10,800 [salt.state       :1770][INFO    ][7808] Running state [cinder-volume] at time 22:58:10.800820
2018-09-01 22:58:10,801 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [cinder-volume]
2018-09-01 22:58:10,801 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 22:58:11,096 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['apt-cache', '-q', 'policy', 'cinder-volume'] in directory '/root'
2018-09-01 22:58:11,157 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-09-01 22:58:14,595 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 22:58:14,606 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cinder-volume'] in directory '/root'
2018-09-01 22:58:20,158 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225820144632
2018-09-01 22:58:20,168 [salt.minion      :1431][INFO    ][8382] Starting a new job with PID 8382
2018-09-01 22:58:20,178 [salt.minion      :1708][INFO    ][8382] Returning information for job: 20180901225820144632
2018-09-01 22:58:30,256 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225830244688
2018-09-01 22:58:30,268 [salt.minion      :1431][INFO    ][8673] Starting a new job with PID 8673
2018-09-01 22:58:30,284 [salt.minion      :1708][INFO    ][8673] Returning information for job: 20180901225830244688
2018-09-01 22:58:40,439 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225840429120
2018-09-01 22:58:40,451 [salt.minion      :1431][INFO    ][8985] Starting a new job with PID 8985
2018-09-01 22:58:40,466 [salt.minion      :1708][INFO    ][8985] Returning information for job: 20180901225840429120
2018-09-01 22:58:50,625 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225850615224
2018-09-01 22:58:50,635 [salt.minion      :1431][INFO    ][9301] Starting a new job with PID 9301
2018-09-01 22:58:50,644 [salt.minion      :1708][INFO    ][9301] Returning information for job: 20180901225850615224
2018-09-01 22:59:00,806 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225900795587
2018-09-01 22:59:00,817 [salt.minion      :1431][INFO    ][9623] Starting a new job with PID 9623
2018-09-01 22:59:00,827 [salt.minion      :1708][INFO    ][9623] Returning information for job: 20180901225900795587
2018-09-01 22:59:10,996 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225910983206
2018-09-01 22:59:11,007 [salt.minion      :1431][INFO    ][10926] Starting a new job with PID 10926
2018-09-01 22:59:11,023 [salt.minion      :1708][INFO    ][10926] Returning information for job: 20180901225910983206
2018-09-01 22:59:21,193 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225921182659
2018-09-01 22:59:21,203 [salt.minion      :1431][INFO    ][10931] Starting a new job with PID 10931
2018-09-01 22:59:21,214 [salt.minion      :1708][INFO    ][10931] Returning information for job: 20180901225921182659
2018-09-01 22:59:31,376 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225931365612
2018-09-01 22:59:31,389 [salt.minion      :1431][INFO    ][11126] Starting a new job with PID 11126
2018-09-01 22:59:31,401 [salt.minion      :1708][INFO    ][11126] Returning information for job: 20180901225931365612
2018-09-01 22:59:41,567 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225941556027
2018-09-01 22:59:41,578 [salt.minion      :1431][INFO    ][11521] Starting a new job with PID 11521
2018-09-01 22:59:41,594 [salt.minion      :1708][INFO    ][11521] Returning information for job: 20180901225941556027
2018-09-01 22:59:51,748 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901225951738225
2018-09-01 22:59:51,757 [salt.minion      :1431][INFO    ][11964] Starting a new job with PID 11964
2018-09-01 22:59:51,769 [salt.minion      :1708][INFO    ][11964] Returning information for job: 20180901225951738225
2018-09-01 23:00:01,927 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230001917662
2018-09-01 23:00:01,936 [salt.minion      :1431][INFO    ][12021] Starting a new job with PID 12021
2018-09-01 23:00:01,947 [salt.minion      :1708][INFO    ][12021] Returning information for job: 20180901230001917662
2018-09-01 23:00:12,118 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230012106568
2018-09-01 23:00:12,127 [salt.minion      :1431][INFO    ][12576] Starting a new job with PID 12576
2018-09-01 23:00:12,136 [salt.minion      :1708][INFO    ][12576] Returning information for job: 20180901230012106568
2018-09-01 23:00:12,736 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:00:12,759 [salt.state       :290 ][INFO    ][7808] Made the following changes:
'python-routes' changed from 'absent' to '2.4.1-1~u16.04+mcp2'
'python-retrying' changed from 'absent' to '1.3.3-1'
'libiscsi2' changed from 'absent' to '1.12.0-2'
'python-kombu' changed from 'absent' to '4.1.0-1~u16.04+mcp1'
'python-oslo.concurrency' changed from 'absent' to '3.25.0-1.0~u16.04+mcp2'
'python-sqlparse' changed from 'absent' to '0.2.2-1~u16.04+mcp1'
'python-monotonic' changed from 'absent' to '0.6-2'
'liblapack3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-psycopg2' changed from 'absent' to '2.7.4-1.0~u16.04+mcp1'
'python-secretstorage' changed from 'absent' to '2.1.3-1'
'python2.7-numpy' changed from 'absent' to '1'
'python-glanceclient' changed from 'absent' to '1:2.10.0-1.0~u16.04+mcp3'
'python-formencode' changed from 'absent' to '1.3.0-0ubuntu5'
'python-functools32' changed from 'absent' to '3.2.3.2-2'
'python-migrate' changed from 'absent' to '0.11.0-1~u16.04+mcp2'
'python-cachetools' changed from 'absent' to '2.0.0-2.0~u16.04+mcp1'
'libquadmath0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'python-egenix-mxtools' changed from 'absent' to '3.2.9-1'
'python-blinker' changed from 'absent' to '1.3.dfsg2-1build1'
'python-roman' changed from 'absent' to '2.0.0-2'
'python-pastescript' changed from 'absent' to '1.7.5-3build1'
'sg3-utils' changed from 'absent' to '1.40-0ubuntu1'
'python-tenacity' changed from 'absent' to '4.8.0-1.0~u16.04+mcp1'
'python-oslo.versionedobjects' changed from 'absent' to '1.31.3-1.0~u16.04+mcp4'
'python-setuptools' changed from 'absent' to '39.0.1-2~cloud0'
'docutils-doc' changed from 'absent' to '0.12+dfsg-1'
'python-dbus' changed from 'absent' to '1.2.0-3'
'librados2' changed from 'absent' to '12.2.4-0ubuntu1~cloud1'
'python-fixtures' changed from 'absent' to '3.0.0-1.1~u16.04+mcp2'
'python-pycadf' changed from 'absent' to '2.6.0-1~u16.04+mcp2'
'python-httplib2' changed from 'absent' to '0.9.1+dfsg-1'
'python-testtools' changed from 'absent' to '2.3.0-1.0~u16.04+mcp1'
'python-egenix-mxdatetime' changed from 'absent' to '3.2.9-1'
'python-anyjson' changed from 'absent' to '0.3.3-1build1'
'libnss3-nssdb' changed from 'absent' to '2:3.28.4-0ubuntu0.16.04.3'
'python-pymemcache' changed from 'absent' to '1.3.2-2ubuntu1'
'libblas3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-dogpile.cache' changed from 'absent' to '0.6.2-1.1~u16.04+mcp2'
'python-dnspython' changed from 'absent' to '1.14.0-3.1~u16.04+mcp2'
'python-babel' changed from 'absent' to '2.3.4+dfsg.1-2.1~u16.04+mcp2'
'python2.7-paramiko' changed from 'absent' to '1'
'python-jsonschema' changed from 'absent' to '2.6.0-2.0~u16.04+mcp1'
'python-pil' changed from 'absent' to '3.1.2-0ubuntu1.1'
'python-oslo.privsep' changed from 'absent' to '1.27.0-1.0~u16.04+mcp2'
'python2.7-lxml' changed from 'absent' to '1'
'python-suds' changed from 'absent' to '0.7~git20150727.94664dd-3'
'python-oslo.db' changed from 'absent' to '4.33.0-1.0~u16.04+mcp9'
'python2.7-sqlalchemy-ext' changed from 'absent' to '1'
'libnspr4' changed from 'absent' to '2:4.13.1-0ubuntu0.16.04.1'
'python-os-win' changed from 'absent' to '3.0.0-1.0~u16.04+mcp2'
'python-tz' changed from 'absent' to '2014.10~dfsg1-0ubuntu2'
'python2.7-simplejson' changed from 'absent' to '1'
'libtiff5' changed from 'absent' to '4.0.6-1ubuntu0.4'
'python-funcsigs' changed from 'absent' to '1.0.2-4.0~u16.04+mcp1'
'python-scgi' changed from 'absent' to '1.13-1.1build1'
'python2.7-pil' changed from 'absent' to '1'
'python-rtslib-fb' changed from 'absent' to '2.1.57+debian-3'
'os-brick-common' changed from 'absent' to '2.3.0-1.0~u16.04+mcp2'
'python-repoze.lru' changed from 'absent' to '0.6-6'
'python-posix-ipc' changed from 'absent' to '0.9.8-2build2'
'formencode-i18n' changed from 'absent' to '1.3.0-0ubuntu5'
'python2.7-testtools' changed from 'absent' to '1'
'python-alembic' changed from 'absent' to '0.8.10-1.1~u16.04+mcp2'
'docutils' changed from 'absent' to '1'
'python2.7-dbus' changed from 'absent' to '1'
'python-oslo.middleware' changed from 'absent' to '3.34.0-1.0~u16.04+mcp2'
'python-pygments' changed from 'absent' to '2.2.0+dfsg-1~u16.04+mcp2'
'python-pillow' changed from 'absent' to '1'
'libpaperg' changed from 'absent' to '1'
'liblapack.so.3' changed from 'absent' to '1'
'python2.7-netifaces' changed from 'absent' to '1'
'python-numpy-dev' changed from 'absent' to '1'
'liblcms2-2' changed from 'absent' to '2.6-3ubuntu2'
'docutils-common' changed from 'absent' to '0.12+dfsg-1'
'python-oslo.context' changed from 'absent' to '1:2.20.0-1.0~u16.04+mcp1'
'qemu-block-extra' changed from 'absent' to '1:2.11+dfsg-1ubuntu7.4~cloud0'
'sharutils' changed from 'absent' to '1:4.15.2-1ubuntu0.1'
'python-pyasn1-modules' changed from 'absent' to '0.0.7-0.1'
'qemu-utils' changed from 'absent' to '1:2.11+dfsg-1ubuntu7.4~cloud0'
'python-oslo.cache' changed from 'absent' to '1.28.0-1.0~u16.04+mcp9'
'python2.7-pyinotify' changed from 'absent' to '1'
'python-webob' changed from 'absent' to '1:1.7.2-1~u16.04+mcp2'
'python-pyparsing' changed from 'absent' to '2.1.10+dfsg1-1.1~u16.04+mcp2'
'python-babel-localedata' changed from 'absent' to '2.3.4+dfsg.1-2.1~u16.04+mcp2'
'python-positional' changed from 'absent' to '1.1.1-3.1~u16.04+mcp2'
'python-barbicanclient' changed from 'absent' to '4.6.0-1.0~u16.04+mcp1'
'python-castellan' changed from 'absent' to '0.17.0-1.0~u16.04+mcp4'
'python-cmd2' changed from 'absent' to '0.6.8-1'
'python-oslo.vmware' changed from 'absent' to '2.26.0-1.0~u16.04+mcp2'
'python-distribute' changed from 'absent' to '1'
'python-linecache2' changed from 'absent' to '1.0.0-2'
'libconfig-general-perl' changed from 'absent' to '2.60-1'
'libsgutils2-2' changed from 'absent' to '1.40-0ubuntu1'
'python-iso8601' changed from 'absent' to '0.1.11-1'
'python-jsonpatch' changed from 'absent' to '1.21-1~u16.04+mcp1'
'alembic' changed from 'absent' to '0.8.10-1.1~u16.04+mcp2'
'libwebpmux1' changed from 'absent' to '0.4.4-1'
'python2.7-googleapi' changed from 'absent' to '1'
'tgt' changed from 'absent' to '1:1.0.63-1ubuntu1.1'
'python-oslo.policy' changed from 'absent' to '1.33.2-1.0~u16.04+mcp3'
'python-stevedore' changed from 'absent' to '1:1.25.0-1~u16.04+mcp2'
'python-paste' changed from 'absent' to '2.0.3+dfsg-4.1~u16.04+mcp1'
'python-googleapi' changed from 'absent' to '1.4.2-1ubuntu1.1'
'python-lxml' changed from 'absent' to '3.5.0-1build1'
'python-oslo.config' changed from 'absent' to '1:5.2.0-1.0~u16.04+mcp5'
'libnss3' changed from 'absent' to '2:3.28.4-0ubuntu0.16.04.3'
'python-paramiko' changed from 'absent' to '2.0.0-1.1~u16.04+mcp2'
'python-futurist' changed from 'absent' to '1.6.0-1.0~u16.04+mcp1'
'python-f2py' changed from 'absent' to '1'
'libpaper1' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python-fasteners' changed from 'absent' to '0.12.0-2ubuntu1'
'python2.7-gi' changed from 'absent' to '1'
'python-oauth2client' changed from 'absent' to '2.0.1-1'
'python-mimeparse' changed from 'absent' to '0.1.4-1build1'
'python-pastedeploy-tpl' changed from 'absent' to '1.5.2-1'
'python-oauthlib' changed from 'absent' to '1.0.3-1'
'python-oslo-db' changed from 'absent' to '1'
'libblas-common' changed from 'absent' to '3.6.0-2ubuntu2'
'libgfortran3' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'python-gi' changed from 'absent' to '3.20.0-0ubuntu1'
'libpq5' changed from 'absent' to '9.5.14-0ubuntu0.16.04'
'pycadf-common' changed from 'absent' to '2.6.0-1~u16.04+mcp2'
'python-contextlib2' changed from 'absent' to '0.5.1-1'
'libjpeg8' changed from 'absent' to '8c-2ubuntu8'
'python-oslo.serialization' changed from 'absent' to '2.24.0-1.0~u16.04+mcp1'
'python-oslo.utils' changed from 'absent' to '3.35.0-1.0~u16.04+mcp8'
'python-taskflow' changed from 'absent' to '3.1.0-1.0~u16.04+mcp1'
'python-pika-pool' changed from 'absent' to '0.1.3-1ubuntu1'
'python-automaton' changed from 'absent' to '1.14.0-1.0~u16.04+mcp1'
'python-warlock' changed from 'absent' to '1.2.0-2.0~u16.04+mcp1'
'python-oslo.rootwrap' changed from 'absent' to '5.13.0-1.0~u16.04+mcp1'
'python2.7-iso8601' changed from 'absent' to '1'
'python-numpy' changed from 'absent' to '1:1.11.0-1ubuntu1'
'python-simplejson' changed from 'absent' to '3.8.1-1ubuntu2'
'python-wrapt' changed from 'absent' to '1.8.0-5build2'
'python-tooz' changed from 'absent' to '1.59.0-1.0~u16.04+mcp1'
'python-docutils' changed from 'absent' to '0.12+dfsg-1'
'python-openid' changed from 'absent' to '2.2.5-6'
'python-pastedeploy' changed from 'absent' to '1.5.2-1'
'python2.7-cmd2' changed from 'absent' to '1'
'libpaper-utils' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python2.7-zope.interface' changed from 'absent' to '1'
'python-cliff' changed from 'absent' to '2.8.0-1~u16.04+mcp2'
'python-oslo.i18n' changed from 'absent' to '3.19.0-1.0~u16.04+mcp6'
'python-bs4' changed from 'absent' to '4.6.0-1~u16.04+mcp1'
'cinder-volume' changed from 'absent' to '2:12.0.3-2~u16.04+mcp79'
'python-oslo.reports' changed from 'absent' to '1.26.0-1.0~u16.04+mcp2'
'python-networkx' changed from 'absent' to '1.11-1ubuntu1'
'python-statsd' changed from 'absent' to '3.2.1-2~u16.04+mcp2'
'python-keyring' changed from 'absent' to '8.5.1-1.1~u16.04+mcp2'
'python-redis' changed from 'absent' to '2.10.5-1ubuntu1'
'python-oslo-utils' changed from 'absent' to '1'
'libblas.so.3' changed from 'absent' to '1'
'python-novaclient' changed from 'absent' to '2:9.1.1-1~u16.04+mcp6'
'python-unicodecsv' changed from 'absent' to '0.14.1-1'
'python-memcache' changed from 'absent' to '1.57+fixed-1~u16.04+mcp1'
'python-mock' changed from 'absent' to '2.0.0-1.1~u16.04+mcp2'
'python-rfc3986' changed from 'absent' to '0.3.1-2.1~u16.04+mcp2'
'python-eventlet' changed from 'absent' to '0.20.0-4~u16.04+mcp2'
'thin-provisioning-tools' changed from 'absent' to '0.5.6-1ubuntu1'
'python-unittest2' changed from 'absent' to '1.1.0-6.1'
'python2.7-pyparsing' changed from 'absent' to '1'
'python-uritemplate' changed from 'absent' to '0.6-1ubuntu1'
'python-oslo.log' changed from 'absent' to '3.36.0-1.0~u16.04+mcp6'
'python-pyinotify' changed from 'absent' to '0.9.6-1.1~u16.04+mcp2'
'libjpeg-turbo8' changed from 'absent' to '1.4.2-0ubuntu3.1'
'python-amqp' changed from 'absent' to '2.2.1-1~exp1~u16.04+mcp1'
'python-cinder' changed from 'absent' to '2:12.0.3-2~u16.04+mcp79'
'libwebp5' changed from 'absent' to '0.4.4-1'
'python-zope.interface' changed from 'absent' to '4.1.3-1build1'
'python-numpy-abi9' changed from 'absent' to '1'
'python-vine' changed from 'absent' to '1.1.3+dfsg-2~u16.04+mcp3'
'python-defusedxml' changed from 'absent' to '0.5.0-1~u16.04+mcp1'
'python-kazoo' changed from 'absent' to '2.2.1-1ubuntu1'
'python-decorator' changed from 'absent' to '4.0.6-1'
'python-osprofiler' changed from 'absent' to '1.15.2-1.0~u16.04+mcp3'
'python-oslo.messaging' changed from 'absent' to '5.35.1-1.0~u16.04+mcp16'
'python-os-brick' changed from 'absent' to '2.3.0-1.0~u16.04+mcp2'
'python-debtcollector' changed from 'absent' to '1.3.0-2'
'python-keyrings.alt' changed from 'absent' to '1.1.1-1'
'python-oslo-log' changed from 'absent' to '1'
'python-json-pointer' changed from 'absent' to '1.9-3'
'python-pbr' changed from 'absent' to '3.1.1-3.0~u16.04+mcp1'
'python-html5lib' changed from 'absent' to '0.999-4'
'python-swiftclient' changed from 'absent' to '1:3.4.0-1~u16.04+mcp2'
'python-pika' changed from 'absent' to '0.10.0-1'
'python-keystoneclient' changed from 'absent' to '1:3.15.0-1.0~u16.04+mcp2'
'python-greenlet' changed from 'absent' to '0.4.12-2.0~u16.04+mcp1'
'python-sqlalchemy-ext' changed from 'absent' to '1.0.13+ds1-1.1~u16.04+mcp2'
'python-oslo.service' changed from 'absent' to '1.29.0-1.0~u16.04+mcp1'
'librbd1' changed from 'absent' to '12.2.4-0ubuntu1~cloud1'
'python-oslo-context' changed from 'absent' to '1'
'python-ceilometerclient' changed from 'absent' to '2.9.0-2~u16.04+mcp1'
'python-zake' changed from 'absent' to '0.1.6-1'
'python-zopeinterface' changed from 'absent' to '1'
'python-traceback2' changed from 'absent' to '1.4.0-3'
'python-numpy-api10' changed from 'absent' to '1'
'python-keystoneauth1' changed from 'absent' to '3.4.0-1.0~u16.04+mcp7'
'python-tempita' changed from 'absent' to '0.5.2-1build1'
'python-sqlalchemy' changed from 'absent' to '1.0.13+ds1-1.1~u16.04+mcp2'
'python-keystonemiddleware' changed from 'absent' to '4.21.0-1.0~u16.04+mcp1'
'python-zope' changed from 'absent' to '1'
'python-voluptuous' changed from 'absent' to '0.9.3-1.1~u16.04+mcp2'
'python-extras' changed from 'absent' to '1.0.0-2.0~u16.04+mcp1'
'cinder-common' changed from 'absent' to '2:12.0.3-2~u16.04+mcp79'
'python-oslo-rootwrap' changed from 'absent' to '1'
'python-netifaces' changed from 'absent' to '0.10.4-0.1build2'
'python-rsa' changed from 'absent' to '3.2.3-1.1'
'python2.7-rtslib-fb' changed from 'absent' to '1'
'libjbig0' changed from 'absent' to '2.1-3.1'

2018-09-01 23:00:12,768 [salt.state       :905 ][INFO    ][7808] Loading fresh modules for state activity
2018-09-01 23:00:12,787 [salt.state       :1941][INFO    ][7808] Completed state [cinder-volume] at time 23:00:12.787956 duration_in_ms=121987.136
2018-09-01 23:00:12,792 [salt.state       :1770][INFO    ][7808] Running state [lvm2] at time 23:00:12.791980
2018-09-01 23:00:12,792 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [lvm2]
2018-09-01 23:00:13,626 [salt.state       :290 ][INFO    ][7808] All specified packages are already installed
2018-09-01 23:00:13,626 [salt.state       :1941][INFO    ][7808] Completed state [lvm2] at time 23:00:13.626694 duration_in_ms=834.714
2018-09-01 23:00:13,627 [salt.state       :1770][INFO    ][7808] Running state [sysfsutils] at time 23:00:13.626977
2018-09-01 23:00:13,627 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [sysfsutils]
2018-09-01 23:00:13,631 [salt.state       :290 ][INFO    ][7808] All specified packages are already installed
2018-09-01 23:00:13,631 [salt.state       :1941][INFO    ][7808] Completed state [sysfsutils] at time 23:00:13.631833 duration_in_ms=4.855
2018-09-01 23:00:13,632 [salt.state       :1770][INFO    ][7808] Running state [sg3-utils] at time 23:00:13.632036
2018-09-01 23:00:13,632 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [sg3-utils]
2018-09-01 23:00:13,636 [salt.state       :290 ][INFO    ][7808] All specified packages are already installed
2018-09-01 23:00:13,636 [salt.state       :1941][INFO    ][7808] Completed state [sg3-utils] at time 23:00:13.636541 duration_in_ms=4.505
2018-09-01 23:00:13,636 [salt.state       :1770][INFO    ][7808] Running state [python-cinder] at time 23:00:13.636740
2018-09-01 23:00:13,636 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [python-cinder]
2018-09-01 23:00:13,642 [salt.state       :290 ][INFO    ][7808] All specified packages are already installed
2018-09-01 23:00:13,642 [salt.state       :1941][INFO    ][7808] Completed state [python-cinder] at time 23:00:13.642254 duration_in_ms=5.514
2018-09-01 23:00:13,642 [salt.state       :1770][INFO    ][7808] Running state [python-mysqldb] at time 23:00:13.642456
2018-09-01 23:00:13,642 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [python-mysqldb]
2018-09-01 23:00:13,655 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:00:13,669 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-mysqldb'] in directory '/root'
2018-09-01 23:00:16,731 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:00:16,755 [salt.state       :290 ][INFO    ][7808] Made the following changes:
'python2.7-mysqldb' changed from 'absent' to '1'
'mysql-common' changed from 'absent' to '5.7.23-0ubuntu0.16.04.1'
'mysql-common-5.6' changed from 'absent' to '1'
'libmysqlclient20' changed from 'absent' to '5.7.23-0ubuntu0.16.04.1'
'python-mysqldb' changed from 'absent' to '1.3.7-1build2'

2018-09-01 23:00:16,767 [salt.state       :905 ][INFO    ][7808] Loading fresh modules for state activity
2018-09-01 23:00:16,787 [salt.state       :1941][INFO    ][7808] Completed state [python-mysqldb] at time 23:00:16.787216 duration_in_ms=3144.759
2018-09-01 23:00:16,790 [salt.state       :1770][INFO    ][7808] Running state [p7zip] at time 23:00:16.790608
2018-09-01 23:00:16,790 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [p7zip]
2018-09-01 23:00:17,183 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:00:17,196 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'p7zip'] in directory '/root'
2018-09-01 23:00:19,975 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:00:20,002 [salt.state       :290 ][INFO    ][7808] Made the following changes:
'p7zip' changed from 'absent' to '9.20.1~dfsg.1-4.2'

2018-09-01 23:00:20,016 [salt.state       :905 ][INFO    ][7808] Loading fresh modules for state activity
2018-09-01 23:00:20,035 [salt.state       :1941][INFO    ][7808] Completed state [p7zip] at time 23:00:20.035815 duration_in_ms=3245.207
2018-09-01 23:00:20,041 [salt.state       :1770][INFO    ][7808] Running state [gettext-base] at time 23:00:20.041052
2018-09-01 23:00:20,041 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [gettext-base]
2018-09-01 23:00:20,490 [salt.state       :290 ][INFO    ][7808] All specified packages are already installed
2018-09-01 23:00:20,490 [salt.state       :1941][INFO    ][7808] Completed state [gettext-base] at time 23:00:20.490790 duration_in_ms=449.738
2018-09-01 23:00:20,491 [salt.state       :1770][INFO    ][7808] Running state [python-memcache] at time 23:00:20.491089
2018-09-01 23:00:20,491 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [python-memcache]
2018-09-01 23:00:20,496 [salt.state       :290 ][INFO    ][7808] All specified packages are already installed
2018-09-01 23:00:20,496 [salt.state       :1941][INFO    ][7808] Completed state [python-memcache] at time 23:00:20.496165 duration_in_ms=5.077
2018-09-01 23:00:20,496 [salt.state       :1770][INFO    ][7808] Running state [python-pycadf] at time 23:00:20.496391
2018-09-01 23:00:20,496 [salt.state       :1803][INFO    ][7808] Executing state pkg.installed for [python-pycadf]
2018-09-01 23:00:20,500 [salt.state       :290 ][INFO    ][7808] All specified packages are already installed
2018-09-01 23:00:20,501 [salt.state       :1941][INFO    ][7808] Completed state [python-pycadf] at time 23:00:20.501029 duration_in_ms=4.638
2018-09-01 23:00:20,502 [salt.state       :1770][INFO    ][7808] Running state [/var/lock/cinder] at time 23:00:20.502873
2018-09-01 23:00:20,503 [salt.state       :1803][INFO    ][7808] Executing state file.directory for [/var/lock/cinder]
2018-09-01 23:00:20,503 [salt.state       :290 ][INFO    ][7808] Directory /var/lock/cinder is in the correct state
Directory /var/lock/cinder updated
2018-09-01 23:00:20,503 [salt.state       :1941][INFO    ][7808] Completed state [/var/lock/cinder] at time 23:00:20.503845 duration_in_ms=0.973
2018-09-01 23:00:20,504 [salt.state       :1770][INFO    ][7808] Running state [/etc/cinder/cinder.conf] at time 23:00:20.504113
2018-09-01 23:00:20,504 [salt.state       :1803][INFO    ][7808] Executing state file.managed for [/etc/cinder/cinder.conf]
2018-09-01 23:00:20,525 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'cinder/files/queens/cinder.conf.volume.Debian'
2018-09-01 23:00:20,617 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_default.conf'
2018-09-01 23:00:20,634 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_log.conf'
2018-09-01 23:00:20,648 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'cinder/files/backend/_lvm.conf'
2018-09-01 23:00:20,659 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/castellan/_barbican.conf'
2018-09-01 23:00:20,670 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/keystoneauth/_type_password.conf'
2018-09-01 23:00:20,693 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/keystonemiddleware/_auth_token.conf'
2018-09-01 23:00:20,710 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_database.conf'
2018-09-01 23:00:20,726 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_notifications.conf'
2018-09-01 23:00:20,738 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_rabbit.conf'
2018-09-01 23:00:20,754 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_middleware.conf'
2018-09-01 23:00:20,762 [salt.state       :290 ][INFO    ][7808] File changed:
--- 
+++ 
@@ -1,15 +1,4404 @@
+
 [DEFAULT]
+
+#
+# From cinder
+#
+
 rootwrap_config = /etc/cinder/rootwrap.conf
 api_paste_confg = /etc/cinder/api-paste.ini
+#
+# From oslo.messaging
+#
+
+# Size of RPC connection pool. (integer value)
+#rpc_conn_pool_size = 30
+
+# The pool size limit for connections expiration policy (integer
+# value)
+#conn_pool_min_size = 2
+
+# The time-to-live in sec of idle connections in the pool (integer
+# value)
+#conn_pool_ttl = 1200
+
+# ZeroMQ bind address. Should be a wildcard (*), an ethernet
+# interface, or IP. The "host" option should point or resolve to this
+# address. (string value)
+#rpc_zmq_bind_address = *
+
+# MatchMaker driver. (string value)
+# Possible values:
+# redis - <No description provided>
+# sentinel - <No description provided>
+# dummy - <No description provided>
+#rpc_zmq_matchmaker = redis
+
+# Number of ZeroMQ contexts, defaults to 1. (integer value)
+#rpc_zmq_contexts = 1
+
+# Maximum number of ingress messages to locally buffer per topic.
+# Default is unlimited. (integer value)
+#rpc_zmq_topic_backlog = <None>
+
+# Directory for holding IPC sockets. (string value)
+#rpc_zmq_ipc_dir = /var/run/openstack
+
+# Name of this node. Must be a valid hostname, FQDN, or IP address.
+# Must match "host" option, if running Nova. (string value)
+#rpc_zmq_host = localhost
+
+# Number of seconds to wait before all pending messages will be sent
+# after closing a socket. The default value of -1 specifies an
+# infinite linger period. The value of 0 specifies no linger period.
+# Pending messages shall be discarded immediately when the socket is
+# closed. Positive values specify an upper bound for the linger
+# period. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
+#zmq_linger = -1
+
+# The default number of seconds that poll should wait. Poll raises
+# timeout exception when timeout expired. (integer value)
+#rpc_poll_timeout = 1
+
+# Expiration timeout in seconds of a name service record about
+# existing target ( < 0 means no timeout). (integer value)
+#zmq_target_expire = 300
+
+# Update period in seconds of a name service record about existing
+# target. (integer value)
+#zmq_target_update = 180
+
+# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
+# (boolean value)
+#use_pub_sub = false
+
+# Use ROUTER remote proxy. (boolean value)
+#use_router_proxy = false
+
+# This option makes direct connections dynamic or static. It makes
+# sense only with use_router_proxy=False which means to use direct
+# connections for direct message types (ignored otherwise). (boolean
+# value)
+#use_dynamic_connections = false
+
+# How many additional connections to a host will be made for failover
+# reasons. This option is actual only in dynamic connections mode.
+# (integer value)
+#zmq_failover_connections = 2
+
+# Minimal port number for random ports range. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#rpc_zmq_min_port = 49153
+
+# Maximal port number for random ports range. (integer value)
+# Minimum value: 1
+# Maximum value: 65536
+#rpc_zmq_max_port = 65536
+
+# Number of retries to find free port number before fail with
+# ZMQBindError. (integer value)
+#rpc_zmq_bind_port_retries = 100
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#rpc_zmq_serialization = json
+
+# This option configures round-robin mode in zmq socket. True means
+# not keeping a queue when server side disconnects. False means to
+# keep queue and messages even if server is disconnected, when the
+# server appears we send all accumulated messages to it. (boolean
+# value)
+#zmq_immediate = true
+
+# Enable/disable TCP keepalive (KA) mechanism. The default value of -1
+# (or any other negative value) means to skip any overrides and leave
+# it to OS default; 0 and 1 (or any other positive value) mean to
+# disable and enable the option respectively. (integer value)
+#zmq_tcp_keepalive = -1
+
+# The duration between two keepalive transmissions in idle condition.
+# The unit is platform dependent, for example, seconds in Linux,
+# milliseconds in Windows etc. The default value of -1 (or any other
+# negative value and 0) means to skip any overrides and leave it to OS
+# default. (integer value)
+#zmq_tcp_keepalive_idle = -1
+
+# The number of retransmissions to be carried out before declaring
+# that remote end is not available. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_cnt = -1
+
+# The duration between two successive keepalive retransmissions, if
+# acknowledgement to the previous keepalive transmission is not
+# received. The unit is platform dependent, for example, seconds in
+# Linux, milliseconds in Windows etc. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_intvl = -1
+
+# Maximum number of (green) threads to work concurrently. (integer
+# value)
+#rpc_thread_pool_size = 100
+
+# Expiration timeout in seconds of a sent/received message after which
+# it is not tracked anymore by a client/server. (integer value)
+#rpc_message_ttl = 300
+
+# Wait for message acknowledgements from receivers. This mechanism
+# works only via proxy without PUB/SUB. (boolean value)
+#rpc_use_acks = false
+
+# Number of seconds to wait for an ack from a cast/call. After each
+# retry attempt this timeout is multiplied by some specified
+# multiplier. (integer value)
+#rpc_ack_timeout_base = 15
+
+# Number to multiply base ack timeout by after each retry attempt.
+# (integer value)
+#rpc_ack_timeout_multiplier = 2
+
+# Default number of message sending attempts in case of any problems
+# occurred: positive value N means at most N retries, 0 means no
+# retries, None or -1 (or any other negative values) mean to retry
+# forever. This option is used only if acknowledgments are enabled.
+# (integer value)
+#rpc_retry_attempts = 3
+
+# List of publisher hosts SubConsumer can subscribe on. This option
+# has higher priority then the default publishers list taken from the
+# matchmaker. (list value)
+#subscribe_on =
+
+# Size of executor thread pool when executor is threading or eventlet.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
+#executor_thread_pool_size = 64
+
+# Seconds to wait for a response from a call. (integer value)
+#rpc_response_timeout = 60
+rpc_response_timeout = 3600
+
+# The network address and optional user credentials for connecting to
+# the messaging backend, in URL format. The expected format is:
+#
+# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
+#
+# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
+#
+# For full details on the fields in the URL see the documentation of
+# oslo_messaging.TransportURL at
+# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
+# (string value)
+#transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
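The multi-host `transport_url` above follows the `driver://[user:pass@]host:port[,...]/virtual_host` shape documented in the comment. A small sketch of assembling such a URL; the hosts and credentials here are illustrative placeholders, not values from this deployment:

```python
# Sketch: building a multi-host oslo.messaging transport URL.
# All names below are placeholder examples.
def build_transport_url(user, password, hosts, vhost, scheme="rabbit"):
    """Return driver://user:pass@host:port[,user:pass@hostN:portN]/virtual_host."""
    endpoints = ",".join(f"{user}:{password}@{h}:{p}" for h, p in hosts)
    return f"{scheme}://{endpoints}/{vhost}"

url = build_transport_url(
    "openstack", "secret",
    [("10.0.0.1", 5672), ("10.0.0.2", 5672)],
    "/openstack",  # a leading '/' in the vhost yields the '//' seen above
)
print(url)
# rabbit://openstack:secret@10.0.0.1:5672,openstack:secret@10.0.0.2:5672//openstack
```

Note that the `//openstack` tail in the configured value means the RabbitMQ virtual host is literally named `/openstack`.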
+
+# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
+# drivers include amqp and zmq. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rpc_backend = rabbit
+
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the transport_url
+# option. (string value)
+#control_exchange = openstack
+control_exchange = cinder
+
+#
+# From oslo.log
+#
+
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
+# Note: This option can be changed without restarting.
+#debug = false
+
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
+# Note: This option can be changed without restarting.
+# Deprecated group/name - [DEFAULT]/log_config
+#log_config_append = <None>
+
+# Defines the format string for %%(asctime)s in log records. This
+# option is ignored if log_config_append is set. (string value)
+#log_date_format = %Y-%m-%d %H:%M:%S
+
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logfile
+#log_file = <None>
+
+# (Optional) The base directory used for relative log_file paths.
+# This option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logdir
+#log_dir = <None>
+
+# Uses a logging handler designed to watch the file system. When the
+# log file is moved or removed, this handler opens a new log file at
+# the specified path instantaneously. It makes sense only if the
+# log_file option is specified and the platform is Linux. This option
+# is ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages. This option is ignored if log_config_append is set.
+# (boolean value)
+#use_journal = false
+
+# Syslog facility to receive log lines. This option is ignored if
+# log_config_append is set. (string value)
+#syslog_log_facility = LOG_USER
+
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_json = false
+
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_stderr = false
+
+# Format string to use for log messages with context. (string value)
+#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
+# value)
+#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+
+# Defines the format string for %(user_identity)s that is used in
+# logging_context_format_string. (string value)
+#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+
+# Enables or disables publication of error events. (boolean value)
+#publish_errors = false
+
+# The format for an instance that is passed with the log message.
+# (string value)
+#instance_format = "[instance: %(uuid)s] "
+
+# The format for an instance UUID that is passed with the log message.
+# (string value)
+#instance_uuid_format = "[instance: %(uuid)s] "
+
+# Interval, number of seconds, of log rate limiting. (integer value)
+#rate_limit_interval = 0
+
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
+#rate_limit_burst = 0
+
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
+#rate_limit_except_level = CRITICAL
+
+# Enables or disables fatal status of deprecations. (boolean value)
+#fatal_deprecations = false
+
+# The maximum number of items that a collection resource returns in a single
+# response (integer value)
+#osapi_max_limit = 1000
+
+# Json file indicating user visible filter parameters for list queries. (string
+# value)
+# Deprecated group/name - [DEFAULT]/query_volume_filters
+#resource_query_filters_file = /etc/cinder/resource_filters.json
+
+# DEPRECATED: Volume filter options which non-admin user could use to query
+# volumes. Default values are: ['name', 'status', 'metadata',
+# 'availability_zone' ,'bootable', 'group_id'] (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#query_volume_filters = name,status,metadata,availability_zone,bootable,group_id
+
+# DEPRECATED: Allow the ability to modify the extra-spec settings of an in-use
+# volume-type. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#allow_inuse_volume_type_modification = false
+
+# Treat X-Forwarded-For as the canonical remote address. Only enable this if
+# you have a sanitizing proxy. (boolean value)
+#use_forwarded_for = false
+
+# Public url to use for versions endpoint. The default is None, which will use
+# the request's host_url attribute to populate the URL base. If Cinder is
+# operating behind a proxy, you will want to change this to represent the
+# proxy's URL. (string value)
+#public_endpoint = <None>
+
+# Backup services use same backend. (boolean value)
+#backup_use_same_host = false
+
+# Compression algorithm (None to disable) (string value)
+# Possible values:
+# none - <No description provided>
+# off - <No description provided>
+# no - <No description provided>
+# zlib - <No description provided>
+# gzip - <No description provided>
+# bz2 - <No description provided>
+# bzip2 - <No description provided>
+#backup_compression_algorithm = zlib
+
+# Backup metadata version to be used when backing up volume metadata. If this
+# number is bumped, make sure the service doing the restore supports the new
+# version. (integer value)
+#backup_metadata_version = 2
+
+# The number of chunks or objects, for which one Ceilometer notification will
+# be sent (integer value)
+#backup_object_number_per_notification = 10
+
+# Interval, in seconds, between two progress notifications reporting the backup
+# status (integer value)
+#backup_timer_interval = 120
+
+# Ceph configuration file to use. (string value)
+#backup_ceph_conf = /etc/ceph/ceph.conf
+
+# The Ceph user to connect with. Default here is to use the same user as for
+# Cinder volumes. If not using cephx this should be set to None. (string value)
+#backup_ceph_user = cinder
+
+# The chunk size, in bytes, that a backup is broken into before transfer to the
+# Ceph object store. (integer value)
+#backup_ceph_chunk_size = 134217728
+
+# The Ceph pool where volume backups are stored. (string value)
+#backup_ceph_pool = backups
+
+# RBD stripe unit to use when creating a backup image. (integer value)
+#backup_ceph_stripe_unit = 0
+
+# RBD stripe count to use when creating a backup image. (integer value)
+#backup_ceph_stripe_count = 0
+
+# If True, apply JOURNALING and EXCLUSIVE_LOCK feature bits to the backup RBD
+# objects to allow mirroring (boolean value)
+#backup_ceph_image_journals = false
+
+# If True, always discard excess bytes when restoring volumes i.e. pad with
+# zeroes. (boolean value)
+#restore_discard_excess_bytes = true
+
+# The GCS bucket to use. (string value)
+#backup_gcs_bucket = <None>
+
+# The size in bytes of GCS backup objects. (integer value)
+#backup_gcs_object_size = 52428800
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_gcs_object_size has to be multiple of backup_gcs_block_size. (integer
+# value)
+#backup_gcs_block_size = 32768
+
+# GCS object will be downloaded in chunks of bytes. (integer value)
+#backup_gcs_reader_chunk_size = 2097152
+
+# GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the
+# file is to be uploaded as a single chunk. (integer value)
+#backup_gcs_writer_chunk_size = 2097152
+
+# Number of times to retry. (integer value)
+#backup_gcs_num_retries = 3
+
+# List of GCS error codes. (list value)
+#backup_gcs_retry_error_codes = 429
+
+# Location of GCS bucket. (string value)
+#backup_gcs_bucket_location = US
+
+# Storage class of GCS bucket. (string value)
+#backup_gcs_storage_class = NEARLINE
+
+# Absolute path of GCS service account credential file. (string value)
+#backup_gcs_credential_file = <None>
+
+# Owner project id for GCS bucket. (string value)
+#backup_gcs_project_id = <None>
+
+# Http user-agent string for gcs api. (string value)
+#backup_gcs_user_agent = gcscinder
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the GCS backend storage. The default
+# value is True to enable the timer. (boolean value)
+#backup_gcs_enable_progress_timer = true
+
+# URL for http proxy access. (uri value)
+#backup_gcs_proxy_url = <None>
+
+# Base dir containing mount point for gluster share. (string value)
+#glusterfs_backup_mount_point = $state_path/backup_mount
+
+# GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format.
+# Eg: 1.2.3.4:backup_vol (string value)
+#glusterfs_backup_share = <None>
+
+# Base dir containing mount point for NFS share. (string value)
+#backup_mount_point_base = $state_path/backup_mount
+
+# NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format.
+# (string value)
+#backup_share = <None>
+
+# Mount options passed to the NFS client. See NFS man page for details. (string
+# value)
+#backup_mount_options = <None>
+
+# The maximum size in bytes of the files used to hold backups. If the volume
+# being backed up exceeds this size, then it will be backed up into multiple
+# files. backup_file_size must be a multiple of backup_sha_block_size_bytes.
+# (integer value)
+#backup_file_size = 1999994880
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_file_size has to be multiple of backup_sha_block_size_bytes. (integer
+# value)
+#backup_sha_block_size_bytes = 32768
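The alignment constraint stated above (and the analogous ones for the GCS and Swift backends) can be checked with a one-liner; the defaults do satisfy it, since 1999994880 == 32768 * 61035. A minimal sketch:

```python
# Sketch: validating that backup_file_size is a whole multiple of
# backup_sha_block_size_bytes, as the comments above require.
def check_backup_alignment(backup_file_size, sha_block_size):
    if backup_file_size % sha_block_size != 0:
        raise ValueError(
            f"backup_file_size ({backup_file_size}) must be a multiple "
            f"of backup_sha_block_size_bytes ({sha_block_size})")
    return backup_file_size // sha_block_size  # SHA blocks per backup file

print(check_backup_alignment(1999994880, 32768))  # 61035
```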
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the backend storage. The default
+# value is True to enable the timer. (boolean value)
+#backup_enable_progress_timer = true
+
+# Path specifying where to store backups. (string value)
+#backup_posix_path = $state_path/backup
+
+# Custom directory to use for backups. (string value)
+#backup_container = <None>
+
+# The URL of the Swift endpoint (uri value)
+#backup_swift_url = <None>
+
+# The URL of the Keystone endpoint (uri value)
+#backup_swift_auth_url = <None>
+
+# Info to match when looking for swift in the service catalog. Format
+# is colon-separated values of the form
+# <service_type>:<service_name>:<endpoint_type>. Only used if
+# backup_swift_url is unset. (string value)
+#swift_catalog_info = object-store:swift:publicURL
+
+# Info to match when looking for keystone in the service catalog.
+# Format is colon-separated values of the form
+# <service_type>:<service_name>:<endpoint_type>. Only used if
+# backup_swift_auth_url is unset. (string value)
+#keystone_catalog_info = identity:Identity Service:publicURL
+
+# Swift authentication mechanism (per_user or single_user). (string value)
+# Possible values:
+# per_user - <No description provided>
+# single_user - <No description provided>
+#backup_swift_auth = per_user
+
+# Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0
+# or "3" for auth 3.0 (string value)
+#backup_swift_auth_version = 1
+
+# Swift tenant/account name. Required when connecting to an auth 2.0 system
+# (string value)
+#backup_swift_tenant = <None>
+
+# Swift user domain name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_user_domain = <None>
+
+# Swift project domain name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_project_domain = <None>
+
+# Swift project/account name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_project = <None>
+
+# Swift user name (string value)
+#backup_swift_user = <None>
+
+# Swift key for authentication (string value)
+#backup_swift_key = <None>
+
+# The default Swift container to use (string value)
+#backup_swift_container = volumebackups
+
+# The size in bytes of Swift backup objects (integer value)
+#backup_swift_object_size = 52428800
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_swift_object_size has to be multiple of backup_swift_block_size.
+# (integer value)
+#backup_swift_block_size = 32768
+
+# The number of retries to make for Swift operations (integer value)
+#backup_swift_retry_attempts = 3
+
+# The backoff time in seconds between Swift retries (integer value)
+#backup_swift_retry_backoff = 2
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the Swift backend storage. The
+# default value is True to enable the timer. (boolean value)
+#backup_swift_enable_progress_timer = true
+
+# Location of the CA certificate file to use for swift client requests. (string
+# value)
+#backup_swift_ca_cert_file = <None>
+
+# Bypass verification of server certificate when making SSL connection to
+# Swift. (boolean value)
+#backup_swift_auth_insecure = false
+
+# Volume prefix for the backup id when backing up to TSM (string value)
+#backup_tsm_volume_prefix = backup
+
+# TSM password for the running username (string value)
+#backup_tsm_password = password
+
+# Enable or Disable compression for backups (boolean value)
+#backup_tsm_compression = true
+
+# Driver to use for backups. (string value)
+#backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
+
+# Offload pending backup delete during backup service startup. If false, the
+# backup service will remain down until all pending backups are deleted.
+# (boolean value)
+#backup_service_inithost_offload = true
+
+# Size of the native threads pool for the backups. Most backup drivers
+# rely heavily on this; it can be decreased for specific drivers that
+# don't. (integer value)
+# Minimum value: 20
+#backup_native_threads_pool_size = 60
+
+# Number of backup processes to launch. Improves performance with concurrent
+# backups. (integer value)
+# Minimum value: 1
+# Maximum value: 4
+#backup_workers = 1
+
+# Name of this cluster. Used to group volume hosts that share the same backend
+# configurations to work in HA Active-Active mode.  Active-Active is not yet
+# supported. (string value)
+#cluster = <None>
+
+# Top-level directory for maintaining cinder's state (string value)
+state_path = /var/lib/cinder
+
+# IP address of this host (host address value)
+#my_ip = <HOST_IP_ADDRESS>
+my_ip = 10.167.4.53
+
+# A list of the URLs of glance API servers available to cinder
+# ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to
+# http. (list value)
+glance_api_servers = http://10.167.4.35:9292
+
+# Number retries when downloading an image from glance (integer value)
+# Minimum value: 0
+glance_num_retries = 0
+
+# Allow to perform insecure SSL (https) requests to glance (https will be used
+# but cert validation will not be performed). (boolean value)
+#glance_api_insecure = false
+
+# Enables or disables negotiation of SSL layer compression. In some cases
+# disabling compression can improve data throughput, such as when high network
+# bandwidth is available and you use compressed image formats like qcow2.
+# (boolean value)
+#glance_api_ssl_compression = false
+
+# Location of ca certificates file to use for glance client requests. (string
+# value)
+
+# http/https timeout value for glance operations. If no value (None) is
+# supplied here, the glanceclient default value is used. (integer value)
+#glance_request_timeout = <None>
+
+# DEPRECATED: Deploy v2 of the Cinder API. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#enable_v2_api = true
+
+# Deploy v3 of the Cinder API. (boolean value)
+enable_v3_api = true
+
+# Enables or disables rate limit of the API. (boolean value)
+#api_rate_limit = true
+
+# Specify list of extensions to load when using osapi_volume_extension option
+# with cinder.api.contrib.select_extensions (list value)
+#osapi_volume_ext_list =
+
+# osapi volume extension to load (multi valued)
+osapi_volume_extension = cinder.api.contrib.standard_extensions
+
+# Full class name for the Manager for volume (string value)
+#volume_manager = cinder.volume.manager.VolumeManager
+
+# Full class name for the Manager for volume backup (string value)
+#backup_manager = cinder.backup.manager.BackupManager
+
+# Full class name for the Manager for scheduler (string value)
+#scheduler_manager = cinder.scheduler.manager.SchedulerManager
+
+# Name of this node.  This can be an opaque identifier. It is not necessarily a
+# host name, FQDN, or IP address. (host address value)
+#host = localhost
+
+# Availability zone of this node. Can be overridden per volume backend with the
+# option "backend_availability_zone". (string value)
+#storage_availability_zone = nova
+
+# Default availability zone for new volumes. If not set, the
+# storage_availability_zone option value is used as the default for new
+# volumes. (string value)
+#default_availability_zone = <None>
+
+# If the requested Cinder availability zone is unavailable, fall back to the
+# value of default_availability_zone, then storage_availability_zone, instead
+# of failing. (boolean value)
+allow_availability_zone_fallback = True
+
+# Default volume type to use (string value)
+#default_volume_type = <None>
+
+# Default group type to use (string value)
+#default_group_type = <None>
+
+# Time period for which to generate volume usages. The options are hour, day,
+# month, or year. (string value)
+#volume_usage_audit_period = month
+
+# Path to the rootwrap configuration file to use for running commands as root
+# (string value)
+#rootwrap_config = /etc/cinder/rootwrap.conf
+
+# Enable monkey patching (boolean value)
+#monkey_patch = false
+
+# List of modules/decorators to monkey patch (list value)
+#monkey_patch_modules =
+
+# Maximum time since last check-in for a service to be considered up (integer
+# value)
+#service_down_time = 60
+
+# The full class name of the volume API class to use (string value)
+#volume_api_class = cinder.volume.api.API
+
+# The full class name of the volume backup API class (string value)
+#backup_api_class = cinder.backup.api.API
+
+# The strategy to use for auth. Supports noauth or keystone. (string value)
+# Possible values:
+# noauth - <No description provided>
+# keystone - <No description provided>
+auth_strategy = keystone
+
+# A list of backend names to use. These backend names should be backed by a
+# unique [CONFIG] group with its options (list value)
+#enabled_backends = <None>
+default_volume_type=lvm-driver
+
+enabled_backends=lvm-driver
+
+# Whether snapshots count against gigabyte quota (boolean value)
+#no_snapshot_gb_quota = false
+
+# The full class name of the volume transfer API class (string value)
+#transfer_api_class = cinder.transfer.api.API
+
+# The full class name of the consistencygroup API class (string value)
+#consistencygroup_api_class = cinder.consistencygroup.api.API
+
+# The full class name of the group API class (string value)
+#group_api_class = cinder.group.api.API
+
+# The full class name of the compute API class to use (string value)
+#compute_api_class = cinder.compute.nova.API
+
+# ID of the project which will be used as the Cinder internal tenant. (string
+# value)
+#cinder_internal_tenant_project_id = <None>
+
+# ID of the user to be used in volume operations as the Cinder internal tenant.
+# (string value)
+#cinder_internal_tenant_user_id = <None>
+
+# Services to be added to the available pool on create (boolean value)
+#enable_new_services = true
+
+# Template string to be used to generate volume names (string value)
+volume_name_template = volume-%s
+
+# Template string to be used to generate snapshot names (string value)
+#snapshot_name_template = snapshot-%s
+
+# Template string to be used to generate backup names (string value)
+#backup_name_template = backup-%s
+
+# Driver to use for database access (string value)
+#db_driver = cinder.db
+
+# A list of url schemes that can be downloaded directly via the direct_url.
+# Currently supported schemes: [file, cinder]. (list value)
+#allowed_direct_url_schemes =
+
+# Info to match when looking for glance in the service catalog. Format
+# is colon-separated values of the form
+# <service_type>:<service_name>:<endpoint_type>. Only used if
+# glance_api_servers are not provided. (string value)
+#glance_catalog_info = image:glance:publicURL
+
+# Default core properties of image (list value)
+#glance_core_properties = checksum,container_format,disk_format,image_name,image_id,min_disk,min_ram,name,size
+
+# Directory used for temporary storage during image conversion (string value)
+#image_conversion_dir = $state_path/conversion
+
+# message minimum life in seconds. (integer value)
+#message_ttl = 2592000
+
+# interval between periodic task runs to clean expired messages in seconds.
+# (integer value)
+#message_reap_interval = 86400
+
+# Number of volumes allowed per project (integer value)
+#quota_volumes = 10
+
+# Number of volume snapshots allowed per project (integer value)
+#quota_snapshots = 10
+
+# Number of consistencygroups allowed per project (integer value)
+#quota_consistencygroups = 10
+
+# Number of groups allowed per project (integer value)
+#quota_groups = 10
+
+# Total amount of storage, in gigabytes, allowed for volumes and snapshots per
+# project (integer value)
+#quota_gigabytes = 1000
+
+# Number of volume backups allowed per project (integer value)
+#quota_backups = 10
+
+# Total amount of storage, in gigabytes, allowed for backups per project
+# (integer value)
+#quota_backup_gigabytes = 1000
+
+# Number of seconds until a reservation expires (integer value)
+#reservation_expire = 86400
+
+# Interval between periodic task runs to clean expired reservations in seconds.
+# (integer value)
+#reservation_clean_interval = $reservation_expire
+
+# Count of reservations until usage is refreshed (integer value)
+#until_refresh = 0
+
+# Number of seconds between subsequent usage refreshes (integer value)
+#max_age = 0
+
+# Default driver to use for quota checks (string value)
+#quota_driver = cinder.quota.DbQuotaDriver
+
+# Enables or disables use of default quota class with default quota. (boolean
+# value)
+#use_default_quota_class = true
+
+# Max size allowed per volume, in gigabytes (integer value)
+#per_volume_size_limit = -1
+
+# The scheduler host manager class to use (string value)
+#scheduler_host_manager = cinder.scheduler.host_manager.HostManager
+
+# Maximum number of attempts to schedule a volume (integer value)
+#scheduler_max_attempts = 3
+
+# Which filter class names to use for filtering hosts when not specified in the
+# request. (list value)
+#scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
+
+# Which weigher class names to use for weighing hosts. (list value)
+#scheduler_default_weighers = CapacityWeigher
+
+# Which handler to use for selecting the host/pool after weighing (string
+# value)
+#scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler
+
+# Default scheduler driver to use (string value)
+#scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
+
+# Absolute path to scheduler configuration JSON file. (string value)
+#scheduler_json_config_location =
+
+# Multiplier used for weighing free capacity. Negative numbers mean to stack vs
+# spread. (floating point value)
+#capacity_weight_multiplier = 1.0
+
+# Multiplier used for weighing allocated capacity. Positive numbers mean to
+# stack vs spread. (floating point value)
+#allocated_capacity_weight_multiplier = -1.0
+
+# Multiplier used for weighing volume number. Negative numbers mean to spread
+# vs stack. (floating point value)
+#volume_number_multiplier = -1.0
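The sign conventions in the three weigher options above follow a common pattern: the scheduler multiplies each backend's raw weight (for example, free capacity) by the configured multiplier and prefers the highest total. A simplified model of that behavior, not Cinder's actual weigher code:

```python
# Sketch of the weigher sign convention: positive capacity multiplier
# spreads volumes onto the backend with the most free space; negative
# stacks them onto the fullest one. Backend names are placeholders.
def pick_backend(free_gb_by_backend, multiplier=1.0):
    """Return the backend name with the highest multiplier-scaled weight."""
    return max(free_gb_by_backend,
               key=lambda b: multiplier * free_gb_by_backend[b])

backends = {"lvm-1": 800, "lvm-2": 200}
print(pick_backend(backends, 1.0))   # lvm-1 (spread: most free space wins)
print(pick_backend(backends, -1.0))  # lvm-2 (stack: least free space wins)
```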
+
+# Interval, in seconds, between nodes reporting state to datastore (integer
+# value)
+#report_interval = 10
+
+# Interval, in seconds, between running periodic tasks (integer value)
+#periodic_interval = 60
+
+# Range, in seconds, to randomly delay when starting the periodic task
+# scheduler to reduce stampeding. (Disable by setting to 0) (integer value)
+#periodic_fuzzy_delay = 60
+
+# IP address on which OpenStack Volume API listens (string value)
+osapi_volume_listen = 10.167.4.53
+
+# Port on which OpenStack Volume API listens (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#osapi_volume_listen_port = 8776
+
+# Number of workers for OpenStack Volume API service. The default is equal to
+# the number of CPUs available. (integer value)
+osapi_volume_workers = 4
+
+# Wraps the socket in an SSL context if True is set. A certificate file and key
+# file must be specified. (boolean value)
+#osapi_volume_use_ssl = false
+
+# Option to enable strict host key checking.  When set to "True" Cinder will
+# only connect to systems with a host key present in the configured
+# "ssh_hosts_key_file".  When set to "False" the host key will be saved upon
+# first connection and used for subsequent connections.  Default=False (boolean
+# value)
+#strict_ssh_host_key_policy = false
+
+# File containing SSH host keys for the systems with which Cinder needs to
+# communicate.  OPTIONAL: Default=$state_path/ssh_known_hosts (string value)
+#ssh_hosts_key_file = $state_path/ssh_known_hosts
+
+# The number of characters in the salt. (integer value)
+#volume_transfer_salt_length = 8
+
+# The number of characters in the autogenerated auth key. (integer value)
+#volume_transfer_key_length = 16
+
+# Enables the Force option on upload_to_image. This enables running
+# upload_volume on in-use volumes for backends that support it. (boolean value)
+#enable_force_upload = false
+enable_force_upload = false
+
+# Create volume from snapshot at the host where snapshot resides (boolean
+# value)
+#snapshot_same_host = true
+
+# Ensure that the new volumes are the same AZ as snapshot or source volume
+# (boolean value)
+#cloned_volume_same_az = true
+
+# Cache volume availability zones in memory for the provided duration in
+# seconds (integer value)
+#az_cache_duration = 3600
+
+# Number of times to attempt to run flakey shell commands (integer value)
+#num_shell_tries = 3
+
+# The percentage of backend capacity is reserved (integer value)
+# Minimum value: 0
+# Maximum value: 100
+#reserved_percentage = 0
+
+# Prefix for iSCSI volumes (string value)
+# Deprecated group/name - [DEFAULT]/iscsi_target_prefix
+#target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSCSI daemon is listening on (string value)
+# Deprecated group/name - [DEFAULT]/iscsi_ip_address
+#target_ip_address = $my_ip
+
+# The list of secondary IP addresses of the iSCSI daemon (list value)
+#iscsi_secondary_ip_addresses =
+
+# The port that the iSCSI daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/iscsi_port
+#target_port = 3260
+
+# The maximum number of times to rescan targets to find volume (integer value)
+#num_volume_device_scan_tries = 3
+
+# The backend name for a given driver implementation (string value)
+volume_backend_name = DEFAULT
+
+# Do we attach/detach volumes in cinder using multipath for volume to image and
+# image to volume transfers? (boolean value)
+#use_multipath_for_image_xfer = false
+
+# If this is set to True, attachment of volumes for image transfer will be
+# aborted when multipathd is not running. Otherwise, it will fallback to single
+# path. (boolean value)
+#enforce_multipath_for_image_xfer = false
+
+# Method used to wipe old volumes (string value)
+# Possible values:
+# none - <No description provided>
+# zero - <No description provided>
+#volume_clear = zero
+volume_clear = none
+
+# Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 => all
+# (integer value)
+# Maximum value: 1024
+#volume_clear_size = 0
+
+# The flag to pass to ionice to alter the i/o priority of the process used to
+# zero a volume after deletion, for example "-c3" for idle only priority.
+# (string value)
+#volume_clear_ionice = <None>
+
+# Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI
+# support, scstadmin for SCST target support, ietadm for iSCSI Enterprise
+# Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, or fake
+# for testing. (string value)
+# Possible values:
+# tgtadm - <No description provided>
+# lioadm - <No description provided>
+# scstadmin - <No description provided>
+# iscsictl - <No description provided>
+# ietadm - <No description provided>
+# nvmet - <No description provided>
+# fake - <No description provided>
+# Deprecated group/name - [DEFAULT]/iscsi_helper
+target_helper = tgtadm
+
+# Volume configuration file storage directory (string value)
+volumes_dir = /var/lib/cinder/volumes
+
+# IET configuration file (string value)
+#iet_conf = /etc/iet/ietd.conf
+
+# Chiscsi (CXT) global defaults configuration file (string value)
+#chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf
+
+# Sets the behavior of the iSCSI target to either perform blockio or fileio;
+# optionally, auto can be set and Cinder will autodetect the type of backing
+# device (string value)
+# Possible values:
+# blockio - <No description provided>
+# fileio - <No description provided>
+# auto - <No description provided>
+#iscsi_iotype = fileio
+
+# The default block size used when copying/clearing volumes (string value)
+#volume_dd_blocksize = 1M
+
+# The blkio cgroup name to be used to limit bandwidth of volume copy (string
+# value)
+#volume_copy_blkio_cgroup_name = cinder-volume-copy
+
+# The upper limit of bandwidth of volume copy. 0 => unlimited (integer value)
+#volume_copy_bps_limit = 0
+
+# Sets the behavior of the iSCSI target to either perform write-back(on) or
+# write-through(off). This parameter is valid if target_helper is set to
+# tgtadm. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+#iscsi_write_cache = on
+
+# Sets the target-specific flags for the iSCSI target. Only used for tgtadm to
+# specify backing device flags using bsoflags option. The specified string is
+# passed as is to the underlying tool. (string value)
+#iscsi_target_flags =
+
+# Determines the target protocol for new volumes, created with the tgtadm,
+# lioadm and nvmet target helpers. To enable RDMA, set this parameter to
+# "iser". The supported iSCSI protocol values are "iscsi" and "iser"; for the
+# nvmet target, set it to "nvmet_rdma". (string value)
+# Possible values:
+# iscsi - <No description provided>
+# iser - <No description provided>
+# nvmet_rdma - <No description provided>
+# Deprecated group/name - [DEFAULT]/iscsi_protocol
+#target_protocol = iscsi
+
+# The path to the client certificate key for verification, if the driver
+# supports it. (string value)
+#driver_client_cert_key = <None>
+
+# The path to the client certificate for verification, if the driver supports
+# it. (string value)
+#driver_client_cert = <None>
+
+# Tell driver to use SSL for connection to backend storage if the driver
+# supports it. (boolean value)
+#driver_use_ssl = false
+
+# Representation of the over subscription ratio when thin provisioning is
+# enabled. The default ratio is 20.0, meaning provisioned capacity can be 20
+# times the total physical capacity. If the ratio is 10.5, provisioned
+# capacity can be 10.5 times the total physical capacity. A ratio of 1.0
+# means provisioned capacity cannot exceed the total physical capacity. If the
+# ratio is 'auto', Cinder will automatically calculate the ratio based on the
+# provisioned capacity and the used space. If not set to auto, the ratio has
+# to be a minimum of 1.0. (string value)
+#max_over_subscription_ratio = 20.0
+
+# Certain iSCSI targets have predefined target names; the SCST target driver
+# uses this name. (string value)
+#scst_target_iqn_name = <None>
+
+# SCST target implementation can choose from multiple SCST target drivers.
+# (string value)
+#scst_target_driver = iscsi
+
+# Option to enable/disable CHAP authentication for targets. (boolean value)
+#use_chap_auth = false
+
+# CHAP user name. (string value)
+#chap_username =
+
+# Password for specified CHAP account name. (string value)
+#chap_password =
+
+# Namespace for driver private data values to be saved in. (string value)
+#driver_data_namespace = <None>
+
+# String representation for an equation that will be used to filter hosts. Only
+# used when the driver filter is set to be used by the Cinder scheduler.
+# (string value)
+#filter_function = <None>
+
+# String representation for an equation that will be used to determine the
+# goodness of a host. Only used when the goodness weigher is set to be
+# used by the Cinder scheduler. (string value)
+#goodness_function = <None>
+
+# If set to True the http client will validate the SSL certificate of the
+# backend endpoint. (boolean value)
+#driver_ssl_cert_verify = false
+
+# Can be used to specify a non-default path to a CA_BUNDLE file or directory
+# with certificates of trusted CAs, which will be used to validate the
+# backend. (string value)
+#driver_ssl_cert_path = <None>
+
+# List of options that control which trace info is written to the DEBUG log
+# level to assist developers. Valid values are method and api. (list value)
+#trace_flags = <None>
+
+# Multi opt of dictionaries to represent a replication target device.  This
+# option may be specified multiple times in a single config section to specify
+# multiple replication target devices.  Each entry takes the standard dict
+# config form: replication_device =
+# target_device_id:<required>,key1:value1,key2:value2... (dict value)
+#replication_device = <None>
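+# As an illustration only (the backend IDs and keys below are hypothetical;
+# the valid keys depend on the driver in use), two replication target devices
+# could be declared following the dict form described above:
+# replication_device = target_device_id:backend1,san_ip:192.0.2.10
+# replication_device = target_device_id:backend2,san_ip:192.0.2.11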
+
+# If set to True, upload-to-image in raw format will create a cloned volume and
+# register its location to the image service, instead of uploading the volume
+# content. The cinder backend and locations support must be enabled in the
+# image service. (boolean value)
+#image_upload_use_cinder_backend = false
+
+# If set to True, the image volume created by upload-to-image will be placed in
+# the internal tenant. Otherwise, the image volume is created in the current
+# context's tenant. (boolean value)
+#image_upload_use_internal_tenant = false
+
+# Enable the image volume cache for this backend. (boolean value)
+#image_volume_cache_enabled = false
+
+# Max size of the image volume cache for this backend in GB. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_size_gb = 0
+
+# Max number of entries allowed in the image volume cache. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_count = 0
+
+# Report to clients of Cinder that the backend supports discard (aka.
+# trim/unmap). This will not actually change the behavior of the backend or the
+# client directly, it will only notify that it can be used. (boolean value)
+#report_discard_supported = false
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#storage_protocol = iscsi
+
+# If this is set to True, a temporary snapshot will be created for performing
+# non-disruptive backups. Otherwise a temporary volume will be cloned in order
+# to perform a backup. (boolean value)
+#backup_use_temp_snapshot = false
+
+# Set this to True when you want to allow an unsupported driver to start.
+# Drivers that haven't maintained a working CI system and testing are marked as
+# unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+# Availability zone for this volume backend. If not set, the
+# storage_availability_zone option value is used as the default for all
+# backends. (string value)
+#backend_availability_zone = <None>
+
+# The maximum number of times to rescan the iSER target to find volume
+# (integer value)
+#num_iser_scan_tries = 3
+
+# Prefix for iSER volumes (string value)
+#iser_target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSER daemon is listening on (string value)
+#iser_ip_address = $my_ip
+
+# The port that the iSER daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#iser_port = 3260
+
+# The name of the iSER target user-land tool to use (string value)
+#iser_helper = tgtadm
+
+# The port that the NVMe target is listening on. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nvmet_port_id = 1
+
+# The namespace id associated with the subsystem that will be created with the
+# path for the LVM volume. (integer value)
+#nvmet_ns_id = 10
+
+# DataCore virtual disk type (single/mirrored). Mirrored virtual disks require
+# two storage servers in the server group. (string value)
+# Possible values:
+# single - <No description provided>
+# mirrored - <No description provided>
+#datacore_disk_type = single
+
+# DataCore virtual disk storage profile. (string value)
+#datacore_storage_profile = <None>
+
+# List of DataCore disk pools that can be used by volume driver. (list value)
+#datacore_disk_pools =
+
+# Seconds to wait for a response from a DataCore API call. (integer value)
+# Minimum value: 1
+#datacore_api_timeout = 300
+
+# Seconds to wait for DataCore virtual disk to come out of the "Failed" state.
+# (integer value)
+# Minimum value: 0
+#datacore_disk_failed_delay = 15
+
+# List of iSCSI targets that cannot be used to attach volume. To prevent the
+# DataCore iSCSI volume driver from using some front-end targets in volume
+# attachment, specify this option and list the iqn and target machine for each
+# target as the value, such as <iqn:target name>, <iqn:target name>,
+# <iqn:target name>. (list value)
+#datacore_iscsi_unallowed_targets =
+
+# Configure CHAP authentication for iSCSI connections. (boolean value)
+#datacore_iscsi_chap_enabled = false
+
+# iSCSI CHAP authentication password storage file. (string value)
+#datacore_iscsi_chap_storage = <None>
+
+# Storage system autoexpand parameter for volumes (True/False) (boolean value)
+#instorage_mcs_vol_autoexpand = true
+
+# Storage system compression option for volumes (boolean value)
+#instorage_mcs_vol_compression = false
+
+# Enable InTier for volumes (boolean value)
+#instorage_mcs_vol_intier = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#instorage_mcs_allow_tenant_qos = false
+
+# Storage system grain size parameter for volumes (32/64/128/256) (integer
+# value)
+# Minimum value: 32
+# Maximum value: 256
+#instorage_mcs_vol_grainsize = 256
+
+# Storage system space-efficiency parameter for volumes (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#instorage_mcs_vol_rsize = 2
+
+# Storage system threshold for volume capacity warnings (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#instorage_mcs_vol_warning = 0
+
+# Maximum number of seconds to wait for LocalCopy to be prepared. (integer
+# value)
+# Minimum value: 1
+# Maximum value: 600
+#instorage_mcs_localcopy_timeout = 120
+
+# Specifies the InStorage LocalCopy copy rate to be used when creating a full
+# volume copy. The default rate is 50, and the valid rates are 1-100.
+# (integer value)
+# Minimum value: 1
+# Maximum value: 100
+#instorage_mcs_localcopy_rate = 50
+
+# The I/O group in which to allocate volumes. It can be a comma-separated
+# list, in which case the driver will select an io_group based on the least
+# number of volumes associated with the io_group. (string value)
+#instorage_mcs_vol_iogrp = 0
+
+# Specifies secondary management IP or hostname to be used if san_ip is invalid
+# or becomes inaccessible. (string value)
+#instorage_san_secondary_ip = <None>
+
+# Comma separated list of storage system storage pools for volumes. (list
+# value)
+#instorage_mcs_volpool_name = volpool
+
+# Configure CHAP authentication for iSCSI connections (Default: Enabled)
+# (boolean value)
+#instorage_mcs_iscsi_chap_enabled = true
+
+# The StorPool template for volumes with no type. (string value)
+#storpool_template = <None>
+
+# The default StorPool chain replication value.  Used when creating a volume
+# with no specified type if storpool_template is not set.  Also used for
+# calculating the apparent free space reported in the stats. (integer value)
+#storpool_replication = 3
+
+# Create sparse Lun. (boolean value)
+#vrts_lun_sparse = true
+
+# VA config file. (string value)
+#vrts_target_config = /etc/cinder/vrts_target.xml
+
+# Timeout for creating the volume to migrate to when performing volume
+# migration (seconds) (integer value)
+#migration_create_volume_timeout_secs = 300
+
+# Offload pending volume delete during volume service startup (boolean value)
+#volume_service_inithost_offload = false
+
+# FC Zoning mode configured, only 'fabric' is supported now. (string value)
+#zoning_mode = <None>
+
+# Sets the value of TCP_KEEPALIVE (True/False) for each server socket. (boolean
+# value)
+#tcp_keepalive = true
+
+# Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not
+# supported on OS X. (integer value)
+#tcp_keepalive_interval = <None>
+
+# Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
+# (integer value)
+#tcp_keepalive_count = <None>
+
+
+[backend]
+
+#
+# From cinder
+#
+
+# Backend override of host value. (string value)
+#backend_host = <None>
+[lvm-driver]
+host=cmp002
+volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
+volume_backend_name=lvm-driver
 iscsi_helper = tgtadm
-volume_name_template = volume-%s
-volume_group = cinder-volumes
-verbose = True
-auth_strategy = keystone
-state_path = /var/lib/cinder
-lock_path = /var/lock/cinder
-volumes_dir = /var/lib/cinder/volumes
-enabled_backends = lvm
+volume_group = vgroot
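+# NOTE: a named backend section such as [lvm-driver] only takes effect when
+# it is listed in enabled_backends under [DEFAULT], for example (assuming
+# this is the only backend configured):
+# enabled_backends = lvm-driver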
+
+
+[backend_defaults]
+
+#
+# From cinder
+#
+
+# Number of times to attempt to run flakey shell commands (integer value)
+#num_shell_tries = 3
+
+# The percentage of backend capacity is reserved (integer value)
+# Minimum value: 0
+# Maximum value: 100
+#reserved_percentage = 0
+
+# Prefix for iSCSI volumes (string value)
+# Deprecated group/name - [backend_defaults]/iscsi_target_prefix
+#target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSCSI daemon is listening on (string value)
+# Deprecated group/name - [backend_defaults]/iscsi_ip_address
+#target_ip_address = $my_ip
+
+# The list of secondary IP addresses of the iSCSI daemon (list value)
+#iscsi_secondary_ip_addresses =
+
+# The port that the iSCSI daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [backend_defaults]/iscsi_port
+#target_port = 3260
+
+# The maximum number of times to rescan targets to find volume (integer value)
+#num_volume_device_scan_tries = 3
+
+# The backend name for a given driver implementation (string value)
+#volume_backend_name = <None>
+
+# Do we attach/detach volumes in cinder using multipath for volume to image and
+# image to volume transfers? (boolean value)
+#use_multipath_for_image_xfer = false
+
+# If this is set to True, attachment of volumes for image transfer will be
+# aborted when multipathd is not running. Otherwise, it will fall back to a
+# single path. (boolean value)
+#enforce_multipath_for_image_xfer = false
+
+# Method used to wipe old volumes (string value)
+# Possible values:
+# none - <No description provided>
+# zero - <No description provided>
+#volume_clear = zero
+
+# Size in MiB to wipe at the start of old volumes. 1024 MiB at max. 0 => all
+# (integer value)
+# Maximum value: 1024
+#volume_clear_size = 0
+
+# The flag to pass to ionice to alter the i/o priority of the process used to
+# zero a volume after deletion, for example "-c3" for idle only priority.
+# (string value)
+#volume_clear_ionice = <None>
+
+# Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI
+# support, scstadmin for SCST target support, ietadm for iSCSI Enterprise
+# Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, or fake
+# for testing. (string value)
+# Possible values:
+# tgtadm - <No description provided>
+# lioadm - <No description provided>
+# scstadmin - <No description provided>
+# iscsictl - <No description provided>
+# ietadm - <No description provided>
+# nvmet - <No description provided>
+# fake - <No description provided>
+# Deprecated group/name - [backend_defaults]/iscsi_helper
+#target_helper = tgtadm
+
+# Volume configuration file storage directory (string value)
+#volumes_dir = $state_path/volumes
+
+# IET configuration file (string value)
+#iet_conf = /etc/iet/ietd.conf
+
+# Chiscsi (CXT) global defaults configuration file (string value)
+#chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf
+
+# Sets the behavior of the iSCSI target to either perform blockio or fileio;
+# optionally, auto can be set and Cinder will autodetect the type of backing
+# device (string value)
+# Possible values:
+# blockio - <No description provided>
+# fileio - <No description provided>
+# auto - <No description provided>
+#iscsi_iotype = fileio
+
+# The default block size used when copying/clearing volumes (string value)
+#volume_dd_blocksize = 1M
+
+# The blkio cgroup name to be used to limit bandwidth of volume copy (string
+# value)
+#volume_copy_blkio_cgroup_name = cinder-volume-copy
+
+# The upper limit of bandwidth of volume copy. 0 => unlimited (integer value)
+#volume_copy_bps_limit = 0
+
+# Sets the behavior of the iSCSI target to either perform write-back(on) or
+# write-through(off). This parameter is valid if target_helper is set to
+# tgtadm. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+#iscsi_write_cache = on
+
+# Sets the target-specific flags for the iSCSI target. Only used for tgtadm to
+# specify backing device flags using bsoflags option. The specified string is
+# passed as is to the underlying tool. (string value)
+#iscsi_target_flags =
+
+# Determines the target protocol for new volumes, created with the tgtadm,
+# lioadm and nvmet target helpers. To enable RDMA, set this parameter to
+# "iser". The supported iSCSI protocol values are "iscsi" and "iser"; for the
+# nvmet target, set it to "nvmet_rdma". (string value)
+# Possible values:
+# iscsi - <No description provided>
+# iser - <No description provided>
+# nvmet_rdma - <No description provided>
+# Deprecated group/name - [backend_defaults]/iscsi_protocol
+#target_protocol = iscsi
+
+# The path to the client certificate key for verification, if the driver
+# supports it. (string value)
+#driver_client_cert_key = <None>
+
+# The path to the client certificate for verification, if the driver supports
+# it. (string value)
+#driver_client_cert = <None>
+
+# Tell driver to use SSL for connection to backend storage if the driver
+# supports it. (boolean value)
+#driver_use_ssl = false
+
+# Representation of the over subscription ratio when thin provisioning is
+# enabled. The default ratio is 20.0, meaning provisioned capacity can be 20
+# times the total physical capacity. If the ratio is 10.5, provisioned
+# capacity can be 10.5 times the total physical capacity. A ratio of 1.0
+# means provisioned capacity cannot exceed the total physical capacity. If the
+# ratio is 'auto', Cinder will automatically calculate the ratio based on the
+# provisioned capacity and the used space. If not set to auto, the ratio has
+# to be a minimum of 1.0. (string value)
+#max_over_subscription_ratio = 20.0
+
+# Certain iSCSI targets have predefined target names; the SCST target driver
+# uses this name. (string value)
+#scst_target_iqn_name = <None>
+
+# SCST target implementation can choose from multiple SCST target drivers.
+# (string value)
+#scst_target_driver = iscsi
+
+# Option to enable/disable CHAP authentication for targets. (boolean value)
+#use_chap_auth = false
+
+# CHAP user name. (string value)
+#chap_username =
+
+# Password for specified CHAP account name. (string value)
+#chap_password =
+
+# Namespace for driver private data values to be saved in. (string value)
+#driver_data_namespace = <None>
+
+# String representation for an equation that will be used to filter hosts. Only
+# used when the driver filter is set to be used by the Cinder scheduler.
+# (string value)
+#filter_function = <None>
+
+# String representation for an equation that will be used to determine the
+# goodness of a host. Only used when the goodness weigher is set to be
+# used by the Cinder scheduler. (string value)
+#goodness_function = <None>
+
+# If set to True the http client will validate the SSL certificate of the
+# backend endpoint. (boolean value)
+#driver_ssl_cert_verify = false
+
+# Can be used to specify a non-default path to a CA_BUNDLE file or directory
+# with certificates of trusted CAs, which will be used to validate the
+# backend. (string value)
+#driver_ssl_cert_path = <None>
+
+# List of options that control which trace info is written to the DEBUG log
+# level to assist developers. Valid values are method and api. (list value)
+#trace_flags = <None>
+
+# Multi opt of dictionaries to represent a replication target device.  This
+# option may be specified multiple times in a single config section to specify
+# multiple replication target devices.  Each entry takes the standard dict
+# config form: replication_device =
+# target_device_id:<required>,key1:value1,key2:value2... (dict value)
+#replication_device = <None>
+
+# If set to True, upload-to-image in raw format will create a cloned volume and
+# register its location to the image service, instead of uploading the volume
+# content. The cinder backend and locations support must be enabled in the
+# image service. (boolean value)
+#image_upload_use_cinder_backend = false
+
+# If set to True, the image volume created by upload-to-image will be placed in
+# the internal tenant. Otherwise, the image volume is created in the current
+# context's tenant. (boolean value)
+#image_upload_use_internal_tenant = false
+
+# Enable the image volume cache for this backend. (boolean value)
+#image_volume_cache_enabled = false
+
+# Max size of the image volume cache for this backend in GB. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_size_gb = 0
+
+# Max number of entries allowed in the image volume cache. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_count = 0
+
+# Report to clients of Cinder that the backend supports discard (aka.
+# trim/unmap). This will not actually change the behavior of the backend or the
+# client directly, it will only notify that it can be used. (boolean value)
+#report_discard_supported = false
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#storage_protocol = iscsi
+
+# If this is set to True, a temporary snapshot will be created for performing
+# non-disruptive backups. Otherwise a temporary volume will be cloned in order
+# to perform a backup. (boolean value)
+#backup_use_temp_snapshot = false
+
+# Set this to True when you want to allow an unsupported driver to start.
+# Drivers that haven't maintained a working CI system and testing are marked as
+# unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+# Availability zone for this volume backend. If not set, the
+# storage_availability_zone option value is used as the default for all
+# backends. (string value)
+#backend_availability_zone = <None>
+
+# The maximum number of times to rescan the iSER target to find volume
+# (integer value)
+#num_iser_scan_tries = 3
+
+# Prefix for iSER volumes (string value)
+#iser_target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSER daemon is listening on (string value)
+#iser_ip_address = $my_ip
+
+# The port that the iSER daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#iser_port = 3260
+
+# The name of the iSER target user-land tool to use (string value)
+#iser_helper = tgtadm
+
+# The port that the NVMe target is listening on. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nvmet_port_id = 1
+
+# The namespace id associated with the subsystem that will be created with the
+# path for the LVM volume. (integer value)
+#nvmet_ns_id = 10
+
+# Hostname for the CoprHD Instance (string value)
+#coprhd_hostname = <None>
+
+# Port for the CoprHD Instance (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#coprhd_port = 4443
+
+# Username for accessing the CoprHD Instance (string value)
+#coprhd_username = <None>
+
+# Password for accessing the CoprHD Instance (string value)
+#coprhd_password = <None>
+
+# Tenant to utilize within the CoprHD Instance (string value)
+#coprhd_tenant = <None>
+
+# Project to utilize within the CoprHD Instance (string value)
+#coprhd_project = <None>
+
+# Virtual Array to utilize within the CoprHD Instance (string value)
+#coprhd_varray = <None>
+
+# True | False to indicate if the storage array in CoprHD is VMAX or VPLEX
+# (boolean value)
+#coprhd_emulate_snapshot = false
+
+# Rest Gateway IP or FQDN for Scaleio (string value)
+#coprhd_scaleio_rest_gateway_host = None
+
+# Rest Gateway Port for Scaleio (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#coprhd_scaleio_rest_gateway_port = 4984
+
+# Username for Rest Gateway (string value)
+#coprhd_scaleio_rest_server_username = <None>
+
+# Rest Gateway Password (string value)
+#coprhd_scaleio_rest_server_password = <None>
+
+# verify server certificate (boolean value)
+#scaleio_verify_server_certificate = false
+
+# Server certificate path (string value)
+#scaleio_server_certificate_path = <None>
+
+# Datera API port. (string value)
+#datera_api_port = 7717
+
+# DEPRECATED: Datera API version. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#datera_api_version = 2
+
+# Timeout for HTTP 503 retry messages (integer value)
+#datera_503_timeout = 120
+
+# Interval between 503 retries (integer value)
+#datera_503_interval = 5
+
+# True to set function arg and return logging (boolean value)
+#datera_debug = false
+
+# ONLY FOR DEBUG/TESTING PURPOSES
+# True to set replica_count to 1 (boolean value)
+#datera_debug_replica_count_override = false
+
+# If set to 'Map' --> OpenStack project ID will be mapped implicitly to Datera
+# tenant ID
+# If set to 'None' --> Datera tenant ID will not be used during volume
+# provisioning
+# If set to anything else --> Datera tenant ID will be the provided value
+# (string value)
+#datera_tenant_id = <None>
+
+# Set to True to disable profiling in the Datera driver (boolean value)
+#datera_disable_profiler = false
+
+# Group name to use for creating volumes. Defaults to "group-0". (string value)
+#eqlx_group_name = group-0
+
+# Maximum retry count for reconnection. Default is 5. (integer value)
+# Minimum value: 0
+#eqlx_cli_max_retries = 5
+
+# Pool in which volumes will be created. Defaults to "default". (string value)
+#eqlx_pool = default
+
+# Storage Center System Serial Number (integer value)
+#dell_sc_ssn = 64702
+
+# Dell API port (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#dell_sc_api_port = 3033
+
+# Name of the server folder to use on the Storage Center (string value)
+#dell_sc_server_folder = openstack
+
+# Name of the volume folder to use on the Storage Center (string value)
+#dell_sc_volume_folder = openstack
+
+# Enable HTTPS SC certificate verification (boolean value)
+#dell_sc_verify_cert = false
+
+# IP address of secondary DSM volume (string value)
+#secondary_san_ip =
+
+# Secondary DSM user name (string value)
+#secondary_san_login = Admin
+
+# Secondary DSM user password name (string value)
+#secondary_san_password =
+
+# Secondary Dell API port (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#secondary_sc_api_port = 3033
+
+# Dell SC API async call default timeout in seconds. (integer value)
+#dell_api_async_rest_timeout = 15
+
+# Dell SC API sync call default timeout in seconds. (integer value)
+#dell_api_sync_rest_timeout = 30
+
+# Domain IP to be excluded from iSCSI returns. (IP address value)
+#excluded_domain_ip = <None>
+
+# Server OS type to use when creating a new server on the Storage Center.
+# (string value)
+#dell_server_os = Red Hat Linux 6.x
+
+# REST server port. (string value)
+#sio_rest_server_port = 443
+
+# Verify server certificate. (boolean value)
+#sio_verify_server_certificate = false
+
+# Server certificate path. (string value)
+#sio_server_certificate_path = <None>
+
+# Round up volume capacity. (boolean value)
+#sio_round_volume_capacity = true
+
+# Unmap volume before deletion. (boolean value)
+#sio_unmap_volume_before_deletion = false
+
+# Storage Pools. (string value)
+#sio_storage_pools = <None>
+
+# DEPRECATED: Protection Domain ID. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_protection_domain_id = <None>
+
+# DEPRECATED: Protection Domain name. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_protection_domain_name = <None>
+
+# DEPRECATED: Storage Pool name. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_storage_pool_name = <None>
+
+# DEPRECATED: Storage Pool ID. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_storage_pool_id = <None>
+
+# ScaleIO API version. (string value)
+#sio_server_api_version = <None>
+
+# max_over_subscription_ratio setting for the ScaleIO driver. This replaces
+# the general max_over_subscription_ratio, which has no effect in this driver.
+# The maximum value allowed for ScaleIO is 10.0. (floating point value)
+#sio_max_over_subscription_ratio = 10.0
+
+# Allow thick volumes to be created in Storage Pools when zero padding is
+# disabled. This option should not be enabled if multiple tenants will utilize
+# thick volumes from a shared Storage Pool. (boolean value)
+#sio_allow_non_padded_thick_volumes = false
+
+# A comma-separated list of storage pool names to be used. (list value)
+#unity_storage_pool_names = <None>
+
+# A comma-separated list of iSCSI or FC ports to be used. Each port can be
+# Unix-style glob expressions. (list value)
+#unity_io_ports = <None>
+
+# To remove the host from Unity when the last LUN is detached from it. By
+# default, it is False. (boolean value)
+#remove_empty_host = false
+
+# DEPRECATED: Use this file for cinder emc plugin config data. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#cinder_dell_emc_config_file = /etc/cinder/cinder_dell_emc_config.xml
+
+# Use this value to specify length of the interval in seconds. (integer value)
+#interval = 3
+
+# Use this value to specify number of retries. (integer value)
+#retries = 200
+
+# Use this value to enable the initiator_check. (boolean value)
+#initiator_check = false
+
+# REST server port number. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_rest_port = 8443
+
+# Serial number of the array to connect to. (string value)
+#vmax_array = <None>
+
+# Storage resource pool on array to use for provisioning. (string value)
+#vmax_srp = <None>
+
+# Service level to use for provisioning storage. (string value)
+#vmax_service_level = <None>
+
+# Workload (string value)
+#vmax_workload = <None>
+
+# List of port groups containing frontend ports configured prior for server
+# connection. (list value)
+#vmax_port_groups = <None>
+
+# VNX authentication scope type. By default, the value is global. (string
+# value)
+#storage_vnx_authentication_type = global
+
+# Directory path that contains the VNX security file. Make sure the security
+# file is generated first. (string value)
+#storage_vnx_security_file_dir = <None>
+
+# Naviseccli Path. (string value)
+#naviseccli_path = <None>
+
+# Comma-separated list of storage pool names to be used. (list value)
+#storage_vnx_pool_names = <None>
+
+# Default timeout for CLI operations in minutes. For example, LUN migration is
+# a typical long-running operation, which depends on the LUN size and the load
+# of the array. An upper bound for the specific deployment can be set to avoid
+# unnecessarily long waits. By default, it is 365 days long. (integer value)
+#default_timeout = 31536000
+
+# Default max number of LUNs in a storage group. By default, the value is 255.
+# (integer value)
+#max_luns_per_storage_group = 255
+
+# Destroy the storage group when the last LUN is removed from it. By default,
+# the value is False. (boolean value)
+#destroy_empty_storage_group = false
+
+# Mapping between hostname and its iSCSI initiator IP addresses. (string value)
+#iscsi_initiators = <None>
+
+# Comma separated iSCSI or FC ports to be used in Nova or Cinder. (list value)
+#io_port_list = <None>
+
+# Automatically register initiators. By default, the value is False. (boolean
+# value)
+#initiator_auto_registration = false
+
+# Automatically deregister initiators after the related storage group is
+# destroyed. By default, the value is False. (boolean value)
+#initiator_auto_deregistration = false
+
+# DEPRECATED: Report free_capacity_gb as 0 when the limit to maximum number of
+# pool LUNs is reached. By default, the value is False. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#check_max_pool_luns_threshold = false
+
+# Delete a LUN even if it is in Storage Groups. By default, the value is False.
+# (boolean value)
+#force_delete_lun_in_storagegroup = false
+
+# Force LUN creation even if the full threshold of pool is reached. By default,
+# the value is False. (boolean value)
+#ignore_pool_full_threshold = false
+
+# XMS cluster id in multi-cluster environment (string value)
+#xtremio_cluster_name =
+
+# Number of retries in case array is busy (integer value)
+#xtremio_array_busy_retry_count = 5
+
+# Interval between retries in case array is busy (integer value)
+#xtremio_array_busy_retry_interval = 5
+
+# Number of volumes created from each cached glance image (integer value)
+#xtremio_volumes_per_glance_cache = 100
+
+# Whether the driver should remove initiator groups with no volumes after the
+# last connection is terminated. Since the previous behavior was to leave the
+# initiator group in place, this defaults to False (initiator groups without
+# connected volumes are not deleted); setting this parameter to True will
+# remove any initiator group once its connection to the last volume is
+# terminated. (boolean value)
+#xtremio_clean_unused_ig = false
+
+# The IP of DMS client socket server (IP address value)
+#disco_client = 127.0.0.1
+
+# The port to connect DMS client socket server (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#disco_client_port = 9898
+
+# DEPRECATED: Path to the wsdl file to communicate with DISCO request manager
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#disco_wsdl_path = /etc/cinder/DISCOService.wsdl
+
+# DEPRECATED: The IP address of the REST server (IP address value)
+# Deprecated group/name - [DEFAULT]/rest_ip
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Using san_ip later
+#disco_rest_ip = <None>
+
+# Use soap client or rest client for communicating with DISCO. Possible values
+# are "soap" or "rest". (string value)
+# Possible values:
+# soap - <No description provided>
+# rest - <No description provided>
+# Deprecated group/name - [DEFAULT]/choice_client
+#disco_choice_client = <None>
+
+# DEPRECATED: The port of DISCO source API (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Using san_api_port later
+#disco_src_api_port = 8080
+
+# Prefix prepended to the volume name to distinguish DISCO volumes created
+# through OpenStack from other volumes (string value)
+# Deprecated group/name - [backend_defaults]/volume_name_prefix
+#disco_volume_name_prefix = openstack-
+
+# How long we check whether a snapshot is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/snapshot_check_timeout
+#disco_snapshot_check_timeout = 3600
+
+# How long we check whether a restore is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/restore_check_timeout
+#disco_restore_check_timeout = 3600
+
+# How long we check whether a clone is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/clone_check_timeout
+#disco_clone_check_timeout = 3600
+
+# How long we wait before retrying to get an item detail (integer value)
+# Deprecated group/name - [backend_defaults]/retry_interval
+#disco_retry_interval = 1
+
+# Number of nodes that should replicate the data. (integer value)
+#drbdmanage_redundancy = 1
+
+# Resource deployment completion wait policy. (string value)
+#drbdmanage_resource_policy = {"ratio": "0.51", "timeout": "60"}
+
+# Disk options to set on new resources. See http://www.drbd.org/en/doc/users-
+# guide-90/re-drbdconf for all the details. (string value)
+#drbdmanage_disk_options = {"c-min-rate": "4M"}
+
+# Net options to set on new resources. See http://www.drbd.org/en/doc/users-
+# guide-90/re-drbdconf for all the details. (string value)
+#drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"}
+
+# Resource options to set on new resources. See
+# http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
+# (string value)
+#drbdmanage_resource_options = {"auto-promote-timeout": "300"}
+
+# Snapshot completion wait policy. (string value)
+#drbdmanage_snapshot_policy = {"count": "1", "timeout": "60"}
+
+# Volume resize completion wait policy. (string value)
+#drbdmanage_resize_policy = {"timeout": "60"}
+
+# Resource deployment completion wait plugin. (string value)
+#drbdmanage_resource_plugin = drbdmanage.plugins.plugins.wait_for.WaitForResource
+
+# Snapshot completion wait plugin. (string value)
+#drbdmanage_snapshot_plugin = drbdmanage.plugins.plugins.wait_for.WaitForSnapshot
+
+# Volume resize completion wait plugin. (string value)
+#drbdmanage_resize_plugin = drbdmanage.plugins.plugins.wait_for.WaitForVolumeSize
+
+# If set, the c-vol node will receive a usable /dev/drbdX device, even if the
+# actual data is stored on other nodes only. This is useful for debugging,
+# maintenance, and to be able to do the iSCSI export from the c-vol node.
+# (boolean value)
+#drbdmanage_devs_on_volume = true
+
+# Config file for the cinder eternus_dx volume driver (string value)
+#cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml
+
+# The flag of thin storage allocation. (boolean value)
+#dsware_isthin = false
+
+# FusionStorage manager IP address for cinder-volume. (string value)
+#dsware_manager =
+
+# FusionStorage agent IP address range. (string value)
+#fusionstorageagent =
+
+# Pool type, like sata-2copy. (string value)
+#pool_type = default
+
+# Pool IDs permitted to be used. (list value)
+#pool_id_filter =
+
+# Create clone volume timeout. (integer value)
+#clone_volume_timeout = 680
+
+# Space network name to use for data transfer (string value)
+#hgst_net = Net 1 (IPv4)
+
+# Comma separated list of Space storage servers:devices. ex:
+# os1_stor:gbd0,os2_stor:gbd0 (string value)
+#hgst_storage_servers = os:gbd0
+
+# Should spaces be redundantly stored (1/0) (string value)
+#hgst_redundancy = 0
+
+# User to own created spaces (string value)
+#hgst_space_user = root
+
+# Group to own created spaces (string value)
+#hgst_space_group = disk
+
+# UNIX mode for created spaces (string value)
+#hgst_space_mode = 0600
+
+# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1 (string value)
+#hpe3par_api_url =
+
+# 3PAR username with the 'edit' role (string value)
+#hpe3par_username =
+
+# 3PAR password for the user specified in hpe3par_username (string value)
+#hpe3par_password =
+
+# List of the CPG(s) to use for volume creation (list value)
+#hpe3par_cpg = OpenStack
+
+# The CPG to use for Snapshots for volumes. If empty the userCPG will be used.
+# (string value)
+#hpe3par_cpg_snap =
+
+# The time in hours to retain a snapshot.  You can't delete it before this
+# expires. (string value)
+#hpe3par_snapshot_retention =
+
+# The time in hours when a snapshot expires and is deleted. This must be
+# larger than the retention time set in hpe3par_snapshot_retention. (string
+# value)
+
+# Enable HTTP debugging to 3PAR (boolean value)
+#hpe3par_debug = false
+
+# List of target iSCSI addresses to use. (list value)
+#hpe3par_iscsi_ips =
+
+# Enable CHAP authentication for iSCSI connections. (boolean value)
+#hpe3par_iscsi_chap_enabled = false
+
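+# Example (hypothetical values) of a minimal HPE 3PAR iSCSI backend section
+# combining the options above. The driver path matches the Queens-era source
+# tree; verify it against your installed Cinder release. The section name must
+# also be listed in enabled_backends in [DEFAULT].
+#
+# [3par-iscsi]
+# volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+# volume_backend_name = 3par-iscsi
+# hpe3par_api_url = https://10.10.0.5:8080/api/v1
+# hpe3par_username = 3paradm
+# hpe3par_password = secret
+# hpe3par_cpg = OpenStack
+# hpe3par_iscsi_ips = 10.10.1.5,10.10.1.6
+# san_ip = 10.10.0.5
+# san_login = 3paradm
+# san_password = secret
+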
+# HPE LeftHand WSAPI Server Url like https://<LeftHand ip>:8081/lhos (uri
+# value)
+# Deprecated group/name - [backend_defaults]/hplefthand_api_url
+#hpelefthand_api_url = <None>
+
+# HPE LeftHand Super user username (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_username
+#hpelefthand_username = <None>
+
+# HPE LeftHand Super user password (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_password
+#hpelefthand_password = <None>
+
+# HPE LeftHand cluster name (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_clustername
+#hpelefthand_clustername = <None>
+
+# Configure CHAP authentication for iSCSI connections (Default: Disabled)
+# (boolean value)
+# Deprecated group/name - [backend_defaults]/hplefthand_iscsi_chap_enabled
+#hpelefthand_iscsi_chap_enabled = false
+
+# Enable HTTP debugging to LeftHand (boolean value)
+# Deprecated group/name - [backend_defaults]/hplefthand_debug
+#hpelefthand_debug = false
+
+# Port number of SSH service. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#hpelefthand_ssh_port = 16022
+
+# The configuration file for the Cinder Huawei driver. (string value)
+#cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
+
+# The remote device hypermetro will use. (string value)
+#hypermetro_devices = <None>
+
+# The remote metro device san user. (string value)
+#metro_san_user = <None>
+
+# The remote metro device san password. (string value)
+#metro_san_password = <None>
+
+# The remote metro device domain name. (string value)
+#metro_domain_name = <None>
+
+# The remote metro device request url. (string value)
+#metro_san_address = <None>
+
+# The remote metro device pool names. (string value)
+#metro_storage_pools = <None>
+
+# Connection protocol should be FC. (Default is FC.) (string value)
+#flashsystem_connection_protocol = FC
+
+# Allow a vdisk to be mapped to multiple hosts. (Default is True) (boolean
+# value)
+#flashsystem_multihostmap_enabled = true
+
+# DEPRECATED: This option no longer has any effect. It is deprecated and will
+# be removed in the next release. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#flashsystem_multipath_enabled = false
+
+# Default iSCSI Port ID of FlashSystem. (Default port is 0.) (integer value)
+#flashsystem_iscsi_portid = 0
+
+# Specifies the path of the GPFS directory where Block Storage volume and
+# snapshot files are stored. (string value)
+#gpfs_mount_point_base = <None>
+
+# Specifies the path of the Image service repository in GPFS.  Leave undefined
+# if not storing images in GPFS. (string value)
+#gpfs_images_dir = <None>
+
+# Specifies the type of image copy to be used.  Set this when the Image service
+# repository also uses GPFS so that image files can be transferred efficiently
+# from the Image service to the Block Storage service. There are two valid
+# values: "copy" specifies that a full copy of the image is made;
+# "copy_on_write" specifies that copy-on-write optimization strategy is used
+# and unmodified blocks of the image file are shared efficiently. (string
+# value)
+# Possible values:
+# copy - <No description provided>
+# copy_on_write - <No description provided>
+# <None> - <No description provided>
+#gpfs_images_share_mode = <None>
+
+# Specifies an upper limit on the number of indirections required to reach a
+# specific block due to snapshots or clones.  A lengthy chain of copy-on-write
+# snapshots or clones can have a negative impact on performance, but improves
+# space utilization.  0 indicates unlimited clone depth. (integer value)
+#gpfs_max_clone_depth = 0
+
+# Specifies that volumes are created as sparse files which initially consume no
+# space. If set to False, the volume is created as a fully allocated file, in
+# which case, creation may take a significantly longer time. (boolean value)
+#gpfs_sparse_volumes = true
+
+# Specifies the storage pool that volumes are assigned to. By default, the
+# system storage pool is used. (string value)
+#gpfs_storage_pool = system
+
+# Comma-separated list of IP address or hostnames of GPFS nodes. (list value)
+#gpfs_hosts =
+
+# Username for GPFS nodes. (string value)
+#gpfs_user_login = root
+
+# Password for GPFS node user. (string value)
+#gpfs_user_password =
+
+# Filename of private key to use for SSH authentication. (string value)
+#gpfs_private_key =
+
+# SSH port to use. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#gpfs_ssh_port = 22
+
+# File containing SSH host keys for the GPFS nodes with which the driver needs
+# to communicate. Default=$state_path/ssh_known_hosts (string value)
+#gpfs_hosts_key_file = $state_path/ssh_known_hosts
+
+# Option to enable strict gpfs host key checking while connecting to gpfs
+# nodes. Default=False (boolean value)
+#gpfs_strict_host_key_policy = false
+
+# Mapping between IODevice address and unit address. (string value)
+#ds8k_devadd_unitadd_mapping =
+
+# Set the first two digits of SSID. (string value)
+#ds8k_ssid_prefix = FF
+
+# Reserve LSSs for consistency group. (string value)
+#lss_range_for_cg =
+
+# Set to zLinux if your OpenStack version is prior to Liberty and you're
+# connecting to zLinux systems. Otherwise set to auto. Valid values for this
+# parameter are: 'auto', 'AMDLinuxRHEL', 'AMDLinuxSuse', 'AppleOSX', 'Fujitsu',
+# 'Hp', 'HpTru64', 'HpVms', 'LinuxDT', 'LinuxRF', 'LinuxRHEL', 'LinuxSuse',
+# 'Novell', 'SGI', 'SVC', 'SanFsAIX', 'SanFsLinux', 'Sun', 'VMWare', 'Win2000',
+# 'Win2003', 'Win2008', 'Win2012', 'iLinux', 'nSeries', 'pLinux', 'pSeries',
+# 'pSeriesPowerswap', 'zLinux', 'iSeries'. (string value)
+#ds8k_host_type = auto
+
+# Proxy driver that connects to the IBM Storage Array (string value)
+#proxy = cinder.volume.drivers.ibm.ibm_storage.proxy.IBMStorageProxy
+
+# Connection type to the IBM Storage Array (string value)
+# Possible values:
+# fibre_channel - <No description provided>
+# iscsi - <No description provided>
+#connection_type = iscsi
+
+# CHAP authentication mode, effective only for iscsi (disabled|enabled) (string
+# value)
+# Possible values:
+# disabled - <No description provided>
+# enabled - <No description provided>
+#chap = disabled
+
+# List of Management IP addresses (separated by commas) (string value)
+#management_ips =
+
+# Comma separated list of storage system storage pools for volumes. (list
+# value)
+#storwize_svc_volpool_name = volpool
+
+# Storage system space-efficiency parameter for volumes (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#storwize_svc_vol_rsize = 2
+
+# Storage system threshold for volume capacity warnings (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#storwize_svc_vol_warning = 0
+
+# Storage system autoexpand parameter for volumes (True/False) (boolean value)
+#storwize_svc_vol_autoexpand = true
+
+# Storage system grain size parameter for volumes (32/64/128/256) (integer
+# value)
+#storwize_svc_vol_grainsize = 256
+
+# Storage system compression option for volumes (boolean value)
+#storwize_svc_vol_compression = false
+
+# Enable Easy Tier for volumes (boolean value)
+#storwize_svc_vol_easytier = true
+
+# The I/O group in which to allocate volumes. It can be a comma-separated list
+# in which case the driver will select an io_group based on least number of
+# volumes associated with the io_group. (string value)
+#storwize_svc_vol_iogrp = 0
+
+# Maximum number of seconds to wait for FlashCopy to be prepared. (integer
+# value)
+# Minimum value: 1
+# Maximum value: 600
+#storwize_svc_flashcopy_timeout = 120
+
+# DEPRECATED: This option no longer has any effect. It is deprecated and will
+# be removed in the next release. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#storwize_svc_multihostmap_enabled = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#storwize_svc_allow_tenant_qos = false
+
+# If operating in stretched cluster mode, specify the name of the pool in
+# which mirrored copies are stored. Example: "pool2" (string value)
+#storwize_svc_stretched_cluster_partner = <None>
+
+# Specifies secondary management IP or hostname to be used if san_ip is invalid
+# or becomes inaccessible. (string value)
+#storwize_san_secondary_ip = <None>
+
+# Specifies that the volume not be formatted during creation. (boolean value)
+#storwize_svc_vol_nofmtdisk = false
+
+# Specifies the Storwize FlashCopy copy rate to be used when creating a full
+# volume copy. The default rate is 50, and the valid rates are 1-150.
+# (integer value)
+# Minimum value: 1
+# Maximum value: 150
+#storwize_svc_flashcopy_rate = 50
+
+# Specifies the name of the pool in which mirrored copy is stored. Example:
+# "pool2" (string value)
+#storwize_svc_mirror_pool = <None>
+
+# Specifies the name of the peer pool for hyperswap volume, the peer pool must
+# exist on the other site. (string value)
+#storwize_peer_pool = <None>
+
+# Specifies the site information for host. One WWPN or multi WWPNs used in the
+# host can be specified. For example:
+# storwize_preferred_host_site=site1:wwpn1,site2:wwpn2&wwpn3 or
+# storwize_preferred_host_site=site1:iqn1,site2:iqn2 (dict value)
+#storwize_preferred_host_site =
+
+# This defines an optional cycle period that applies to Global Mirror
+# relationships with a cycling mode of multi. A Global Mirror relationship
+# using the multi cycling_mode performs a complete cycle at most once each
+# period. The default is 300 seconds, and valid values are 60-86400.
+# (integer value)
+# Minimum value: 60
+# Maximum value: 86400
+#cycle_period_seconds = 300
+
+# Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
+# (boolean value)
+#storwize_svc_multipath_enabled = false
+
+# Configure CHAP authentication for iSCSI connections (Default: Enabled)
+# (boolean value)
+#storwize_svc_iscsi_chap_enabled = true
+
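+# Example (hypothetical values) of a minimal Storwize/SVC iSCSI backend
+# section using the options above. The driver path is from the Queens-era
+# tree; san_ip, san_login and san_password are generic SAN options not shown
+# in this excerpt.
+#
+# [storwize-iscsi]
+# volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
+# volume_backend_name = storwize-iscsi
+# san_ip = 10.20.0.10
+# san_login = superuser
+# san_password = secret
+# storwize_svc_volpool_name = volpool
+# storwize_svc_iscsi_chap_enabled = true
+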
+# Name of the pool from which volumes are allocated (string value)
+#infinidat_pool_name = <None>
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#infinidat_storage_protocol = fc
+
+# List of names of network spaces to use for iSCSI connectivity (list value)
+#infinidat_iscsi_netspaces =
+
+# Specifies whether to turn on compression for newly created volumes. (boolean
+# value)
+#infinidat_use_compression = false
+
+# If this option is set to True, the K2 driver will calculate
+# max_oversubscription_ratio automatically. (boolean value)
+#auto_calc_max_oversubscription_ratio = false
+
+# Whether our private network has a unique FQDN on each initiator. For
+# example, networks with QA systems usually have multiple servers/VMs with the
+# same FQDN. When true, this will create host entries on K2 using the FQDN;
+# when false, it will use the reversed IQN/WWNN. (boolean value)
+#unique_fqdn_network = true
+
+# Disable iSCSI discovery (sendtargets) for multipath connections on the K2
+# driver. (boolean value)
+#disable_discovery = false
+
+# Pool or Vdisk name to use for volume creation. (string value)
+#lenovo_backend_name = A
+
+# linear (for VDisk) or virtual (for Pool). (string value)
+# Possible values:
+# linear - <No description provided>
+# virtual - <No description provided>
+#lenovo_backend_type = virtual
+
+# Lenovo api interface protocol. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#lenovo_api_protocol = https
+
+# Whether to verify Lenovo array SSL certificate. (boolean value)
+#lenovo_verify_certificate = false
+
+# Lenovo array SSL certificate path. (string value)
+#lenovo_verify_certificate_path = <None>
+
+# List of comma-separated target iSCSI IP addresses. (list value)
+#lenovo_iscsi_ips =
+
+# Name for the VG that will contain exported volumes (string value)
+#volume_group = cinder-volumes
+
+# If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors
+# + 2 PVs with available space (integer value)
+#lvm_mirrors = 0
+
+# Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to
+# thin if thin is supported. (string value)
+# Possible values:
+# default - <No description provided>
+# thin - <No description provided>
+# auto - <No description provided>
+#lvm_type = auto
+
+# LVM conf file to use for the LVM driver in Cinder; this setting is ignored if
+# the specified file does not exist (You can also specify 'None' to not use a
+# conf file even if one exists). (string value)
+#lvm_conf_file = /etc/cinder/lvm.conf
+
+# Suppress leaked file descriptor warnings in LVM commands. (boolean value)
+#lvm_suppress_fd_warnings = false
+
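+# Example (hypothetical values) of an LVM backend section using the options
+# above. The volume group must already exist on the host; iscsi_helper is a
+# generic target option not shown in this excerpt, and the driver path is
+# from the Queens-era tree.
+#
+# [lvm]
+# volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
+# volume_backend_name = lvm
+# volume_group = cinder-volumes
+# lvm_type = auto
+# iscsi_helper = tgtadm
+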
+# The storage family type used on the storage system; valid values are
+# ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
+# (string value)
+# Possible values:
+# ontap_cluster - <No description provided>
+# eseries - <No description provided>
+#netapp_storage_family = ontap_cluster
+
+# The storage protocol to be used on the data path with the storage system.
+# (string value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+# nfs - <No description provided>
+#netapp_storage_protocol = <None>
+
+# The hostname (or IP address) for the storage system or proxy server. (string
+# value)
+#netapp_server_hostname = <None>
+
+# The TCP port to use for communication with the storage system or proxy
+# server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for
+# HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. (integer value)
+#netapp_server_port = <None>
+
+# The transport protocol used when communicating with the storage system or
+# proxy server. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#netapp_transport_type = http
+
+# Administrative user account name used to access the storage system or proxy
+# server. (string value)
+#netapp_login = <None>
+
+# Password for the administrative user account specified in the netapp_login
+# option. (string value)
+#netapp_password = <None>
+
+# This option specifies the virtual storage server (Vserver) name on the
+# storage cluster on which provisioning of block storage volumes should occur.
+# (string value)
+#netapp_vserver = <None>
+
+# The quantity to be multiplied by the requested volume size to ensure enough
+# space is available on the virtual storage server (Vserver) to fulfill the
+# volume creation request.  Note: this option is deprecated and will be removed
+# in favor of "reserved_percentage" in the Mitaka release. (floating point
+# value)
+#netapp_size_multiplier = 1.2
+
+# This option determines if storage space is reserved for LUN allocation. If
+# enabled, LUNs are thick provisioned. If space reservation is disabled,
+# storage space is allocated on demand. (string value)
+# Possible values:
+# enabled - <No description provided>
+# disabled - <No description provided>
+#netapp_lun_space_reservation = enabled
+
+# If the percentage of available space for an NFS share has dropped below the
+# value specified by this option, the NFS image cache will be cleaned. (integer
+# value)
+#thres_avl_size_perc_start = 20
+
+# When the percentage of available space on an NFS share has reached the
+# percentage specified by this option, the driver will stop clearing files from
+# the NFS image cache that have not been accessed in the last M minutes, where
+# M is the value of the expiry_thres_minutes configuration option. (integer
+# value)
+#thres_avl_size_perc_stop = 60
+
+# This option specifies the threshold for last access time for images in the
+# NFS image cache. When a cache cleaning cycle begins, images in the cache that
+# have not been accessed in the last M minutes, where M is the value of this
+# parameter, will be deleted from the cache to create free space on the NFS
+# share. (integer value)
+#expiry_thres_minutes = 720
+
+# This option is used to specify the path to the E-Series proxy application on
+# a proxy server. The value is combined with the value of the
+# netapp_transport_type, netapp_server_hostname, and netapp_server_port options
+# to create the URL used by the driver to connect to the proxy application.
+# (string value)
+#netapp_webservice_path = /devmgr/v2
+
+# This option is only utilized when the storage family is configured to
+# eseries. This option is used to restrict provisioning to the specified
+# volumes. Specify the value of this option to be a comma separated list of
+# volume hostnames or IP addresses to be used for provisioning. (string
+# value)
+#netapp_volume_ips = <None>
+
+# Password for the NetApp E-Series storage array. (string value)
+#netapp_sa_password = <None>
+
+# This option specifies whether the driver should allow operations that require
+# multiple attachments to a volume. An example would be live migration of
+# servers that have volumes attached. When enabled, this backend is limited to
+# 256 total volumes in order to guarantee volumes can be accessed by more than
+# one host. (boolean value)
+#netapp_enable_multiattach = false
+
+# This option specifies the path of the NetApp copy offload tool binary. Ensure
+# that the binary has execute permissions set which allow the effective user of
+# the cinder-volume process to execute the file. (string value)
+#netapp_copyoffload_tool_path = <None>
+
+# This option defines the type of operating system that will access a LUN
+# exported from Data ONTAP; it is assigned to the LUN at the time it is
+# created. (string value)
+#netapp_lun_ostype = <None>
+
+# This option defines the type of operating system for all initiators that can
+# access a LUN. This information is used when mapping LUNs to individual hosts
+# or groups of hosts. (string value)
+#netapp_host_type = <None>
+
+# This option is used to restrict provisioning to the specified pools. Specify
+# the value of this option to be a regular expression which will be applied to
+# the names of objects from the storage backend which represent pools in
+# Cinder. This option is only utilized when the storage protocol is configured
+# to use iSCSI or FC. (string value)
+# Deprecated group/name - [backend_defaults]/netapp_volume_list
+# Deprecated group/name - [backend_defaults]/netapp_storage_pools
+#netapp_pool_name_search_pattern = (.+)
+
+# Multi opt of dictionaries to represent the aggregate mapping between source
+# and destination back ends when using whole back end replication. For every
+# source aggregate associated with a cinder pool (NetApp FlexVol), you would
+# need to specify the destination aggregate on the replication target device. A
+# replication target device is configured with the configuration option
+# replication_device. Specify this option as many times as you have replication
+# devices. Each entry takes the standard dict config form:
+# netapp_replication_aggregate_map =
+# backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
+# (dict value)
+#netapp_replication_aggregate_map = <None>
+
+# The maximum time in seconds to wait for existing SnapMirror transfers to
+# complete before aborting during a failover. (integer value)
+# Minimum value: 0
+#netapp_snapmirror_quiesce_timeout = 3600
+
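+# Example (hypothetical values) of a minimal NetApp clustered Data ONTAP iSCSI
+# backend section built from the options above; the common driver path is from
+# the Queens-era tree and should be verified against your release.
+#
+# [netapp-iscsi]
+# volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
+# volume_backend_name = netapp-iscsi
+# netapp_storage_family = ontap_cluster
+# netapp_storage_protocol = iscsi
+# netapp_server_hostname = 10.30.0.20
+# netapp_login = admin
+# netapp_password = secret
+# netapp_vserver = cinder-svm
+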
+# IP address of Nexenta SA (string value)
+#nexenta_host =
+
+# HTTP(S) port to connect to the Nexenta REST API server. If it is equal to
+# zero, 8443 is used for HTTPS and 8080 for HTTP (integer value)
+#nexenta_rest_port = 0
+
+# Use http or https for REST connection (default auto) (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+# auto - <No description provided>
+#nexenta_rest_protocol = auto
+
+# Use secure HTTP for REST connection (default True) (boolean value)
+#nexenta_use_https = true
+
+# User name to connect to Nexenta SA (string value)
+#nexenta_user = admin
+
+# Password to connect to Nexenta SA (string value)
+#nexenta_password = nexenta
+
+# Nexenta target portal port (integer value)
+#nexenta_iscsi_target_portal_port = 3260
+
+# SA Pool that holds all volumes (string value)
+#nexenta_volume = cinder
+
+# IQN prefix for iSCSI targets (string value)
+#nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder-
+
+# Prefix for iSCSI target groups on SA (string value)
+#nexenta_target_group_prefix = cinder/
+
+# Volume group for ns5 (string value)
+#nexenta_volume_group = iscsi
+
+# Compression value for new ZFS folders. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+# gzip - <No description provided>
+# gzip-1 - <No description provided>
+# gzip-2 - <No description provided>
+# gzip-3 - <No description provided>
+# gzip-4 - <No description provided>
+# gzip-5 - <No description provided>
+# gzip-6 - <No description provided>
+# gzip-7 - <No description provided>
+# gzip-8 - <No description provided>
+# gzip-9 - <No description provided>
+# lzjb - <No description provided>
+# zle - <No description provided>
+# lz4 - <No description provided>
+#nexenta_dataset_compression = on
+
+# Deduplication value for new ZFS folders. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+# sha256 - <No description provided>
+# verify - <No description provided>
+# sha256, verify - <No description provided>
+#nexenta_dataset_dedup = off
+
+# Human-readable description for the folder. (string value)
+#nexenta_dataset_description =
+
+# Block size for datasets (integer value)
+#nexenta_blocksize = 4096
+
+# Block size for datasets (integer value)
+#nexenta_ns5_blocksize = 32
+
+# Enables or disables the creation of sparse datasets (boolean value)
+#nexenta_sparse = false
+
+# File with the list of available nfs shares (string value)
+#nexenta_shares_config = /etc/cinder/nfs_shares
+
+# Base directory that contains NFS share mount points (string value)
+#nexenta_mount_point_base = $state_path/mnt
+
+# Enables or disables the creation of volumes as sparse files that take no
+# space. If disabled (False), the volume is created as a regular file, which
+# takes a long time. (boolean value)
+#nexenta_sparsed_volumes = true
+
+# If set to True, cache the NexentaStor appliance volroot option value.
+# (boolean value)
+#nexenta_nms_cache_volroot = true
+
+# Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best
+# compression. (integer value)
+#nexenta_rrmgr_compression = 0
+
+# TCP Buffer size in KiloBytes. (integer value)
+#nexenta_rrmgr_tcp_buf_size = 4096
+
+# Number of TCP connections. (integer value)
+#nexenta_rrmgr_connections = 2
+
+# NexentaEdge logical path of directory to store symbolic links to NBDs (string
+# value)
+#nexenta_nbd_symlinks_dir = /dev/disk/by-path
+
+# IP address of NexentaEdge management REST API endpoint (string value)
+#nexenta_rest_address =
+
+# User name to connect to NexentaEdge (string value)
+#nexenta_rest_user = admin
+
+# Password to connect to NexentaEdge (string value)
+#nexenta_rest_password = nexenta
+
+# NexentaEdge logical path of bucket for LUNs (string value)
+#nexenta_lun_container =
+
+# NexentaEdge iSCSI service name (string value)
+#nexenta_iscsi_service =
+
+# NexentaEdge iSCSI Gateway client address for non-VIP service (string value)
+#nexenta_client_address =
+
+# NexentaEdge iSCSI LUN object chunk size (integer value)
+#nexenta_chunksize = 32768
+
+# File with the list of available NFS shares. (string value)
+#nfs_shares_config = /etc/cinder/nfs_shares
+
+# Create volumes as sparsed files which take no space. If set to False, the
+# volume is created as a regular file, in which case volume creation takes a
+# lot of time. (boolean value)
+#nfs_sparsed_volumes = true
+
+# Create volumes as QCOW2 files rather than raw files. (boolean value)
+#nfs_qcow2_volumes = false
+
+# Base dir containing mount points for NFS shares. (string value)
+#nfs_mount_point_base = $state_path/mnt
+
+# Mount options passed to the NFS client. See the NFS man page for details.
+# (string value)
+#nfs_mount_options = <None>
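+# Example (illustrative values only, not a recommendation): force NFSv4.1 and
+# a larger read size:
+#nfs_mount_options = vers=4,minorversion=1,rsize=65536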
+
+# The number of attempts to mount NFS shares before raising an error.  At least
+# one attempt will be made to mount an NFS share, regardless of the value
+# specified. (integer value)
+#nfs_mount_attempts = 3
+
+# Enable support for snapshots on the NFS driver. Platforms using libvirt
+# <1.2.7 will encounter issues with this feature. (boolean value)
+#nfs_snapshot_support = false
+
+# Nimble Controller pool name (string value)
+#nimble_pool_name = default
+
+# Nimble Subnet Label (string value)
+#nimble_subnet_label = *
+
+# Whether to verify Nimble SSL Certificate (boolean value)
+#nimble_verify_certificate = false
+
+# Path to Nimble Array SSL certificate (string value)
+#nimble_verify_cert_path = <None>
+
+# DPL pool uuid in which DPL volumes are stored. (string value)
+#dpl_pool =
+
+# DPL port number. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#dpl_port = 8357
+
+# REST API authorization token. (string value)
+#pure_api_token = <None>
+
+# Automatically determine an oversubscription ratio based on the current total
+# data reduction values. If used this calculated value will override the
+# max_over_subscription_ratio config option. (boolean value)
+#pure_automatic_max_oversubscription_ratio = true
+
+# Snapshot replication interval in seconds. (integer value)
+#pure_replica_interval_default = 3600
+
+# Retain all snapshots on target for this time (in seconds). (integer value)
+#pure_replica_retention_short_term_default = 14400
+
+# Number of snapshots to retain for each day. (integer value)
+#pure_replica_retention_long_term_per_day_default = 3
+
+# Retain snapshots per day on target for this time (in days). (integer value)
+#pure_replica_retention_long_term_default = 7
+
+# When enabled, all Pure volumes, snapshots, and protection groups will be
+# eradicated at the time of deletion in Cinder. Data will NOT be recoverable
+# after a delete with this set to True! When disabled, volumes and snapshots
+# will go into pending eradication state and can be recovered. (boolean value)
+#pure_eradicate_on_delete = false
+
+# The URL used to manage QNAP Storage. The driver does not support IPv6
+# addresses in the URL. (uri value)
+#qnap_management_url = <None>
+
+# The pool name in the QNAP Storage (string value)
+#qnap_poolname = <None>
+
+# Communication protocol to access QNAP storage (string value)
+#qnap_storage_protocol = iscsi
+
+# Quobyte URL to the Quobyte volume using e.g. a DNS SRV record (preferred) or
+# a host list (alternatively) like quobyte://<DIR host1>, <DIR host2>/<volume
+# name> (string value)
+#quobyte_volume_url = <None>
+
+# Path to a Quobyte Client configuration file. (string value)
+#quobyte_client_cfg = <None>
+
+# Create volumes as sparse files which take no space. If set to False, volume
+# is created as regular file. (boolean value)
+#quobyte_sparsed_volumes = true
+
+# Create volumes as QCOW2 files rather than raw files. (boolean value)
+#quobyte_qcow2_volumes = true
+
+# Base dir containing the mount point for the Quobyte volume. (string value)
+#quobyte_mount_point_base = $state_path/mnt
+
+# Create a cache of volumes from merged snapshots to speed up creation of
+# multiple volumes from a single snapshot. (boolean value)
+#quobyte_volume_from_snapshot_cache = false
+
+# The name of ceph cluster (string value)
+#rbd_cluster_name = ceph
+
+# The RADOS pool where rbd volumes are stored (string value)
+#rbd_pool = rbd
+
+# The RADOS client name for accessing rbd volumes - only set when using cephx
+# authentication (string value)
+#rbd_user = <None>
+
+# Path to the ceph configuration file (string value)
+#rbd_ceph_conf =
+
+# Path to the ceph keyring file (string value)
+#rbd_keyring_conf =
+
+# Flatten volumes created from snapshots to remove dependency from volume to
+# snapshot (boolean value)
+#rbd_flatten_volume_from_snapshot = false
+
+# The libvirt uuid of the secret for the rbd_user volumes (string value)
+#rbd_secret_uuid = <None>
+
+# Maximum number of nested volume clones that are taken before a flatten
+# occurs. Set to 0 to disable cloning. (integer value)
+#rbd_max_clone_depth = 5
+
+# Volumes will be chunked into objects of this size (in megabytes). (integer
+# value)
+#rbd_store_chunk_size = 4
+
+# Timeout value (in seconds) used when connecting to ceph cluster. If value <
+# 0, no timeout is set and default librados value is used. (integer value)
+#rados_connect_timeout = -1
+
+# Number of retries if connection to ceph cluster failed. (integer value)
+#rados_connection_retries = 3
+
+# Interval value (in seconds) between connection retries to ceph cluster.
+# (integer value)
+#rados_connection_interval = 5
+
+# Timeout value (in seconds) used when connecting to ceph cluster to do a
+# demotion/promotion of volumes. If value < 0, no timeout is set and default
+# librados value is used. (integer value)
+#replication_connect_timeout = 5
+
+# Set to True for the driver to report total capacity as a dynamic value
+# (used + current free), and to False to report a static value (quota max
+# bytes if defined, global size of the cluster if not). (boolean value)
+#report_dynamic_total_capacity = true
+
+# Set to True if the pool is used exclusively by Cinder. On exclusive use
+# driver won't query images' provisioned size as they will match the value
+# calculated by the Cinder core code for allocated_capacity_gb. This reduces
+# the load on the Ceph cluster as well as on the volume service. (boolean
+# value)
+#rbd_exclusive_cinder_pool = false
+
+# IP address or Hostname of NAS system. (string value)
+#nas_host =
+
+# User name to connect to NAS system. (string value)
+#nas_login = admin
+
+# Password to connect to NAS system. (string value)
+#nas_password =
+
+# SSH port to use to connect to NAS system. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nas_ssh_port = 22
+
+# Filename of private key to use for SSH authentication. (string value)
+#nas_private_key =
+
+# Allow network-attached storage systems to operate in a secure environment
+# where root level access is not permitted. If set to False, access is as the
+# root user and insecure. If set to True, access is not as root. If set to
+# auto, a check is done to determine if this is a new installation: True is
+# used if so, otherwise False. Default is auto. (string value)
+#nas_secure_file_operations = auto
+
+# Set more secure file permissions on network-attached storage volume files to
+# restrict broad other/world access. If set to False, volumes are created with
+# open permissions. If set to True, volumes are created with permissions for
+# the cinder user and group (660). If set to auto, a check is done to determine
+# if this is a new installation: True is used if so, otherwise False. Default
+# is auto. (string value)
+#nas_secure_file_permissions = auto
+
+# Path to the share to use for storing Cinder volumes. For example:
+# "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1.
+# (string value)
+#nas_share_path =
+
+# Options used to mount the storage backend file system where Cinder volumes
+# are stored. (string value)
+#nas_mount_options = <None>
+
+# Provisioning type that will be used when creating volumes. (string value)
+# Possible values:
+# thin - <No description provided>
+# thick - <No description provided>
+#nas_volume_prov_type = thin
+
+# Pool or Vdisk name to use for volume creation. (string value)
+#hpmsa_backend_name = A
+
+# linear (for Vdisk) or virtual (for Pool). (string value)
+# Possible values:
+# linear - <No description provided>
+# virtual - <No description provided>
+#hpmsa_backend_type = virtual
+
+# HPMSA API interface protocol. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#hpmsa_api_protocol = https
+
+# Whether to verify HPMSA array SSL certificate. (boolean value)
+#hpmsa_verify_certificate = false
+
+# HPMSA array SSL certificate path. (string value)
+#hpmsa_verify_certificate_path = <None>
+
+# List of comma-separated target iSCSI IP addresses. (list value)
+#hpmsa_iscsi_ips =
+
+# Use thin provisioning for SAN volumes? (boolean value)
+#san_thin_provision = true
+
+# IP address of SAN volume (string value)
+#san_ip =
+
+# Username for SAN volume (string value)
+#san_login = admin
+
+# Password for SAN volume (string value)
+#san_password =
+
+# Filename of private key to use for SSH authentication (string value)
+#san_private_key =
+
+# Cluster name to use for creating volumes (string value)
+#san_clustername =
+
+# SSH port to use with SAN (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_ssh_port = 22
+
+# Port to use to access the SAN API (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_api_port = <None>
+
+# Execute commands locally instead of over SSH; use if the volume service is
+# running on the SAN device (boolean value)
+#san_is_local = false
+
+# SSH connection timeout in seconds (integer value)
+#ssh_conn_timeout = 30
+
+# Minimum ssh connections in the pool (integer value)
+#ssh_min_pool_conn = 1
+
+# Maximum ssh connections in the pool (integer value)
+#ssh_max_pool_conn = 5
+
+# IP address of sheep daemon. (string value)
+#sheepdog_store_address = 127.0.0.1
+
+# Port of sheep daemon. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#sheepdog_store_port = 7000
+
+# Set 512 byte emulation on volume creation. (boolean value)
+#sf_emulate_512 = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#sf_allow_tenant_qos = false
+
+# Create SolidFire accounts with this prefix. Any string can be used here, but
+# the string "hostname" is special and will create a prefix using the cinder
+# node hostname (previous default behavior).  The default is NO prefix. (string
+# value)
+#sf_account_prefix = <None>
+
+# Create SolidFire volumes with this prefix. Volume names are of the form
+# <sf_volume_prefix><cinder-volume-id>.  The default is to use a prefix of
+# 'UUID-'. (string value)
+#sf_volume_prefix = UUID-
+
+# Account name on the SolidFire Cluster to use as owner of template/cache
+# volumes (created if it does not exist). (string value)
+#sf_template_account_name = openstack-vtemplate
+
+# DEPRECATED: This option is deprecated and will be removed in the next
+# OpenStack release.  Please use the general cinder image-caching feature
+# instead. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: The Cinder caching feature should be used rather than this driver
+# specific implementation.
+#sf_allow_template_caching = false
+
+# Overrides default cluster SVIP with the one specified. This is required for
+# deployments that have implemented the use of VLANs for iSCSI networks in
+# their cloud. (string value)
+#sf_svip = <None>
+
+# SolidFire API port. Useful if the device API is behind a proxy on a
+# different port. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#sf_api_port = 443
+
+# Utilize volume access groups on a per-tenant basis. (boolean value)
+#sf_enable_vag = false
+
+# Volume on Synology storage to be used for creating lun. (string value)
+#synology_pool_name =
+
+# Management port for Synology storage. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#synology_admin_port = 5000
+
+# Administrator of Synology storage. (string value)
+#synology_username = admin
+
+# Password of the administrator for logging in to Synology storage. (string
+# value)
+#synology_password =
+
+# Whether to perform certificate validation if $driver_use_ssl is True
+# (boolean value)
+#synology_ssl_verify = true
+
+# One-time password of the administrator for logging in to Synology storage if
+# OTP is enabled. (string value)
+#synology_one_time_pass = <None>
+
+# Device ID used to skip the one-time password check when logging in to
+# Synology storage if OTP is enabled. (string value)
+#synology_device_id = <None>
+
+# The hostname (or IP address) for the storage system (string value)
+#tintri_server_hostname = <None>
+
+# User name for the storage system (string value)
+#tintri_server_username = <None>
+
+# Password for the storage system (string value)
+#tintri_server_password = <None>
+
+# API version for the storage system (string value)
+#tintri_api_version = v310
+
+# Delete unused image snapshots older than mentioned days (integer value)
+#tintri_image_cache_expiry_days = 30
+
+# Path to image nfs shares file (string value)
+#tintri_image_shares_config = <None>
+
+# IP address for connecting to VMware vCenter server. (string value)
+#vmware_host_ip = <None>
+
+# Port number for connecting to VMware vCenter server. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#vmware_host_port = 443
+
+# Username for authenticating with VMware vCenter server. (string value)
+#vmware_host_username = <None>
+
+# Password for authenticating with VMware vCenter server. (string value)
+#vmware_host_password = <None>
+
+# Optional VIM service WSDL location, e.g. http://<server>/vimService.wsdl.
+# Optional override of the default location for bug workarounds. (string
+# value)
+#vmware_wsdl_location = <None>
+
+# Number of times VMware vCenter server API must be retried upon connection
+# related issues. (integer value)
+#vmware_api_retry_count = 10
+
+# The interval (in seconds) for polling remote tasks invoked on VMware vCenter
+# server. (floating point value)
+#vmware_task_poll_interval = 2.0
+
+# Name of the vCenter inventory folder that will contain Cinder volumes. This
+# folder will be created under "OpenStack/<project_folder>", where
+# project_folder is of format "Project (<volume_project_id>)". (string value)
+#vmware_volume_folder = Volumes
+
+# Timeout in seconds for VMDK volume transfer between Cinder and Glance.
+# (integer value)
+#vmware_image_transfer_timeout_secs = 7200
+
+# Max number of objects to be retrieved per batch. Query results will be
+# obtained in batches from the server and not in one shot. Server may still
+# limit the count to something less than the configured value. (integer value)
+#vmware_max_objects_retrieval = 100
+
+# Optional string specifying the VMware vCenter server version. The driver
+# attempts to retrieve the version from VMware vCenter server. Set this
+# configuration only if you want to override the vCenter server version.
+# (string value)
+#vmware_host_version = <None>
+
+# Directory where virtual disks are stored during volume backup and restore.
+# (string value)
+#vmware_tmp_dir = /tmp
+
+# CA bundle file to use in verifying the vCenter server certificate. (string
+# value)
+#vmware_ca_file = <None>
+
+# If true, the vCenter server certificate is not verified. If false, then the
+# default CA truststore is used for verification. This option is ignored if
+# "vmware_ca_file" is set. (boolean value)
+#vmware_insecure = false
+
+# Name of a vCenter compute cluster where volumes should be created. (multi
+# valued)
+#vmware_cluster_name =
+
+# Maximum number of connections in http connection pool. (integer value)
+#vmware_connection_pool_size = 10
+
+# Default adapter type to be used for attaching volumes. (string value)
+# Possible values:
+# lsiLogic - <No description provided>
+# busLogic - <No description provided>
+# lsiLogicsas - <No description provided>
+# paraVirtual - <No description provided>
+# ide - <No description provided>
+#vmware_adapter_type = lsiLogic
+
+# Volume snapshot format in vCenter server. (string value)
+# Possible values:
+# template - <No description provided>
+# COW - <No description provided>
+#vmware_snapshot_format = template
+
+# If true, the backend volume in vCenter server is created lazily when the
+# volume is created without any source. The backend volume is created when the
+# volume is attached, uploaded to image service or during backup. (boolean
+# value)
+#vmware_lazy_create = true
+
+# Regular expression pattern to match the name of datastores where backend
+# volumes are created. (string value)
+#vmware_datastore_regex = <None>
+
+# File with the list of available vzstorage shares. (string value)
+#vzstorage_shares_config = /etc/cinder/vzstorage_shares
+
+# Create volumes as sparsed files which take no space rather than regular
+# files when using raw format, in which case volume creation takes a lot of
+# time. (boolean value)
+#vzstorage_sparsed_volumes = true
+
+# Percent of ACTUAL usage of the underlying volume before no new volumes can be
+# allocated to the volume destination. (floating point value)
+#vzstorage_used_ratio = 0.95
+
+# Base dir containing mount points for vzstorage shares. (string value)
+#vzstorage_mount_point_base = $state_path/mnt
+
+# Mount options passed to the vzstorage client. See section of the pstorage-
+# mount man page for details. (list value)
+#vzstorage_mount_options = <None>
+
+# Default format that will be used when creating volumes if no volume format is
+# specified. (string value)
+#vzstorage_default_volume_format = raw
+
+# Path to store VHD backed volumes (string value)
+#windows_iscsi_lun_path = C:\iSCSIVirtualDisks
+
+# File with the list of available smbfs shares. (string value)
+#smbfs_shares_config = C:\OpenStack\smbfs_shares.txt
+
+# Default format that will be used when creating volumes if no volume format is
+# specified. (string value)
+# Possible values:
+# vhd - <No description provided>
+# vhdx - <No description provided>
+#smbfs_default_volume_format = vhd
+
+# Base dir containing mount points for smbfs shares. (string value)
+#smbfs_mount_point_base = C:\OpenStack\_mnt
+
+# Mappings between share locations and pool names. If not specified, the share
+# names will be used as pool names. Example:
+# //addr/share:pool_name,//addr/share2:pool_name2 (dict value)
+#smbfs_pool_mappings =
+
+# VPSA - Use ISER instead of iSCSI (boolean value)
+#zadara_use_iser = true
+
+# VPSA - Management Host name or IP address (string value)
+#zadara_vpsa_host = <None>
+
+# VPSA - Port number (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#zadara_vpsa_port = <None>
+
+# VPSA - Use SSL connection (boolean value)
+#zadara_vpsa_use_ssl = false
+
+# If set to True the http client will validate the SSL certificate of the VPSA
+# endpoint. (boolean value)
+#zadara_ssl_cert_verify = true
+
+# VPSA - Username (string value)
+#zadara_user = <None>
+
+# VPSA - Password (string value)
+#zadara_password = <None>
+
+# VPSA - Storage Pool assigned for volumes (string value)
+#zadara_vpsa_poolname = <None>
+
+# VPSA - Default encryption policy for volumes (boolean value)
+#zadara_vol_encrypt = false
+
+# VPSA - Default template for VPSA volume names (string value)
+#zadara_vol_name_template = OS_%s
+
+# VPSA - Attach snapshot policy for volumes (boolean value)
+#zadara_default_snap_policy = false
+
+# Storage pool name. (string value)
+#zfssa_pool = <None>
+
+# Project name. (string value)
+#zfssa_project = <None>
+
+# Block size. (string value)
+# Possible values:
+# 512 - <No description provided>
+# 1k - <No description provided>
+# 2k - <No description provided>
+# 4k - <No description provided>
+# 8k - <No description provided>
+# 16k - <No description provided>
+# 32k - <No description provided>
+# 64k - <No description provided>
+# 128k - <No description provided>
+#zfssa_lun_volblocksize = 8k
+
+# Flag to enable sparse (thin-provisioned): True, False. (boolean value)
+#zfssa_lun_sparse = false
+
+# Data compression. (string value)
+# Possible values:
+# off - <No description provided>
+# lzjb - <No description provided>
+# gzip-2 - <No description provided>
+# gzip - <No description provided>
+# gzip-9 - <No description provided>
+#zfssa_lun_compression = off
+
+# Synchronous write bias. (string value)
+# Possible values:
+# latency - <No description provided>
+# throughput - <No description provided>
+#zfssa_lun_logbias = latency
+
+# iSCSI initiator group. (string value)
+#zfssa_initiator_group =
+
+# iSCSI initiator IQNs. (comma separated) (string value)
+#zfssa_initiator =
+
+# iSCSI initiator CHAP user (name). (string value)
+#zfssa_initiator_user =
+
+# Secret of the iSCSI initiator CHAP user. (string value)
+#zfssa_initiator_password =
+
+# iSCSI initiators configuration. (string value)
+#zfssa_initiator_config =
+
+# iSCSI target group name. (string value)
+#zfssa_target_group = tgt-grp
+
+# iSCSI target CHAP user (name). (string value)
+#zfssa_target_user =
+
+# Secret of the iSCSI target CHAP user. (string value)
+#zfssa_target_password =
+
+# iSCSI target portal (Data-IP:Port, w.x.y.z:3260). (string value)
+#zfssa_target_portal = <None>
+
+# Network interfaces of iSCSI targets. (comma separated) (string value)
+#zfssa_target_interfaces = <None>
+
+# REST connection timeout. (seconds) (integer value)
+#zfssa_rest_timeout = <None>
+
+# IP address used for replication data (may be the same as the data IP).
+# (string value)
+#zfssa_replication_ip =
+
+# Flag to enable local caching: True, False. (boolean value)
+#zfssa_enable_local_cache = true
+
+# Name of ZFSSA project where cache volumes are stored. (string value)
+#zfssa_cache_project = os-cinder-cache
+
+# Driver policy for the volume manage operation. (string value)
+# Possible values:
+# loose - <No description provided>
+# strict - <No description provided>
+#zfssa_manage_policy = loose
+
+# Data path IP address (string value)
+#zfssa_data_ip = <None>
+
+# HTTPS port number (string value)
+#zfssa_https_port = 443
+
+# Options to be passed while mounting share over nfs (string value)
+#zfssa_nfs_mount_options =
+
+# Storage pool name. (string value)
+#zfssa_nfs_pool =
+
+# Project name. (string value)
+#zfssa_nfs_project = NFSProject
+
+# Share name. (string value)
+#zfssa_nfs_share = nfs_share
+
+# Data compression. (string value)
+# Possible values:
+# off - <No description provided>
+# lzjb - <No description provided>
+# gzip-2 - <No description provided>
+# gzip - <No description provided>
+# gzip-9 - <No description provided>
+#zfssa_nfs_share_compression = off
+
+# Synchronous write bias-latency, throughput. (string value)
+# Possible values:
+# latency - <No description provided>
+# throughput - <No description provided>
+#zfssa_nfs_share_logbias = latency
+
+# Name of directory inside zfssa_nfs_share where cache volumes are stored.
+# (string value)
+#zfssa_cache_directory = os-cinder-cache
+
+# Driver to use for volume creation (string value)
+#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
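+# Example (illustrative, assuming a Ceph backend and the [rbd] options above
+# are configured): selecting the RBD driver instead of the LVM default:
+#volume_driver = cinder.volume.drivers.rbd.RBDDriver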
+
+# User defined capabilities, a JSON formatted string specifying key/value
+# pairs. The key/value pairs can be used by the CapabilitiesFilter to select
+# between backends when requests specify volume types. For example, specifying
+# a service level or the geographical location of a backend, then creating a
+# volume type to allow the user to select by these different properties.
+# (string value)
+#extra_capabilities = {}
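+# Example (hypothetical keys and values): advertising a service level and
+# location that a volume type's extra specs can match on via the
+# CapabilitiesFilter:
+#extra_capabilities = {"service_level": "gold", "datacenter": "us-east"}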
+
+# Suppress requests library SSL certificate warnings. (boolean value)
+#suppress_requests_ssl_warnings = false
+
+# Size of the native threads pool for the backend.  Increase for backends that
+# heavily rely on this, like the RBD driver. (integer value)
+# Minimum value: 20
+#backend_native_threads_pool_size = 20
+
+
+[coordination]
+
+#
+# From cinder
+#
+
+# The backend URL to use for distributed coordination. (string value)
+#backend_url = file://$state_path
+
+
+[fc-zone-manager]
+
+#
+# From cinder
+#
+
+# Southbound connector for zoning operation (string value)
+#brcd_sb_connector = HTTP
+
+# Southbound connector for zoning operation (string value)
+#cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
+
+# FC Zone Driver responsible for zone management (string value)
+#zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
+
+# Zoning policy configured by user; valid values include "initiator-target" or
+# "initiator" (string value)
+#zoning_policy = initiator-target
+
+# Comma separated list of Fibre Channel fabric names. This list of names is
+# used to retrieve other SAN credentials for connecting to each SAN fabric
+# (string value)
+#fc_fabric_names = <None>
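+# Example (illustrative names): each fabric named here is expected to have a
+# matching per-fabric config section elsewhere in this file that holds that
+# fabric's SAN credentials:
+#fc_fabric_names = FABRIC_A,FABRIC_B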
+
+# FC SAN Lookup Service (string value)
+#fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
+
+# Set this to True when you want to allow an unsupported zone manager driver to
+# start.  Drivers that haven't maintained a working CI system and testing are
+# marked as unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+
+[nova]
+
+#
+# From cinder
+#
+
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#interface = public
+
+# The authentication URL for the nova connection when using the current user's
+# token (string value)
+#token_auth_url = <None>
+
+# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section = <None>
+
+
+[service_user]
+
+#
+# From cinder
+#
+
+#
+# When True, if sending a user token to a REST API, also send a service token.
+# (boolean value)
+#send_service_user_token = false
+
+
+[barbican]
+#
+# From castellan.config
+#
+
+# Use this endpoint to connect to Barbican, for example:
+# "http://localhost:9311/" (string value)
+#barbican_endpoint = <None>
+
+# Version of the Barbican API, for example: "v1" (string value)
+#barbican_api_version = <None>
+
+# Use this endpoint to connect to Keystone (string value)
+# Deprecated group/name - [key_manager]/auth_url
+#auth_endpoint = http://localhost/identity/v3
+auth_endpoint = http://10.167.4.35:35357/v3
+
+# Number of seconds to wait before retrying poll for key creation completion
+# (integer value)
+#retry_delay = 1
+
+# Number of times to retry poll for key creation completion (integer value)
+#number_of_retries = 60
+
+# Specifies whether to verify TLS (https) requests. If False, the server's
+# certificate will not be validated (boolean value)
+#verify_ssl = true
+
+# Specifies the type of endpoint. Allowed values are: public, internal, and
+# admin (string value)
+# Possible values:
+# public - <No description provided>
+# internal - <No description provided>
+# admin - <No description provided>
+#barbican_endpoint_type = public
+
+
+
+[key_manager]
+#
+# From castellan.config
+#
+
+# Specify the key manager implementation. Options are "barbican" and "vault".
+# Default is  "barbican". Will support the  values earlier set using
+# [key_manager]/api_class for some time. (string value)
+# Deprecated group/name - [key_manager]/api_class
+#backend = barbican
+backend = barbican
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = cinder
+
+
+[keystone_authtoken]
+#
+# From keystonemiddleware.auth_token
+#
+
+# Complete "public" Identity API endpoint. This endpoint should not be an
+# "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. (string
+# value)
+# Deprecated group/name - [keystone_authtoken]/auth_uri
+#www_authenticate_uri = <None>
+www_authenticate_uri = http://10.167.4.35:5000
+
+# DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
+# be an "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. This option
+# is deprecated in favor of www_authenticate_uri and will be removed in the S
+# release. (string value)
+# This option is deprecated for removal since Queens.
+# Its value may be silently ignored in the future.
+# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
+# and will be removed in the S release.
+#auth_uri = <None>
+auth_uri = http://10.167.4.35:5000
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+# Do not handle authorization requests within the middleware, but delegate the
+# authorization decision to downstream WSGI components. (boolean value)
+#delay_auth_decision = false
+
+# Request timeout value for communicating with Identity API server. (integer
+# value)
+#http_connect_timeout = <None>
+
+# How many times to retry connecting when communicating with the
+# Identity API Server. (integer value)
+#http_request_max_retries = 3
+
+# Request environment key where the Swift cache object is stored. When
+# auth_token middleware is deployed with a Swift cache, use this option to have
+# the middleware share a caching backend with swift. Otherwise, use the
+# ``memcached_servers`` option instead. (string value)
+#cache = <None>
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# The region in which the identity server can be found. (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# DEPRECATED: Directory used to cache files related to PKI tokens. This option
+# has been deprecated in the Ocata release and will be removed in the P
+# release. (string value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#signing_dir = <None>
+
+# Optionally specify a list of memcached server(s) to use for caching. If left
+# undefined, tokens will instead be cached in-process. (list value)
+# Deprecated group/name - [keystone_authtoken]/memcache_servers
+#memcached_servers = <None>
+
+# In order to prevent excessive effort spent validating tokens, the middleware
+# caches previously-seen tokens for a configurable duration (in seconds). Set
+# to -1 to disable caching completely. (integer value)
+#token_cache_time = 300
+
+# DEPRECATED: Determines the frequency at which the list of revoked tokens is
+# retrieved from the Identity service (in seconds). A high number of revocation
+# events combined with a low cache duration may significantly reduce
+# performance. Only valid for PKI tokens. This option has been deprecated in
+# the Ocata release and will be removed in the P release. (integer value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#revocation_cache_time = 10
+
+# (Optional) If defined, indicate whether token data should be authenticated or
+# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
+# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
+# cache. If the value is not one of these options or empty, auth_token will
+# raise an exception on initialization. (string value)
+# Possible values:
+# None - <No description provided>
+# MAC - <No description provided>
+# ENCRYPT - <No description provided>
+#memcache_security_strategy = None
+
+# (Optional, mandatory if memcache_security_strategy is defined) This string is
+# used for key derivation. (string value)
+#memcache_secret_key = <None>
+
+# (Optional) Number of seconds memcached server is considered dead before it is
+# tried again. (integer value)
+#memcache_pool_dead_retry = 300
+
+# (Optional) Maximum total number of open connections to every memcached
+# server. (integer value)
+#memcache_pool_maxsize = 10
+
+# (Optional) Socket timeout in seconds for communicating with a memcached
+# server. (integer value)
+#memcache_pool_socket_timeout = 3
+
+# (Optional) Number of seconds a connection to memcached is held unused in the
+# pool before it is closed. (integer value)
+#memcache_pool_unused_timeout = 60
+
+# (Optional) Number of seconds that an operation will wait to get a memcached
+# client connection from the pool. (integer value)
+#memcache_pool_conn_get_timeout = 10
+
+# (Optional) Use the advanced (eventlet safe) memcached client pool. The
+# advanced pool will only work under python 2.x. (boolean value)
+#memcache_use_advanced_pool = false
+
+# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
+# middleware will not ask for service catalog on token validation and will not
+# set the X-Service-Catalog header. (boolean value)
+#include_service_catalog = true
+
+# Used to control the use and type of token binding. Can be set to:
+# "disabled" to not check token binding; "permissive" (default) to
+# validate binding information if the bind type is of a form known to
+# the server and ignore it if not; "strict", like "permissive" but the
+# token will be rejected if the bind type is unknown; "required", where
+# some form of token binding is needed for the token to be allowed; or
+# the name of a binding method that must be present in tokens.
+# (string value)
+#enforce_token_bind = permissive
+
+# DEPRECATED: If true, the revocation list will be checked for cached tokens.
+# This requires that PKI tokens are configured on the identity server. (boolean
+# value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#check_revocations_for_cached = false
+
+# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
+# single algorithm or multiple. The algorithms are those supported by Python
+# standard hashlib.new(). The hashes will be tried in the order given, so put
+# the preferred one first for performance. The result of the first hash will be
+# stored in the cache. This will typically be set to multiple values only while
+# migrating from a less secure algorithm to a more secure one. Once all the old
+# tokens are expired this option should be set to a single value for better
+# performance. (list value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#hash_algorithms = md5
+
+# A choice of roles that must be present in a service token. Service
+# tokens are allowed to request that an expired token can be used, so
+# this check should tightly control that only actual services send this
+# token. Roles here are applied as an ANY check, so a token needs only
+# one of the roles in this list. For backwards compatibility reasons
+# this currently only affects the allow_expired check. (list value)
+#service_token_roles = service
+
+# For backwards compatibility reasons, valid service tokens that fail
+# the service_token_roles check are still treated as valid. Setting
+# this to true will become the default in a future release and should
+# be enabled if possible. (boolean value)
+#service_token_roles_required = false
+
+# Authentication type to load (string value)
+# Deprecated group/name - [keystone_authtoken]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section = <None>
+
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = cinder
+
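The credential block above, like the [keystone_authtoken] section before it, pairs each commented default with an explicit override, and several keys (e.g. auth_type, region_name) end up defined twice in the same section. A minimal stdlib sketch, with an inline sample standing in for the real /etc/cinder/cinder.conf, showing that configparser with strict=False tolerates such duplicates (the later value wins) while checking that required options are set:

```python
# Sketch only: the path and the choice of "required" keys are
# illustrative assumptions, not part of the rendered config.
import configparser

SAMPLE = """
[keystone_authtoken]
auth_type = password
project_name = service
username = cinder
auth_type = password
"""

cfg = configparser.ConfigParser(strict=False)  # tolerate duplicate keys
cfg.read_string(SAMPLE)        # or cfg.read('/etc/cinder/cinder.conf')

required = ('auth_type', 'project_name', 'username')
missing = [k for k in required if not cfg.has_option('keystone_authtoken', k)]
print('missing options:', missing)  # → missing options: []
```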
+[profiler]
+
+[oslo_concurrency]
 
 [database]
-connection = sqlite:////var/lib/cinder/cinder.sqlite
+#
+# From oslo.db
+#
+
+# If True, SQLite uses synchronous mode. (boolean value)
+#sqlite_synchronous = true
+
+# The back end to use for the database. (string value)
+# Deprecated group/name - [DEFAULT]/db_backend
+#backend = sqlalchemy
+
+# The SQLAlchemy connection string to use to connect to the database.
+# (string value)
+# Deprecated group/name - [DEFAULT]/sql_connection
+# Deprecated group/name - [DATABASE]/sql_connection
+# Deprecated group/name - [sql]/connection
+#connection = <None>
+connection = mysql+pymysql://cinder:opnfv_secret@10.167.4.23/cinder?charset=utf8
+
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
+#slave_connection = <None>
+
+# The SQL mode to be used for MySQL sessions. This option, including
+# the default, overrides any server-set SQL mode. To use whatever SQL
+# mode is set by the server configuration, set this to no value.
+# Example: mysql_sql_mode= (string value)
+#mysql_sql_mode = TRADITIONAL
+
+# If True, transparently enables support for handling MySQL Cluster
+# (NDB). (boolean value)
+#mysql_enable_ndb = false
+
+# Connections which have been present in the connection pool longer
+# than this number of seconds will be replaced with a new one the next
+# time they are checked out from the pool. (integer value)
+# Deprecated group/name - [DATABASE]/idle_timeout
+# Deprecated group/name - [database]/idle_timeout
+# Deprecated group/name - [DEFAULT]/sql_idle_timeout
+# Deprecated group/name - [DATABASE]/sql_idle_timeout
+# Deprecated group/name - [sql]/idle_timeout
+#connection_recycle_time = 3600
+connection_recycle_time = 300
+
+# Minimum number of SQL connections to keep open in a pool. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_min_pool_size
+# Deprecated group/name - [DATABASE]/sql_min_pool_size
+#min_pool_size = 1
+
+# Maximum number of SQL connections to keep open in a pool. Setting a
+# value of 0 indicates no limit. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_pool_size
+# Deprecated group/name - [DATABASE]/sql_max_pool_size
+#max_pool_size = 5
+max_pool_size = 10
+
+# Maximum number of database connection retries during startup. Set to
+# -1 to specify an infinite retry count. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_retries
+# Deprecated group/name - [DATABASE]/sql_max_retries
+#max_retries = 10
+max_retries = -1
+
+# Interval between retries of opening a SQL connection. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_retry_interval
+# Deprecated group/name - [DATABASE]/reconnect_interval
+#retry_interval = 10
+
+# If set, use this value for max_overflow with SQLAlchemy. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_max_overflow
+# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
+#max_overflow = 50
+max_overflow = 30
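With max_pool_size = 10 and max_overflow = 30 as set above, each cinder process may hold at most 40 open database connections. A quick sketch of the arithmetic for budgeting against the MySQL server's max_connections; the worker count below is an assumed placeholder, not a value from this deployment:

```python
# Per-worker SQLAlchemy connection ceiling from the [database] values above.
max_pool_size = 10   # persistent pooled connections
max_overflow = 30    # extra connections allowed beyond the pool
workers = 4          # assumption: number of cinder worker processes

per_worker = max_pool_size + max_overflow
total = per_worker * workers
print(per_worker, total)  # → 40 160
```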
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything.
+# (integer value)
+# Minimum value: 0
+# Maximum value: 100
+# Deprecated group/name - [DEFAULT]/sql_connection_debug
+#connection_debug = 0
+
+# Add Python stack traces to SQL as comment strings. (boolean value)
+# Deprecated group/name - [DEFAULT]/sql_connection_trace
+#connection_trace = false
+
+# If set, use this value for pool_timeout with SQLAlchemy. (integer
+# value)
+# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
+#pool_timeout = <None>
+
+# Enable the experimental use of database reconnect on connection
+# lost. (boolean value)
+#use_db_reconnect = false
+
+# Seconds between retries of a database transaction. (integer value)
+#db_retry_interval = 1
+
+# If True, increases the interval between retries of a database
+# operation up to db_max_retry_interval. (boolean value)
+#db_inc_retry_interval = true
+
+# If db_inc_retry_interval is set, the maximum seconds between retries
+# of a database operation. (integer value)
+#db_max_retry_interval = 10
+
+# Maximum retries in case of connection error or deadlock error before
+# error is raised. Set to -1 to specify an infinite retry count.
+# (integer value)
+#db_max_retries = 20
+
+#
+# From oslo.db.concurrency
+#
+
+# Enable the experimental use of thread pooling for all DB API calls
+# (boolean value)
+# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
+#use_tpool = false
+
+[oslo_messaging_notifications]
+#
+# From oslo.messaging
+#
+
+# The driver(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+
+# A URL representing the messaging driver to use for notifications. If
+# not set, we fall back to the same configuration used for RPC.
+# (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+
+# The maximum number of attempts to re-send a notification message
+# which failed to be delivered due to a recoverable error. 0 - No
+# retry, -1 - indefinite (integer value)
+#retry = -1
+
+[oslo_messaging_rabbit]
+#
+# From oslo.messaging
+#
+
+# Use durable queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/amqp_durable_queues
+# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
+#amqp_durable_queues = false
+
+# Auto-delete queues in AMQP. (boolean value)
+#amqp_auto_delete = false
+
+# Enable SSL (boolean value)
+#ssl = <None>
+
+# How long to wait before reconnecting in response to an AMQP consumer
+# cancel notification. (floating point value)
+#kombu_reconnect_delay = 1.0
+
+# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression
+# will not be used. This option may not be available in future
+# versions. (string value)
+#kombu_compression = <None>
+
+# How long to wait for a missing client before abandoning the attempt
+# to send it its replies. This value should not be longer than
+# rpc_response_timeout. (integer value)
+# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
+#kombu_missing_consumer_retry_timeout = 60
+
+# Determines how the next RabbitMQ node is chosen in case the one we
+# are currently connected to becomes unavailable. Takes effect only if
+# more than one RabbitMQ node is provided in config. (string value)
+# Possible values:
+# round-robin - <No description provided>
+# shuffle - <No description provided>
+#kombu_failover_strategy = round-robin
+
+# DEPRECATED: The RabbitMQ broker address where a single node is used.
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_host = localhost
+
+# DEPRECATED: The RabbitMQ broker port where a single node is used.
+# (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_port = 5672
+
+# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_hosts = $rabbit_host:$rabbit_port
+
+# DEPRECATED: The RabbitMQ userid. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_userid = guest
+
+# DEPRECATED: The RabbitMQ password. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_password = guest
+
+# The RabbitMQ login method. (string value)
+# Possible values:
+# PLAIN - <No description provided>
+# AMQPLAIN - <No description provided>
+# RABBIT-CR-DEMO - <No description provided>
+#rabbit_login_method = AMQPLAIN
+
+# DEPRECATED: The RabbitMQ virtual host. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_virtual_host = /
+
+# How frequently to retry connecting with RabbitMQ. (integer value)
+#rabbit_retry_interval = 1
+
+# How long to backoff for between retries when connecting to RabbitMQ.
+# (integer value)
+#rabbit_retry_backoff = 2
+
+# Maximum interval of RabbitMQ connection retries. Default is 30
+# seconds. (integer value)
+#rabbit_interval_max = 30
+
+# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
+# is 0 (infinite retry count). (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#rabbit_max_retries = 0
+
+# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
+# queue mirroring is no longer controlled by the x-ha-policy argument
+# when declaring a queue. If you just want to make sure that all
+# queues (except those with auto-generated names) are mirrored across
+# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
+# mode": "all"}' " (boolean value)
+#rabbit_ha_queues = false
+
+# Positive integer representing duration in seconds for queue TTL
+# (x-expires). Queues which are unused for the duration of the TTL are
+# automatically deleted. The parameter affects only reply and fanout
+# queues. (integer value)
+# Minimum value: 1
+#rabbit_transient_queues_ttl = 1800
+
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
+#rabbit_qos_prefetch_count = 64
+
+# Number of seconds after which the Rabbit broker is considered down
+# if heartbeat's keep-alive fails (0 disable the heartbeat).
+# EXPERIMENTAL (integer value)
+#heartbeat_timeout_threshold = 60
+
+# How many times during the heartbeat_timeout_threshold we check the
+# heartbeat. (integer value)
+#heartbeat_rate = 2
+
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
+#fake_rabbit = false
+
+# Maximum number of channels to allow (integer value)
+#channel_max = <None>
+
+# The maximum byte size for an AMQP frame (integer value)
+#frame_max = <None>
+
+# How often to send heartbeats for consumer's connections (integer
+# value)
+#heartbeat_interval = 3
+
+# Arguments passed to ssl.wrap_socket (dict value)
+#ssl_options = <None>
+
+# Set socket timeout in seconds for connection's socket (floating
+# point value)
+#socket_timeout = 0.25
+
+# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
+# point value)
+#tcp_user_timeout = 0.25
+
+# Set delay for reconnection to some host which has connection error
+# (floating point value)
+#host_connection_reconnect_delay = 0.25
+
+# Connection factory implementation (string value)
+# Possible values:
+# new - <No description provided>
+# single - <No description provided>
+# read_write - <No description provided>
+#connection_factory = single
+
+# Maximum number of connections to keep queued. (integer value)
+#pool_max_size = 30
+
+# Maximum number of connections to create above `pool_max_size`.
+# (integer value)
+#pool_max_overflow = 0
+
+# Default number of seconds to wait for a connection to become
+# available (integer value)
+#pool_timeout = 30
+
+# Lifetime of a connection (since creation) in seconds or None for no
+# recycling. Expired connections are closed on acquire. (integer
+# value)
+#pool_recycle = 600
+
+# Threshold at which inactive (since release) connections are
+# considered stale in seconds or None for no staleness. Stale
+# connections are closed on acquire. (integer value)
+#pool_stale = 60
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#default_serializer_type = json
+
+# Persist notification messages. (boolean value)
+#notification_persistence = false
+
+# Exchange name for sending notifications (string value)
+#default_notification_exchange = ${control_exchange}_notification
+
+# Maximum number of unacknowledged messages which RabbitMQ can send to
+# the notification listener. (integer value)
+#notification_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending notification, -1 means infinite retry. (integer value)
+#default_notification_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending notification message (floating point value)
+#notification_retry_delay = 0.25
+
+# Time to live for rpc queues without consumers in seconds. (integer
+# value)
+#rpc_queue_expiration = 60
+
+# Exchange name for sending RPC messages (string value)
+#default_rpc_exchange = ${control_exchange}_rpc
+
+# Exchange name for receiving RPC replies (string value)
+#rpc_reply_exchange = ${control_exchange}_rpc_reply
+
+# Maximum number of unacknowledged messages which RabbitMQ can send to
+# the rpc listener. (integer value)
+#rpc_listener_prefetch_count = 100
+
+# Maximum number of unacknowledged messages which RabbitMQ can send to
+# the rpc reply listener. (integer value)
+#rpc_reply_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending reply. -1 means infinite retry during rpc_timeout (integer
+# value)
+#rpc_reply_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending reply. (floating point value)
+#rpc_reply_retry_delay = 0.25
+
+# Reconnecting retry count in case of connectivity problem during
+# sending RPC message, -1 means infinite retry. If the actual number of
+# retry attempts is not 0, the RPC request could be processed more than
+# once (integer value)
+#default_rpc_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending RPC message (floating point value)
+#rpc_retry_delay = 0.25
+
+
+[oslo_middleware]
+#
+# From oslo.middleware
+#
+
+# The maximum body size for each request, in bytes. (integer value)
+# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
+# Deprecated group/name - [DEFAULT]/max_request_body_size
+#max_request_body_size = 114688
+
+# DEPRECATED: The HTTP header that will be used to determine what the
+# original request protocol scheme was, even if it was hidden by an SSL
+# termination proxy. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#secure_proxy_ssl_header = X-Forwarded-Proto
+
+# Whether the application is behind a proxy or not. This determines if
+# the middleware should parse the headers or not. (boolean value)
+enable_proxy_headers_parsing = True
+
+[oslo_policy]
+
+[oslo_reports]
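Before retrying the cinder-volume start that fails in the log below, the [database] connection URL written above can be sanity-checked with the stdlib alone; the URL is copied verbatim from the diff:

```python
# Split the SQLAlchemy URL from [database] into its components to
# confirm the driver, host, and database name before restarting.
from urllib.parse import urlsplit

url = 'mysql+pymysql://cinder:opnfv_secret@10.167.4.23/cinder?charset=utf8'
parts = urlsplit(url)

print(parts.scheme)            # → mysql+pymysql
print(parts.hostname)          # → 10.167.4.23
print(parts.path.lstrip('/'))  # database name → cinder
```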

2018-09-01 23:00:20,764 [salt.state       :1941][INFO    ][7808] Completed state [/etc/cinder/cinder.conf] at time 23:00:20.764108 duration_in_ms=259.994
2018-09-01 23:00:20,764 [salt.state       :1770][INFO    ][7808] Running state [/etc/cinder/api-paste.ini] at time 23:00:20.764397
2018-09-01 23:00:20,764 [salt.state       :1803][INFO    ][7808] Executing state file.managed for [/etc/cinder/api-paste.ini]
2018-09-01 23:00:20,775 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'cinder/files/queens/api-paste.ini.volume.Debian'
2018-09-01 23:00:20,807 [salt.state       :290 ][INFO    ][7808] {'mode': '0640'}
2018-09-01 23:00:20,807 [salt.state       :1941][INFO    ][7808] Completed state [/etc/cinder/api-paste.ini] at time 23:00:20.807919 duration_in_ms=43.522
2018-09-01 23:00:20,808 [salt.state       :1770][INFO    ][7808] Running state [/etc/default/cinder-volume] at time 23:00:20.808164
2018-09-01 23:00:20,808 [salt.state       :1803][INFO    ][7808] Executing state file.managed for [/etc/default/cinder-volume]
2018-09-01 23:00:20,819 [salt.fileclient  :1215][INFO    ][7808] Fetching file from saltenv 'base', ** done ** 'cinder/files/default'
2018-09-01 23:00:20,823 [salt.state       :290 ][INFO    ][7808] File changed:
New file
2018-09-01 23:00:20,824 [salt.state       :1941][INFO    ][7808] Completed state [/etc/default/cinder-volume] at time 23:00:20.824088 duration_in_ms=15.924
2018-09-01 23:00:20,825 [salt.state       :1770][INFO    ][7808] Running state [cinder-volume] at time 23:00:20.825224
2018-09-01 23:00:20,825 [salt.state       :1803][INFO    ][7808] Executing state service.running for [cinder-volume]
2018-09-01 23:00:20,825 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemctl', 'status', 'cinder-volume.service', '-n', '0'] in directory '/root'
2018-09-01 23:00:20,837 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:20,846 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:20,856 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:20,868 [salt.loaded.int.module.cmdmod:722 ][ERROR   ][7808] Command '['systemd-run', '--scope', 'systemctl', 'start', 'cinder-volume.service']' failed with return code: 1
2018-09-01 23:00:20,868 [salt.loaded.int.module.cmdmod:726 ][ERROR   ][7808] stderr: Running scope as unit run-rd80775a737f84ab18addf50fc7d5fe8b.scope.
Job for cinder-volume.service failed because the control process exited with error code. See "systemctl status cinder-volume.service" and "journalctl -xe" for details.
2018-09-01 23:00:20,868 [salt.loaded.int.module.cmdmod:728 ][ERROR   ][7808] retcode: 1
2018-09-01 23:00:20,868 [salt.state       :292 ][ERROR   ][7808] Running scope as unit run-rd80775a737f84ab18addf50fc7d5fe8b.scope.
Job for cinder-volume.service failed because the control process exited with error code. See "systemctl status cinder-volume.service" and "journalctl -xe" for details.
2018-09-01 23:00:20,869 [salt.state       :1941][INFO    ][7808] Completed state [cinder-volume] at time 23:00:20.869034 duration_in_ms=43.809
2018-09-01 23:00:20,869 [salt.state       :1770][INFO    ][7808] Running state [cinder-volume] at time 23:00:20.869196
2018-09-01 23:00:20,869 [salt.state       :1803][INFO    ][7808] Executing state service.mod_watch for [cinder-volume]
2018-09-01 23:00:20,869 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:20,879 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemd-run', '--scope', 'systemctl', 'stop', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:20,890 [salt.loaded.int.module.cmdmod:395 ][INFO    ][7808] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:20,901 [salt.loaded.int.module.cmdmod:722 ][ERROR   ][7808] Command '['systemd-run', '--scope', 'systemctl', 'start', 'cinder-volume.service']' failed with return code: 1
2018-09-01 23:00:20,902 [salt.loaded.int.module.cmdmod:726 ][ERROR   ][7808] stderr: Running scope as unit run-r06736a7e811a4521bc52f2b00df1fe8e.scope.
Job for cinder-volume.service failed because the control process exited with error code. See "systemctl status cinder-volume.service" and "journalctl -xe" for details.
2018-09-01 23:00:20,902 [salt.loaded.int.module.cmdmod:728 ][ERROR   ][7808] retcode: 1
2018-09-01 23:00:20,902 [salt.state       :292 ][ERROR   ][7808] Running scope as unit run-r06736a7e811a4521bc52f2b00df1fe8e.scope.
Job for cinder-volume.service failed because the control process exited with error code. See "systemctl status cinder-volume.service" and "journalctl -xe" for details.
2018-09-01 23:00:20,902 [salt.state       :1941][INFO    ][7808] Completed state [cinder-volume] at time 23:00:20.902745 duration_in_ms=33.549
2018-09-01 23:00:20,905 [salt.minion      :1708][INFO    ][7808] Returning information for job: 20180901225804205230
2018-09-01 23:00:31,596 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.sls with jid 20180901230031586326
2018-09-01 23:00:31,606 [salt.minion      :1431][INFO    ][13360] Starting a new job with PID 13360
2018-09-01 23:00:36,833 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230036824244
2018-09-01 23:00:36,842 [salt.minion      :1431][INFO    ][13367] Starting a new job with PID 13367
2018-09-01 23:00:36,862 [salt.minion      :1708][INFO    ][13367] Returning information for job: 20180901230036824244
2018-09-01 23:00:37,228 [salt.state       :905 ][INFO    ][13360] Loading fresh modules for state activity
2018-09-01 23:00:38,565 [salt.state       :1770][INFO    ][13360] Running state [cinder-volume] at time 23:00:38.565235
2018-09-01 23:00:38,565 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [cinder-volume]
2018-09-01 23:00:38,566 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13360] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:00:38,855 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,855 [salt.state       :1941][INFO    ][13360] Completed state [cinder-volume] at time 23:00:38.855436 duration_in_ms=290.201
2018-09-01 23:00:38,855 [salt.state       :1770][INFO    ][13360] Running state [lvm2] at time 23:00:38.855680
2018-09-01 23:00:38,855 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [lvm2]
2018-09-01 23:00:38,860 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,860 [salt.state       :1941][INFO    ][13360] Completed state [lvm2] at time 23:00:38.860391 duration_in_ms=4.711
2018-09-01 23:00:38,860 [salt.state       :1770][INFO    ][13360] Running state [sysfsutils] at time 23:00:38.860538
2018-09-01 23:00:38,860 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [sysfsutils]
2018-09-01 23:00:38,865 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,865 [salt.state       :1941][INFO    ][13360] Completed state [sysfsutils] at time 23:00:38.865158 duration_in_ms=4.62
2018-09-01 23:00:38,865 [salt.state       :1770][INFO    ][13360] Running state [sg3-utils] at time 23:00:38.865308
2018-09-01 23:00:38,865 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [sg3-utils]
2018-09-01 23:00:38,869 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,869 [salt.state       :1941][INFO    ][13360] Completed state [sg3-utils] at time 23:00:38.869767 duration_in_ms=4.459
2018-09-01 23:00:38,869 [salt.state       :1770][INFO    ][13360] Running state [python-cinder] at time 23:00:38.869907
2018-09-01 23:00:38,870 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [python-cinder]
2018-09-01 23:00:38,874 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,874 [salt.state       :1941][INFO    ][13360] Completed state [python-cinder] at time 23:00:38.874486 duration_in_ms=4.58
2018-09-01 23:00:38,874 [salt.state       :1770][INFO    ][13360] Running state [python-mysqldb] at time 23:00:38.874633
2018-09-01 23:00:38,874 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [python-mysqldb]
2018-09-01 23:00:38,879 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,879 [salt.state       :1941][INFO    ][13360] Completed state [python-mysqldb] at time 23:00:38.879598 duration_in_ms=4.964
2018-09-01 23:00:38,879 [salt.state       :1770][INFO    ][13360] Running state [p7zip] at time 23:00:38.879743
2018-09-01 23:00:38,879 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [p7zip]
2018-09-01 23:00:38,884 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,884 [salt.state       :1941][INFO    ][13360] Completed state [p7zip] at time 23:00:38.884185 duration_in_ms=4.441
2018-09-01 23:00:38,884 [salt.state       :1770][INFO    ][13360] Running state [gettext-base] at time 23:00:38.884324
2018-09-01 23:00:38,884 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [gettext-base]
2018-09-01 23:00:38,888 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,888 [salt.state       :1941][INFO    ][13360] Completed state [gettext-base] at time 23:00:38.888919 duration_in_ms=4.595
2018-09-01 23:00:38,889 [salt.state       :1770][INFO    ][13360] Running state [python-memcache] at time 23:00:38.889059
2018-09-01 23:00:38,889 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [python-memcache]
2018-09-01 23:00:38,893 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,893 [salt.state       :1941][INFO    ][13360] Completed state [python-memcache] at time 23:00:38.893680 duration_in_ms=4.621
2018-09-01 23:00:38,893 [salt.state       :1770][INFO    ][13360] Running state [python-pycadf] at time 23:00:38.893820
2018-09-01 23:00:38,893 [salt.state       :1803][INFO    ][13360] Executing state pkg.installed for [python-pycadf]
2018-09-01 23:00:38,898 [salt.state       :290 ][INFO    ][13360] All specified packages are already installed
2018-09-01 23:00:38,898 [salt.state       :1941][INFO    ][13360] Completed state [python-pycadf] at time 23:00:38.898304 duration_in_ms=4.484
2018-09-01 23:00:38,900 [salt.state       :1770][INFO    ][13360] Running state [/var/lock/cinder] at time 23:00:38.900953
2018-09-01 23:00:38,901 [salt.state       :1803][INFO    ][13360] Executing state file.directory for [/var/lock/cinder]
2018-09-01 23:00:38,901 [salt.state       :290 ][INFO    ][13360] Directory /var/lock/cinder is in the correct state
Directory /var/lock/cinder updated
2018-09-01 23:00:38,901 [salt.state       :1941][INFO    ][13360] Completed state [/var/lock/cinder] at time 23:00:38.901897 duration_in_ms=0.945
2018-09-01 23:00:38,902 [salt.state       :1770][INFO    ][13360] Running state [/etc/cinder/cinder.conf] at time 23:00:38.902128
2018-09-01 23:00:38,902 [salt.state       :1803][INFO    ][13360] Executing state file.managed for [/etc/cinder/cinder.conf]
2018-09-01 23:00:39,099 [salt.state       :290 ][INFO    ][13360] File /etc/cinder/cinder.conf is in the correct state
2018-09-01 23:00:39,099 [salt.state       :1941][INFO    ][13360] Completed state [/etc/cinder/cinder.conf] at time 23:00:39.099891 duration_in_ms=197.762
2018-09-01 23:00:39,100 [salt.state       :1770][INFO    ][13360] Running state [/etc/cinder/api-paste.ini] at time 23:00:39.100149
2018-09-01 23:00:39,100 [salt.state       :1803][INFO    ][13360] Executing state file.managed for [/etc/cinder/api-paste.ini]
2018-09-01 23:00:39,138 [salt.state       :290 ][INFO    ][13360] File /etc/cinder/api-paste.ini is in the correct state
2018-09-01 23:00:39,138 [salt.state       :1941][INFO    ][13360] Completed state [/etc/cinder/api-paste.ini] at time 23:00:39.138965 duration_in_ms=38.816
2018-09-01 23:00:39,139 [salt.state       :1770][INFO    ][13360] Running state [/etc/default/cinder-volume] at time 23:00:39.139202
2018-09-01 23:00:39,139 [salt.state       :1803][INFO    ][13360] Executing state file.managed for [/etc/default/cinder-volume]
2018-09-01 23:00:39,150 [salt.state       :290 ][INFO    ][13360] File /etc/default/cinder-volume is in the correct state
2018-09-01 23:00:39,151 [salt.state       :1941][INFO    ][13360] Completed state [/etc/default/cinder-volume] at time 23:00:39.151068 duration_in_ms=11.865
2018-09-01 23:00:39,152 [salt.state       :1770][INFO    ][13360] Running state [cinder-volume] at time 23:00:39.152227
2018-09-01 23:00:39,152 [salt.state       :1803][INFO    ][13360] Executing state service.running for [cinder-volume]
2018-09-01 23:00:39,152 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13360] Executing command ['systemctl', 'status', 'cinder-volume.service', '-n', '0'] in directory '/root'
2018-09-01 23:00:39,165 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13360] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:39,171 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13360] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:39,176 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13360] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:39,190 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13360] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:39,198 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13360] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:39,206 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13360] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2018-09-01 23:00:39,213 [salt.state       :290 ][INFO    ][13360] {'cinder-volume': True}
2018-09-01 23:00:39,213 [salt.state       :1941][INFO    ][13360] Completed state [cinder-volume] at time 23:00:39.213294 duration_in_ms=61.067
2018-09-01 23:00:39,214 [salt.minion      :1708][INFO    ][13360] Returning information for job: 20180901230031586326
2018-09-01 23:01:30,266 [salt.utils.schedule:1375][INFO    ][3467] Running scheduled job: __mine_interval
2018-09-01 23:03:54,852 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.sls with jid 20180901230354843185
2018-09-01 23:03:54,860 [salt.minion      :1431][INFO    ][13717] Starting a new job with PID 13717
2018-09-01 23:03:58,627 [salt.state       :905 ][INFO    ][13717] Loading fresh modules for state activity
2018-09-01 23:03:59,019 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'neutron/gateway.sls'
2018-09-01 23:03:59,051 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2018-09-01 23:03:59,908 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230359894580
2018-09-01 23:03:59,915 [salt.minion      :1431][INFO    ][13730] Starting a new job with PID 13730
2018-09-01 23:03:59,921 [salt.state       :1770][INFO    ][13717] Running state [neutron-dhcp-agent] at time 23:03:59.921425
2018-09-01 23:03:59,923 [salt.state       :1803][INFO    ][13717] Executing state pkg.installed for [neutron-dhcp-agent]
2018-09-01 23:03:59,924 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:03:59,931 [salt.minion      :1708][INFO    ][13730] Returning information for job: 20180901230359894580
2018-09-01 23:04:00,239 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['apt-cache', '-q', 'policy', 'neutron-dhcp-agent'] in directory '/root'
2018-09-01 23:04:00,312 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-09-01 23:04:01,968 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:04:01,983 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-dhcp-agent'] in directory '/root'
2018-09-01 23:04:10,087 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230410074432
2018-09-01 23:04:10,100 [salt.minion      :1431][INFO    ][14352] Starting a new job with PID 14352
2018-09-01 23:04:10,120 [salt.minion      :1708][INFO    ][14352] Returning information for job: 20180901230410074432
2018-09-01 23:04:20,273 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230420262740
2018-09-01 23:04:20,285 [salt.minion      :1431][INFO    ][15220] Starting a new job with PID 15220
2018-09-01 23:04:20,431 [salt.minion      :1708][INFO    ][15220] Returning information for job: 20180901230420262740
2018-09-01 23:04:26,381 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:04:26,406 [salt.state       :290 ][INFO    ][13717] Made the following changes:
'python-waitress' changed from 'absent' to '0.8.10-1'
'python-os-service-types' changed from 'absent' to '1.1.0-1.0~u16.04+mcp1'
'python-neutron-fwaas' changed from 'absent' to '2:12.0.0-1.0~u16.04+mcp17'
'python-jmespath' changed from 'absent' to '0.9.0-2'
'python-osc-lib' changed from 'absent' to '1.9.0-1.0~u16.04+mcp1'
'python-openstacksdk' changed from 'absent' to '0.11.3+repack-1.0~u16.04+mcp2'
'python-deprecation' changed from 'absent' to '1.0.1-1~u16.04+mcp2'
'python-os-xenapi' changed from 'absent' to '0.3.1-1.0~u16.04+mcp1'
'python-os-client-config' changed from 'absent' to '1.29.0-1.0~u16.04+mcp2'
'python-logutils' changed from 'absent' to '0.3.3-5'
'ipset-6.29' changed from 'absent' to '1'
'neutron-dhcp-agent' changed from 'absent' to '2:12.0.3-4~u16.04+mcp96'
'neutron-common' changed from 'absent' to '2:12.0.3-4~u16.04+mcp96'
'python-designateclient' changed from 'absent' to '2.9.0-1.0~u16.04+mcp4'
'python-pyroute2' changed from 'absent' to '0.4.21-0.1~u16.04+mcp1'
'python-neutronclient' changed from 'absent' to '1:6.7.0-1.0~u16.04+mcp12'
'python2.7-waitress' changed from 'absent' to '1'
'python-munch' changed from 'absent' to '2.2.0-1.0~u16.04+mcp1'
'python-pecan' changed from 'absent' to '1.1.2-1.1~u16.04+mcp2'
'python2.7-neutron' changed from 'absent' to '1'
'ipset' changed from 'absent' to '6.29-1'
'python-ovsdbapp' changed from 'absent' to '0.9.1-1.0~u16.04+mcp2'
'python2.7-ryu' changed from 'absent' to '1'
'python-neutron-lib' changed from 'absent' to '1.13.0-1.0~u16.04+mcp1'
'python-simplegeneric' changed from 'absent' to '0.8.1-1'
'python-weakrefmethod' changed from 'absent' to '1.0.3-2~u16.04+mcp1'
'python-neutron' changed from 'absent' to '2:12.0.3-4~u16.04+mcp96'
'python-ryu' changed from 'absent' to '4.15-1~u16.04+mcp2'
'dnsmasq-utils' changed from 'absent' to '2.78-1~u16.04+mcp2'
'haproxy' changed from 'absent' to '1.6.3-1ubuntu0.1'
'python-tinyrpc' changed from 'absent' to '0.5-1.1~u16.04+mcp2'
'python-webtest' changed from 'absent' to '2.0.28-1.0~u16.04+mcp1'
'python-appdirs' changed from 'absent' to '1.4.0-2'
'neutron-metadata-agent' changed from 'absent' to '2:12.0.3-4~u16.04+mcp96'
'python-openvswitch' changed from 'absent' to '2.8.0-4~u16.04+mcp1'
'python2.7-pyroute2' changed from 'absent' to '1'
'python-requestsexceptions' changed from 'absent' to '1.3.0-3~u16.04+mcp2'
'liblua5.3-0' changed from 'absent' to '5.3.1-1ubuntu2'
'libipset3' changed from 'absent' to '6.29-1'

2018-09-01 23:04:26,429 [salt.state       :905 ][INFO    ][13717] Loading fresh modules for state activity
2018-09-01 23:04:26,453 [salt.state       :1941][INFO    ][13717] Completed state [neutron-dhcp-agent] at time 23:04:26.453377 duration_in_ms=26531.956
2018-09-01 23:04:26,457 [salt.state       :1770][INFO    ][13717] Running state [openvswitch-common] at time 23:04:26.457256
2018-09-01 23:04:26,457 [salt.state       :1803][INFO    ][13717] Executing state pkg.installed for [openvswitch-common]
2018-09-01 23:04:26,968 [salt.state       :290 ][INFO    ][13717] All specified packages are already installed
2018-09-01 23:04:26,968 [salt.state       :1941][INFO    ][13717] Completed state [openvswitch-common] at time 23:04:26.968472 duration_in_ms=511.214
2018-09-01 23:04:26,968 [salt.state       :1770][INFO    ][13717] Running state [neutron-metadata-agent] at time 23:04:26.968724
2018-09-01 23:04:26,968 [salt.state       :1803][INFO    ][13717] Executing state pkg.installed for [neutron-metadata-agent]
2018-09-01 23:04:26,973 [salt.state       :290 ][INFO    ][13717] All specified packages are already installed
2018-09-01 23:04:26,973 [salt.state       :1941][INFO    ][13717] Completed state [neutron-metadata-agent] at time 23:04:26.973694 duration_in_ms=4.97
2018-09-01 23:04:26,973 [salt.state       :1770][INFO    ][13717] Running state [neutron-openvswitch-agent] at time 23:04:26.973897
2018-09-01 23:04:26,974 [salt.state       :1803][INFO    ][13717] Executing state pkg.installed for [neutron-openvswitch-agent]
2018-09-01 23:04:26,986 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:04:27,000 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-openvswitch-agent'] in directory '/root'
2018-09-01 23:04:30,368 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230430358779
2018-09-01 23:04:30,376 [salt.minion      :1431][INFO    ][16234] Starting a new job with PID 16234
2018-09-01 23:04:30,385 [salt.minion      :1708][INFO    ][16234] Returning information for job: 20180901230430358779
2018-09-01 23:04:30,832 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:04:30,856 [salt.state       :290 ][INFO    ][13717] Made the following changes:
'neutron-openvswitch-agent' changed from 'absent' to '2:12.0.3-4~u16.04+mcp96'
'conntrack' changed from 'absent' to '1:1.4.3-3'

2018-09-01 23:04:30,867 [salt.state       :905 ][INFO    ][13717] Loading fresh modules for state activity
2018-09-01 23:04:30,996 [salt.state       :1941][INFO    ][13717] Completed state [neutron-openvswitch-agent] at time 23:04:30.996699 duration_in_ms=4022.801
2018-09-01 23:04:31,002 [salt.state       :1770][INFO    ][13717] Running state [neutron-l3-agent] at time 23:04:31.002342
2018-09-01 23:04:31,002 [salt.state       :1803][INFO    ][13717] Executing state pkg.installed for [neutron-l3-agent]
2018-09-01 23:04:31,388 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:04:31,402 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-l3-agent'] in directory '/root'
2018-09-01 23:04:40,552 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230440541385
2018-09-01 23:04:40,560 [salt.minion      :1431][INFO    ][17675] Starting a new job with PID 17675
2018-09-01 23:04:40,571 [salt.minion      :1708][INFO    ][17675] Returning information for job: 20180901230440541385
2018-09-01 23:04:40,840 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:04:40,865 [salt.state       :290 ][INFO    ][13717] Made the following changes:
'libsnmp-base' changed from 'absent' to '5.7.3+dfsg-1ubuntu4.1'
'keepalived' changed from 'absent' to '1:1.3.9-1ubuntu0.18.04.1~cloud0'
'neutron-l3-agent' changed from 'absent' to '2:12.0.3-4~u16.04+mcp96'
'ipvsadm' changed from 'absent' to '1:1.28-3'
'libsnmp30' changed from 'absent' to '5.7.3+dfsg-1ubuntu4.1'
'iputils-arping' changed from 'absent' to '3:20121221-5ubuntu2'
'libsensors4' changed from 'absent' to '1:3.4.0-2'
'libnl-route-3-200' changed from 'absent' to '3.2.27-1ubuntu0.16.04.1'
'radvd' changed from 'absent' to '1:2.11-1'

2018-09-01 23:04:40,881 [salt.state       :905 ][INFO    ][13717] Loading fresh modules for state activity
2018-09-01 23:04:40,903 [salt.state       :1941][INFO    ][13717] Completed state [neutron-l3-agent] at time 23:04:40.903646 duration_in_ms=9901.304
2018-09-01 23:04:41,203 [salt.state       :1770][INFO    ][13717] Running state [haproxy] at time 23:04:41.203938
2018-09-01 23:04:41,204 [salt.state       :1803][INFO    ][13717] Executing state service.dead for [haproxy]
2018-09-01 23:04:41,204 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'status', 'haproxy.service', '-n', '0'] in directory '/root'
2018-09-01 23:04:41,212 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2018-09-01 23:04:41,218 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2018-09-01 23:04:41,224 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'systemctl', 'stop', 'haproxy.service'] in directory '/root'
2018-09-01 23:04:41,233 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2018-09-01 23:04:41,240 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2018-09-01 23:04:41,247 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2018-09-01 23:04:41,255 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'systemctl', 'disable', 'haproxy.service'] in directory '/root'
2018-09-01 23:04:41,539 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2018-09-01 23:04:41,559 [salt.state       :290 ][INFO    ][13717] {'haproxy': True}
2018-09-01 23:04:41,559 [salt.state       :1941][INFO    ][13717] Completed state [haproxy] at time 23:04:41.559861 duration_in_ms=355.922
2018-09-01 23:04:41,561 [salt.state       :1770][INFO    ][13717] Running state [/etc/neutron/neutron.conf] at time 23:04:41.561926
2018-09-01 23:04:41,562 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/neutron/neutron.conf]
2018-09-01 23:04:41,583 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'neutron/files/queens/neutron-generic.conf'
2018-09-01 23:04:41,668 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/service/_wsgi_default.conf'
2018-09-01 23:04:41,679 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_concurrency.conf'
2018-09-01 23:04:41,715 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/service/_ssl.conf'
2018-09-01 23:04:41,728 [salt.state       :290 ][INFO    ][13717] File changed:
--- 
+++ 
@@ -1,5 +1,5 @@
+
 [DEFAULT]
-core_plugin = ml2
 
 #
 # From neutron
@@ -8,6 +8,7 @@
 # Where to store Neutron state files. This directory must be writable by the
 # agent. (string value)
 #state_path = /var/lib/neutron
+state_path = /var/lib/neutron
 
 # The host IP to bind to. (unknown value)
 #bind_host = 0.0.0.0
@@ -26,9 +27,15 @@
 
 # The type of authentication to use (string value)
 #auth_strategy = keystone
-
-# The core plugin Neutron will use (string value)
-#core_plugin = <None>
+auth_strategy = keystone
+
+
+
+core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
+
+service_plugins = router, metering
+
+
 
 # The service plugins Neutron will use (list value)
 #service_plugins =
@@ -44,6 +51,7 @@
 # The maximum number of items returned in a single response, value was
 # 'infinite' or negative integer means no limit (string value)
 #pagination_max_limit = -1
+pagination_max_limit = -1
 
 # Default value of availability zone hints. The availability zone aware
 # schedulers use this when the resources availability_zone_hints is empty.
@@ -70,6 +78,7 @@
 # DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite
 # lease times. (integer value)
 #dhcp_lease_duration = 86400
+dhcp_lease_duration = 600
 
 # Domain to use for building the hostnames (string value)
 #dns_domain = openstacklocal
@@ -84,6 +93,7 @@
 # MUST be set to False if Neutron is being used in conjunction with Nova
 # security groups. (boolean value)
 #allow_overlapping_ips = false
+allow_overlapping_ips = True
 
 # Hostname to be used by the Neutron server, agents and services running on
 # this machine. All the agents and services running on this machine must use
@@ -97,10 +107,12 @@
 
 # Send notification to nova when port status changes (boolean value)
 #notify_nova_on_port_status_changes = true
+notify_nova_on_port_status_changes = true
 
 # Send notification to nova when port data (fixed_ips/floatingip) changes so
 # nova can update its cache. (boolean value)
 #notify_nova_on_port_data_changes = true
+notify_nova_on_port_data_changes = true
 
 # Number of seconds between sending events to nova if there are any events to
 # send. (integer value)
@@ -121,6 +133,7 @@
 # value. Defaults to 1500, the standard value for Ethernet. (integer value)
 # Deprecated group/name - [ml2]/segment_mtu
 #global_physnet_mtu = 1500
+global_physnet_mtu = 1500
 
 # Number of backlog requests to configure the socket with (integer value)
 #backlog = 4096
@@ -141,10 +154,13 @@
 
 # Number of RPC worker processes for service. (integer value)
 #rpc_workers = 1
+rpc_workers = 16
+
 
 # Number of RPC worker processes dedicated to state reports queue. (integer
 # value)
 #rpc_state_report_workers = 1
+rpc_state_report_workers = 4
 
 # Range of seconds to randomly delay when starting the periodic task scheduler
 # to reduce stampeding. (Disable by setting to 0) (integer value)
@@ -218,6 +234,7 @@
 # a given tenant network, providing high availability for DHCP service.
 # (integer value)
 #dhcp_agents_per_network = 1
+dhcp_agents_per_network = 2
 
 # Enable services on an agent with admin_state_up False. If this option is
 # False, when admin_state_up of an agent is turned False, services on it will
@@ -237,13 +254,16 @@
 # System-wide flag to determine the type of router that tenants can create.
 # Only admin can override. (boolean value)
 #router_distributed = false
+router_distributed = False
 
 # Determine if setup is configured for DVR. If False, DVR API extension will be
 # disabled. (boolean value)
 #enable_dvr = true
+enable_dvr = False
 
 # Driver to use for scheduling router to a default L3 agent (string value)
 #router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
+router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
 
 # Allow auto scheduling of routers to L3 agent. (boolean value)
 #router_auto_schedule = true
@@ -251,13 +271,16 @@
 # Automatically reschedule routers from offline L3 agents to online L3 agents.
 # (boolean value)
 #allow_automatic_l3agent_failover = false
+allow_automatic_l3agent_failover = true
 
 # Enable HA mode for virtual routers. (boolean value)
 #l3_ha = false
+l3_ha = False
 
 # Maximum number of L3 agents which a HA router will be scheduled on. If it is
 # set to 0 then the router will be scheduled on every agent. (integer value)
 #max_l3_agents_per_router = 3
+max_l3_agents_per_router = 0
 
 # Subnet used for the l3 HA admin network. (string value)
 #l3_ha_net_cidr = 169.254.192.0/18
@@ -278,120 +301,121 @@
 
 # Maximum number of allowed address pairs (integer value)
 #max_allowed_address_pair = 10
-
 #
 # From oslo.log
 #
 
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
 
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
 # Note: This option can be changed without restarting.
 # Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
 
 # Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
 #log_date_format = %Y-%m-%d %H:%M:%S
 
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
 
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
 
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages.This option is ignored if log_config_append is set.
 # (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
 #use_journal = false
 
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
 
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_json = false
 
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_stderr = false
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
-# Format string to use for log messages when context is undefined. (string
-# value)
+# Format string to use for log messages when context is undefined.
+# (string value)
 #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
 
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
 #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
 
-# Prefix each line of exception output with this format. (string value)
+# Prefix each line of exception output with this format. (string
+# value)
 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
 # Defines the format string for %(user_identity)s that is used in
 # logging_context_format_string. (string value)
 #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
 #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string
-# value)
+# The format for an instance that is passed with the log message.
+# (string value)
 #instance_format = "[instance: %(uuid)s] "
 
-# The format for an instance UUID that is passed with the log message. (string
-# value)
+# The format for an instance UUID that is passed with the log message.
+# (string value)
 #instance_uuid_format = "[instance: %(uuid)s] "
 
 # Interval, number of seconds, of log rate limiting. (integer value)
 #rate_limit_interval = 0
 
-# Maximum number of logged messages per rate_limit_interval. (integer value)
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
 #rate_limit_burst = 0
 
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
 #rate_limit_except_level = CRITICAL
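The three rate-limiting options above describe a fixed-window filter: at most rate_limit_burst messages per rate_limit_interval seconds, with records at or above rate_limit_except_level passing through unfiltered. The sketch below is illustrative only, showing the documented semantics; it is not oslo.log's implementation, and the class name `SimpleRateLimit` is hypothetical.

```python
# Illustrative sketch of the documented semantics of
# rate_limit_interval / rate_limit_burst / rate_limit_except_level.
# NOT oslo.log's implementation.
import logging
import time


class SimpleRateLimit:
    def __init__(self, interval, burst, except_level="CRITICAL"):
        self.interval = interval  # rate_limit_interval, in seconds
        self.burst = burst        # rate_limit_burst, messages per window
        # An empty string means every level is filtered, so pick a
        # threshold above CRITICAL; otherwise map the level name to its
        # numeric value (logging.getLevelName maps names to numbers).
        self.except_levelno = (logging.getLevelName(except_level)
                               if except_level else logging.CRITICAL + 1)
        self._window_start = 0.0
        self._count = 0

    def allow(self, levelno, now=None):
        if self.interval <= 0:
            return True  # interval 0 (the default) disables limiting
        if levelno >= self.except_levelno:
            return True  # exempt levels are never filtered
        now = time.monotonic() if now is None else now
        if now - self._window_start >= self.interval:
            self._window_start = now  # start a new window
            self._count = 0
        self._count += 1
        return self._count <= self.burst
```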
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
-
 #
 # From oslo.messaging
 #
@@ -399,14 +423,17 @@
 # Size of RPC connection pool. (integer value)
 #rpc_conn_pool_size = 30
 
-# The pool size limit for connections expiration policy (integer value)
+# The pool size limit for connections expiration policy (integer
+# value)
 #conn_pool_min_size = 2
 
-# The time-to-live in sec of idle connections in the pool (integer value)
+# The time-to-live in sec of idle connections in the pool (integer
+# value)
 #conn_pool_ttl = 1200
 
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
+# ZeroMQ bind address. Should be a wildcard (*), an Ethernet
+# interface, or IP. The "host" option should point or resolve to this
+# address. (string value)
 #rpc_zmq_bind_address = *
 
 # MatchMaker driver. (string value)
@@ -419,51 +446,54 @@
 # Number of ZeroMQ contexts, defaults to 1. (integer value)
 #rpc_zmq_contexts = 1
 
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
+# Maximum number of ingress messages to locally buffer per topic.
+# Default is unlimited. (integer value)
 #rpc_zmq_topic_backlog = <None>
 
 # Directory for holding IPC sockets. (string value)
 #rpc_zmq_ipc_dir = /var/run/openstack
 
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
+# Name of this node. Must be a valid hostname, FQDN, or IP address.
+# Must match "host" option, if running Nova. (string value)
 #rpc_zmq_host = localhost
 
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
+# Number of seconds to wait before all pending messages will be sent
+# after closing a socket. The default value of -1 specifies an
+# infinite linger period. The value of 0 specifies no linger period.
+# Pending messages shall be discarded immediately when the socket is
+# closed. Positive values specify an upper bound for the linger
+# period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
 #zmq_linger = -1
 
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
+# The default number of seconds that poll should wait. Poll raises
+# a timeout exception when the timeout expires. (integer value)
 #rpc_poll_timeout = 1
 
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
+# Expiration timeout in seconds of a name service record about
+# an existing target (< 0 means no timeout). (integer value)
 #zmq_target_expire = 300
 
-# Update period in seconds of a name service record about existing target.
-# (integer value)
+# Update period in seconds of a name service record about existing
+# target. (integer value)
 #zmq_target_update = 180
 
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
+# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
+# (boolean value)
 #use_pub_sub = false
 
 # Use ROUTER remote proxy. (boolean value)
 #use_router_proxy = false
 
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
+# This option makes direct connections dynamic or static. It takes
+# effect only with use_router_proxy=False, i.e. when direct
+# connections are used for direct message types (it is ignored
+# otherwise). (boolean value)
 #use_dynamic_connections = false
 
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
+# How many additional connections to a host will be made for failover
+# reasons. This option only applies in dynamic connections mode.
+# (integer value)
 #zmq_failover_connections = 2
 
 # Minimal port number for random ports range. (port value)
@@ -476,8 +506,8 @@
 # Maximum value: 65536
 #rpc_zmq_max_port = 65536
 
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
+# Number of retries to find free port number before fail with
+# ZMQBindError. (integer value)
 #rpc_zmq_bind_port_retries = 100
 
 # Default serialization mechanism for serializing/deserializing
@@ -487,78 +517,83 @@
 # msgpack - <No description provided>
 #rpc_zmq_serialization = json
 
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
+# This option configures round-robin mode in the zmq socket. True
+# means not keeping a queue when the server side disconnects. False
+# means to keep the queue and messages even if the server is
+# disconnected; when the server reappears, all accumulated messages
+# are sent to it. (boolean value)
 #zmq_immediate = true
 
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
+# Enable/disable TCP keepalive (KA) mechanism. The default value of -1
+# (or any other negative value) means to skip any overrides and leave
+# it to OS default; 0 and 1 (or any other positive value) mean to
+# disable and enable the option respectively. (integer value)
 #zmq_tcp_keepalive = -1
 
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
+# The duration between two keepalive transmissions in idle condition.
+# The unit is platform dependent, for example, seconds in Linux,
+# milliseconds in Windows etc. The default value of -1 (or any other
+# negative value and 0) means to skip any overrides and leave it to OS
+# default. (integer value)
 #zmq_tcp_keepalive_idle = -1
 
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
+# The number of retransmissions to be carried out before declaring
+# that remote end is not available. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
 #zmq_tcp_keepalive_cnt = -1
 
 # The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
+# acknowledgement to the previous keepalive transmission is not
+# received. The unit is platform dependent, for example, seconds in
+# Linux, milliseconds in Windows etc. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
 #zmq_tcp_keepalive_intvl = -1
 
-# Maximum number of (green) threads to work concurrently. (integer value)
+# Maximum number of (green) threads to work concurrently. (integer
+# value)
 #rpc_thread_pool_size = 100
 
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
+# Expiration timeout in seconds of a sent/received message after which
+# it is not tracked anymore by a client/server. (integer value)
 #rpc_message_ttl = 300
 
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
+# Wait for message acknowledgements from receivers. This mechanism
+# works only via proxy without PUB/SUB. (boolean value)
 #rpc_use_acks = false
 
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
+# Number of seconds to wait for an ack from a cast/call. After each
+# retry attempt this timeout is multiplied by some specified
+# multiplier. (integer value)
 #rpc_ack_timeout_base = 15
 
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
+# Number to multiply base ack timeout by after each retry attempt.
+# (integer value)
 #rpc_ack_timeout_multiplier = 2
 
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
+# Default number of message sending attempts in case any problems
+# occur: positive value N means at most N retries, 0 means no
+# retries, None or -1 (or any other negative values) mean to retry
+# forever. This option is used only if acknowledgments are enabled.
+# (integer value)
 #rpc_retry_attempts = 3
 
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
+# List of publisher hosts SubConsumer can subscribe on. This option
+# has higher priority than the default publishers list taken from the
+# matchmaker. (list value)
 #subscribe_on =
 
-# Size of executor thread pool when executor is threading or eventlet. (integer
-# value)
+# Size of executor thread pool when executor is threading or eventlet.
+# (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
 #executor_thread_pool_size = 64
 
 # Seconds to wait for a response from a call. (integer value)
 #rpc_response_timeout = 60
 
-# The network address and optional user credentials for connecting to the
-# messaging backend, in URL format. The expected format is:
+# The network address and optional user credentials for connecting to
+# the messaging backend, in URL format. The expected format is:
 #
 # driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
 #
@@ -569,18 +604,19 @@
 # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
 # (string value)
 #transport_url = <None>
-
-# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
-# include amqp and zmq. (string value)
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
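The transport_url set above follows the documented format `driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host`. A minimal sketch of splitting such a multi-host URL into per-host credentials with plain string handling; `split_transport_url` is a hypothetical helper for illustration, not oslo.messaging's TransportURL parser.

```python
# Split a multi-host transport_url into (scheme, hosts, virtual_host).
# Hypothetical illustration of the documented URL shape only.
def split_transport_url(url):
    scheme, _, rest = url.partition("://")
    # Everything after the first "/" is the virtual host; a leading
    # "//" therefore yields a vhost that itself starts with "/".
    netloc, _, virtual_host = rest.partition("/")
    hosts = []
    for part in netloc.split(","):
        creds, _, hostport = part.rpartition("@")   # creds may be empty
        user, _, password = creds.partition(":")
        host, _, port = hostport.partition(":")
        hosts.append((user, password, host, int(port) if port else None))
    return scheme, hosts, virtual_host
```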
+
+# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
+# drivers include amqp and zmq. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
 #rpc_backend = rabbit
 
-# The default exchange under which topics are scoped. May be overridden by an
-# exchange name specified in the transport_url option. (string value)
-#control_exchange = neutron
-
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the transport_url
+# option. (string value)
+#control_exchange = openstack
 #
 # From oslo.service.wsgi
 #
@@ -614,9 +650,7 @@
 # wait forever. (integer value)
 #client_socket_timeout = 900
 
-
 [agent]
-root_helper = "sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf"
 
 #
 # From neutron.agent
@@ -626,7 +660,8 @@
 # /etc/neutron/rootwrap.conf' to use the real root filter facility. Change to
 # 'sudo' to skip the filtering and just run the command directly. (string
 # value)
-#root_helper = sudo
+#root_helper_daemon = <None>
+root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
 
 # Use the root helper when listing the namespaces on a system. This may not be
 # required depending on the security configuration. If the root helper is not
@@ -643,6 +678,7 @@
 # agent_down_time, best if it is half or less than agent_down_time. (floating
 # point value)
 #report_interval = 30
+report_interval = 10
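The report_interval comment above recommends a value of half agent_down_time or less, so the server does not mark a live agent as down between two heartbeats. A trivial sanity check of that relationship; the agent_down_time default of 75 seconds used here is an assumption about the server-side setting, not a value taken from this file.

```python
# Check the documented guidance: report_interval should be positive
# and at most half of the server's agent_down_time.
# agent_down_time=75 is an ASSUMED default; adjust for your deployment.
def heartbeat_interval_ok(report_interval, agent_down_time=75):
    return 0 < report_interval <= agent_down_time / 2.0
```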
 
 # Log agent heartbeats (boolean value)
 #log_agent_heartbeats = false
@@ -675,533 +711,15 @@
 
 [cors]
 
-#
-# From oslo.middleware.cors
-#
-
-# Indicate whether this resource may be shared with the domain received in the
-# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
-# slash. Example: https://horizon.example.com (list value)
-#allowed_origin = <None>
-
-# Indicate that the actual request can include user credentials (boolean value)
-#allow_credentials = true
-
-# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
-# Headers. (list value)
-#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-Volume-microversion
-
-# Maximum cache age of CORS preflight requests. (integer value)
-#max_age = 3600
-
-# Indicate which methods can be used during the actual request. (list value)
-#allow_methods = GET,PUT,POST,DELETE,PATCH
-
-# Indicate which header field names may be used during the actual request.
-# (list value)
-#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
-
 
 [database]
 connection = sqlite:////var/lib/neutron/neutron.sqlite
 
-#
-# From neutron.db
-#
-
-# Database engine for which script will be generated when using offline
-# migration. (string value)
-#engine =
-
-#
-# From oslo.db
-#
-
-# If True, SQLite uses synchronous mode. (boolean value)
-#sqlite_synchronous = true
-
-# The back end to use for the database. (string value)
-# Deprecated group/name - [DEFAULT]/db_backend
-#backend = sqlalchemy
-
-# The SQLAlchemy connection string to use to connect to the database. (string
-# value)
-# Deprecated group/name - [DEFAULT]/sql_connection
-# Deprecated group/name - [DATABASE]/sql_connection
-# Deprecated group/name - [sql]/connection
-#connection = <None>
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
-#slave_connection = <None>
-
-# The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set
-# by the server configuration, set this to no value. Example: mysql_sql_mode=
-# (string value)
-#mysql_sql_mode = TRADITIONAL
-
-# If True, transparently enables support for handling MySQL Cluster (NDB).
-# (boolean value)
-#mysql_enable_ndb = false
-
-# Connections which have been present in the connection pool longer than this
-# number of seconds will be replaced with a new one the next time they are
-# checked out from the pool. (integer value)
-# Deprecated group/name - [DATABASE]/idle_timeout
-# Deprecated group/name - [database]/idle_timeout
-# Deprecated group/name - [DEFAULT]/sql_idle_timeout
-# Deprecated group/name - [DATABASE]/sql_idle_timeout
-# Deprecated group/name - [sql]/idle_timeout
-#connection_recycle_time = 3600
-
-# Minimum number of SQL connections to keep open in a pool. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_min_pool_size
-# Deprecated group/name - [DATABASE]/sql_min_pool_size
-#min_pool_size = 1
-
-# Maximum number of SQL connections to keep open in a pool. Setting a value of
-# 0 indicates no limit. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_pool_size
-# Deprecated group/name - [DATABASE]/sql_max_pool_size
-#max_pool_size = 5
-
-# Maximum number of database connection retries during startup. Set to -1 to
-# specify an infinite retry count. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_retries
-# Deprecated group/name - [DATABASE]/sql_max_retries
-#max_retries = 10
-
-# Interval between retries of opening a SQL connection. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_retry_interval
-# Deprecated group/name - [DATABASE]/reconnect_interval
-#retry_interval = 10
-
-# If set, use this value for max_overflow with SQLAlchemy. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_overflow
-# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
-#max_overflow = 50
-
-# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
-# value)
-# Minimum value: 0
-# Maximum value: 100
-# Deprecated group/name - [DEFAULT]/sql_connection_debug
-#connection_debug = 0
-
-# Add Python stack traces to SQL as comment strings. (boolean value)
-# Deprecated group/name - [DEFAULT]/sql_connection_trace
-#connection_trace = false
-
-# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
-# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
-#pool_timeout = <None>
-
-# Enable the experimental use of database reconnect on connection lost.
-# (boolean value)
-#use_db_reconnect = false
-
-# Seconds between retries of a database transaction. (integer value)
-#db_retry_interval = 1
-
-# If True, increases the interval between retries of a database operation up to
-# db_max_retry_interval. (boolean value)
-#db_inc_retry_interval = true
-
-# If db_inc_retry_interval is set, the maximum seconds between retries of a
-# database operation. (integer value)
-#db_max_retry_interval = 10
-
-# Maximum retries in case of connection error or deadlock error before error is
-# raised. Set to -1 to specify an infinite retry count. (integer value)
-#db_max_retries = 20
-
-
 [keystone_authtoken]
 
-#
-# From keystonemiddleware.auth_token
-#
-
-# Complete "public" Identity API endpoint. This endpoint should not be an
-# "admin" endpoint, as it should be accessible by all end users.
-# Unauthenticated clients are redirected to this endpoint to authenticate.
-# Although this endpoint should ideally be unversioned, client support in the
-# wild varies. If you're using a versioned v2 endpoint here, then this should
-# *not* be the same endpoint the service user utilizes for validating tokens,
-# because normal end users may not be able to reach that endpoint. (string
-# value)
-# Deprecated group/name - [keystone_authtoken]/auth_uri
-#www_authenticate_uri = <None>
-
-# DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
-# be an "admin" endpoint, as it should be accessible by all end users.
-# Unauthenticated clients are redirected to this endpoint to authenticate.
-# Although this endpoint should ideally be unversioned, client support in the
-# wild varies. If you're using a versioned v2 endpoint here, then this should
-# *not* be the same endpoint the service user utilizes for validating tokens,
-# because normal end users may not be able to reach that endpoint. This option
-# is deprecated in favor of www_authenticate_uri and will be removed in the S
-# release. (string value)
-# This option is deprecated for removal since Queens.
-# Its value may be silently ignored in the future.
-# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
-# and will be removed in the S  release.
-#auth_uri = <None>
-
-# API version of the admin Identity API endpoint. (string value)
-#auth_version = <None>
-
-# Do not handle authorization requests within the middleware, but delegate the
-# authorization decision to downstream WSGI components. (boolean value)
-#delay_auth_decision = false
-
-# Request timeout value for communicating with Identity API server. (integer
-# value)
-#http_connect_timeout = <None>
-
-# How many times are we trying to reconnect when communicating with Identity
-# API Server. (integer value)
-#http_request_max_retries = 3
-
-# Request environment key where the Swift cache object is stored. When
-# auth_token middleware is deployed with a Swift cache, use this option to have
-# the middleware share a caching backend with swift. Otherwise, use the
-# ``memcached_servers`` option instead. (string value)
-#cache = <None>
-
-# Required if identity server requires client certificate (string value)
-#certfile = <None>
-
-# Required if identity server requires client certificate (string value)
-#keyfile = <None>
-
-# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# Defaults to system CAs. (string value)
-#cafile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# The region in which the identity server can be found. (string value)
-#region_name = <None>
-
-# DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P
-# release. (string value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#signing_dir = <None>
-
-# Optionally specify a list of memcached server(s) to use for caching. If left
-# undefined, tokens will instead be cached in-process. (list value)
-# Deprecated group/name - [keystone_authtoken]/memcache_servers
-#memcached_servers = <None>
-
-# In order to prevent excessive effort spent validating tokens, the middleware
-# caches previously-seen tokens for a configurable duration (in seconds). Set
-# to -1 to disable caching completely. (integer value)
-#token_cache_time = 300
-
-# DEPRECATED: Determines the frequency at which the list of revoked tokens is
-# retrieved from the Identity service (in seconds). A high number of revocation
-# events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in
-# the Ocata release and will be removed in the P release. (integer value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#revocation_cache_time = 10
-
-# (Optional) If defined, indicate whether token data should be authenticated or
-# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
-# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
-# cache. If the value is not one of these options or empty, auth_token will
-# raise an exception on initialization. (string value)
-# Possible values:
-# None - <No description provided>
-# MAC - <No description provided>
-# ENCRYPT - <No description provided>
-#memcache_security_strategy = None
-
-# (Optional, mandatory if memcache_security_strategy is defined) This string is
-# used for key derivation. (string value)
-#memcache_secret_key = <None>
-
-# (Optional) Number of seconds memcached server is considered dead before it is
-# tried again. (integer value)
-#memcache_pool_dead_retry = 300
-
-# (Optional) Maximum total number of open connections to every memcached
-# server. (integer value)
-#memcache_pool_maxsize = 10
-
-# (Optional) Socket timeout in seconds for communicating with a memcached
-# server. (integer value)
-#memcache_pool_socket_timeout = 3
-
-# (Optional) Number of seconds a connection to memcached is held unused in the
-# pool before it is closed. (integer value)
-#memcache_pool_unused_timeout = 60
-
-# (Optional) Number of seconds that an operation will wait to get a memcached
-# client connection from the pool. (integer value)
-#memcache_pool_conn_get_timeout = 10
-
-# (Optional) Use the advanced (eventlet safe) memcached client pool. The
-# advanced pool will only work under python 2.x. (boolean value)
-#memcache_use_advanced_pool = false
-
-# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
-# middleware will not ask for service catalog on token validation and will not
-# set the X-Service-Catalog header. (boolean value)
-#include_service_catalog = true
-
-# Used to control the use and type of token binding. Can be set to: "disabled"
-# to not check token binding. "permissive" (default) to validate binding
-# information if the bind type is of a form known to the server and ignore it
-# if not. "strict" like "permissive" but if the bind type is unknown the token
-# will be rejected. "required" any form of token binding is needed to be
-# allowed. Finally the name of a binding method that must be present in tokens.
-# (string value)
-#enforce_token_bind = permissive
-
-# DEPRECATED: If true, the revocation list will be checked for cached tokens.
-# This requires that PKI tokens are configured on the identity server. (boolean
-# value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#check_revocations_for_cached = false
-
-# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
-# single algorithm or multiple. The algorithms are those supported by Python
-# standard hashlib.new(). The hashes will be tried in the order given, so put
-# the preferred one first for performance. The result of the first hash will be
-# stored in the cache. This will typically be set to multiple values only while
-# migrating from a less secure algorithm to a more secure one. Once all the old
-# tokens are expired this option should be set to a single value for better
-# performance. (list value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#hash_algorithms = md5
-
-# A choice of roles that must be present in a service token. Service tokens are
-# allowed to request that an expired token can be used and so this check should
-# tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present.
-# For backwards compatibility reasons this currently only affects the
-# allow_expired check. (list value)
-#service_token_roles = service
-
-# For backwards compatibility reasons we must let valid service tokens pass
-# that don't pass the service_token_roles check as valid. Setting this true
-# will become the default in a future release and should be enabled if
-# possible. (boolean value)
-#service_token_roles_required = false
-
-# Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
-# (string value)
-#auth_admin_prefix =
-
-# Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
-# (string value)
-#auth_host = 127.0.0.1
-
-# Port of the admin Identity API endpoint. Deprecated, use identity_uri.
-# (integer value)
-#auth_port = 35357
-
-# Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
-# (string value)
-# Possible values:
-# http - <No description provided>
-# https - <No description provided>
-#auth_protocol = https
-
-# Complete admin Identity API endpoint. This should specify the unversioned
-# root endpoint e.g. https://localhost:35357/ (string value)
-#identity_uri = <None>
-
-# This option is deprecated and may be removed in a future release. Single
-# shared secret with the Keystone configuration used for bootstrapping a
-# Keystone installation, or otherwise bypassing the normal authentication
-# process. This option should not be used, use `admin_user` and
-# `admin_password` instead. (string value)
-#admin_token = <None>
-
-# Service username. (string value)
-#admin_user = <None>
-
-# Service user password. (string value)
-#admin_password = <None>
-
-# Service tenant name. (string value)
-#admin_tenant_name = admin
-
-# Authentication type to load (string value)
-# Deprecated group/name - [keystone_authtoken]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
-
-
-[matchmaker_redis]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Host to locate redis. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
-
-# DEPRECATED: Use this port to connect to redis host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
-
-# DEPRECATED: Password for Redis server (optional). (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#password =
-
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
-# [host:port, host1:port ... ] (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#sentinel_hosts =
-
-# Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
-
-# Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
-
-# Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
-
-# Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
-
-
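Every deprecated option in the removed [matchmaker_redis] block above carries the same reason: "Replaced by [DEFAULT]/transport_url". As an illustrative sketch of that consolidation (hostnames and credentials here are hypothetical, and the helper below is not part of oslo.messaging), the separate host/port/password options collapse into a single URL:

```python
# Sketch: the deprecated host/port/password triple is replaced by one
# [DEFAULT]/transport_url string. Values below are hypothetical.
from urllib.parse import quote


def build_transport_url(scheme, user, password, host, port, vhost=""):
    """Assemble a transport_url, percent-encoding the credentials."""
    cred = ""
    if user:
        cred = quote(user, safe="")
        if password:
            cred += ":" + quote(password, safe="")
        cred += "@"
    return f"{scheme}://{cred}{host}:{port}/{vhost}"


print(build_transport_url("rabbit", "openstack", "s3cr=t", "127.0.0.1", 5672))
# rabbit://openstack:s3cr%3Dt@127.0.0.1:5672/
```

Percent-encoding matters because characters such as `=` or `@` in a password would otherwise break URL parsing.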
 [nova]
 
-#
-# From neutron
-#
-
-# Name of nova region to use. Useful if keystone manages more than one region.
-# (string value)
-#region_name = <None>
-
-# Type of the nova endpoint to use.  This endpoint will be looked up in the
-# keystone catalog and should be one of public, internal or admin. (string
-# value)
-# Possible values:
-# public - <No description provided>
-# admin - <No description provided>
-# internal - <No description provided>
-#endpoint_type = public
-
-#
-# From nova.auth
-#
-
-# Authentication URL (string value)
-#auth_url = <None>
-
-# Authentication type to load (string value)
-# Deprecated group/name - [nova]/auth_plugin
-#auth_type = <None>
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# Optional domain ID to use with v3 and v2 parameters. It will be used for both
-# the user and project domain in v3 and ignored in v2 authentication. (string
-# value)
-#default_domain_id = <None>
-
-# Optional domain name to use with v3 API and v2 parameters. It will be used
-# for both the user and project domain in v3 and ignored in v2 authentication.
-# (string value)
-#default_domain_name = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# User's password (string value)
-#password = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Project ID to scope to (string value)
-# Deprecated group/name - [nova]/tenant_id
-#project_id = <None>
-
-# Project name to scope to (string value)
-# Deprecated group/name - [nova]/tenant_name
-#project_name = <None>
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Tenant ID (string value)
-#tenant_id = <None>
-
-# Tenant Name (string value)
-#tenant_name = <None>
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User id (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [nova]/user_name
-#username = <None>
-
-
 [oslo_concurrency]
-
 #
 # From oslo.concurrency
 #
@@ -1209,308 +727,14 @@
 # Enables or disables inter-process locks. (boolean value)
 #disable_process_locking = false
 
-# Directory to use for lock files.  For security, the specified directory
-# should only be writable by the user running the processes that need locking.
-# Defaults to environment variable OSLO_LOCK_PATH. If OSLO_LOCK_PATH is not set
-# in the environment, use the Python tempfile.gettempdir function to find a
-# suitable location. If external locks are used, a lock path must be set.
-# (string value)
+# Directory to use for lock files.  For security, the specified
+# directory should only be writable by the user running the processes
+# that need locking. Defaults to environment variable OSLO_LOCK_PATH.
+# If OSLO_LOCK_PATH is not set in the environment, use the Python
+# tempfile.gettempdir function to find a suitable location. If
+# external locks are used, a lock path must be set. (string value)
 #lock_path = /tmp
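The fallback chain the comment above describes (explicit lock_path, then the OSLO_LOCK_PATH environment variable, then tempfile.gettempdir) can be sketched as follows; this is illustrative only, not the actual oslo.concurrency implementation:

```python
# Minimal sketch of the lock_path resolution order described above.
import os
import tempfile


def resolve_lock_path(configured=None):
    """Explicit setting wins, then $OSLO_LOCK_PATH, then the temp dir."""
    return configured or os.environ.get("OSLO_LOCK_PATH") or tempfile.gettempdir()


print(resolve_lock_path("/run/lock/neutron"))  # /run/lock/neutron
```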
-
-
-[oslo_messaging_amqp]
-
-#
-# From oslo.messaging
-#
-
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string
-# value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string
-# value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional)
-# (string value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-
-# robin fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating
-# point value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are
-# messaging, messagingv2, routing, log, test, noop (multi valued)
-# Deprecated group/name - [DEFAULT]/notification_driver
-#driver =
-
-# A URL representing the messaging driver to use for notifications. If not set,
-# we fall back to the same configuration used for RPC. (string value)
-# Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
-
-# AMQP topic used for OpenStack notifications. (list value)
-# Deprecated group/name - [rpc_notifier2]/topics
-# Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
-
 [oslo_messaging_rabbit]
-
 #
 # From oslo.messaging
 #
@@ -1526,54 +750,39 @@
 # Enable SSL (boolean value)
 #ssl = <None>
 
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
-
-# How long to wait before reconnecting in response to an AMQP consumer cancel
-# notification. (floating point value)
+
+# How long to wait before reconnecting in response to an AMQP consumer
+# cancel notification. (floating point value)
 #kombu_reconnect_delay = 1.0
 
-# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
-# be used. This option may not be available in future versions. (string value)
+# EXPERIMENTAL: Possible values are: gzip, bz2. If not set,
+# compression will not be used. This option may not be available in
+# future versions. (string value)
 #kombu_compression = <None>
 
-# How long to wait a missing client before abandoning to send it its replies.
-# This value should not be longer than rpc_response_timeout. (integer value)
+# How long to wait for a missing client before giving up on sending
+# it its replies. This value should not be longer than
+# rpc_response_timeout. (integer value)
 # Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
 #kombu_missing_consumer_retry_timeout = 60
 
-# Determines how the next RabbitMQ node is chosen in case the one we are
-# currently connected to becomes unavailable. Takes effect only if more than
-# one RabbitMQ node is provided in config. (string value)
+# Determines how the next RabbitMQ node is chosen in case the one we
+# are currently connected to becomes unavailable. Takes effect only if
+# more than one RabbitMQ node is provided in config. (string value)
 # Possible values:
 # round-robin - <No description provided>
 # shuffle - <No description provided>
 #kombu_failover_strategy = round-robin
 
-# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
-# value)
+# DEPRECATED: The RabbitMQ broker address where a single node is used.
+# (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
 #rabbit_host = localhost
 
-# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
-# value)
+# DEPRECATED: The RabbitMQ broker port where a single node is used.
+# (port value)
 # Minimum value: 0
 # Maximum value: 65535
 # This option is deprecated for removal.
@@ -1615,31 +824,33 @@
 # How frequently to retry connecting with RabbitMQ. (integer value)
 #rabbit_retry_interval = 1
 
-# How long to backoff for between retries when connecting to RabbitMQ. (integer
-# value)
+# How long to backoff for between retries when connecting to RabbitMQ.
+# (integer value)
 #rabbit_retry_backoff = 2
 
-# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
-# (integer value)
+# Maximum interval of RabbitMQ connection retries. Default is 30
+# seconds. (integer value)
 #rabbit_interval_max = 30
 
-# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
-# (infinite retry count). (integer value)
+# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
+# is 0 (infinite retry count). (integer value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #rabbit_max_retries = 0
 
-# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
-# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
-# is no longer controlled by the x-ha-policy argument when declaring a queue.
-# If you just want to make sure that all queues (except those with auto-
-# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
-# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
+# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
+# queue mirroring is no longer controlled by the x-ha-policy argument
+# when declaring a queue. If you just want to make sure that all
+# queues (except those with auto-generated names) are mirrored across
+# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
+# mode": "all"}' " (boolean value)
 #rabbit_ha_queues = false
 
-# Positive integer representing duration in seconds for queue TTL (x-expires).
-# Queues which are unused for the duration of the TTL are automatically
-# deleted. The parameter affects only reply and fanout queues. (integer value)
+# Positive integer representing duration in seconds for queue TTL
+# (x-expires). Queues which are unused for the duration of the TTL are
+# automatically deleted. The parameter affects only reply and fanout
+# queues. (integer value)
 # Minimum value: 1
 #rabbit_transient_queues_ttl = 1800
 
@@ -1647,16 +858,17 @@
 # unlimited messages. (integer value)
 #rabbit_qos_prefetch_count = 64
 
-# Number of seconds after which the Rabbit broker is considered down if
-# heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
-# value)
+# Number of seconds after which the Rabbit broker is considered down
+# if the heartbeat's keep-alive fails (0 disables the heartbeat).
+# EXPERIMENTAL (integer value)
 #heartbeat_timeout_threshold = 60
 
 # How many times during the heartbeat_timeout_threshold we check the
 # heartbeat. (integer value)
 #heartbeat_rate = 2
 
-# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
 #fake_rabbit = false
 
 # Maximum number of channels to allow (integer value)
@@ -1665,21 +877,23 @@
 # The maximum byte size for an AMQP frame (integer value)
 #frame_max = <None>
 
-# How often to send heartbeats for consumer's connections (integer value)
+# How often to send heartbeats for consumer's connections (integer
+# value)
 #heartbeat_interval = 3
 
 # Arguments passed to ssl.wrap_socket (dict value)
 #ssl_options = <None>
 
-# Set socket timeout in seconds for connection's socket (floating point value)
+# Set socket timeout in seconds for connection's socket (floating
+# point value)
 #socket_timeout = 0.25
 
-# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
-# value)
+# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
+# point value)
 #tcp_user_timeout = 0.25
 
-# Set delay for reconnection to some host which has connection error (floating
-# point value)
+# Set delay for reconnection to some host which has connection error
+# (floating point value)
 #host_connection_reconnect_delay = 0.25
 
 # Connection factory implementation (string value)
@@ -1692,21 +906,22 @@
 # Maximum number of connections to keep queued. (integer value)
 #pool_max_size = 30
 
-# Maximum number of connections to create above `pool_max_size`. (integer
-# value)
+# Maximum number of connections to create above `pool_max_size`.
+# (integer value)
 #pool_max_overflow = 0
 
+# Default number of seconds to wait for a connection to become
+# available (integer value)
+# Default number of seconds to wait for a connections to available
+# (integer value)
 #pool_timeout = 30
 
 # Lifetime of a connection (since creation) in seconds or None for no
-# recycling. Expired connections are closed on acquire. (integer value)
+# recycling. Expired connections are closed on acquire. (integer
+# value)
 #pool_recycle = 600
 
-# Threshold at which inactive (since release) connections are considered stale
-# in seconds or None for no staleness. Stale connections are closed on acquire.
-# (integer value)
+# Threshold at which inactive (since release) connections are
+# considered stale in seconds or None for no staleness. Stale
+# connections are closed on acquire. (integer value)
 #pool_stale = 60
 
 # Default serialization mechanism for serializing/deserializing
@@ -1726,15 +941,16 @@
 # notification listener. (integer value)
 #notification_listener_prefetch_count = 100
 
-# Reconnecting retry count in case of connectivity problem during sending
-# notification, -1 means infinite retry. (integer value)
+# Reconnecting retry count in case of connectivity problem during
+# sending notification, -1 means infinite retry. (integer value)
 #default_notification_retry_attempts = -1
 
-# Reconnecting retry delay in case of connectivity problem during sending
-# notification message (floating point value)
+# Reconnecting retry delay in case of connectivity problem during
+# sending notification message (floating point value)
 #notification_retry_delay = 0.25
 
-# Time to live for rpc queues without consumers in seconds. (integer value)
+# Time to live for rpc queues without consumers in seconds. (integer
+# value)
 #rpc_queue_expiration = 60
 
 # Exchange name for sending RPC messages (string value)
@@ -1743,239 +959,85 @@
 # Exchange name for receiving RPC replies (string value)
 #rpc_reply_exchange = ${control_exchange}_rpc_reply
 
-# Max number of not acknowledged message which RabbitMQ can send to rpc
-# listener. (integer value)
+# Max number of unacknowledged messages which RabbitMQ can send to
+# the rpc listener. (integer value)
 #rpc_listener_prefetch_count = 100
 
-# Max number of not acknowledged message which RabbitMQ can send to rpc reply
-# listener. (integer value)
+# Max number of unacknowledged messages which RabbitMQ can send to
+# the rpc reply listener. (integer value)
 #rpc_reply_listener_prefetch_count = 100
 
-# Reconnecting retry count in case of connectivity problem during sending
-# reply. -1 means infinite retry during rpc_timeout (integer value)
+# Reconnecting retry count in case of connectivity problem during
+# sending reply. -1 means infinite retry during rpc_timeout (integer
+# value)
 #rpc_reply_retry_attempts = -1
 
-# Reconnecting retry delay in case of connectivity problem during sending
-# reply. (floating point value)
+# Reconnecting retry delay in case of connectivity problem during
+# sending reply. (floating point value)
 #rpc_reply_retry_delay = 0.25
 
-# Reconnecting retry count in case of connectivity problem during sending RPC
-# message, -1 means infinite retry. If actual retry attempts in not 0 the rpc
-# request could be processed more than one time (integer value)
+# Reconnecting retry count in case of connectivity problem during
+# sending RPC message, -1 means infinite retry. If the actual number
+# of retry attempts is not 0, the rpc request could be processed more
+# than once (integer value)
 #default_rpc_retry_attempts = -1
 
-# Reconnecting retry delay in case of connectivity problem during sending RPC
-# message (floating point value)
+# Reconnecting retry delay in case of connectivity problem during
+# sending RPC message (floating point value)
 #rpc_retry_delay = 0.25
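The three reconnect knobs above (rabbit_retry_interval = 1, rabbit_retry_backoff = 2, rabbit_interval_max = 30) describe a linearly growing, capped backoff. The exact schedule is internal to oslo.messaging's kombu driver; the sketch below only illustrates how three such parameters could combine:

```python
# Illustrative only: a capped linear backoff built from the defaults of
# rabbit_retry_interval (1), rabbit_retry_backoff (2) and
# rabbit_interval_max (30). Not oslo.messaging's actual formula.
def retry_delays(interval=1, backoff=2, interval_max=30, attempts=6):
    """Delay in seconds before each successive reconnect attempt."""
    return [min(interval + n * backoff, interval_max) for n in range(attempts)]


print(retry_delays())  # [1, 3, 5, 7, 9, 11]
```

With enough attempts the delay saturates at interval_max, so a flapping broker is retried every 30 seconds rather than with ever-growing pauses.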
 
 
-[oslo_messaging_zmq]
-
+[oslo_messaging_notifications]
 #
 # From oslo.messaging
 #
 
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
+# The driver(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+driver = messagingv2
+
+# A URL representing the messaging driver to use for notifications. If
+# not set, we fall back to the same configuration used for RPC.
+# (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+
+# The maximum number of attempts to re-send a notification message
+# which failed to be delivered due to a recoverable error. 0 - No
+# retry, -1 - indefinite (integer value)
+#retry = -1
 
 
 [oslo_middleware]
-
-#
-# From oslo.middleware.http_proxy_to_wsgi
-#
-
-# Whether the application is behind a proxy or not. This determines if the
-# middleware should parse the headers or not. (boolean value)
-#enable_proxy_headers_parsing = false
+#
+# From oslo.middleware
+#
+
+# The maximum body size for each request, in bytes. (integer value)
+# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
+# Deprecated group/name - [DEFAULT]/max_request_body_size
+#max_request_body_size = 114688
+
+# DEPRECATED: The HTTP Header that will be used to determine what the
+# original request protocol scheme was, even if it was hidden by a SSL
+# termination proxy. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#secure_proxy_ssl_header = X-Forwarded-Proto
+
+# Whether the application is behind a proxy or not. This determines if
+# the middleware should parse the headers or not. (boolean value)
+enable_proxy_headers_parsing = True
 
 
 [oslo_policy]
-
-#
-# From oslo.policy
-#
-
-# This option controls whether or not to enforce scope when evaluating
-# policies. If ``True``, the scope of the token used in the request is compared
-# to the ``scope_types`` of the policy being enforced. If the scopes do not
-# match, an ``InvalidScope`` exception will be raised. If ``False``, a message
-# will be logged informing operators that policies are being invoked with
-# mismatching scope. (boolean value)
-#enforce_scope = false
-
-# The file that defines policies. (string value)
-#policy_file = policy.json
-
-# Default rule. Enforced when a requested rule is not found. (string value)
-#policy_default_rule = default
-
-# Directories where policy configuration files are stored. They can be relative
-# to any directory in the search path defined by the config_dir option, or
-# absolute paths. The file defined by policy_file must exist for these
-# directories to be searched.  Missing or empty directories are ignored. (multi
-# valued)
-#policy_dirs = policy.d
-
-# Content Type to send and receive data for REST based policy check (string
-# value)
-# Possible values:
-# application/x-www-form-urlencoded - <No description provided>
-# application/json - <No description provided>
-#remote_content_type = application/x-www-form-urlencoded
-
-# server identity verification for REST based policy check (boolean value)
-#remote_ssl_verify_server_crt = false
-
-# Absolute path to ca cert file for REST based policy check (string value)
-#remote_ssl_ca_crt_file = <None>
-
-# Absolute path to client cert for REST based policy check (string value)
-#remote_ssl_client_crt_file = <None>
-
-# Absolute path client key file REST based policy check (string value)
-#remote_ssl_client_key_file = <None>
 
 
 [quotas]
@@ -2030,7 +1092,6 @@
 
 
 [ssl]
-
 #
 # From oslo.service.sslutils
 #
@@ -2055,3 +1116,5 @@
 # Sets the list of available ciphers. value should be a string in the OpenSSL
 # cipher list format. (string value)
 #ciphers = <None>
+
+[ovs]
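
The notification and middleware values Salt rendered above can be sanity-checked with a short configparser sketch. The section names are assumptions: the diff starts mid-file, so `[oslo_messaging_notifications]` is inferred from the option names and deprecation notes, and `SAMPLE` reproduces only the uncommented lines visible in this log, not the real file on disk.

```python
import configparser

# Hypothetical fragment reproducing the values from the diff above.
SAMPLE = """
[oslo_messaging_notifications]
driver = messagingv2

[oslo_middleware]
enable_proxy_headers_parsing = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# messagingv2 is one of the documented drivers (messaging, messagingv2,
# routing, log, test, noop); getboolean accepts "True"/"true"/"1" alike.
assert cfg.get("oslo_messaging_notifications", "driver") == "messagingv2"
assert cfg.getboolean("oslo_middleware", "enable_proxy_headers_parsing") is True
```

Enabling `enable_proxy_headers_parsing` matches a deployment behind an SSL-terminating proxy, which is consistent with the deprecated `secure_proxy_ssl_header` comment left in place above.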

2018-09-01 23:04:41,729 [salt.state       :1941][INFO    ][13717] Completed state [/etc/neutron/neutron.conf] at time 23:04:41.729140 duration_in_ms=167.213
2018-09-01 23:04:41,729 [salt.state       :1770][INFO    ][13717] Running state [/etc/neutron/l3_agent.ini] at time 23:04:41.729416
2018-09-01 23:04:41,729 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/neutron/l3_agent.ini]
2018-09-01 23:04:41,743 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'neutron/files/queens/l3_agent.ini'
2018-09-01 23:04:41,793 [salt.state       :290 ][INFO    ][13717] File changed:
--- 
+++ 
@@ -1,3 +1,5 @@
+
+
 [DEFAULT]
 
 #
@@ -14,6 +16,7 @@
 
 # The driver used to manage the virtual interface. (string value)
 #interface_driver = <None>
+interface_driver = openvswitch
 
 #
 # From neutron.l3.agent
@@ -37,12 +40,20 @@
 # dvr_snat - <No description provided>
 # legacy - <No description provided>
 # dvr_no_external - <No description provided>
-#agent_mode = legacy
+
+agent_mode = legacy
 
 # TCP Port used by Neutron metadata namespace proxy. (port value)
 # Minimum value: 0
 # Maximum value: 65535
 #metadata_port = 9697
+metadata_port = 8775
+
+# DEPRECATED: Send this many gratuitous ARPs for HA setup, if less than or
+# equal to 0, the feature is disabled (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#send_arp_for_ha = 3
 
 # Indicates that this L3 agent should also handle routers that do not have an
 # external network gateway configured. This option should be True only for a
@@ -162,120 +173,121 @@
 
 # MaxRtrAdvInterval setting for radvd.conf (integer value)
 #max_rtr_adv_interval = 100
-
 #
 # From oslo.log
 #
 
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
 
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
 # Note: This option can be changed without restarting.
 # Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
 
 # Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
 #log_date_format = %Y-%m-%d %H:%M:%S
 
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
 
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
 
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages. This option is ignored if log_config_append is set.
 # (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
 #use_journal = false
 
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
 
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_json = false
 
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_stderr = false
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
-# Format string to use for log messages when context is undefined. (string
-# value)
+# Format string to use for log messages when context is undefined.
+# (string value)
 #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
 
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
 #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
 
-# Prefix each line of exception output with this format. (string value)
+# Prefix each line of exception output with this format. (string
+# value)
 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
 # Defines the format string for %(user_identity)s that is used in
 # logging_context_format_string. (string value)
 #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
 #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string
-# value)
+# The format for an instance that is passed with the log message.
+# (string value)
 #instance_format = "[instance: %(uuid)s] "
 
-# The format for an instance UUID that is passed with the log message. (string
-# value)
+# The format for an instance UUID that is passed with the log message.
+# (string value)
 #instance_uuid_format = "[instance: %(uuid)s] "
 
 # Interval, number of seconds, of log rate limiting. (integer value)
 #rate_limit_interval = 0
 
-# Maximum number of logged messages per rate_limit_interval. (integer value)
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
 #rate_limit_burst = 0
 
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
 #rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
-
 
 [agent]
 
@@ -314,8 +326,8 @@
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native
@@ -342,8 +354,3 @@
 # will fail with ALARMCLOCK error. (integer value)
 # Deprecated group/name - [DEFAULT]/ovs_vsctl_timeout
 #ovsdb_timeout = 10
-
-# The maximum number of MAC addresses to learn on a bridge managed by the
-# Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might
-# be overridden by Open vSwitch according to the documentation. (integer value)
-#bridge_mac_table_size = 50000
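
The l3_agent.ini values set in the diff above can be checked the same way. This is a sketch only: `SAMPLE` repeats just the uncommented lines from the diff, and the allowed `agent_mode` set is taken from the option's own "Possible values" comment.

```python
import configparser

# Hypothetical fragment with the values Salt wrote above.
SAMPLE = """
[DEFAULT]
interface_driver = openvswitch
agent_mode = legacy
metadata_port = 8775
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# agent_mode must be one of the values listed in the config comments.
assert cfg.get("DEFAULT", "agent_mode") in {"dvr", "dvr_snat", "legacy", "dvr_no_external"}

# metadata_port is documented as a port value in 0..65535.
port = cfg.getint("DEFAULT", "metadata_port")
assert 0 <= port <= 65535
```

The notable deviation in this diff is `metadata_port = 8775` instead of the 9697 default, presumably to point the namespace proxy at the nova metadata service port.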

2018-09-01 23:04:41,794 [salt.state       :1941][INFO    ][13717] Completed state [/etc/neutron/l3_agent.ini] at time 23:04:41.794017 duration_in_ms=64.6
2018-09-01 23:04:41,794 [salt.state       :1770][INFO    ][13717] Running state [/etc/neutron/plugins/ml2/openvswitch_agent.ini] at time 23:04:41.794350
2018-09-01 23:04:41,794 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/neutron/plugins/ml2/openvswitch_agent.ini]
2018-09-01 23:04:41,808 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'neutron/files/queens/openvswitch_agent.ini'
2018-09-01 23:04:41,875 [salt.state       :290 ][INFO    ][13717] File changed:
--- 
+++ 
@@ -1,118 +1,121 @@
+
+
 [DEFAULT]
-
 #
 # From oslo.log
 #
 
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
 
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
 # Note: This option can be changed without restarting.
 # Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
 
 # Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
 #log_date_format = %Y-%m-%d %H:%M:%S
 
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
 
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
 
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages. This option is ignored if log_config_append is set.
 # (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
 #use_journal = false
 
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
 
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_json = false
 
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_stderr = false
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
-# Format string to use for log messages when context is undefined. (string
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
 # value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string value)
 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
 # Defines the format string for %(user_identity)s that is used in
 # logging_context_format_string. (string value)
 #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
 #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string
-# value)
+# The format for an instance that is passed with the log message.
+# (string value)
 #instance_format = "[instance: %(uuid)s] "
 
-# The format for an instance UUID that is passed with the log message. (string
-# value)
+# The format for an instance UUID that is passed with the log message.
+# (string value)
 #instance_uuid_format = "[instance: %(uuid)s] "
 
 # Interval, number of seconds, of log rate limiting. (integer value)
 #rate_limit_interval = 0
 
-# Maximum number of logged messages per rate_limit_interval. (integer value)
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
 #rate_limit_burst = 0
 
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
 #rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
-
 
 [agent]
 
@@ -129,24 +132,27 @@
 
 # Network types supported by the agent (gre and/or vxlan). (list value)
 #tunnel_types =
+tunnel_types = vxlan
 
 # The UDP port to use for VXLAN tunnels. (port value)
 # Minimum value: 0
 # Maximum value: 65535
 #vxlan_udp_port = 4789
+vxlan_udp_port = 4789
 
 # MTU size of veth interfaces (integer value)
 #veth_mtu = 9000
 
-# Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve
-# tunnel scalability. (boolean value)
+# Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve tunnel scalability. (boolean value)
 #l2_population = false
+l2_population = True
 
 # Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2
 # l2population driver. Allows the switch (when supporting an overlay) to
 # respond to an ARP request locally without performing a costly ARP broadcast
 # into the overlay. (boolean value)
 #arp_responder = false
+arp_responder = True
 
 # Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying
 # GRE/VXLAN tunnel. (boolean value)
@@ -154,10 +160,12 @@
 
 # Make the l2 agent run in DVR mode. (boolean value)
 #enable_distributed_routing = false
+enable_distributed_routing = False
 
 # Reset flow table on start. Setting this to True will cause brief traffic
 # interruption. (boolean value)
 #drop_flows_on_start = false
+drop_flows_on_start = False
 
 # Set or un-set the tunnel header checksum  on outgoing IP packet carrying
 # GRE/VXLAN tunnel. (boolean value)
@@ -169,7 +177,9 @@
 #agent_type = Open vSwitch agent
 
 # Extensions list to use (list value)
-#extensions =
+
+
+extensions=
 
 
 [network_log]
@@ -202,9 +212,11 @@
 # VIFs are attached to this bridge and then 'patched' according to their
 # network connectivity. (string value)
 #integration_bridge = br-int
+integration_bridge = br-int
 
 # Tunnel bridge to use. (string value)
 #tunnel_bridge = br-tun
+tunnel_bridge = br-tun
 
 # Peer patch port in integration bridge for tunnel bridge. (string value)
 #int_peer_patch_port = patch-tun
@@ -218,6 +230,7 @@
 # in the ML2 plug-in configuration file on the neutron server node(s). (IP
 # address value)
 #local_ip = <None>
+local_ip = 10.1.0.6
 
 # Comma-separated list of <physical_network>:<bridge> tuples mapping physical
 # network names to the agent's node-specific Open vSwitch bridge names to be
@@ -227,7 +240,8 @@
 # have mappings to appropriate bridges on each agent. Note: If you remove a
 # bridge from this mapping, make sure to disconnect it from the integration
 # bridge as it won't be managed by the agent anymore. (list value)
-#bridge_mappings =
+
+bridge_mappings = physnet1:br-floating
 
 # Use veths instead of patch ports to interconnect the integration bridge to
 # physical networks. Support kernel without Open vSwitch patch port support so
@@ -273,8 +287,8 @@
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native
@@ -306,11 +320,13 @@
 
 # Driver for security groups firewall in the L2 agent (string value)
 #firewall_driver = <None>
+firewall_driver = openvswitch
 
 # Controls whether the neutron security group API is enabled in the server. It
 # should be false when using no security groups or using the nova security
 # group API. (boolean value)
 #enable_security_group = true
+enable_security_group = True
 
 # Use ipset to speed-up the iptables based security groups. Enabling ipset
 # support requires that ipset is installed on L2 agent node. (boolean value)
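
The tunnel and bridge settings written to openvswitch_agent.ini above can be validated with one more sketch. `SAMPLE` again reproduces only the values visible in the diff; parsing `bridge_mappings` as comma-separated `physnet:bridge` pairs mirrors the format described in that option's comment, not the agent's actual parser.

```python
import configparser
import ipaddress

# Hypothetical fragment with the values from the diff above.
SAMPLE = """
[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = True

[ovs]
local_ip = 10.1.0.6
bridge_mappings = physnet1:br-floating
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# local_ip must be a valid endpoint address for the VXLAN tunnels;
# ip_address() raises ValueError on anything malformed.
ipaddress.ip_address(cfg.get("ovs", "local_ip"))

# bridge_mappings: comma-separated <physical_network>:<bridge> tuples.
mappings = dict(
    pair.strip().split(":", 1)
    for pair in cfg.get("ovs", "bridge_mappings").split(",")
    if pair.strip()
)
assert mappings == {"physnet1": "br-floating"}
assert 0 <= cfg.getint("agent", "vxlan_udp_port") <= 65535
```

Together with `l2_population = True` and `arp_responder = True` from the same diff, this is the usual VXLAN-with-l2population layout: remote MACs/IPs are learned via ML2 rather than flooded into the overlay.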

2018-09-01 23:04:41,876 [salt.state       :1941][INFO    ][13717] Completed state [/etc/neutron/plugins/ml2/openvswitch_agent.ini] at time 23:04:41.876475 duration_in_ms=82.12
2018-09-01 23:04:41,876 [salt.state       :1770][INFO    ][13717] Running state [/etc/neutron/dhcp_agent.ini] at time 23:04:41.876856
2018-09-01 23:04:41,877 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/neutron/dhcp_agent.ini]
2018-09-01 23:04:41,891 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'neutron/files/queens/dhcp_agent.ini'
2018-09-01 23:04:41,958 [salt.state       :290 ][INFO    ][13717] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -14,6 +15,7 @@
 
 # The driver used to manage the virtual interface. (string value)
 #interface_driver = <None>
+interface_driver = openvswitch
 
 #
 # From neutron.dhcp.agent
@@ -23,6 +25,7 @@
 # transient notification or RPC errors. The interval is number of seconds
 # between attempts. (integer value)
 #resync_interval = 5
+resync_interval = 30
 
 # The driver used to manage the DHCP server. (string value)
 #dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
@@ -35,6 +38,7 @@
 # This option doesn't have any effect when force_metadata is set to True.
 # (boolean value)
 #enable_isolated_metadata = false
+enable_isolated_metadata = True
 
 # In some cases the Neutron router is not present to provide the metadata IP
 # but the DHCP server can be used to provide this info. Setting this value will
@@ -50,6 +54,7 @@
 # 169.254.169.254 through a router. This option requires
 # enable_isolated_metadata = True. (boolean value)
 #enable_metadata_network = false
+enable_metadata_network = False
 
 # Number of threads to use during sync process. Should not exceed connection
 # pool size configured on server. (integer value)
@@ -82,128 +87,121 @@
 
 # Use broadcast in DHCP replies. (boolean value)
 #dhcp_broadcast_reply = false
-
-# DHCP renewal time T1 (in seconds). If set to 0, it will default to half of
-# the lease time. (integer value)
-#dhcp_renewal_time = 0
-
-# DHCP rebinding time T2 (in seconds). If set to 0, it will default to 7/8 of
-# the lease time. (integer value)
-#dhcp_rebinding_time = 0
-
 #
 # From oslo.log
 #
 
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
 
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
 # Note: This option can be changed without restarting.
 # Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
 
 # Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
 #log_date_format = %Y-%m-%d %H:%M:%S
 
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
 
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
 
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages.This option is ignored if log_config_append is set.
 # (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
 #use_journal = false
 
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
 
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_json = false
 
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_stderr = false
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
-# Format string to use for log messages when context is undefined. (string
-# value)
+# Format string to use for log messages when context is undefined.
+# (string value)
 #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
 
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
 #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
 
-# Prefix each line of exception output with this format. (string value)
+# Prefix each line of exception output with this format. (string
+# value)
 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
 # Defines the format string for %(user_identity)s that is used in
 # logging_context_format_string. (string value)
 #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
 #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string
-# value)
+# The format for an instance that is passed with the log message.
+# (string value)
 #instance_format = "[instance: %(uuid)s] "
 
-# The format for an instance UUID that is passed with the log message. (string
-# value)
+# The format for an instance UUID that is passed with the log message.
+# (string value)
 #instance_uuid_format = "[instance: %(uuid)s] "
 
 # Interval, number of seconds, of log rate limiting. (integer value)
 #rate_limit_interval = 0
 
-# Maximum number of logged messages per rate_limit_interval. (integer value)
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
 #rate_limit_burst = 0
 
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
 #rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
-
 
 [agent]
 
@@ -226,7 +224,6 @@
 # Log agent heartbeats (boolean value)
 #log_agent_heartbeats = false
 
-
 [ovs]
 
 #
@@ -235,8 +232,8 @@
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native
@@ -263,8 +260,3 @@
 # will fail with ALARMCLOCK error. (integer value)
 # Deprecated group/name - [DEFAULT]/ovs_vsctl_timeout
 #ovsdb_timeout = 10
-
-# The maximum number of MAC addresses to learn on a bridge managed by the
-# Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might
-# be overridden by Open vSwitch according to the documentation. (integer value)
-#bridge_mac_table_size = 50000

2018-09-01 23:04:41,959 [salt.state       :1941][INFO    ][13717] Completed state [/etc/neutron/dhcp_agent.ini] at time 23:04:41.959003 duration_in_ms=82.147
2018-09-01 23:04:41,959 [salt.state       :1770][INFO    ][13717] Running state [/etc/neutron/metadata_agent.ini] at time 23:04:41.959275
2018-09-01 23:04:41,959 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/neutron/metadata_agent.ini]
2018-09-01 23:04:41,973 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'neutron/files/queens/metadata_agent.ini'
2018-09-01 23:04:42,019 [salt.state       :290 ][INFO    ][13717] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -20,6 +21,7 @@
 
 # IP address or DNS name of Nova metadata server. (unknown value)
 #nova_metadata_host = 127.0.0.1
+nova_metadata_host = 10.167.4.35
 
 # TCP Port used by Nova metadata server. (port value)
 # Minimum value: 0
@@ -32,12 +34,14 @@
 # Server. NOTE: Nova uses the same config key, but in [neutron] section.
 # (string value)
 #metadata_proxy_shared_secret =
+metadata_proxy_shared_secret = opnfv_secret
 
 # Protocol to access nova metadata, http or https (string value)
 # Possible values:
 # http - <No description provided>
 # https - <No description provided>
 #nova_metadata_protocol = http
+nova_metadata_protocol = http
 
 # Allow to perform insecure SSL (https) requests to nova metadata (boolean
 # value)
@@ -55,11 +59,7 @@
 # root, 'group': set metadata proxy socket mode to 0o664, to use when
 # metadata_proxy_group is agent effective group or root, 'all': set metadata
 # proxy socket mode to 0o666, to use otherwise. (string value)
-# Possible values:
-# deduce - <No description provided>
-# user - <No description provided>
-# group - <No description provided>
-# all - <No description provided>
+# Allowed values: deduce, user, group, all
 #metadata_proxy_socket_mode = deduce
 
 # Number of separate worker processes for metadata server (defaults to half of
@@ -69,120 +69,121 @@
 # Number of backlog requests to configure the metadata server socket with
 # (integer value)
 #metadata_backlog = 4096
-
 #
 # From oslo.log
 #
 
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
 
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
 # Note: This option can be changed without restarting.
 # Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
 
 # Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
 #log_date_format = %Y-%m-%d %H:%M:%S
 
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
 
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
 
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages.This option is ignored if log_config_append is set.
 # (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
 #use_journal = false
 
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
 
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_json = false
 
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_stderr = false
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
-# Format string to use for log messages when context is undefined. (string
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
 # value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string value)
 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
 # Defines the format string for %(user_identity)s that is used in
 # logging_context_format_string. (string value)
 #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
 #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string
-# value)
+# The format for an instance that is passed with the log message.
+# (string value)
 #instance_format = "[instance: %(uuid)s] "
 
-# The format for an instance UUID that is passed with the log message. (string
-# value)
+# The format for an instance UUID that is passed with the log message.
+# (string value)
 #instance_uuid_format = "[instance: %(uuid)s] "
 
 # Interval, number of seconds, of log rate limiting. (integer value)
 #rate_limit_interval = 0
 
-# Maximum number of logged messages per rate_limit_interval. (integer value)
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
 #rate_limit_burst = 0
 
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
 #rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
-
 
 [agent]
 
@@ -197,85 +198,3 @@
 
 # Log agent heartbeats (boolean value)
 #log_agent_heartbeats = false
-
-
-[cache]
-
-#
-# From oslo.cache
-#
-
-# Prefix for building the configuration dictionary for the cache region. This
-# should not need to be changed unless there is another dogpile.cache region
-# with the same configuration name. (string value)
-#config_prefix = cache.oslo
-
-# Default TTL, in seconds, for any cached item in the dogpile.cache region.
-# This applies to any cached method that doesn't have an explicit cache
-# expiration time defined for it. (integer value)
-#expiration_time = 600
-
-# Cache backend module. For eventlet-based or environments with hundreds of
-# threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is
-# recommended. For environments with less than 100 threaded servers, Memcached
-# (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test
-# environments with a single instance of the server can use the
-# dogpile.cache.memory backend. (string value)
-# Possible values:
-# oslo_cache.memcache_pool - <No description provided>
-# oslo_cache.dict - <No description provided>
-# oslo_cache.mongo - <No description provided>
-# oslo_cache.etcd3gw - <No description provided>
-# dogpile.cache.memcached - <No description provided>
-# dogpile.cache.pylibmc - <No description provided>
-# dogpile.cache.bmemcached - <No description provided>
-# dogpile.cache.dbm - <No description provided>
-# dogpile.cache.redis - <No description provided>
-# dogpile.cache.memory - <No description provided>
-# dogpile.cache.memory_pickle - <No description provided>
-# dogpile.cache.null - <No description provided>
-#backend = dogpile.cache.null
-
-# Arguments supplied to the backend module. Specify this option once per
-# argument to be passed to the dogpile.cache backend. Example format:
-# "<argname>:<value>". (multi valued)
-#backend_argument =
-
-# Proxy classes to import that will affect the way the dogpile.cache backend
-# functions. See the dogpile.cache documentation on changing-backend-behavior.
-# (list value)
-#proxies =
-
-# Global toggle for caching. (boolean value)
-#enabled = false
-
-# Extra debugging from the cache backend (cache keys, get/set/delete/etc
-# calls). This is only really useful if you need to see the specific cache-
-# backend get/set/delete calls with the keys/values.  Typically this should be
-# left set to false. (boolean value)
-#debug_cache_backend = false
-
-# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
-# oslo_cache.memcache_pool backends only). (list value)
-#memcache_servers = localhost:11211
-
-# Number of seconds memcached server is considered dead before it is tried
-# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
-# (integer value)
-#memcache_dead_retry = 300
-
-# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
-# oslo_cache.memcache_pool backends only). (integer value)
-#memcache_socket_timeout = 3
-
-# Max total number of open connections to every memcached server.
-# (oslo_cache.memcache_pool backend only). (integer value)
-#memcache_pool_maxsize = 10
-
-# Number of seconds a connection to memcached is held unused in the pool before
-# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
-#memcache_pool_unused_timeout = 60
-
-# Number of seconds that an operation will wait to get a memcache client
-# connection. (integer value)
-#memcache_pool_connection_get_timeout = 10

2018-09-01 23:04:42,020 [salt.state       :1941][INFO    ][13717] Completed state [/etc/neutron/metadata_agent.ini] at time 23:04:42.020011 duration_in_ms=60.735
2018-09-01 23:04:42,020 [salt.state       :1770][INFO    ][13717] Running state [/etc/default/neutron-metadata-agent] at time 23:04:42.020295
2018-09-01 23:04:42,020 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/default/neutron-metadata-agent]
2018-09-01 23:04:42,033 [salt.fileclient  :1215][INFO    ][13717] Fetching file from saltenv 'base', ** done ** 'neutron/files/default'
2018-09-01 23:04:42,038 [salt.state       :290 ][INFO    ][13717] File changed:
New file
2018-09-01 23:04:42,038 [salt.state       :1941][INFO    ][13717] Completed state [/etc/default/neutron-metadata-agent] at time 23:04:42.038121 duration_in_ms=17.825
2018-09-01 23:04:42,038 [salt.state       :1770][INFO    ][13717] Running state [/etc/default/neutron-dhcp-agent] at time 23:04:42.038394
2018-09-01 23:04:42,038 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/default/neutron-dhcp-agent]
2018-09-01 23:04:42,050 [salt.state       :290 ][INFO    ][13717] File changed:
New file
2018-09-01 23:04:42,051 [salt.state       :1941][INFO    ][13717] Completed state [/etc/default/neutron-dhcp-agent] at time 23:04:42.050973 duration_in_ms=12.579
2018-09-01 23:04:42,051 [salt.state       :1770][INFO    ][13717] Running state [/etc/default/neutron-openvswitch-agent] at time 23:04:42.051216
2018-09-01 23:04:42,051 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/default/neutron-openvswitch-agent]
2018-09-01 23:04:42,064 [salt.state       :290 ][INFO    ][13717] File changed:
New file
2018-09-01 23:04:42,064 [salt.state       :1941][INFO    ][13717] Completed state [/etc/default/neutron-openvswitch-agent] at time 23:04:42.064238 duration_in_ms=13.021
2018-09-01 23:04:42,064 [salt.state       :1770][INFO    ][13717] Running state [/etc/default/neutron-l3-agent] at time 23:04:42.064526
2018-09-01 23:04:42,064 [salt.state       :1803][INFO    ][13717] Executing state file.managed for [/etc/default/neutron-l3-agent]
2018-09-01 23:04:42,080 [salt.state       :290 ][INFO    ][13717] File changed:
New file
2018-09-01 23:04:42,080 [salt.state       :1941][INFO    ][13717] Completed state [/etc/default/neutron-l3-agent] at time 23:04:42.080219 duration_in_ms=15.693
2018-09-01 23:04:42,082 [salt.state       :1770][INFO    ][13717] Running state [neutron-metadata-agent] at time 23:04:42.082025
2018-09-01 23:04:42,082 [salt.state       :1803][INFO    ][13717] Executing state service.running for [neutron-metadata-agent]
2018-09-01 23:04:42,082 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'status', 'neutron-metadata-agent.service', '-n', '0'] in directory '/root'
2018-09-01 23:04:42,094 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'neutron-metadata-agent.service'] in directory '/root'
2018-09-01 23:04:42,102 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-enabled', 'neutron-metadata-agent.service'] in directory '/root'
2018-09-01 23:04:42,110 [salt.state       :290 ][INFO    ][13717] The service neutron-metadata-agent is already running
2018-09-01 23:04:42,111 [salt.state       :1941][INFO    ][13717] Completed state [neutron-metadata-agent] at time 23:04:42.111140 duration_in_ms=29.113
2018-09-01 23:04:42,111 [salt.state       :1770][INFO    ][13717] Running state [neutron-metadata-agent] at time 23:04:42.111292
2018-09-01 23:04:42,111 [salt.state       :1803][INFO    ][13717] Executing state service.mod_watch for [neutron-metadata-agent]
2018-09-01 23:04:42,111 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'neutron-metadata-agent.service'] in directory '/root'
2018-09-01 23:04:42,120 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-metadata-agent.service'] in directory '/root'
2018-09-01 23:04:42,816 [salt.state       :290 ][INFO    ][13717] {'neutron-metadata-agent': True}
2018-09-01 23:04:42,816 [salt.state       :1941][INFO    ][13717] Completed state [neutron-metadata-agent] at time 23:04:42.816511 duration_in_ms=705.218
2018-09-01 23:04:42,817 [salt.state       :1770][INFO    ][13717] Running state [neutron-dhcp-agent] at time 23:04:42.817441
2018-09-01 23:04:42,817 [salt.state       :1803][INFO    ][13717] Executing state service.running for [neutron-dhcp-agent]
2018-09-01 23:04:42,818 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'status', 'neutron-dhcp-agent.service', '-n', '0'] in directory '/root'
2018-09-01 23:04:42,829 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'neutron-dhcp-agent.service'] in directory '/root'
2018-09-01 23:04:42,838 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-enabled', 'neutron-dhcp-agent.service'] in directory '/root'
2018-09-01 23:04:42,845 [salt.state       :290 ][INFO    ][13717] The service neutron-dhcp-agent is already running
2018-09-01 23:04:42,845 [salt.state       :1941][INFO    ][13717] Completed state [neutron-dhcp-agent] at time 23:04:42.845930 duration_in_ms=28.487
2018-09-01 23:04:42,846 [salt.state       :1770][INFO    ][13717] Running state [neutron-dhcp-agent] at time 23:04:42.846116
2018-09-01 23:04:42,846 [salt.state       :1803][INFO    ][13717] Executing state service.mod_watch for [neutron-dhcp-agent]
2018-09-01 23:04:42,846 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'neutron-dhcp-agent.service'] in directory '/root'
2018-09-01 23:04:42,854 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-dhcp-agent.service'] in directory '/root'
2018-09-01 23:04:42,955 [salt.state       :290 ][INFO    ][13717] {'neutron-dhcp-agent': True}
2018-09-01 23:04:42,955 [salt.state       :1941][INFO    ][13717] Completed state [neutron-dhcp-agent] at time 23:04:42.955477 duration_in_ms=109.36
2018-09-01 23:04:42,956 [salt.state       :1770][INFO    ][13717] Running state [neutron-openvswitch-agent] at time 23:04:42.956304
2018-09-01 23:04:42,956 [salt.state       :1803][INFO    ][13717] Executing state service.running for [neutron-openvswitch-agent]
2018-09-01 23:04:42,957 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'status', 'neutron-openvswitch-agent.service', '-n', '0'] in directory '/root'
2018-09-01 23:04:42,966 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-09-01 23:04:42,974 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-enabled', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-09-01 23:04:42,984 [salt.state       :290 ][INFO    ][13717] The service neutron-openvswitch-agent is already running
2018-09-01 23:04:42,984 [salt.state       :1941][INFO    ][13717] Completed state [neutron-openvswitch-agent] at time 23:04:42.984355 duration_in_ms=28.051
2018-09-01 23:04:42,984 [salt.state       :1770][INFO    ][13717] Running state [neutron-openvswitch-agent] at time 23:04:42.984530
2018-09-01 23:04:42,984 [salt.state       :1803][INFO    ][13717] Executing state service.mod_watch for [neutron-openvswitch-agent]
2018-09-01 23:04:42,985 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-09-01 23:04:42,993 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-09-01 23:04:43,015 [salt.state       :290 ][INFO    ][13717] {'neutron-openvswitch-agent': True}
2018-09-01 23:04:43,015 [salt.state       :1941][INFO    ][13717] Completed state [neutron-openvswitch-agent] at time 23:04:43.015470 duration_in_ms=30.939
2018-09-01 23:04:43,016 [salt.state       :1770][INFO    ][13717] Running state [neutron-l3-agent] at time 23:04:43.016372
2018-09-01 23:04:43,016 [salt.state       :1803][INFO    ][13717] Executing state service.running for [neutron-l3-agent]
2018-09-01 23:04:43,017 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'status', 'neutron-l3-agent.service', '-n', '0'] in directory '/root'
2018-09-01 23:04:43,026 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'neutron-l3-agent.service'] in directory '/root'
2018-09-01 23:04:43,034 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-enabled', 'neutron-l3-agent.service'] in directory '/root'
2018-09-01 23:04:43,042 [salt.state       :290 ][INFO    ][13717] The service neutron-l3-agent is already running
2018-09-01 23:04:43,042 [salt.state       :1941][INFO    ][13717] Completed state [neutron-l3-agent] at time 23:04:43.042655 duration_in_ms=26.283
2018-09-01 23:04:43,042 [salt.state       :1770][INFO    ][13717] Running state [neutron-l3-agent] at time 23:04:43.042812
2018-09-01 23:04:43,043 [salt.state       :1803][INFO    ][13717] Executing state service.mod_watch for [neutron-l3-agent]
2018-09-01 23:04:43,043 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemctl', 'is-active', 'neutron-l3-agent.service'] in directory '/root'
2018-09-01 23:04:43,050 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13717] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-l3-agent.service'] in directory '/root'
2018-09-01 23:04:44,126 [salt.state       :290 ][INFO    ][13717] {'neutron-l3-agent': True}
2018-09-01 23:04:44,126 [salt.state       :1941][INFO    ][13717] Completed state [neutron-l3-agent] at time 23:04:44.126602 duration_in_ms=1083.79
2018-09-01 23:04:44,128 [salt.minion      :1708][INFO    ][13717] Returning information for job: 20180901230354843185
2018-09-01 23:04:44,841 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command test.ping with jid 20180901230444831226
2018-09-01 23:04:44,851 [salt.minion      :1431][INFO    ][18132] Starting a new job with PID 18132
2018-09-01 23:04:44,865 [salt.minion      :1708][INFO    ][18132] Returning information for job: 20180901230444831226
2018-09-01 23:04:45,022 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command match.pillar with jid 20180901230445005746
2018-09-01 23:04:45,030 [salt.minion      :1431][INFO    ][18137] Starting a new job with PID 18137
2018-09-01 23:04:45,033 [salt.minion      :1708][INFO    ][18137] Returning information for job: 20180901230445005746
2018-09-01 23:04:45,643 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.sls with jid 20180901230445633919
2018-09-01 23:04:45,651 [salt.minion      :1431][INFO    ][18151] Starting a new job with PID 18151
2018-09-01 23:04:49,335 [salt.state       :905 ][INFO    ][18151] Loading fresh modules for state activity
2018-09-01 23:04:49,831 [salt.fileclient  :1215][INFO    ][18151] Fetching file from saltenv 'base', ** done ** 'nova/init.sls'
2018-09-01 23:04:49,850 [salt.fileclient  :1215][INFO    ][18151] Fetching file from saltenv 'base', ** done ** 'nova/compute.sls'
2018-09-01 23:04:50,763 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230450749724
2018-09-01 23:04:50,771 [salt.minion      :1431][INFO    ][18324] Starting a new job with PID 18324
2018-09-01 23:04:50,780 [salt.minion      :1708][INFO    ][18324] Returning information for job: 20180901230450749724
2018-09-01 23:04:51,474 [salt.state       :1770][INFO    ][18151] Running state [libvirtd] at time 23:04:51.474856
2018-09-01 23:04:51,475 [salt.state       :1803][INFO    ][18151] Executing state group.present for [libvirtd]
2018-09-01 23:04:51,475 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['groupadd', '-r', 'libvirtd'] in directory '/root'
2018-09-01 23:04:51,583 [salt.state       :290 ][INFO    ][18151] {'passwd': 'x', 'gid': 999, 'name': 'libvirtd', 'members': []}
2018-09-01 23:04:51,583 [salt.state       :1941][INFO    ][18151] Completed state [libvirtd] at time 23:04:51.583426 duration_in_ms=108.57
2018-09-01 23:04:51,583 [salt.state       :1770][INFO    ][18151] Running state [nova] at time 23:04:51.583670
2018-09-01 23:04:51,583 [salt.state       :1803][INFO    ][18151] Executing state group.present for [nova]
2018-09-01 23:04:51,584 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['groupadd', '-g 303', '-r', 'nova'] in directory '/root'
2018-09-01 23:04:51,672 [salt.state       :290 ][INFO    ][18151] {'passwd': 'x', 'gid': 303, 'name': 'nova', 'members': []}
2018-09-01 23:04:51,673 [salt.state       :1941][INFO    ][18151] Completed state [nova] at time 23:04:51.673092 duration_in_ms=89.42
2018-09-01 23:04:51,673 [salt.state       :1770][INFO    ][18151] Running state [nova] at time 23:04:51.673521
2018-09-01 23:04:51,673 [salt.state       :1803][INFO    ][18151] Executing state user.present for [nova]
2018-09-01 23:04:51,674 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['useradd', '-s', '/bin/bash', '-u', '303', '-g', '303', '-m', '-d', '/var/lib/nova', '-r', 'nova'] in directory '/root'
2018-09-01 23:04:51,788 [salt.state       :290 ][INFO    ][18151] {'shell': '/bin/bash', 'workphone': '', 'uid': 303, 'passwd': 'x', 'roomnumber': '', 'groups': ['nova'], 'home': '/var/lib/nova', 'name': 'nova', 'gid': 303, 'fullname': '', 'homephone': ''}
2018-09-01 23:04:51,788 [salt.state       :1941][INFO    ][18151] Completed state [nova] at time 23:04:51.788714 duration_in_ms=115.193
2018-09-01 23:04:51,789 [salt.state       :1770][INFO    ][18151] Running state [nova-common] at time 23:04:51.789148
2018-09-01 23:04:51,789 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [nova-common]
2018-09-01 23:04:51,790 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:04:52,116 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['apt-cache', '-q', 'policy', 'nova-common'] in directory '/root'
2018-09-01 23:04:52,179 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-09-01 23:04:53,858 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:04:53,876 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-common'] in directory '/root'
2018-09-01 23:05:00,938 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230500927079
2018-09-01 23:05:00,948 [salt.minion      :1431][INFO    ][18954] Starting a new job with PID 18954
2018-09-01 23:05:00,959 [salt.minion      :1708][INFO    ][18954] Returning information for job: 20180901230500927079
2018-09-01 23:05:11,115 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230511102975
2018-09-01 23:05:11,125 [salt.minion      :1431][INFO    ][19263] Starting a new job with PID 19263
2018-09-01 23:05:11,141 [salt.minion      :1708][INFO    ][19263] Returning information for job: 20180901230511102975
2018-09-01 23:05:21,298 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230521287269
2018-09-01 23:05:21,309 [salt.minion      :1431][INFO    ][20073] Starting a new job with PID 20073
2018-09-01 23:05:21,321 [salt.minion      :1708][INFO    ][20073] Returning information for job: 20180901230521287269
2018-09-01 23:05:31,490 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230531478956
2018-09-01 23:05:31,500 [salt.minion      :1431][INFO    ][20343] Starting a new job with PID 20343
2018-09-01 23:05:31,509 [salt.minion      :1708][INFO    ][20343] Returning information for job: 20180901230531478956
2018-09-01 23:05:31,724 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:05:31,754 [salt.state       :290 ][INFO    ][18151] Made the following changes:
'python-pypowervm' changed from 'absent' to '1.1.10-1.0~u16.04+mcp2'
'libxinerama1' changed from 'absent' to '2:1.1.3-1'
'libavahi-common3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.2'
'python-junitxml' changed from 'absent' to '0.6-1.1ubuntu1'
'libthai-data' changed from 'absent' to '0.1.24-2'
'python2.7-gobject-2' changed from 'absent' to '1'
'fonts-dejavu-core' changed from 'absent' to '2.35-1'
'python2.7-cairo' changed from 'absent' to '1'
'libcups2' changed from 'absent' to '2.1.3-4ubuntu0.5'
'libgtk2.0-bin' changed from 'absent' to '2.24.30-1ubuntu1.16.04.2'
'libxdamage1' changed from 'absent' to '1:1.1.4-2'
'libxrender1' changed from 'absent' to '1:0.9.9-0ubuntu1'
'python-nova' changed from 'absent' to '2:17.0.5-5~u16.04+mcp142'
'fontconfig' changed from 'absent' to '2.11.94-0ubuntu1.1'
'libpixman-1-0' changed from 'absent' to '0.33.6-1'
'libxi6' changed from 'absent' to '2:1.7.6-1'
'python3-mimeparse' changed from 'absent' to '0.1.4-1build1'
'libsubunit-diff-perl' changed from 'absent' to '1'
'python-gobject-2' changed from 'absent' to '2.28.6-12ubuntu1'
'python3-fixtures' changed from 'absent' to '3.0.0-1.1~u16.04+mcp2'
'websockify' changed from 'absent' to '0.8.0+dfsg1-7~u16.04+mcp2'
'python3-unittest2' changed from 'absent' to '1.1.0-6.1'
'libcairo2' changed from 'absent' to '1.14.6-1'
'libxcomposite1' changed from 'absent' to '1:0.4.4-1'
'python3-extras' changed from 'absent' to '1.0.0-2.0~u16.04+mcp1'
'libpangocairo-1.0-0' changed from 'absent' to '1.38.1-1'
'python-cairo' changed from 'absent' to '1.8.8-2'
'libgtk2.0-0' changed from 'absent' to '2.24.30-1ubuntu1.16.04.2'
'libpango-1.0-0' changed from 'absent' to '1.38.1-1'
'python3-linecache2' changed from 'absent' to '1.0.0-2'
'libgraphite2-3' changed from 'absent' to '1.3.10-0ubuntu0.16.04.1'
'libsubunit-perl' changed from 'absent' to '1.2.0-0ubuntu2~cloud0'
'libxcb-render0' changed from 'absent' to '1.11.1-1ubuntu1'
'libpangoft2-1.0-0' changed from 'absent' to '1.38.1-1'
'python-microversion-parse' changed from 'absent' to '0.1.3-2.1~u16.04+mcp2'
'python3-pbr' changed from 'absent' to '3.1.1-3.0~u16.04+mcp1'
'hicolor-icon-theme' changed from 'absent' to '0.15-0ubuntu1.1'
'libatk1.0-0' changed from 'absent' to '2.18.0-1'
'libgraphite2-2.0.0' changed from 'absent' to '1'
'python-os-vif' changed from 'absent' to '1.9.0-1.0~u16.04+mcp3'
'libavahi-client3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.2'
'python3-subunit' changed from 'absent' to '1.2.0-0ubuntu2~cloud0'
'libxcb-shm0' changed from 'absent' to '1.11.1-1ubuntu1'
'libthai0' changed from 'absent' to '0.1.24-2'
'python3-testtools' changed from 'absent' to '2.3.0-1.0~u16.04+mcp1'
'libxfixes3' changed from 'absent' to '1:5.0.1-2'
'python-subunit' changed from 'absent' to '1.2.0-0ubuntu2~cloud0'
'libxrandr2' changed from 'absent' to '2:1.5.0-1'
'libgdk-pixbuf2.0-0' changed from 'absent' to '2.32.2-1ubuntu1.5'
'sqlite3' changed from 'absent' to '3.11.0-1ubuntu1'
'libgtk2.0-common' changed from 'absent' to '2.24.30-1ubuntu1.16.04.2'
'subunit' changed from 'absent' to '1.2.0-0ubuntu2~cloud0'
'python2.7-cinderclient' changed from 'absent' to '1'
'libxcursor1' changed from 'absent' to '1:1.1.14-1ubuntu0.16.04.2'
'python-os-traits' changed from 'absent' to '0.5.0-1.0~u16.04+mcp2'
'libatk1.0-data' changed from 'absent' to '2.18.0-1'
'nova-common' changed from 'absent' to '2:17.0.5-5~u16.04+mcp142'
'python2.7-gobject' changed from 'absent' to '1'
'python-websockify' changed from 'absent' to '0.8.0+dfsg1-7~u16.04+mcp2'
'libfontconfig' changed from 'absent' to '1'
'python-cursive' changed from 'absent' to '0.2.1-1.0~u16.04+mcp1'
'libharfbuzz0b' changed from 'absent' to '1.0.1-1ubuntu0.1'
'libavahi-common-data' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.2'
'python2.7-nova' changed from 'absent' to '1'
'python-gtk2' changed from 'absent' to '2.24.0-4ubuntu1'
'libdatrie1' changed from 'absent' to '0.2.10-2'
'python3-traceback2' changed from 'absent' to '1.4.0-3'
'fontconfig-config' changed from 'absent' to '2.11.94-0ubuntu1.1'
'libgdk-pixbuf2.0-common' changed from 'absent' to '2.32.2-1ubuntu1.5'
'websockify-common' changed from 'absent' to '0.8.0+dfsg1-7~u16.04+mcp2'
'libfontconfig1' changed from 'absent' to '2.11.94-0ubuntu1.1'
'gtk2.0-binver-2.10.0' changed from 'absent' to '1'
'python-cinderclient' changed from 'absent' to '1:3.5.0-1.0~u16.04+mcp1'

2018-09-01 23:05:31,770 [salt.state       :905 ][INFO    ][18151] Loading fresh modules for state activity
2018-09-01 23:05:31,893 [salt.state       :1941][INFO    ][18151] Completed state [nova-common] at time 23:05:31.893721 duration_in_ms=40104.573
2018-09-01 23:05:31,897 [salt.state       :1770][INFO    ][18151] Running state [nova-compute-kvm] at time 23:05:31.897466
2018-09-01 23:05:31,897 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [nova-compute-kvm]
2018-09-01 23:05:32,325 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:05:32,339 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-compute-kvm'] in directory '/root'
2018-09-01 23:05:41,667 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230541657124
2018-09-01 23:05:41,677 [salt.minion      :1431][INFO    ][20417] Starting a new job with PID 20417
2018-09-01 23:05:41,687 [salt.minion      :1708][INFO    ][20417] Returning information for job: 20180901230541657124
2018-09-01 23:05:51,844 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230551834030
2018-09-01 23:05:51,854 [salt.minion      :1431][INFO    ][20746] Starting a new job with PID 20746
2018-09-01 23:05:51,869 [salt.minion      :1708][INFO    ][20746] Returning information for job: 20180901230551834030
2018-09-01 23:06:02,021 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230602013188
2018-09-01 23:06:02,033 [salt.minion      :1431][INFO    ][21023] Starting a new job with PID 21023
2018-09-01 23:06:02,044 [salt.minion      :1708][INFO    ][21023] Returning information for job: 20180901230602013188
2018-09-01 23:06:12,190 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230612180240
2018-09-01 23:06:12,199 [salt.minion      :1431][INFO    ][21376] Starting a new job with PID 21376
2018-09-01 23:06:12,211 [salt.minion      :1708][INFO    ][21376] Returning information for job: 20180901230612180240
2018-09-01 23:06:22,376 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230622366083
2018-09-01 23:06:22,384 [salt.minion      :1431][INFO    ][21702] Starting a new job with PID 21702
2018-09-01 23:06:22,393 [salt.minion      :1708][INFO    ][21702] Returning information for job: 20180901230622366083
2018-09-01 23:06:32,566 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230632555787
2018-09-01 23:06:32,574 [salt.minion      :1431][INFO    ][29103] Starting a new job with PID 29103
2018-09-01 23:06:32,584 [salt.minion      :1708][INFO    ][29103] Returning information for job: 20180901230632555787
2018-09-01 23:06:42,752 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230642741143
2018-09-01 23:06:42,763 [salt.minion      :1431][INFO    ][30008] Starting a new job with PID 30008
2018-09-01 23:06:42,774 [salt.minion      :1708][INFO    ][30008] Returning information for job: 20180901230642741143
2018-09-01 23:06:52,942 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230652931826
2018-09-01 23:06:52,953 [salt.minion      :1431][INFO    ][32378] Starting a new job with PID 32378
2018-09-01 23:06:52,963 [salt.minion      :1708][INFO    ][32378] Returning information for job: 20180901230652931826
2018-09-01 23:07:03,132 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230703122035
2018-09-01 23:07:03,139 [salt.minion      :1431][INFO    ][5351] Starting a new job with PID 5351
2018-09-01 23:07:03,148 [salt.minion      :1708][INFO    ][5351] Returning information for job: 20180901230703122035
2018-09-01 23:07:05,248 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:07:05,280 [salt.state       :290 ][INFO    ][18151] Made the following changes:
'libnet-ssleay-perl' changed from 'absent' to '1.72-1build1'
'c++-compiler' changed from 'absent' to '1'
'libguestfs-hfsplus' changed from 'absent' to '1:1.32.2-4ubuntu2'
'qemu-keymaps' changed from 'absent' to '1'
'libmailtools-perl' changed from 'absent' to '2.13-1'
'libyajl2' changed from 'absent' to '2.1.0-2'
'elf-binutils' changed from 'absent' to '1'
'libubsan0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libasyncns0' changed from 'absent' to '0.8-5build1'
'c-compiler' changed from 'absent' to '1'
'augeas-lenses' changed from 'absent' to '1.4.0-0ubuntu1.1'
'libmpc3' changed from 'absent' to '1.0.3-1'
'libvirt-daemon-system' changed from 'absent' to '4.0.0-1ubuntu8.3~cloud0'
'libvirt-clients' changed from 'absent' to '4.0.0-1ubuntu8.3~cloud0'
'ipxe-qemu' changed from 'absent' to '1.0.0+git-20180124.fbe8c52d-0ubuntu2~cloud0'
'liblwp-mediatypes-perl' changed from 'absent' to '6.02-1'
'libsndfile1' changed from 'absent' to '1.0.25-10ubuntu0.16.04.1'
'libxml2-utils' changed from 'absent' to '2.9.3+dfsg1-1ubuntu0.6'
'mkisofs' changed from 'absent' to '1'
'liburi-perl' changed from 'absent' to '1.71-1'
'lzop' changed from 'absent' to '1.03-3.2'
'libspice-server1' changed from 'absent' to '0.12.6-4ubuntu0.3'
'libtimedate-perl' changed from 'absent' to '2.3000-2'
'libhfsp0' changed from 'absent' to '1.0.4-13'
'genisoimage' changed from 'absent' to '9:1.1.11-3ubuntu1'
'libcaca0' changed from 'absent' to '0.99.beta19-2build2~gcc5.2'
'libfile-listing-perl' changed from 'absent' to '6.04-1'
'reiserfsprogs' changed from 'absent' to '1:3.6.24-3.1'
'libalgorithm-merge-perl' changed from 'absent' to '0.08-3'
'qemu-system-common' changed from 'absent' to '1:2.11+dfsg-1ubuntu7.4~cloud0'
'qemu-system-i386' changed from 'absent' to '1'
'c++abi2-dev' changed from 'absent' to '1'
'qemu-system-x86-64' changed from 'absent' to '1'
'msr-tools' changed from 'absent' to '1.3-2'
'libusbredirparser1' changed from 'absent' to '0.7.1-1'
'supermin' changed from 'absent' to '5.1.14-2ubuntu1.1'
'mtools' changed from 'absent' to '4.0.18-2ubuntu0.16.04'
'libcc1-0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libgomp1' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libio-socket-ssl-perl' changed from 'absent' to '2.024-1'
'syslinux-common' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libfont-afm-perl' changed from 'absent' to '1.20-1'
'libwww-robotrules-perl' changed from 'absent' to '6.01-1'
'linux-libc-dev' changed from 'absent' to '4.4.0-134.160'
'gcc-5' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libxen-4.6' changed from 'absent' to '4.6.5-0ubuntu1.4'
'libxenstore3.0' changed from 'absent' to '4.6.5-0ubuntu1.4'
'libwww-perl' changed from 'absent' to '6.15-1'
'libasan2' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libatomic1' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libcacard0' changed from 'absent' to '1:2.5.0-2'
'libitm1' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libvorbisenc2' changed from 'absent' to '1.3.5-3ubuntu0.2'
'g++' changed from 'absent' to '4:5.3.1-1ubuntu1'
'libwin-hivex-perl' changed from 'absent' to '1.3.13-1build3'
'cpp:any' changed from 'absent' to '1'
'kpartx' changed from 'absent' to '0.5.0+git1.656f8865-5ubuntu2.5'
'libmpx0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libstdc++-5-dev' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libhtml-format-perl' changed from 'absent' to '2.11-2'
'nova-compute-kvm' changed from 'absent' to '2:17.0.5-5~u16.04+mcp142'
'hfsplus' changed from 'absent' to '1.0.4-13'
'libhtml-parser-perl' changed from 'absent' to '3.72-1'
'libhttp-message-perl' changed from 'absent' to '6.11-1'
'libtsan0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'g++-5' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libstdc++-dev' changed from 'absent' to '1'
'syslinux' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libintl-perl' changed from 'absent' to '1.24-1build1'
'libhtml-form-perl' changed from 'absent' to '6.03-1'
'libxml-parser-perl' changed from 'absent' to '2.44-1build1'
'libsys-virt-perl' changed from 'absent' to '1.2.16-1ubuntu2'
'libbrlapi0.6' changed from 'absent' to '5.3.1-2ubuntu2.1'
'manpages-dev' changed from 'absent' to '4.04-2'
'cpp-5' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'linux-kernel-headers' changed from 'absent' to '1'
'libguestfs-reiserfs' changed from 'absent' to '1:1.32.2-4ubuntu2'
'libhtml-tagset-perl' changed from 'absent' to '3.20-2'
'libflac8' changed from 'absent' to '1.3.1-4'
'binutils' changed from 'absent' to '2.26.1-1ubuntu1~16.04.6'
'build-essential' changed from 'absent' to '12.1ubuntu2'
'libbluetooth3' changed from 'absent' to '5.37-0ubuntu5.1'
'libencode-locale-perl' changed from 'absent' to '1.05-1'
'libconfig9' changed from 'absent' to '1.5-0.2'
'qemu-kvm-spice' changed from 'absent' to '1'
'libstring-shellquote-perl' changed from 'absent' to '1.03-1.2'
'libguestfs-tools' changed from 'absent' to '1:1.32.2-4ubuntu2'
'libalgorithm-diff-xs-perl' changed from 'absent' to '0.04-4build1'
'libaugeas0' changed from 'absent' to '1.4.0-0ubuntu1.1'
'timedate' changed from 'absent' to '1'
'libpciaccess0' changed from 'absent' to '0.13.4-1'
'libfdt1' changed from 'absent' to '1.4.2-1.2~u16.04+mcp2'
'libfile-fcntllock-perl' changed from 'absent' to '0.22-3'
'libhttp-daemon-perl' changed from 'absent' to '6.01-1'
'libguestfs-perl' changed from 'absent' to '1:1.32.2-4ubuntu2'
'qemu-kvm' changed from 'absent' to '1:2.11+dfsg-1ubuntu7.4~cloud0'
'make:any' changed from 'absent' to '1'
'libguestfs0' changed from 'absent' to '1:1.32.2-4ubuntu2'
'mailtools' changed from 'absent' to '1'
'libhttp-cookies-perl' changed from 'absent' to '6.01-1'
'qemu-system-x86' changed from 'absent' to '1:2.11+dfsg-1ubuntu7.4~cloud0'
'libauthen-sasl-perl' changed from 'absent' to '2.1600-1'
'libnet-smtp-ssl-perl' changed from 'absent' to '1.03-1'
'libvirt0' changed from 'absent' to '4.0.0-1ubuntu8.3~cloud0'
'libguestfs-xfs' changed from 'absent' to '1:1.32.2-4ubuntu2'
'libisl15' changed from 'absent' to '0.16.1-1'
'libopus0' changed from 'absent' to '1.1.2-1ubuntu1'
'libvirt-daemon-driver-storage-rbd' changed from 'absent' to '4.0.0-1ubuntu8.3~cloud0'
'liblsan0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'ipxe-qemu-256k-compat-efi-roms' changed from 'absent' to '1.0.0+git-20150424.a25a16d-0ubuntu2~cloud0'
'libgcc-5-dev' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'libalgorithm-diff-perl' changed from 'absent' to '1.19.03-1'
'libhttp-negotiate-perl' changed from 'absent' to '6.00-2'
'seabios' changed from 'absent' to '1.10.2-1.1~u16.04+mcp2'
'libmail-perl' changed from 'absent' to '1'
'libfakeroot' changed from 'absent' to '1.20.2-1ubuntu1'
'libasound2' changed from 'absent' to '1.1.0-0ubuntu1'
'libxml-xpath-perl' changed from 'absent' to '1.30-1'
'lsscsi' changed from 'absent' to '0.27-3'
'gcc' changed from 'absent' to '4:5.3.1-1ubuntu1'
'libsdl1.2debian' changed from 'absent' to '1.2.15+dfsg1-3'
'make' changed from 'absent' to '4.1-6'
'libvirt-daemon' changed from 'absent' to '4.0.0-1ubuntu8.3~cloud0'
'nova-compute-hypervisor' changed from 'absent' to '1'
'dpkg-dev' changed from 'absent' to '1.18.4ubuntu1.4'
'libnet-http-perl' changed from 'absent' to '6.09-1'
'libio-html-perl' changed from 'absent' to '1.001-1'
'libvorbis0a' changed from 'absent' to '1.3.5-3ubuntu0.2'
'libogg0' changed from 'absent' to '1.3.2-1'
'ebtables' changed from 'absent' to '2.0.10.4-3.4ubuntu2.16.04.2'
'libasound2-data' changed from 'absent' to '1.1.0-0ubuntu1'
'scrub' changed from 'absent' to '2.6.1-1'
'libc-dev' changed from 'absent' to '1'
'libdpkg-perl' changed from 'absent' to '1.18.4ubuntu1.4'
'libhivex0' changed from 'absent' to '1.3.13-1build3'
'kvm' changed from 'absent' to '1'
'libc6-dev' changed from 'absent' to '2.23-0ubuntu10'
'libcilkrts5' changed from 'absent' to '5.4.0-6ubuntu1~16.04.10'
'liblwp-protocol-https-perl' changed from 'absent' to '6.06-2'
'python-libvirt' changed from 'absent' to '3.5.0-1.1~u16.04+mcp3'
'libpulse0' changed from 'absent' to '1:8.0-0ubuntu3.10'
'nova-compute' changed from 'absent' to '2:17.0.5-5~u16.04+mcp142'
'libnetcf1' changed from 'absent' to '1:0.2.8-1ubuntu1'
'libhtml-tree-perl' changed from 'absent' to '5.03-2'
'libhttp-date-perl' changed from 'absent' to '6.02-1'
'fakeroot' changed from 'absent' to '1.20.2-1ubuntu1'
'libc-dev-bin' changed from 'absent' to '2.23-0ubuntu10'
'cpp' changed from 'absent' to '4:5.3.1-1ubuntu1'
'extlinux' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'binutils-gold' changed from 'absent' to '1'
'cpu-checker' changed from 'absent' to '0.7-0ubuntu7'

2018-09-01 23:07:05,296 [salt.state       :905 ][INFO    ][18151] Loading fresh modules for state activity
2018-09-01 23:07:05,320 [salt.state       :1941][INFO    ][18151] Completed state [nova-compute-kvm] at time 23:07:05.320330 duration_in_ms=93422.863
2018-09-01 23:07:05,323 [salt.state       :1770][INFO    ][18151] Running state [python-novaclient] at time 23:07:05.323836
2018-09-01 23:07:05,324 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [python-novaclient]
2018-09-01 23:07:05,777 [salt.state       :290 ][INFO    ][18151] All specified packages are already installed
2018-09-01 23:07:05,777 [salt.state       :1941][INFO    ][18151] Completed state [python-novaclient] at time 23:07:05.777626 duration_in_ms=453.79
2018-09-01 23:07:05,778 [salt.state       :1770][INFO    ][18151] Running state [pm-utils] at time 23:07:05.777996
2018-09-01 23:07:05,778 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [pm-utils]
2018-09-01 23:07:05,791 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:07:05,806 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'pm-utils'] in directory '/root'
2018-09-01 23:07:08,964 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:07:08,998 [salt.state       :290 ][INFO    ][18151] Made the following changes:
'pm-utils' changed from 'absent' to '1.4.1-16'
'libx86-1' changed from 'absent' to '1.1+ds1-10'
'vbetool' changed from 'absent' to '1.1-3'

2018-09-01 23:07:09,011 [salt.state       :905 ][INFO    ][18151] Loading fresh modules for state activity
2018-09-01 23:07:09,031 [salt.state       :1941][INFO    ][18151] Completed state [pm-utils] at time 23:07:09.031109 duration_in_ms=3253.113
2018-09-01 23:07:09,034 [salt.state       :1770][INFO    ][18151] Running state [sysfsutils] at time 23:07:09.034527
2018-09-01 23:07:09,034 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [sysfsutils]
2018-09-01 23:07:09,425 [salt.state       :290 ][INFO    ][18151] All specified packages are already installed
2018-09-01 23:07:09,426 [salt.state       :1941][INFO    ][18151] Completed state [sysfsutils] at time 23:07:09.426114 duration_in_ms=391.587
2018-09-01 23:07:09,426 [salt.state       :1770][INFO    ][18151] Running state [sg3-utils] at time 23:07:09.426568
2018-09-01 23:07:09,426 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [sg3-utils]
2018-09-01 23:07:09,433 [salt.state       :290 ][INFO    ][18151] All specified packages are already installed
2018-09-01 23:07:09,433 [salt.state       :1941][INFO    ][18151] Completed state [sg3-utils] at time 23:07:09.433742 duration_in_ms=7.173
2018-09-01 23:07:09,434 [salt.state       :1770][INFO    ][18151] Running state [libvirt-bin] at time 23:07:09.434056
2018-09-01 23:07:09,434 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [libvirt-bin]
2018-09-01 23:07:09,447 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:07:09,464 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'libvirt-bin'] in directory '/root'
2018-09-01 23:07:11,253 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:07:11,284 [salt.state       :290 ][INFO    ][18151] Made the following changes:
'libvirt-bin' changed from 'absent' to '4.0.0-1ubuntu8.3~cloud0'

2018-09-01 23:07:11,303 [salt.state       :905 ][INFO    ][18151] Loading fresh modules for state activity
2018-09-01 23:07:11,323 [salt.state       :1941][INFO    ][18151] Completed state [libvirt-bin] at time 23:07:11.323870 duration_in_ms=1889.814
2018-09-01 23:07:11,327 [salt.state       :1770][INFO    ][18151] Running state [python-memcache] at time 23:07:11.327272
2018-09-01 23:07:11,327 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [python-memcache]
2018-09-01 23:07:11,706 [salt.state       :290 ][INFO    ][18151] All specified packages are already installed
2018-09-01 23:07:11,706 [salt.state       :1941][INFO    ][18151] Completed state [python-memcache] at time 23:07:11.706271 duration_in_ms=378.999
2018-09-01 23:07:11,706 [salt.state       :1770][INFO    ][18151] Running state [qemu-kvm] at time 23:07:11.706637
2018-09-01 23:07:11,706 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [qemu-kvm]
2018-09-01 23:07:11,711 [salt.state       :290 ][INFO    ][18151] All specified packages are already installed
2018-09-01 23:07:11,711 [salt.state       :1941][INFO    ][18151] Completed state [qemu-kvm] at time 23:07:11.711656 duration_in_ms=5.02
2018-09-01 23:07:11,711 [salt.state       :1770][INFO    ][18151] Running state [python-guestfs] at time 23:07:11.711975
2018-09-01 23:07:11,712 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [python-guestfs]
2018-09-01 23:07:11,724 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:07:11,741 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-guestfs'] in directory '/root'
2018-09-01 23:07:13,319 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901230713308223
2018-09-01 23:07:13,329 [salt.minion      :1431][INFO    ][6398] Starting a new job with PID 6398
2018-09-01 23:07:13,338 [salt.minion      :1708][INFO    ][6398] Returning information for job: 20180901230713308223
2018-09-01 23:07:14,904 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:07:14,937 [salt.state       :290 ][INFO    ][18151] Made the following changes:
'python-guestfs' changed from 'absent' to '1:1.32.2-4ubuntu2'
'python-libguestfs' changed from 'absent' to '1'

2018-09-01 23:07:14,950 [salt.state       :905 ][INFO    ][18151] Loading fresh modules for state activity
2018-09-01 23:07:14,972 [salt.state       :1941][INFO    ][18151] Completed state [python-guestfs] at time 23:07:14.971971 duration_in_ms=3259.996
2018-09-01 23:07:14,975 [salt.state       :1770][INFO    ][18151] Running state [gettext-base] at time 23:07:14.975689
2018-09-01 23:07:14,975 [salt.state       :1803][INFO    ][18151] Executing state pkg.installed for [gettext-base]
2018-09-01 23:07:15,472 [salt.state       :290 ][INFO    ][18151] All specified packages are already installed
2018-09-01 23:07:15,472 [salt.state       :1941][INFO    ][18151] Completed state [gettext-base] at time 23:07:15.472539 duration_in_ms=496.85
2018-09-01 23:07:15,474 [salt.state       :1770][INFO    ][18151] Running state [/var/log/nova] at time 23:07:15.474346
2018-09-01 23:07:15,474 [salt.state       :1803][INFO    ][18151] Executing state file.directory for [/var/log/nova]
2018-09-01 23:07:15,475 [salt.state       :290 ][INFO    ][18151] {'group': 'nova'}
2018-09-01 23:07:15,475 [salt.state       :1941][INFO    ][18151] Completed state [/var/log/nova] at time 23:07:15.475243 duration_in_ms=0.897
2018-09-01 23:07:15,475 [salt.state       :1770][INFO    ][18151] Running state [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com] at time 23:07:15.475595
2018-09-01 23:07:15,475 [salt.state       :1803][INFO    ][18151] Executing state ssh_auth.present for [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com]
2018-09-01 23:07:15,477 [salt.state       :290 ][INFO    ][18151] {'AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ': 'New'}
2018-09-01 23:07:15,477 [salt.state       :1941][INFO    ][18151] Completed state [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com] at time 23:07:15.477771 duration_in_ms=2.176
2018-09-01 23:07:15,478 [salt.state       :1770][INFO    ][18151] Running state [nova] at time 23:07:15.478218
2018-09-01 23:07:15,478 [salt.state       :1803][INFO    ][18151] Executing state user.present for [nova]
2018-09-01 23:07:15,479 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['usermod', '-s', '/bin/bash', 'nova'] in directory '/root'
2018-09-01 23:07:15,540 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['usermod', '-G', 'libvirtd', 'nova'] in directory '/root'
2018-09-01 23:07:15,631 [salt.state       :290 ][INFO    ][18151] {'shell': '/bin/bash', 'groups': ['libvirtd', 'nova']}
2018-09-01 23:07:15,631 [salt.state       :1941][INFO    ][18151] Completed state [nova] at time 23:07:15.631475 duration_in_ms=153.255
2018-09-01 23:07:15,631 [salt.state       :1770][INFO    ][18151] Running state [libvirt-qemu] at time 23:07:15.631701
2018-09-01 23:07:15,631 [salt.state       :1803][INFO    ][18151] Executing state user.present for [libvirt-qemu]
2018-09-01 23:07:15,633 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['usermod', '-G', 'nova', 'libvirt-qemu'] in directory '/root'
2018-09-01 23:07:15,743 [salt.state       :290 ][INFO    ][18151] {'groups': ['kvm', 'nova']}
2018-09-01 23:07:15,743 [salt.state       :1941][INFO    ][18151] Completed state [libvirt-qemu] at time 23:07:15.743487 duration_in_ms=111.785
2018-09-01 23:07:15,743 [salt.state       :1770][INFO    ][18151] Running state [/var/lib/nova] at time 23:07:15.743738
2018-09-01 23:07:15,744 [salt.state       :1803][INFO    ][18151] Executing state file.directory for [/var/lib/nova]
2018-09-01 23:07:15,744 [salt.state       :290 ][INFO    ][18151] {'mode': '0750'}
2018-09-01 23:07:15,745 [salt.state       :1941][INFO    ][18151] Completed state [/var/lib/nova] at time 23:07:15.745093 duration_in_ms=1.356
2018-09-01 23:07:15,745 [salt.state       :1770][INFO    ][18151] Running state [/var/lib/nova/.ssh/id_rsa] at time 23:07:15.745636
2018-09-01 23:07:15,745 [salt.state       :1803][INFO    ][18151] Executing state file.managed for [/var/lib/nova/.ssh/id_rsa]
2018-09-01 23:07:15,755 [salt.state       :290 ][INFO    ][18151] File changed:
New file
2018-09-01 23:07:15,755 [salt.state       :1941][INFO    ][18151] Completed state [/var/lib/nova/.ssh/id_rsa] at time 23:07:15.755917 duration_in_ms=10.28
2018-09-01 23:07:15,756 [salt.state       :1770][INFO    ][18151] Running state [/var/lib/nova/.ssh/config] at time 23:07:15.756330
2018-09-01 23:07:15,756 [salt.state       :1803][INFO    ][18151] Executing state file.managed for [/var/lib/nova/.ssh/config]
2018-09-01 23:07:15,761 [salt.state       :290 ][INFO    ][18151] File changed:
New file
2018-09-01 23:07:15,761 [salt.state       :1941][INFO    ][18151] Completed state [/var/lib/nova/.ssh/config] at time 23:07:15.761812 duration_in_ms=5.481
2018-09-01 23:07:15,762 [salt.state       :1770][INFO    ][18151] Running state [/etc/nova/nova.conf] at time 23:07:15.762144
2018-09-01 23:07:15,762 [salt.state       :1803][INFO    ][18151] Executing state file.managed for [/etc/nova/nova.conf]
2018-09-01 23:07:15,800 [salt.fileclient  :1215][INFO    ][18151] Fetching file from saltenv 'base', ** done ** 'nova/files/queens/nova-compute.conf.Debian'
2018-09-01 23:07:16,122 [salt.state       :290 ][INFO    ][18151] File changed:
--- 
+++ 
@@ -1,117 +1,3448 @@
+
 [DEFAULT]
 
 #
+# From nova.conf
+#
+compute_manager=nova.compute.manager.ComputeManager
+network_device_mtu=65000
+use_neutron = True
+security_group_api=neutron
+image_service=nova.image.glance.GlanceImageService
+
+#
+# Availability zone for internal services.
+#
+# This option determines the availability zone for the various
+# internal nova
+# services, such as 'nova-scheduler', 'nova-conductor', etc.
+#
+# Possible values:
+#
+# * Any string representing an existing availability zone name.
+#  (string value)
+#internal_service_availability_zone = internal
+
+#
+# Default availability zone for compute services.
+#
+# This option determines the default availability zone for 'nova-
+# compute'
+# services, which will be used if the service(s) do not belong to
+# aggregates with
+# availability zone metadata.
+#
+# Possible values:
+#
+# * Any string representing an existing availability zone name.
+#  (string value)
+#default_availability_zone = nova
+
+#
+# Default availability zone for instances.
+#
+# This option determines the default availability zone for instances,
+# which will
+# be used when a user does not specify one when creating an instance.
+# The
+# instance(s) will be bound to this availability zone for their
+# lifetime.
+#
+# Possible values:
+#
+# * Any string representing an existing availability zone name.
+# * None, which means that the instance can move from one availability
+# zone to
+#   another during its lifetime if it is moved from one compute node
+# to another.
+#  (string value)
+#default_schedule_zone = <None>
+
+# Length of generated instance admin passwords. (integer value)
+# Minimum value: 0
+#password_length = 12
+
+#
+# Time period to generate instance usages for. It is possible to
+# define optional
+# offset to given period by appending @ character followed by a number
+# defining
+# offset.
+#
+# Possible values:
+#
+# *  period, example: ``hour``, ``day``, ``month`` or ``year``
+# *  period with offset, example: ``month@15`` will result in monthly
+# audits
+#    starting on 15th day of month.
+#  (string value)
+#instance_usage_audit_period = month
+
+instance_usage_audit = True
+instance_usage_audit_period = hour
+
+#
+# Start and use a daemon that can run the commands that need to be run
+# with
+# root privileges. This option is usually enabled on nodes that run
+# nova compute
+# processes.
+#  (boolean value)
+#use_rootwrap_daemon = false
+
+#
+# Path to the rootwrap configuration file.
+#
+# Goal of the root wrapper is to allow a service-specific unprivileged
+# user to
+# run a number of actions as the root user in the safest manner
+# possible.
+# The configuration file used here must match the one defined in the
+# sudoers
+# entry.
+#  (string value)
+rootwrap_config = /etc/nova/rootwrap.conf
+
+# Explicitly specify the temporary working directory. (string value)
+#tempdir = <None>
+
+# DEPRECATED:
+# Determine if monkey patching should be applied.
+#
+# Related options:
+#
+# * ``monkey_patch_modules``: This must have values set for this
+# option to
+#   have any effect
+#  (boolean value)
+# This option is deprecated for removal since 17.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# Monkey patching nova is not tested, not supported, and is a barrier
+# for interoperability.
+#monkey_patch = false
+
+# DEPRECATED:
+# List of modules/decorators to monkey patch.
+#
+# This option allows you to patch a decorator for all functions in
+# specified
+# modules.
+#
+# Possible values:
+#
+# * nova.compute.api:nova.notifications.notify_decorator
+# * [...]
+#
+# Related options:
+#
+# * ``monkey_patch``: This must be set to ``True`` for this option to
+#   have any effect
+#  (list value)
+# This option is deprecated for removal since 17.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# Monkey patching nova is not tested, not supported, and is a barrier
+# for interoperability.
+#monkey_patch_modules = nova.compute.api:nova.notifications.notify_decorator
+
+#
+# Defines which driver to use for controlling virtualization.
+#
+# Possible values:
+#
+# * ``libvirt.LibvirtDriver``
+# * ``xenapi.XenAPIDriver``
+# * ``fake.FakeDriver``
+# * ``ironic.IronicDriver``
+# * ``vmwareapi.VMwareVCDriver``
+# * ``hyperv.HyperVDriver``
+# * ``powervm.PowerVMDriver``
+#  (string value)
+#compute_driver = <None>
+compute_driver = libvirt.LibvirtDriver
+
+#
+# Allow destination machine to match source for resize. Useful when
+# testing in single-host environments. By default it is not allowed
+# to resize to the same host. Setting this option to true will add
+# the same host to the destination options. Also set to true
+# if you allow the ServerGroupAffinityFilter and need to resize.
+#  (boolean value)
+#allow_resize_to_same_host = false
+allow_resize_to_same_host = true
+
+#
+# Image properties that should not be inherited from the instance
+# when taking a snapshot.
+#
+# This option gives an opportunity to select which image-properties
+# should not be inherited by newly created snapshots.
+#
+# Possible values:
+#
+# * A comma-separated list whose item is an image property. Usually
+# only
+#   the image properties that are only needed by base images can be
+# included
+#   here, since the snapshots that are created from the base images
+# don't
+#   need them.
+# * Default list: cache_in_nova, bittorrent,
+# img_signature_hash_method,
+#                 img_signature, img_signature_key_type,
+#                 img_signature_certificate_uuid
+#
+#  (list value)
+#non_inheritable_image_properties = cache_in_nova,bittorrent,img_signature_hash_method,img_signature,img_signature_key_type,img_signature_certificate_uuid
+
+# DEPRECATED:
+# When creating multiple instances with a single request using the
+# os-multiple-create API extension, this template will be used to
+# build
+# the display name for each instance. The benefit is that the
+# instances
+# end up with different hostnames. Example display names when creating
+# two VM's: name-1, name-2.
+#
+# Possible values:
+#
+# * Valid keys for the template are: name, uuid, count.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This config changes API behaviour. All changes in API behaviour
+# should be
+# discoverable.
+#multi_instance_display_name_template = %(name)s-%(count)d
+
+#
+# Maximum number of devices that will result in a local image being
+# created on the hypervisor node.
+#
+# A negative number means unlimited. Setting max_local_block_devices
+# to 0 means that any request that attempts to create a local disk
+# will fail. This option is meant to limit the number of local discs
+# (so root local disc that is the result of --image being used, and
+# any other ephemeral and swap disks). 0 does not mean that images
+# will be automatically converted to volumes and boot instances from
+# volumes - it just means that all requests that attempt to create a
+# local disk will fail.
+#
+# Possible values:
+#
+# * 0: Creating a local disk is not allowed.
+# * Negative number: Allows unlimited number of local discs.
+# * Positive number: Allows only that many local discs.
+#                        (Default value is 3).
+#  (integer value)
+#max_local_block_devices = 3
+
+#
+# A comma-separated list of monitors that can be used for getting
+# compute metrics. You can use the alias/name from the setuptools
+# entry points for nova.compute.monitors.* namespaces. If no
+# namespace is supplied, the "cpu." namespace is assumed for
+# backwards-compatibility.
+#
+# NOTE: Only one monitor per namespace (For example: cpu) can be
+# loaded at
+# a time.
+#
+# Possible values:
+#
+# * An empty list will disable the feature (Default).
+# * An example value that would enable both the CPU and NUMA memory
+#   bandwidth monitors that use the virt driver variant:
+#
+#     compute_monitors = cpu.virt_driver, numa_mem_bw.virt_driver
+#  (list value)
+#compute_monitors =
+
+#
+# The default format an ephemeral_volume will be formatted with on
+# creation.
+#
+# Possible values:
+#
+# * ``ext2``
+# * ``ext3``
+# * ``ext4``
+# * ``xfs``
+# * ``ntfs`` (only for Windows guests)
+#  (string value)
+#default_ephemeral_format = <None>
+
+#
+# Determine if instance should boot or fail on VIF plugging timeout.
+#
+# Nova sends a port update to Neutron after an instance has been
+# scheduled,
+# providing Neutron with the necessary information to finish setup of
+# the port.
+# Once completed, Neutron notifies Nova that it has finished setting
+# up the
+# port, at which point Nova resumes the boot of the instance since
+# network
+# connectivity is now supposed to be present. A timeout will occur if
+# the reply
+# is not received after a given interval.
+#
+# This option determines what Nova does when the VIF plugging timeout
+# event
+# happens. When enabled, the instance will error out. When disabled,
+# the
+# instance will continue to boot on the assumption that the port is
+# ready.
+#
+# Possible values:
+#
+# * True: Instances should fail after VIF plugging timeout
+# * False: Instances should continue booting after VIF plugging
+# timeout
+#  (boolean value)
+vif_plugging_is_fatal = true
+
+#
+# Timeout for Neutron VIF plugging event message arrival.
+#
+# Number of seconds to wait for Neutron vif plugging events to
+# arrive before continuing or failing (see 'vif_plugging_is_fatal').
+#
+# Related options:
+#
+# * vif_plugging_is_fatal - If ``vif_plugging_timeout`` is set to zero
+# and
+#   ``vif_plugging_is_fatal`` is False, events should not be expected
+# to
+#   arrive at all.
+#  (integer value)
+# Minimum value: 0
+vif_plugging_timeout = 300
+
+# Path to '/etc/network/interfaces' template.
+#
+# The path to a template file for the '/etc/network/interfaces'-style
+# file, which
+# will be populated by nova and subsequently used by cloudinit. This
+# provides a
+# method to configure network connectivity in environments without a
+# DHCP server.
+#
+# The template will be rendered using Jinja2 template engine, and
+# receive a
+# top-level key called ``interfaces``. This key will contain a list of
+# dictionaries, one for each interface.
+#
+# Refer to the cloudinit documentation for more information:
+#
+#   https://cloudinit.readthedocs.io/en/latest/topics/datasources.html
+#
+# Possible values:
+#
+# * A path to a Jinja2-formatted template for a Debian
+# '/etc/network/interfaces'
+#   file. This applies even if using a non-Debian-derived guest.
+#
+# Related options:
+#
+# * ``flat_inject``: This must be set to ``True`` to ensure nova
+# embeds network
+#   configuration information in the metadata provided through the
+# config drive.
+#  (string value)
+#injected_network_template = $pybasedir/nova/virt/interfaces.template
+
+#
+# The image preallocation mode to use.
+#
+# Image preallocation allows storage for instance images to be
+# allocated up front
+# when the instance is initially provisioned. This ensures immediate
+# feedback is
+# given if enough space isn't available. In addition, it should
+# significantly
+# improve performance on writes to new blocks and may even improve I/O
+# performance to prewritten blocks due to reduced fragmentation.
+#
+# Possible values:
+#
+# * "none"  => no storage provisioning is done up front
+# * "space" => storage is fully allocated at instance start
+#  (string value)
+# Possible values:
+# none - <No description provided>
+# space - <No description provided>
+#preallocate_images = none
+preallocate_images = space
+
+#
+# Enable use of copy-on-write (cow) images.
+#
+# QEMU/KVM allow the use of qcow2 as backing files. By disabling this,
+# backing files will not be used.
+#  (boolean value)
+#use_cow_images = true
+
+#
+# Force conversion of backing images to raw format.
+#
+# Possible values:
+#
+# * True: Backing image files will be converted to raw image format
+# * False: Backing image files will not be converted
+#
+# Related options:
+#
+# * ``compute_driver``: Only the libvirt driver uses this option.
+#  (boolean value)
+#force_raw_images = true
+force_raw_images=true
+
+#
+# Name of the mkfs commands for ephemeral device.
+#
+# The format is <os_type>=<mkfs command>
+#  (multi valued)
+#virt_mkfs =
+
+#
+# Enable resizing of filesystems via a block device.
+#
+# If enabled, attempt to resize the filesystem by accessing the image
+# over a
+# block device. This is done by the host and may not be necessary if
+# the image
+# contains a recent version of cloud-init. Possible mechanisms require
+# the nbd
+# driver (for qcow and raw), or loop (for raw).
+#  (boolean value)
+#resize_fs_using_block_device = false
+
+# Amount of time, in seconds, to wait for NBD device start up.
+# (integer value)
+# Minimum value: 0
+#timeout_nbd = 10
+
+#
+# Location of cached images.
+#
+# This is NOT the full path - just a folder name relative to
+# '$instances_path'.
+# For per-compute-host cached images, set to '_base_$my_ip'
+#  (string value)
+#image_cache_subdirectory_name = _base
+
+# Should unused base images be removed? (boolean value)
+#remove_unused_base_images = true
+
+#
+# Unused unresized base images younger than this will not be removed.
+#  (integer value)
+remove_unused_original_minimum_age_seconds = 86400
+
+#
+# Generic property to specify the pointer type.
+#
+# Input devices allow interaction with a graphical framebuffer. For
+# example to provide a graphic tablet for absolute cursor movement.
+#
+# If set, the 'hw_pointer_model' image property takes precedence over
+# this configuration option.
+#
+# Possible values:
+#
+# * None: Uses default behavior provided by drivers (mouse on PS2 for
+#         libvirt x86)
+# * ps2mouse: Uses relative movement. Mouse connected by PS2
+# * usbtablet: Uses absolute movement. Tablet connect by USB
+#
+# Related options:
+#
+# * usbtablet must be configured with VNC enabled or SPICE enabled and
+# SPICE
+#   agent disabled. When used with libvirt the instance mode should be
+#   configured as HVM.
+#   (string value)
+# Possible values:
+# <None> - <No description provided>
+# ps2mouse - <No description provided>
+# usbtablet - <No description provided>
+#pointer_model = usbtablet
+
+#
+# Defines which physical CPUs (pCPUs) can be used by instance
+# virtual CPUs (vCPUs).
+#
+# Possible values:
+#
+# * A comma-separated list of physical CPU numbers that virtual CPUs
+# can be
+#   allocated to by default. Each element should be either a single
+# CPU number,
+#   a range of CPU numbers, or a caret followed by a CPU number to be
+#   excluded from a previous range. For example:
+#
+#     vcpu_pin_set = "4-12,^8,15"
+#  (string value)
+#vcpu_pin_set = <None>
+
+#
+# Number of huge/large memory pages to reserved per NUMA host cell.
+#
+# Possible values:
+#
+# * A list of valid key=value which reflect NUMA node ID, page size
+#   (Default unit is KiB) and number of pages to be reserved.
+#
+#     reserved_huge_pages = node:0,size:2048,count:64
+#     reserved_huge_pages = node:1,size:1GB,count:1
+#
+#   In this example we are reserving on NUMA node 0 64 pages of 2MiB
+#   and on NUMA node 1 1 page of 1GiB.
+#  (dict value)
+#reserved_huge_pages = <None>
+
+#
+# Amount of disk resources in MB to make them always available to
+# host. The
+# disk usage gets reported back to the scheduler from nova-compute
+# running
+# on the compute nodes. To prevent the disk resources from being
+# considered
+# as available, this option can be used to reserve disk space for that
+# host.
+#
+# Possible values:
+#
+# * Any positive integer representing amount of disk in MB to reserve
+#   for the host.
+#  (integer value)
+# Minimum value: 0
+#reserved_host_disk_mb = 0
+
+#
+# Amount of memory in MB to reserve for the host so that it is always
+# available
+# to host processes. The host resources usage is reported back to the
+# scheduler
+# continuously from nova-compute running on the compute node. To
+# prevent the host
+# memory from being considered as available, this option is used to
+# reserve
+# memory for the host.
+#
+# Possible values:
+#
+# * Any positive integer representing amount of memory in MB to
+# reserve
+#   for the host.
+#  (integer value)
+# Minimum value: 0
+#reserved_host_memory_mb = 512
+reserved_host_memory_mb = 512
+
+#
+# Number of physical CPUs to reserve for the host. The host resources
+# usage is
+# reported back to the scheduler continuously from nova-compute
+# running on the
+# compute node. To prevent the host CPU from being considered as
+# available,
+# this option is used to reserve random pCPU(s) for the host.
+#
+# Possible values:
+#
+# * Any positive integer representing number of physical CPUs to
+# reserve
+#   for the host.
+#  (integer value)
+# Minimum value: 0
+#reserved_host_cpus = 0
+
+#
+# This option helps you specify virtual CPU to physical CPU allocation
+# ratio.
+#
+# From Ocata (15.0.0) this is used to influence the hosts selected by
+# the Placement API. Note that when Placement is used, the CoreFilter
+# is redundant, because the Placement API will have already filtered
+# out hosts that would have failed the CoreFilter.
+#
+# This configuration specifies ratio for CoreFilter which can be set
+# per compute node. For AggregateCoreFilter, it will fall back to this
+# configuration value if no per-aggregate setting is found.
+#
+# NOTE: This can be set per-compute, or if set to 0.0, the value
+# set on the scheduler node(s) or compute node(s) will be used
+# and defaulted to 16.0.
+#
+# NOTE: As of the 16.0.0 Pike release, this configuration option is
+# ignored
+# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
+#
+# Possible values:
+#
+# * Any valid positive integer or float value
+#  (floating point value)
+# Minimum value: 0
+#cpu_allocation_ratio = 0.0
+#cpu_allocation_ratio=0.0
+
+#
+# This option helps you specify virtual RAM to physical RAM
+# allocation ratio.
+#
+# From Ocata (15.0.0) this is used to influence the hosts selected by
+# the Placement API. Note that when Placement is used, the RamFilter
+# is redundant, because the Placement API will have already filtered
+# out hosts that would have failed the RamFilter.
+#
+# This configuration specifies ratio for RamFilter which can be set
+# per compute node. For AggregateRamFilter, it will fall back to this
+# configuration value if no per-aggregate setting found.
+#
+# NOTE: This can be set per-compute, or if set to 0.0, the value
+# set on the scheduler node(s) or compute node(s) will be used and
+# defaulted to 1.5.
+#
+# NOTE: As of the 16.0.0 Pike release, this configuration option is
+# ignored
+# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
+#
+# Possible values:
+#
+# * Any valid positive integer or float value
+#  (floating point value)
+# Minimum value: 0
+#ram_allocation_ratio = 0.0
+#ram_allocation_ratio=0.0
+
+#
+# This option helps you specify virtual disk to physical disk
+# allocation ratio.
+#
+# From Ocata (15.0.0) this is used to influence the hosts selected by
+# the Placement API. Note that when Placement is used, the DiskFilter
+# is redundant, because the Placement API will have already filtered
+# out hosts that would have failed the DiskFilter.
+#
+# A ratio greater than 1.0 will result in over-subscription of the
+# available physical disk, which can be useful for more
+# efficiently packing instances created with images that do not
+# use the entire virtual disk, such as sparse or compressed
+# images. It can be set to a value between 0.0 and 1.0 in order
+# to preserve a percentage of the disk for uses other than
+# instances.
+#
+# NOTE: This can be set per-compute, or if set to 0.0, the value
+# set on the scheduler node(s) or compute node(s) will be used and
+# defaulted to 1.0.
+#
+# NOTE: As of the 16.0.0 Pike release, this configuration option is
+# ignored
+# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
+#
+# Possible values:
+#
+# * Any valid positive integer or float value
+#  (floating point value)
+# Minimum value: 0
+#disk_allocation_ratio = 0.0
+
+#
+# Console proxy host to be used to connect to instances on this host.
+# It is the
+# publicly visible name for the console host.
+#
+# Possible values:
+#
+# * Current hostname (default) or any string representing hostname.
+#  (string value)
+#console_host = <current_hostname>
+
+#
+# Name of the network to be used to set access IPs for instances. If
+# there are
+# multiple IPs to choose from, an arbitrary one will be chosen.
+#
+# Possible values:
+#
+# * None (default)
+# * Any string representing network name.
+#  (string value)
+#default_access_ip_network_name = <None>
+
+#
+# Whether to batch up the application of IPTables rules during a host
+# restart
+# and apply all at the end of the init phase.
+#  (boolean value)
+#defer_iptables_apply = false
+
+#
+# Specifies where instances are stored on the hypervisor's disk.
+# It can point to locally attached storage or a directory on NFS.
+#
+# Possible values:
+#
+# * $state_path/instances where state_path is a config option that
+# specifies
+#   the top-level directory for maintaining nova's state. (default) or
+#   Any string representing directory path.
+#  (string value)
+instances_path = $state_path/instances
+
+#
+# This option enables periodic compute.instance.exists notifications.
+# Each
+# compute node must be configured to generate system usage data. These
+# notifications are consumed by OpenStack Telemetry service.
+#  (boolean value)
+#instance_usage_audit = false
+
+#
+# Maximum number of 1 second retries in live_migration. It specifies
+# number
+# of retries to iptables when it complains. It happens when a user
+# continuously
+# sends live-migration request to same host leading to concurrent
+# request
+# to iptables.
+#
+# Possible values:
+#
+# * Any positive integer representing retry count.
+#  (integer value)
+# Minimum value: 0
+#live_migration_retry_count = 30
+
+#
+# This option specifies whether to start guests that were running
+# before the
+# host rebooted. It ensures that all of the instances on a Nova
+# compute node
+# resume their state each time the compute node boots or restarts.
+#  (boolean value)
+resume_guests_state_on_host_boot = True
+
+#
+# Number of times to retry network allocation. It is required to
+# attempt network
+# allocation retries if the virtual interface plug fails.
+#
+# Possible values:
+#
+# * Any positive integer representing retry count.
+#  (integer value)
+# Minimum value: 0
+#network_allocate_retries = 0
+
+#
+# Limits the maximum number of instance builds to run concurrently by
+# nova-compute. Compute service can attempt to build an infinite
+# number of
+# instances, if asked to do so. This limit is enforced to avoid
+# building
+# unlimited instance concurrently on a compute node. This value can be
+# set
+# per compute node.
+#
+# Possible Values:
+#
+# * 0 : treated as unlimited.
+# * Any positive integer representing maximum concurrent builds.
+#  (integer value)
+# Minimum value: 0
+#max_concurrent_builds = 10
+
+#
+# Maximum number of live migrations to run concurrently. This limit is
+# enforced
+# to avoid outbound live migrations overwhelming the host/network and
+# causing
+# failures. It is not recommended that you change this unless you are
+# very sure
+# that doing so is safe and stable in your environment.
+#
+# Possible values:
+#
+# * 0 : treated as unlimited.
+# * Negative value defaults to 0.
+# * Any positive integer representing maximum number of live
+# migrations
+#   to run concurrently.
+#  (integer value)
+#max_concurrent_live_migrations = 1
+
+#
+# Number of times to retry block device allocation on failures.
+# Starting with
+# Liberty, Cinder can use image volume cache. This may help with block
+# device
+# allocation performance. Look at the cinder
+# image_volume_cache_enabled
+# configuration option.
+#
+# Possible values:
+#
+# * 60 (default)
+# * If value is 0, then one attempt is made.
+# * Any negative value is treated as 0.
+# * For any value > 0, total attempts are (value + 1)
+#  (integer value)
+block_device_allocate_retries = 600
+
+#
+# Number of greenthreads available for use to sync power states.
+#
+# This option can be used to reduce the number of concurrent requests
+# made to the hypervisor or system with real instance power states
+# for performance reasons, for example, with Ironic.
+#
+# Possible values:
+#
+# * Any positive integer representing greenthreads count.
+#  (integer value)
+#sync_power_state_pool_size = 1000
+
+#
+# Number of seconds to wait between runs of the image cache manager.
+#
+# Possible values:
+# * 0: run at the default rate.
+# * -1: disable
+# * Any other value
+#  (integer value)
+# Minimum value: -1
+image_cache_manager_interval = 0
+
+#
+# Interval to pull network bandwidth usage info.
+#
+# Not supported on all hypervisors. If a hypervisor doesn't support
+# bandwidth
+# usage, it will not get the info in the usage events.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#  (integer value)
+#bandwidth_poll_interval = 600
+
+#
+# Interval to sync power states between the database and the
+# hypervisor.
+#
+# The interval that Nova checks the actual virtual machine power state
+# and the power state that Nova has in its database. If a user powers
+# down their VM, Nova updates the API to report the VM has been
+# powered down. Should something turn on the VM unexpectedly,
+# Nova will turn the VM back off to keep the system in the expected
+# state.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#
+# Related options:
+#
+# * If ``handle_virt_lifecycle_events`` in workarounds_group is
+#   false and this option is negative, then instances that get out
+#   of sync between the hypervisor and the Nova database will have
+#   to be synchronized manually.
+#  (integer value)
+#sync_power_state_interval = 600
+
+#
+# Interval between instance network information cache updates.
+#
+# Number of seconds after which each compute node runs the task of
+# querying Neutron for all of its instances networking information,
+# then updates the Nova db with that information. Nova will never
+# update its cache if this option is set to 0. If we don't update the
+# cache, the metadata service and nova-api endpoints will be proxying
+# incorrect network data about the instance. So, it is not recommended
+# to set this option to 0.
+#
+# Possible values:
+#
+# * Any positive integer in seconds.
+# * Any value <=0 will disable the sync. This is not recommended.
+#  (integer value)
+#heal_instance_info_cache_interval = 60
+heal_instance_info_cache_interval = 60
+
+#
+# Interval for reclaiming deleted instances.
+#
+# A value greater than 0 will enable SOFT_DELETE of instances.
+# This option decides whether the server to be deleted will be put
+# into
+# the SOFT_DELETED state. If this value is greater than 0, the deleted
+# server will not be deleted immediately, instead it will be put into
+# a queue until it's too old (deleted time greater than the value of
+# reclaim_instance_interval). The server can be recovered from the
+# delete queue by using the restore action. If the deleted server
+# remains
+# longer than the value of reclaim_instance_interval, it will be
+# deleted by a periodic task in the compute service automatically.
+#
+# Note that this option is read by both the API and compute nodes, and
+# must be set globally; otherwise servers could be put into a soft
+# deleted state in the API and never actually reclaimed (deleted) on
+# the compute node.
+#
+# Possible values:
+#
+# * Any positive integer (in seconds) greater than 0 will enable
+#   this option.
+# * Any value <=0 will disable the option.
+#  (integer value)
+#reclaim_instance_interval = 0
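+# Example (illustrative value only, not an upstream recommendation):
+# to keep soft-deleted servers recoverable for one hour before they
+# are reclaimed, set the same value on both API and compute nodes:
+#
+#   reclaim_instance_interval = 3600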
+
+#
+# Interval for gathering volume usages.
+#
+# This option updates the volume usage cache for every
+# volume_usage_poll_interval number of seconds.
+#
+# Possible values:
+#
+# * Any positive integer (in seconds) greater than 0 will enable
+#   this option.
+# * Any value <=0 will disable the option.
+#  (integer value)
+#volume_usage_poll_interval = 0
+
+#
+# Interval for polling shelved instances to offload.
+#
+# The periodic task runs every shelved_poll_interval seconds and
+# checks whether there are any shelved instances. If it finds one,
+# it offloads it based on the 'shelved_offload_time' config value.
+# See the 'shelved_offload_time' option description for details.
+#
+# Possible values:
+#
+# * Any value <= 0: Disables the option.
+# * Any positive integer in seconds.
+#
+# Related options:
+#
+# * ``shelved_offload_time``
+#  (integer value)
+#shelved_poll_interval = 3600
+
+#
+# Time before a shelved instance is eligible for removal from a host.
+#
+# By default this option is set to 0 and the shelved instance will be
+# removed from the hypervisor immediately after shelve operation.
+# Otherwise, the instance is kept on the hypervisor for
+# shelved_offload_time seconds, which makes the unshelve action faster
+# during that period; the periodic task then removes the instance from
+# the hypervisor after shelved_offload_time passes.
+#
+# Possible values:
+#
+# * 0: Instance will be immediately offloaded after being
+#      shelved.
+# * Any value < 0: An instance will never offload.
+# * Any positive integer in seconds: The instance will exist for
+#   the specified number of seconds before being offloaded.
+#  (integer value)
+#shelved_offload_time = 0
+
+#
+# Interval for retrying failed instance file deletes.
+#
+# This option depends on 'maximum_instance_delete_attempts'.
+# This option specifies how often to retry deletes whereas
+# 'maximum_instance_delete_attempts' specifies the maximum number
+# of retry attempts that can be made.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#
+# Related options:
+#
+# * ``maximum_instance_delete_attempts`` from instance_cleaning_opts
+#   group.
+#  (integer value)
+#instance_delete_interval = 300
+
+#
+# Interval (in seconds) between block device allocation retries on
+# failures.
+#
+# This option allows the user to specify the time interval between
+# consecutive retries. The 'block_device_allocate_retries' option
+# specifies the maximum number of retries.
+#
+# Possible values:
+#
+# * 0: Disables the option.
+# * Any positive integer in seconds enables the option.
+#
+# Related options:
+#
+# * ``block_device_allocate_retries`` in compute_manager_opts group.
+#  (integer value)
+# Minimum value: 0
+block_device_allocate_retries_interval = 10
+
+#
+# Interval between sending the scheduler a list of current instance
+# UUIDs to verify that its view of instances is in sync with nova.
+#
+# If the CONF option 'scheduler_tracks_instance_changes' is False,
+# the sync calls will not be made, so changing this option will have
+# no effect.
+#
+# If the out of sync situations are not very common, this interval
+# can be increased to lower the number of RPC messages being sent.
+# Likewise, if sync issues turn out to be a problem, the interval
+# can be lowered to check more frequently.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#
+# Related options:
+#
+# * This option has no impact if ``scheduler_tracks_instance_changes``
+#   is set to False.
+#  (integer value)
+#scheduler_instance_sync_interval = 120
+
+#
+# Interval for updating compute resources.
+#
+# This option specifies how often the update_available_resources
+# periodic task should run. A number less than 0 means to disable the
+# task completely. Leaving this at the default of 0 will cause this to
+# run at the default periodic interval. Setting it to any positive
+# value will cause it to run at approximately that number of seconds.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#  (integer value)
+#update_resources_interval = 0
+
+#
+# Time interval after which an instance is hard rebooted
+# automatically.
+#
+# When doing a soft reboot, it is possible that a guest kernel is
+# completely hung in a way that causes the soft reboot task
+# to not ever finish. Setting this option to a time period in seconds
+# will automatically hard reboot an instance if it has been stuck
+# in a rebooting state longer than N seconds.
+#
+# Possible values:
+#
+# * 0: Disables the option (default).
+# * Any positive integer in seconds: Enables the option.
+#  (integer value)
+# Minimum value: 0
+#reboot_timeout = 0
+
+#
+# Maximum time in seconds that an instance can take to build.
+#
+# If this timer expires, instance status will be changed to ERROR.
+# Enabling this option will make sure an instance will not be stuck
+# in BUILD state for a longer period.
+#
+# Possible values:
+#
+# * 0: Disables the option (default)
+# * Any positive integer in seconds: Enables the option.
+#  (integer value)
+# Minimum value: 0
+#instance_build_timeout = 0
+
+#
+# Interval to wait before un-rescuing an instance stuck in RESCUE.
+#
+# Possible values:
+#
+# * 0: Disables the option (default)
+# * Any positive integer in seconds: Enables the option.
+#  (integer value)
+# Minimum value: 0
+#rescue_timeout = 0
+
+#
+# Automatically confirm resizes after N seconds.
+#
+# Resize functionality will save the existing server before resizing.
+# After the resize completes, the user is asked to confirm the resize.
+# The user has the opportunity to either confirm or revert all
+# changes. Confirm resize removes the original server and changes
+# server status from resized to active. Setting this option to a time
+# period (in seconds) will automatically confirm the resize if the
+# server is in resized state longer than that time.
+#
+# Possible values:
+#
+# * 0: Disables the option (default)
+# * Any positive integer in seconds: Enables the option.
+#  (integer value)
+# Minimum value: 0
+#resize_confirm_window = 0
+
+#
+# Total time to wait in seconds for an instance to perform a clean
+# shutdown.
+#
+# It determines the overall period (in seconds) a VM is allowed to
+# perform a clean shutdown. During stop, rescue, shelve, and rebuild
+# operations, configuring this option gives the VM a chance to perform
+# a controlled shutdown before the instance is powered off.
+# The default timeout is 60 seconds.
+#
+# The timeout value can be overridden on a per-image basis by means of
+# os_shutdown_timeout, an image metadata setting that allows different
+# types of operating systems to specify how much time they need to
+# shut down cleanly.
+#
+# Possible values:
+#
+# * Any positive integer in seconds (default value is 60).
+#  (integer value)
+# Minimum value: 1
+#shutdown_timeout = 60
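+# Example (illustrative value only): give guests three minutes to
+# shut down cleanly before they are powered off:
+#
+#   shutdown_timeout = 180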
+
+#
+# The compute service periodically checks for instances that have been
+# deleted in the database but remain running on the compute node. The
+# above option enables action to be taken when such instances are
+# identified.
+#
+# Possible values:
+#
+# * reap: Powers down the instances and deletes them (default)
+# * log: Logs a warning message about deletion of the resource
+# * shutdown: Powers down the instances and marks them as
+#   non-bootable, which can later be used for debugging/analysis
+# * noop: Takes no action
+#
+# Related options:
+#
+# * running_deleted_instance_poll_interval
+# * running_deleted_instance_timeout
+#  (string value)
+#running_deleted_instance_action = reap
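+# Example (illustrative only): a cautious deployment might log such
+# instances instead of reaping them, together with the related
+# interval options:
+#
+#   running_deleted_instance_action = log
+#   running_deleted_instance_poll_interval = 1800
+#   running_deleted_instance_timeout = 0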
+
+#
+# Time interval in seconds to wait between runs of the cleanup action.
+# If set to 0, the above check is disabled. If
+# "running_deleted_instance_action" is set to "log" or "reap", a value
+# greater than 0 must be set.
+#
+# Possible values:
+#
+# * Any positive integer in seconds enables the option.
+# * 0: Disables the option.
+# * 1800: Default value.
+#
+# Related options:
+#
+# * running_deleted_instance_action
+#  (integer value)
+#running_deleted_instance_poll_interval = 1800
+
+#
+# Time interval in seconds to wait for the instances that have
+# been marked as deleted in database to be eligible for cleanup.
+#
+# Possible values:
+#
+# * Any positive integer in seconds (default is 0).
+#
+# Related options:
+#
+# * "running_deleted_instance_action"
+#  (integer value)
+#running_deleted_instance_timeout = 0
+
+#
+# The number of times to attempt to reap an instance's files.
+#
+# This option specifies the maximum number of retry attempts
+# that can be made.
+#
+# Possible values:
+#
+# * Any positive integer defines how many attempts are made.
+# * Any value <=0 means no delete attempts occur, but you should use
+#   ``instance_delete_interval`` to disable the delete attempts.
+#
+# Related options:
+# * ``instance_delete_interval`` in interval_opts group can be used
+#   to disable this option.
+#  (integer value)
+#maximum_instance_delete_attempts = 5
+
+#
+# Sets the scope of the check for unique instance names.
+#
+# The default doesn't check for unique names. If a scope for the name
+# check is set, launching a new instance or updating an existing
+# instance with a duplicate name results in an ``InstanceExists``
+# error. The uniqueness check is case-insensitive. Setting this option
+# can improve usability for end users, as they don't have to
+# distinguish instances with the same name by their IDs.
+#
+# Possible values:
+#
+# * '': An empty value means that no uniqueness check is done and
+#   duplicate names are possible.
+# * "project": The instance name check is done only for instances
+#   within the same project.
+# * "global": The instance name check is done for all instances
+#   regardless of the project.
+#  (string value)
+#osapi_compute_unique_server_name_scope =
+
+#
+# Enable new nova-compute services on this host automatically.
+#
+# When a new nova-compute service starts up, it gets registered in the
+# database as an enabled service. Sometimes it can be useful to
+# register new compute services in a disabled state and then enable
+# them at a later point in time. This option only sets this behavior
+# for nova-compute services; it does not auto-disable other services
+# like nova-conductor, nova-scheduler, nova-consoleauth, or
+# nova-osapi_compute.
+#
+# Possible values:
+#
+# * ``True``: Each new compute service is enabled as soon as it
+#   registers itself.
+# * ``False``: Compute services must be enabled via an os-services
+#   REST API call or with the CLI with
+#   ``nova service-enable <hostname> <binary>``; otherwise they are
+#   not ready to use.
+#  (boolean value)
+#enable_new_services = true
+
+#
+# Template string to be used to generate instance names.
+#
+# This template controls the creation of the database name of an
+# instance. This is *not* the display name you enter when creating an
+# instance (via Horizon or the CLI). For a new deployment it is
+# advisable to change the default value (which uses the database
+# autoincrement) to another value which makes use of the attributes of
+# an instance, like ``instance-%(uuid)s``. If you already have
+# instances in your deployment when you change this, your deployment
+# will break.
+#
+# Possible values:
+#
+# * A string which either uses the instance database ID (like the
+#   default)
+# * A string with a list of named database columns, for example
+#   ``%(id)d``, ``%(uuid)s``, or ``%(hostname)s``.
+#
+# Related options:
+#
+# * not to be confused with: ``multi_instance_display_name_template``
+#  (string value)
+#instance_name_template = instance-%08x
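+# Example (illustrative only): for a new deployment, the UUID-based
+# template suggested above avoids relying on the database
+# autoincrement:
+#
+#   instance_name_template = instance-%(uuid)s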
+
+#
+# Number of times to retry live-migration before failing.
+#
+# Possible values:
+#
+# * If == -1, try until out of hosts (default)
+# * If == 0, only try once, no retries
+# * Integer greater than 0
+#  (integer value)
+# Minimum value: -1
+#migrate_max_retries = -1
+
+#
+# Configuration drive format
+#
+# Configuration drive format that will contain metadata attached to
+# the instance when it boots.
+#
+# Possible values:
+#
+# * iso9660: A file system image standard that is widely supported
+#   across operating systems. NOTE: Mind the libvirt bug
+#   (https://bugs.launchpad.net/nova/+bug/1246201) - if your
+#   hypervisor driver is libvirt and you want live migration to work
+#   without shared storage, then use VFAT.
+# * vfat: For legacy reasons, you can configure the configuration
+#   drive to use VFAT format instead of ISO 9660.
+#
+# Related options:
+#
+# * This option is meaningful when one of the following alternatives
+#   occurs:
+#   1. the force_config_drive option is set to 'true'
+#   2. the REST API call to create the instance contains an enable
+#      flag for the config drive option
+#   3. the image used to create the instance requires a config drive,
+#      as defined by the img_config_drive property for that image.
+# * A compute node running the Hyper-V hypervisor can be configured to
+#   attach the configuration drive as a CD drive by setting the
+#   config_drive_cdrom option in the hyperv section to true.
+#  (string value)
+#config_drive_format = iso9660
+config_drive_format=vfat
+
+#
+# Force injection to take place on a config drive
+#
+# When this option is set to true, configuration drive functionality
+# is force-enabled by default; otherwise, users can still enable
+# configuration drives via the REST API or image metadata properties.
+#
+# Possible values:
+#
+# * True: Force the use of a configuration drive regardless of the
+#         user's input in the REST API call.
+# * False: Do not force use of a configuration drive. Config drives
+#          can still be enabled via the REST API or image metadata
+#          properties.
+#
+# Related options:
+#
+# * Use the 'mkisofs_cmd' flag to set the path where you install the
+#   genisoimage program. If genisoimage is in the same path as the
+#   nova-compute service, you do not need to set this flag.
+# * To use a configuration drive with Hyper-V, you must set the
+#   'mkisofs_cmd' value to the full path to an mkisofs.exe
+#   installation. Additionally, you must set the qemu_img_cmd value in
+#   the hyperv configuration section to the full path to a qemu-img
+#   command installation.
+#  (boolean value)
+#force_config_drive = false
+force_config_drive=true
+
+#
+# Name or path of the tool used for ISO image creation
+#
+# Use the mkisofs_cmd flag to set the path where you install the
+# genisoimage program. If genisoimage is on the system path, you do
+# not need to change the default value.
+#
+# To use a configuration drive with Hyper-V, you must set the
+# mkisofs_cmd value to the full path to an mkisofs.exe installation.
+# Additionally, you must set the qemu_img_cmd value in the hyperv
+# configuration section to the full path to a qemu-img command
+# installation.
+#
+# Possible values:
+#
+# * Name of the ISO image creator program, in case it is in the same
+#   directory as the nova-compute service
+# * Path to the ISO image creator program
+#
+# Related options:
+#
+# * This option is meaningful when config drives are enabled.
+# * To use a configuration drive with Hyper-V, you must set the
+#   qemu_img_cmd value in the hyperv configuration section to the full
+#   path to a qemu-img command installation.
+#  (string value)
+#mkisofs_cmd = genisoimage
+
+# DEPRECATED: The driver to use for database access (string value)
+# This option is deprecated for removal since 13.0.0.
+# Its value may be silently ignored in the future.
+#db_driver = nova.db
+
+# DEPRECATED:
+# Default flavor to use for the EC2 API only.
+# The Nova API does not support a default flavor.
+#  (string value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: The EC2 API is deprecated.
+#default_flavor = m1.small
+
+#
+# The IP address which the host is using to connect to the management
+# network.
+#
+# Possible values:
+#
+# * String with a valid IP address. Default is the IPv4 address of
+#   this host.
+#
+# Related options:
+#
+# * metadata_host
+# * my_block_storage_ip
+# * routing_source_ip
+# * vpn_ip
+#  (string value)
+#my_ip = <host_ipv4>
+my_ip=10.167.4.53
+
+#
+# The IP address which is used to connect to the block storage
+# network.
+#
+# Possible values:
+#
+# * String with valid IP address. Default is IP address of this host.
+#
+# Related options:
+#
+# * my_ip - if my_block_storage_ip is not set, then the my_ip value
+#   is used.
+#  (string value)
+#my_block_storage_ip = $my_ip
+
+#
+# Hostname, FQDN or IP address of this host.
+#
+# Used as:
+#
+# * the oslo.messaging queue name for nova-compute worker
+# * we use this value for the binding_host sent to neutron. This means
+#   that if you use a neutron agent, it should have the same value for
+#   host.
+# * cinder host attachment information
+#
+# Must be valid within an AMQP key.
+#
+# Possible values:
+#
+# * String with a hostname, FQDN or IP address. Default is the
+#   hostname of this host.
+#  (string value)
+#host = <current_hostname>
+
+# DEPRECATED:
+# This option is a list of full paths to one or more configuration
+# files for dhcpbridge. In most cases the default path of
+# '/etc/nova/nova-dhcpbridge.conf' should be sufficient, but if you
+# have special needs for configuring dhcpbridge, you can change or add
+# to this list.
+#
+# Possible values
+#
+# * A list of strings, where each string is the full path to a
+#   dhcpbridge configuration file.
+#  (multi valued)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#dhcpbridge_flagfile = /etc/nova/nova.conf
+
+# DEPRECATED:
+# The location where the network configuration files will be kept. The
+# default is the 'networks' directory off of the location where nova's
+# Python module is installed.
+#
+# Possible values
+#
+# * A string containing the full path to the desired configuration
+#   directory
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#networks_path = $state_path/networks
+
+# DEPRECATED:
+# This is the name of the network interface for public IP addresses.
+# The default is 'eth0'.
+#
+# Possible values:
+#
+# * Any string representing a network interface name
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#public_interface = eth0
+
+# DEPRECATED:
+# The location of the binary nova-dhcpbridge. By default it is the
+# binary named 'nova-dhcpbridge' that is installed with all the other
+# nova binaries.
+#
+# Possible values:
+#
+# * Any string representing the full path to the binary for dhcpbridge
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#dhcpbridge = $bindir/nova-dhcpbridge
+
+# DEPRECATED:
+# The public IP address of the network host.
+#
+# This is used when creating an SNAT rule.
+#
+# Possible values:
+#
+# * Any valid IP address
+#
+# Related options:
+#
+# * ``force_snat_range``
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#routing_source_ip = $my_ip
+
+# DEPRECATED:
+# The lifetime of a DHCP lease, in seconds. The default is 86400 (one
+# day).
+#
+# Possible values:
+#
+# * Any positive integer value.
+#  (integer value)
+# Minimum value: 1
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#dhcp_lease_time = 86400
+
+# DEPRECATED:
+# Despite the singular form of the name of this option, it is actually
+# a list of zero or more server addresses that dnsmasq will use for
+# DNS nameservers. If this is not empty, dnsmasq will not read
+# /etc/resolv.conf, but will only use the servers specified in this
+# option. If the option use_network_dns_servers is True, the dns1 and
+# dns2 servers from the network will be appended to this list, and
+# will be used as DNS servers, too.
+#
+# Possible values:
+#
+# * A list of strings, where each string is either an IP address or
+#   an FQDN.
+#
+# Related options:
+#
+# * ``use_network_dns_servers``
+#  (multi valued)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#dns_server =
+
+# DEPRECATED:
+# When this option is set to True, the dns1 and dns2 servers for the
+# network specified by the user on boot will be used for DNS, as well
+# as any specified in the `dns_server` option.
+#
+# Related options:
+#
+# * ``dns_server``
+#  (boolean value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#use_network_dns_servers = false
+
+# DEPRECATED:
+# This option is a list of zero or more IP address ranges in your
+# network's DMZ that should be accepted.
+#
+# Possible values:
+#
+# * A list of strings, each of which should be a valid CIDR.
+#  (list value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#dmz_cidr =
+
+# DEPRECATED:
+# This is a list of zero or more IP ranges that traffic from the
+# `routing_source_ip` will be SNATted to. If the list is empty, then
+# no SNAT rules are created.
+#
+# Possible values:
+#
+# * A list of strings, each of which should be a valid CIDR.
+#
+# Related options:
+#
+# * ``routing_source_ip``
+#  (multi valued)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#force_snat_range =
+
+# DEPRECATED:
+# The path to the custom dnsmasq configuration file, if any.
+#
+# Possible values:
+#
+# * The full path to the configuration file, or an empty string if
+#   there is no custom dnsmasq configuration file.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#dnsmasq_config_file =
+
+# DEPRECATED:
+# This is the class used as the ethernet device driver for linuxnet
+# bridge operations. The default value should be all you need for most
+# cases, but if you wish to use a customized class, set this option to
+# the full dot-separated import path for that class.
+#
+# Possible values:
+#
+# * Any string representing a dot-separated class path that Nova can
+#   import.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver
+
+# DEPRECATED:
+# The name of the Open vSwitch bridge that is used with linuxnet when
+# connecting with Open vSwitch.
+#
+# Possible values:
+#
+# * Any string representing a valid bridge name.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#linuxnet_ovs_integration_bridge = br-int
+
+#
+# When True, when a device starts up, and upon binding floating IP
+# addresses, ARP messages will be sent to ensure that the ARP caches
+# on the compute hosts are up-to-date.
+#
+# Related options:
+#
+# * ``send_arp_for_ha_count``
+#  (boolean value)
+#send_arp_for_ha = false
+
+#
+# When ARP messages are configured to be sent, they will be sent with
+# the count set to the value of this option. If this is set to zero,
+# no ARP messages will be sent.
+#
+# Possible values:
+#
+# * Any integer greater than or equal to 0
+#
+# Related options:
+#
+# * ``send_arp_for_ha``
+#  (integer value)
+#send_arp_for_ha_count = 3
+
+# DEPRECATED:
+# When set to True, only the first NIC of a VM will get its default
+# gateway from the DHCP server.
+#  (boolean value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#use_single_default_gateway = false
+
+# DEPRECATED:
+# One or more interfaces that bridges can forward traffic to. If any
+# of the items in this list is the special keyword 'all', then all
+# traffic will be forwarded.
+#
+# Possible values:
+#
+# * A list of zero or more interface names, or the word 'all'.
+#  (multi valued)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#forward_bridge_interface = all
+
+#
+# This option determines the IP address for the network metadata API
+# server.
+#
+# This is really the client side of the metadata host equation: it
+# allows nova-network to find the metadata server when using default
+# multi-host networking.
+#
+# Possible values:
+#
+# * Any valid IP address. The default is the address of the Nova API
+#   server.
+#
+# Related options:
+#
+# * ``metadata_port``
+#  (string value)
+#metadata_host = $my_ip
+
+# DEPRECATED:
+# This option determines the port used for the metadata API server.
+#
+# Related options:
+#
+# * ``metadata_host``
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#metadata_port = 8775
+
+# DEPRECATED:
+# This expression, if defined, will select any matching iptables rules
+# and place them at the top when applying metadata changes to the
+# rules.
+#
+# Possible values:
+#
+# * Any string representing a valid regular expression, or an empty
+#   string
+#
+# Related options:
+#
+# * ``iptables_bottom_regex``
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#iptables_top_regex =
+
+# DEPRECATED:
+# This expression, if defined, will select any matching iptables rules
+# and place them at the bottom when applying metadata changes to the
+# rules.
+#
+# Possible values:
+#
+# * Any string representing a valid regular expression, or an empty
+#   string
+#
+# Related options:
+#
+# * iptables_top_regex
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#iptables_bottom_regex =
+
+# DEPRECATED:
+# By default, packets that do not pass the firewall are DROPped. In
+# many cases, though, an operator may find it more useful to change
+# this from DROP to REJECT, so that the user issuing those packets has
+# a better idea of what's going on, or to LOGDROP in order to record
+# the blocked traffic before DROPping.
+#
+# Possible values:
+#
+# * A string representing an iptables chain. The default is DROP.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#iptables_drop_action = DROP
+
+# DEPRECATED:
+# This option represents the period of time, in seconds, that the
+# ovs_vsctl calls will wait for a response from the database before
+# timing out. A setting of 0 means that the utility should wait
+# forever for a response.
+#
+# Possible values:
+#
+# * Any positive integer if a limited timeout is desired, or zero if
+#   the calls should wait forever for a response.
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ovs_vsctl_timeout = 120
+
+# DEPRECATED:
+# This option is used mainly in testing to avoid calls to the
+# underlying network utilities.
+#  (boolean value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#fake_network = false
+
+# DEPRECATED:
+# This option determines the number of times to retry ebtables
+# commands before giving up. The minimum number of retries is 1.
+#
+# Possible values:
+#
+# * Any positive integer
+#
+# Related options:
+#
+# * ``ebtables_retry_interval``
+#  (integer value)
+# Minimum value: 1
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ebtables_exec_attempts = 3
+
+# DEPRECATED:
+# This option determines the time, in seconds, that the system will
+# sleep in between ebtables retries. Note that each successive retry
+# waits a multiple of this value: for example, if this is set to the
+# default of 1.0 seconds and ebtables_exec_attempts is 4, the system
+# will sleep for 1 * 1.0 seconds after the first failure, 2 * 1.0
+# seconds after the second failure, and 3 * 1.0 seconds after the
+# third failure.
+#
+# Possible values:
+#
+# * Any non-negative float or integer. Setting this to zero will
+#   result in no waiting between attempts.
+#
+# Related options:
+#
+# * ebtables_exec_attempts
+#  (floating point value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ebtables_retry_interval = 1.0
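The linear back-off described above (after failure k, sleep k * ebtables_retry_interval seconds) can be sketched as follows. This is an illustration only, not nova's implementation; `run_cmd` and `sleep_schedule` are hypothetical names.

```python
import time

def ebtables_with_retry(run_cmd, exec_attempts=3, retry_interval=1.0):
    """Retry `run_cmd` (a hypothetical callable returning True on success)
    with the linear back-off described above: after failure k, sleep
    k * retry_interval seconds."""
    for attempt in range(1, exec_attempts + 1):
        if run_cmd():
            return True
        if attempt < exec_attempts:  # no point sleeping after the last try
            time.sleep(attempt * retry_interval)
    return False

def sleep_schedule(exec_attempts, retry_interval):
    """Sleep times between attempts; for 4 attempts at 1.0s this matches
    the example above: 1.0, 2.0 and 3.0 seconds."""
    return [k * retry_interval for k in range(1, exec_attempts)]
```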
+
+# DEPRECATED:
+# Enable neutron as the backend for networking.
+#
+# Determine whether to use Neutron or Nova Network as the back end.
+# Set to true
+# to use neutron.
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#use_neutron = true
+
+#
+# This option determines whether the network setup information is
+# injected into
+# the VM before it is booted. While it was originally designed to be
+# used only
+# by nova-network, it is also used by the vmware and xenapi virt
+# drivers to
+# control whether network information is injected into a VM. The
+# libvirt virt
+# driver also uses it when we use config_drive to configure network to
+# control
+# whether network information is injected into a VM.
+#  (boolean value)
+#flat_injected = false
+
+# DEPRECATED:
+# This option determines the bridge used for simple network interfaces
+# when no
+# bridge is specified in the VM creation request.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+# * Any string representing a valid network bridge, such as 'br100'
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#flat_network_bridge = <None>
+
+# DEPRECATED:
+# This is the address of the DNS server for a simple network. If this
+# option is
+# not specified, the default of '8.8.4.4' is used.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+# * Any valid IP address.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#flat_network_dns = 8.8.4.4
+
+# DEPRECATED:
+# This option is the name of the virtual interface of the VM on which
+# the bridge
+# will be built. While it was originally designed to be used only by
+# nova-network, it is also used by libvirt for the bridge interface
+# name.
+#
+# Possible values:
+#
+# * Any valid virtual interface name, such as 'eth0'
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#flat_interface = <None>
+
+# DEPRECATED:
+# This is the VLAN number used for private networks. Note that when
+# creating
+# the networks, if the specified number has already been assigned,
+# nova-network
+# will increment this number until it finds an available VLAN.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment. It also will be ignored if the
+# configuration option
+# for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager'.
+#
+# Possible values:
+#
+# * Any integer between 1 and 4094. Values outside of that range will
+# raise a
+#   ValueError exception.
+#
+# Related options:
+#
+# * ``network_manager``
+# * ``use_neutron``
+#  (integer value)
+# Minimum value: 1
+# Maximum value: 4094
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#vlan_start = 100
+
+# DEPRECATED:
+# This option is the name of the virtual interface of the VM on which
+# the VLAN
+# bridge will be built. While it was originally designed to be used
+# only by
+# nova-network, it is also used by libvirt and xenapi for the bridge
+# interface
+# name.
+#
+# Please note that this setting will be ignored in nova-network if the
+# configuration option for `network_manager` is not set to the default
+# of
+# 'nova.network.manager.VlanManager'.
+#
+# Possible values:
+#
+# * Any valid virtual interface name, such as 'eth0'
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options. While
+# this option has an effect when using neutron, it incorrectly
+# overrides the value
+# provided by neutron and should therefore not be used.
+#vlan_interface = <None>
+
+# DEPRECATED:
+# This option represents the number of networks to create if not
+# explicitly
+# specified when the network is created. The only time this is used is
+# if a CIDR
+# is specified, but an explicit network_size is not. In that case, the
+# subnets
+# are created by dividing the IP address space of the CIDR by
+# num_networks. The
+# resulting subnet sizes cannot be larger than the configuration
+# option
+# `network_size`; in that event, they are reduced to `network_size`,
+# and a
+# warning is logged.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+# * Any positive integer is technically valid, although there are
+# practical
+#   limits based upon available IP address space and virtual
+# interfaces.
+#
+# Related options:
+#
+# * ``use_neutron``
+# * ``network_size``
+#  (integer value)
+# Minimum value: 1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#num_networks = 1
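The subnet derivation described above (divide the CIDR's address space by num_networks, capped at network_size) can be sketched with the standard ipaddress module; `split_fixed_range` is a hypothetical helper, not nova code, and assumes power-of-two subnet sizes.

```python
import ipaddress

def split_fixed_range(cidr, num_networks=1, network_size=256):
    """Divide the CIDR's address space by num_networks, capping each
    subnet at network_size addresses (the case where nova would log a
    warning). Assumes the resulting size is a power of two."""
    net = ipaddress.ip_network(cidr)
    size = min(net.num_addresses // num_networks, network_size)
    # Prefix length that yields `size` addresses per subnet.
    new_prefix = net.max_prefixlen - (size.bit_length() - 1)
    return [str(s) for s in list(net.subnets(new_prefix=new_prefix))[:num_networks]]
```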
+
+# DEPRECATED:
+# This option is no longer used since the /os-cloudpipe API was
+# removed in the
+# 16.0.0 Pike release. This is the public IP address for the cloudpipe
+# VPN
+# servers. It defaults to the IP address of the host.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment. It also will be ignored if the
+# configuration option
+# for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager'.
+#
+# Possible values:
+#
+# * Any valid IP address. The default is ``$my_ip``, the IP address of
+# the host.
+#
+# Related options:
+#
+# * ``network_manager``
+# * ``use_neutron``
+# * ``vpn_start``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#vpn_ip = $my_ip
+
+# DEPRECATED:
+# This is the port number to use as the first VPN port for private
+# networks.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment. It also will be ignored if the
+# configuration option
+# for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager', or if you specify a value for
+# the 'vpn_start'
+# parameter when creating a network.
+#
+# Possible values:
+#
+# * Any integer representing a valid port number. The default is 1000.
+#
+# Related options:
+#
+# * ``use_neutron``
+# * ``vpn_ip``
+# * ``network_manager``
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#vpn_start = 1000
+
+# DEPRECATED:
+# This option determines the number of addresses in each private
+# subnet.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+# * Any positive integer that is less than or equal to the available
+# network
+#   size. Note that if you are creating multiple networks, they must
+# all fit in
+#   the available IP address space. The default is 256.
+#
+# Related options:
+#
+# * ``use_neutron``
+# * ``num_networks``
+#  (integer value)
+# Minimum value: 1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#network_size = 256
+
+# DEPRECATED:
+# This option determines the fixed IPv6 address block when creating a
+# network.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+# * Any valid IPv6 CIDR
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#fixed_range_v6 = fd00::/48
+
+# DEPRECATED:
+# This is the default IPv4 gateway. It is used only in the testing
+# suite.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+# * Any valid IP address.
+#
+# Related options:
+#
+# * ``use_neutron``
+# * ``gateway_v6``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#gateway = <None>
+
+# DEPRECATED:
+# This is the default IPv6 gateway. It is used only in the testing
+# suite.
+#
+# Please note that this option is only used when using nova-network
+# instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+# * Any valid IP address.
+#
+# Related options:
+#
+# * ``use_neutron``
+# * ``gateway``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#gateway_v6 = <None>
+
+# DEPRECATED:
+# This option represents the number of IP addresses to reserve at the
+# top of the
+# address range for VPN clients. It also will be ignored if the
+# configuration
+# option for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager'.
+#
+# Possible values:
+#
+# * Any integer, 0 or greater.
+#
+# Related options:
+#
+# * ``use_neutron``
+# * ``network_manager``
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#cnt_vpn_clients = 0
+
+# DEPRECATED:
+# This is the number of seconds to wait before disassociating a
+# deallocated fixed
+# IP address. This is only used with the nova-network service, and has
+# no effect
+# when using neutron for networking.
+#
+# Possible values:
+#
+# * Any integer, zero or greater.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#fixed_ip_disassociate_timeout = 600
+
+# DEPRECATED:
+# This option determines how many times nova-network will attempt to
+# create a
+# unique MAC address before giving up and raising a
+# `VirtualInterfaceMacAddressException` error.
+#
+# Possible values:
+#
+# * Any positive integer. The default is 5.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (integer value)
+# Minimum value: 1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#create_unique_mac_address_attempts = 5
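The retry behaviour described above can be sketched as a bounded loop; `generate_unique_mac` is a hypothetical illustration, and RuntimeError stands in for nova's `VirtualInterfaceMacAddressException`.

```python
import random

def generate_unique_mac(existing, attempts=5):
    """Try up to `attempts` random MAC addresses before giving up,
    mirroring the retry behaviour described above. `existing` is a set
    of MACs already in use."""
    for _ in range(attempts):
        # fa:16:3e is the OUI commonly used by OpenStack for generated MACs.
        mac = "fa:16:3e:%02x:%02x:%02x" % tuple(
            random.randint(0, 255) for _ in range(3))
        if mac not in existing:
            return mac
    raise RuntimeError("no unique MAC after %d attempts" % attempts)
```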
+
+# DEPRECATED:
+# Determines whether unused gateway devices, both VLAN and bridge, are
+# deleted if
+# the network is in nova-network VLAN mode and is multi-hosted.
+#
+# Related options:
+#
+# * ``use_neutron``
+# * ``vpn_ip``
+# * ``fake_network``
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#teardown_unused_network_gateway = false
+
+# DEPRECATED:
+# When this option is True, a call is made to release the DHCP for the
+# instance
+# when that instance is terminated.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+force_dhcp_release = true
+
+# DEPRECATED:
+# When this option is True, whenever a DNS entry must be updated, a
+# fanout cast
+# message is sent to all network hosts to update their DNS entries in
+# multi-host
+# mode.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#update_dns_entries = false
+
+# DEPRECATED:
+# This option determines the time, in seconds, to wait between
+# refreshing DNS
+# entries for the network.
+#
+# Possible values:
+#
+# * A positive integer
+# * -1 to disable updates
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (integer value)
+# Minimum value: -1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#dns_update_periodic_interval = -1
+
+# DEPRECATED:
+# This option allows you to specify the domain for the DHCP server.
+#
+# Possible values:
+#
+# * Any string that is a valid domain name.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#dhcp_domain = novalocal
+dhcp_domain = novalocal
+
+# DEPRECATED:
+# This option allows you to specify the L3 management library to be
+# used.
+#
+# Possible values:
+#
+# * Any dot-separated string that represents the import path to an L3
+# networking
+#   library.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#l3_lib = nova.network.l3.LinuxNetL3
+
+# DEPRECATED:
+# THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK.
+#
+# If True in multi_host mode, all compute hosts share the same DHCP
+# address. The
+# same IP address used for DHCP will be added on each nova-network
+# node which is
+# only visible to the VMs on the same host.
+#
+# The use of this configuration has been deprecated and may be removed
+# in any
+# release after Mitaka. It is recommended that instead of relying on
+# this option,
+# an explicit value should be passed to 'create_networks()' as a
+# keyword argument
+# with the name 'share_address'.
+#  (boolean value)
+# This option is deprecated for removal since 2014.2.
+# Its value may be silently ignored in the future.
+#share_dhcp_address = false
+
+# DEPRECATED:
+# URL for LDAP server which will store DNS entries
+#
+# Possible values:
+#
+# * A valid LDAP URL representing the server
+#  (uri value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_url = ldap://ldap.example.com:389
+
+# DEPRECATED: Bind user for LDAP server (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_user = uid=admin,ou=people,dc=example,dc=org
+
+# DEPRECATED: Bind user's password for LDAP server (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_password = password
+
+# DEPRECATED:
+# Hostmaster for LDAP DNS driver Start of Authority
+#
+# Possible values:
+#
+# * Any valid string representing LDAP DNS hostmaster.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_soa_hostmaster = hostmaster@example.org
+
+# DEPRECATED:
+# DNS Servers for LDAP DNS driver
+#
+# Possible values:
+#
+# * A valid URL representing a DNS server
+#  (multi valued)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_servers = dns.example.org
+
+# DEPRECATED:
+# Base distinguished name for the LDAP search query
+#
+# This option helps to decide where to look up the host in LDAP.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_base_dn = ou=hosts,dc=example,dc=org
+
+# DEPRECATED:
+# Refresh interval (in seconds) for LDAP DNS driver Start of Authority
+#
+# The time interval a secondary/slave DNS server waits before
+# requesting the
+# primary DNS server's current SOA record. If the records are
+# different, the
+# secondary DNS server will request a zone transfer from the primary.
+#
+# NOTE: Lower values would cause more traffic.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_soa_refresh = 1800
+
+# DEPRECATED:
+# Retry interval (in seconds) for LDAP DNS driver Start of Authority
+#
+# The time interval a secondary/slave DNS server should wait if an
+# attempt to transfer the zone failed during the previous refresh
+# interval.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_soa_retry = 3600
+
+# DEPRECATED:
+# Expiry interval (in seconds) for LDAP DNS driver Start of Authority
+#
+# The time interval a secondary/slave DNS server holds the
+# information before it is no longer considered authoritative.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_soa_expiry = 86400
+
+# DEPRECATED:
+# Minimum interval (in seconds) for LDAP DNS driver Start of Authority
+#
+# This is the minimum time-to-live that applies to all resource
+# records in the zone file. It tells other servers how long they
+# should keep the data in cache.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ldap_dns_soa_minimum = 7200
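The four ldap_dns_soa_* timers above occupy fixed positions in a DNS SOA record. A sketch of how they fit together; `soa_record` is a hypothetical helper, with defaults mirroring this file's defaults.

```python
def soa_record(primary, hostmaster, serial, refresh=1800, retry=3600,
               expiry=86400, minimum=7200):
    """Assemble a zone-file SOA record value from the timers above; the
    '@' in the hostmaster address becomes '.' in zone-file notation."""
    return "%s %s %d %d %d %d %d" % (primary, hostmaster.replace("@", "."),
                                     serial, refresh, retry, expiry, minimum)
```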
+
+# DEPRECATED:
+# Default value for multi_host in networks.
+#
+# nova-network service can operate in a multi-host or single-host
+# mode.
+# In multi-host mode each compute node runs a copy of nova-network and
+# the
+# instances on that compute node use the compute node as a gateway to
+# the
+# Internet. In single-host mode, by contrast, a central server runs the
+# nova-network
+# service. All compute nodes forward traffic from the instances to the
+# cloud controller which then forwards traffic to the Internet.
+#
+# If this option is set to true, some RPC network calls will be sent
+# directly
+# to the host.
+#
+# Note that this option is only used when using nova-network instead
+# of
+# Neutron in your deployment.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#multi_host = false
+
+# DEPRECATED:
+# Driver to use for network creation.
+#
+# The network driver initializes (creates bridges and so on) only when the
+# first VM lands on a host node. All network managers configure the
+# network using network drivers. The driver is not tied to any
+# particular
+# network manager.
+#
+# The default Linux driver implements VLANs, bridges, and iptables
+# rules
+# using Linux utilities.
+#
+# Note that this option is only used when using nova-network instead
+# of Neutron in your deployment.
+#
+# Related options:
+#
+# * ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#network_driver = nova.network.linux_net
+
+# DEPRECATED:
+# Firewall driver to use with ``nova-network`` service.
+#
+# This option only applies when using the ``nova-network`` service.
+# When using
+# another networking service, such as Neutron, this should be set to
+# the
+# ``nova.virt.firewall.NoopFirewallDriver``.
+#
+# Possible values:
+#
+# * ``nova.virt.firewall.IptablesFirewallDriver``
+# * ``nova.virt.firewall.NoopFirewallDriver``
+# * ``nova.virt.libvirt.firewall.IptablesFirewallDriver``
+# * [...]
+#
+# Related options:
+#
+# * ``use_neutron``: This must be set to ``False`` to enable ``nova-
+# network``
+#   networking
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+firewall_driver = nova.virt.firewall.NoopFirewallDriver
+
+# DEPRECATED:
+# Determine whether to allow network traffic from the same network.
+#
+# When set to true, hosts on the same subnet are not filtered and are
+# allowed
+# to pass all types of traffic between them. On a flat network, this
+# allows
+# unfiltered communication between all instances from all projects. With VLAN
+# networking, this allows access between instances within the same
+# project.
+#
+# This option only applies when using the ``nova-network`` service.
+# When using
+# another networking service, such as Neutron, security groups or
+# other
+# approaches should be used.
+#
+# Possible values:
+#
+# * True: Network traffic should be allowed to pass between all instances
+# on the
+#   same network, regardless of their tenant and security policies
+# * False: Network traffic should not be allowed to pass between
+# instances unless
+#   it is unblocked in a security group
+#
+# Related options:
+#
+# * ``use_neutron``: This must be set to ``False`` to enable ``nova-
+# network``
+#   networking
+# * ``firewall_driver``: This must be set to
+#   ``nova.virt.libvirt.firewall.IptablesFirewallDriver`` to ensure
+# the
+#   libvirt firewall driver is enabled.
+#  (boolean value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#allow_same_net_traffic = true
+
+# DEPRECATED:
+# Default pool for floating IPs.
+#
+# This option specifies the default floating IP pool for allocating
+# floating IPs.
+#
+# While allocating a floating IP, users can optionally pass in the
+# name of the
+# pool they want to allocate from, otherwise it will be pulled from
+# the
+# default pool.
+#
+# If this option is not set, then 'nova' is used as the default
+# floating IP pool.
+#
+# Possible values:
+#
+# * Any string representing a floating IP pool name
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This option was used for two purposes: to set the floating IP pool
+# name for
+# nova-network and to do the same for neutron. nova-network is
+# deprecated, as are
+# any related configuration options. Users of neutron, meanwhile,
+# should use the
+# 'default_floating_pool' option in the '[neutron]' group.
+#default_floating_pool = nova
+
+# DEPRECATED:
+# Autoassigning floating IP to VM
+#
+# When set to True, a floating IP is automatically allocated and
+# associated with the VM upon creation.
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#auto_assign_floating_ip = false
+
+# DEPRECATED:
+# Full class name for the DNS Manager for floating IPs.
+#
+# This option specifies the class of the driver that provides
+# functionality
+# to manage DNS entries associated with floating IPs.
+#
+# When a user adds a DNS entry for a specified domain to a floating
+# IP,
+# nova will add a DNS entry using the specified floating DNS driver.
+# When a floating IP is deallocated, its DNS entry will automatically
+# be deleted.
+#
+# Possible values:
+#
+# * Full Python path to the class to be used
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver
+
+# DEPRECATED:
+# Full class name for the DNS Manager for instance IPs.
+#
+# This option specifies the class of the driver that provides
+# functionality
+# to manage DNS entries for instances.
+#
+# On instance creation, nova will add DNS entries for the instance
+# name and
+# id, using the specified instance DNS driver and domain. On instance
+# deletion,
+# nova will remove the DNS entries.
+#
+# Possible values:
+#
+# * Full Python path to the class to be used
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver
+
+# DEPRECATED:
+# If specified, Nova checks if the availability_zone of every instance
+# matches
+# what the database says the availability_zone should be for the
+# specified
+# dns_domain.
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#instance_dns_domain =
+
+# DEPRECATED:
+# Assign IPv6 and IPv4 addresses when creating instances.
+#
+# Related options:
+#
+# * use_neutron: this only works with nova-network.
+#  (boolean value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#use_ipv6 = false
+
+# DEPRECATED:
+# Abstracts out IPv6 address generation to pluggable backends.
+#
+# nova-network can be put into dual-stack mode, so that it uses
+# both IPv4 and IPv6 addresses. In dual-stack mode, by default,
+# instances
+# acquire IPv6 global unicast addresses with the help of the stateless
+# address
+# auto-configuration mechanism.
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+# * use_ipv6: this option only works if ipv6 is enabled for nova-
+# network.
+#  (string value)
+# Possible values:
+# rfc2462 - <No description provided>
+# account_identifier - <No description provided>
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#ipv6_backend = rfc2462
+
+# DEPRECATED:
+# This option is used to enable or disable quota checking for tenant
+# networks.
+#
+# Related options:
+#
+# * quota_networks
+#  (boolean value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# CRUD operations on tenant networks are only available when using
+# nova-network
+# and nova-network is itself deprecated.
+#enable_network_quota = false
+
+# DEPRECATED:
+# This option controls the number of private networks that can be
+# created per
+# project (or per tenant).
+#
+# Related options:
+#
+# * enable_network_quota
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# CRUD operations on tenant networks are only available when using
+# nova-network
+# and nova-network is itself deprecated.
+#quota_networks = 3
+
+#
+# Filename that will be used for storing websocket frames received
+# and sent by a proxy service (like VNC, spice, serial) running on
+# this host.
+# If this is not set, no recording will be done.
+#  (string value)
+#record = <None>
+
+# Run as a background process. (boolean value)
+#daemon = false
+
+# Disallow non-encrypted connections. (boolean value)
+#ssl_only = false
+
+# Set to True if source host is addressed with IPv6. (boolean value)
+#source_is_ipv6 = false
+
+# Path to SSL certificate file. (string value)
+#cert = self.pem
+
+# SSL key file (if separate from cert). (string value)
+#key = <None>
+
+#
+# Path to directory with content which will be served by a web server.
+#  (string value)
+#web = /usr/share/spice-html5
+
+#
+# The directory where the Nova python modules are installed.
+#
+# This directory is used to store template files for networking and
+# remote
+# console access. It is also the default path for other config options
+# which
+# need to persist Nova internal data. It is very unlikely that you
+# need to
+# change this option from its default value.
+#
+# Possible values:
+#
+# * The full path to a directory.
+#
+# Related options:
+#
+# * ``state_path``
+#  (string value)
+#pybasedir = /usr/lib/python2.7/dist-packages
+
+#
+# The directory where the Nova binaries are installed.
+#
+# This option is only relevant if the networking capabilities from
+# Nova are
+# used (see services below). Nova's networking capabilities are
+# targeted to
+# be fully replaced by Neutron in the future. It is very unlikely that
+# you need
+# to change this option from its default value.
+#
+# Possible values:
+#
+# * The full path to a directory.
+#  (string value)
+#bindir = /usr/local/bin
+
+#
+# The top-level directory for maintaining Nova's state.
+#
+# This directory is used to store Nova's internal state. It is used by
+# a
+# variety of other config options which derive from this. In some
+# scenarios
+# (for example migrations) it makes sense to use a storage location
+# which is
+# shared between multiple compute hosts (for example via NFS). Unless
+# the
+# option ``instances_path`` gets overwritten, this directory can grow
+# very
+# large.
+#
+# Possible values:
+#
+# * The full path to a directory. Defaults to value provided in
+# ``pybasedir``.
+#  (string value)
+state_path = /var/lib/nova
+
+#
+# Number of seconds indicating how frequently the state of services on
+# a
+# given hypervisor is reported. Nova needs to know this to determine
+# the
+# overall health of the deployment.
+#
+# Related Options:
+#
+# * service_down_time
+#   report_interval should be less than service_down_time. If
+# service_down_time
+#   is less than report_interval, services will routinely be
+# considered down,
+#   because they report in too rarely.
+#  (integer value)
+#report_interval = 10
+report_interval = 60
+
+#
+# Maximum time in seconds since last check-in for up service
+#
+# Each compute node periodically updates its database status based
+# on the
+# specified report interval. If the compute node hasn't updated the
+# status
+# for more than service_down_time, then the compute node is considered
+# down.
+#
+# Related Options:
+#
+# * report_interval (service_down_time should not be less than
+# report_interval)
+#  (integer value)
+service_down_time = 90
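The relationship spelled out above (report_interval must stay strictly below service_down_time, or services flap into the "down" state) can be captured in a short sanity check. This is an illustrative sketch, not part of Nova; the function name is made up:

```python
def liveness_settings_ok(report_interval: int, service_down_time: int) -> bool:
    """A service is considered down once it has not checked in for
    service_down_time seconds, so it must report strictly more often."""
    return 0 < report_interval < service_down_time

# Values set in this file: report_interval = 60, service_down_time = 90
print(liveness_settings_ok(60, 90))   # healthy configuration
print(liveness_settings_ok(90, 60))   # services would routinely flap down
```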
+
+#
+# Enable periodic tasks.
+#
+# If set to true, this option allows services to periodically run
+# tasks
+# on the manager.
+#
+# In case of running multiple schedulers or conductors you may want to
+# run
+# periodic tasks on only one host - in this case disable this option
+# for all
+# hosts but one.
+#  (boolean value)
+#periodic_enable = true
+
+#
+# Number of seconds to randomly delay when starting the periodic task
+# scheduler to reduce stampeding.
+#
+# When compute workers are restarted in unison across a cluster,
+# they all end up running the periodic tasks at the same time
+# causing problems for the external services. To mitigate this
+# behavior, periodic_fuzzy_delay option allows you to introduce a
+# random initial delay when starting the periodic task scheduler.
+#
+# Possible Values:
+#
+# * Any positive integer (in seconds)
+# * 0 : disable the random delay
+#  (integer value)
+# Minimum value: 0
+#periodic_fuzzy_delay = 60
+
+# List of APIs to be enabled by default. (list value)
+enabled_apis = osapi_compute,metadata
+
+#
+# List of APIs with enabled SSL.
+#
+# Nova provides SSL support for the API servers. enabled_ssl_apis
+# option
+# allows configuring the SSL support.
+#  (list value)
+#enabled_ssl_apis =
+
+#
+# IP address on which the OpenStack API will listen.
+#
+# The OpenStack API service listens on this IP address for incoming
+# requests.
+#  (string value)
+#osapi_compute_listen = 0.0.0.0
+
+#
+# Port on which the OpenStack API will listen.
+#
+# The OpenStack API service listens on this port number for incoming
+# requests.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#osapi_compute_listen_port = 8774
+
+#
+# Number of workers for OpenStack API service. The default will be the
+# number
+# of CPUs available.
+#
+# OpenStack API services can be configured to run as multi-process
+# (workers).
+# This overcomes the problem of reduction in throughput when API
+# request
+# concurrency increases. OpenStack API service will run in the
+# specified
+# number of processes.
+#
+# Possible Values:
+#
+# * Any positive integer
+# * None (default value)
+#  (integer value)
+# Minimum value: 1
+#osapi_compute_workers = <None>
+
+#
+# IP address on which the metadata API will listen.
+#
+# The metadata API service listens on this IP address for incoming
+# requests.
+#  (string value)
+#metadata_listen = 0.0.0.0
+
+#
+# Port on which the metadata API will listen.
+#
+# The metadata API service listens on this port number for incoming
+# requests.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#metadata_listen_port = 8775
+
+#
+# Number of workers for metadata service. If not specified the number
+# of
+# available CPUs will be used.
+#
+# The metadata service can be configured to run as multi-process
+# (workers).
+# This overcomes the problem of reduction in throughput when API
+# request
+# concurrency increases. The metadata service will run in the
+# specified
+# number of processes.
+#
+# Possible Values:
+#
+# * Any positive integer
+# * None (default value)
+#  (integer value)
+# Minimum value: 1
+#metadata_workers = <None>
+
+# Full class name for the Manager for network (string value)
+# Possible values:
+# nova.network.manager.FlatManager - <No description provided>
+# nova.network.manager.FlatDHCPManager - <No description provided>
+# nova.network.manager.VlanManager - <No description provided>
+#network_manager = nova.network.manager.VlanManager
+
+#
+# This option specifies the driver to be used for the servicegroup
+# service.
+#
+# ServiceGroup API in nova enables checking status of a compute node.
+# When a
+# compute worker running the nova-compute daemon starts, it calls the
+# join API
+# to join the compute group. Services like nova scheduler can query
+# the
+# ServiceGroup API to check if a node is alive. Internally, the
+# ServiceGroup
+# client driver automatically updates the compute worker status. There
+# are
+# multiple backend implementations for this service: Database
+# ServiceGroup driver
+# and Memcache ServiceGroup driver.
+#
+# Possible Values:
+#
+#     * db : Database ServiceGroup driver
+#     * mc : Memcache ServiceGroup driver
+#
+# Related Options:
+#
+#     * service_down_time (maximum time since last check-in for up
+# service)
+#  (string value)
+# Possible values:
+# db - <No description provided>
+# mc - <No description provided>
+#servicegroup_driver = db
+
+#
+# From oslo.service.periodic_task
+#
+
+# Some periodic tasks can be run in a separate process. Should we run
+# them here? (boolean value)
+#run_external_periodic_tasks = true
+
+#
+# From oslo.service.service
+#
+
+# Enable eventlet backdoor.  Acceptable values are 0, <port>, and
+# <start>:<end>, where 0 results in listening on a random tcp port
+# number; <port> results in listening on the specified port number
+# (and not enabling backdoor if that port is in use); and
+# <start>:<end> results in listening on the smallest unused port
+# number within the specified range of port numbers.  The chosen port
+# is displayed in the service's log file. (string value)
+#backdoor_port = <None>
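The three accepted `backdoor_port` forms described above (0 for a random port, a single port number, or a `<start>:<end>` range) can be sketched as a small parser. This is an assumption-laden illustration of the documented grammar, not oslo.service's actual parsing code:

```python
def parse_backdoor_port(value: str):
    """Interpret a backdoor_port value per the documented formats:
    '0' -> random port, '<port>' -> that port,
    '<start>:<end>' -> smallest unused port in the range."""
    if ":" in value:
        start, end = (int(p) for p in value.split(":", 1))
        return ("range", start, end)
    port = int(value)
    return ("random",) if port == 0 else ("fixed", port)

print(parse_backdoor_port("0"))          # random TCP port
print(parse_backdoor_port("4444"))       # fixed port 4444
print(parse_backdoor_port("8000:9000"))  # first free port in 8000..9000
```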
+
+# Enable eventlet backdoor, using the provided path as a unix socket
+# that can receive connections. This option is mutually exclusive with
+# 'backdoor_port' in that only one should be provided. If both are
+# provided then the existence of this option overrides the usage of
+# that option. (string value)
+#backdoor_socket = <None>
+
+# Enables or disables logging values of all registered options when
+# starting a service (at DEBUG level). (boolean value)
+#log_options = true
+
+# Specify a timeout after which a gracefully shutdown server will
+# exit. Zero value means endless wait. (integer value)
+#graceful_shutdown_timeout = 60
+#
 # From oslo.log
 #
 
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
 
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
 # Note: This option can be changed without restarting.
 # Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
 
 # Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
 #log_date_format = %Y-%m-%d %H:%M:%S
 
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
 
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
+# (Optional) The base directory used for relative log_file paths.
+# This option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
 
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and Linux
-# platform is used. This option is ignored if log_config_append is set. (boolean
-# value)
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
 #watch_log_file = false
 
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append is
-# set. (boolean value)
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_syslog = false
 
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol which
-# includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages. This option is ignored if log_config_append is set.
+# (boolean value)
 #use_journal = false
 
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
 
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_json = false
 
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_stderr = false
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
-# Format string to use for log messages when context is undefined. (string
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
 # value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Additional data to append to log message when logging level for the message is
-# DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string value)
 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
 # Defines the format string for %(user_identity)s that is used in
 # logging_context_format_string. (string value)
 #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
 #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string value)
+# The format for an instance that is passed with the log message.
+# (string value)
 #instance_format = "[instance: %(uuid)s] "
 
-# The format for an instance UUID that is passed with the log message. (string
-# value)
+# The format for an instance UUID that is passed with the log message.
+# (string value)
 #instance_uuid_format = "[instance: %(uuid)s] "
 
 # Interval, number of seconds, of log rate limiting. (integer value)
 #rate_limit_interval = 0
 
-# Maximum number of logged messages per rate_limit_interval. (integer value)
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
 #rate_limit_burst = 0
 
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
-# empty string. Logs with level greater or equal to rate_limit_except_level are
-# not filtered. An empty string means that all levels are filtered. (string
-# value)
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
 #rate_limit_except_level = CRITICAL
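The three rate-limiting options above interact as follows: records at or above rate_limit_except_level always pass, everything else is capped at rate_limit_burst records per rate_limit_interval seconds, and an interval of 0 disables limiting. A behavioral sketch (this is not oslo.log's implementation; class and method names are invented for illustration):

```python
import time

LEVELS = {"DEBUG": 10, "INFO": 20, "WARNING": 30, "ERROR": 40, "CRITICAL": 50}

class RateLimitSketch:
    """Allow at most `burst` records per `interval` seconds, except
    records at or above `except_level`, which always pass. An empty
    except_level means no level is exempt (all levels are filtered)."""
    def __init__(self, interval=0, burst=0, except_level="CRITICAL"):
        self.interval, self.burst = interval, burst
        self.threshold = LEVELS.get(except_level, float("inf"))
        self.window_start, self.count = 0.0, 0

    def allow(self, level, now=None):
        if self.interval <= 0:                 # 0 disables rate limiting
            return True
        if LEVELS[level] >= self.threshold:    # exempt levels always pass
            return True
        now = time.monotonic() if now is None else now
        if now - self.window_start >= self.interval:
            self.window_start, self.count = now, 0
        self.count += 1
        return self.count <= self.burst
```

For example, with interval=30 and burst=2, a third INFO record inside the window is dropped while an ERROR record still passes when except_level=ERROR.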
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
-
 #
 # From oslo.messaging
 #
@@ -119,14 +3450,17 @@
 # Size of RPC connection pool. (integer value)
 #rpc_conn_pool_size = 30
 
-# The pool size limit for connections expiration policy (integer value)
+# The pool size limit for connections expiration policy (integer
+# value)
 #conn_pool_min_size = 2
 
-# The time-to-live in sec of idle connections in the pool (integer value)
+# The time-to-live in sec of idle connections in the pool (integer
+# value)
 #conn_pool_ttl = 1200
 
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
+# ZeroMQ bind address. Should be a wildcard (*), an ethernet
+# interface, or IP. The "host" option should point or resolve to this
+# address. (string value)
 #rpc_zmq_bind_address = *
 
 # MatchMaker driver. (string value)
@@ -139,51 +3473,54 @@
 # Number of ZeroMQ contexts, defaults to 1. (integer value)
 #rpc_zmq_contexts = 1
 
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
+# Maximum number of ingress messages to locally buffer per topic.
+# Default is unlimited. (integer value)
 #rpc_zmq_topic_backlog = <None>
 
 # Directory for holding IPC sockets. (string value)
 #rpc_zmq_ipc_dir = /var/run/openstack
 
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
+# Name of this node. Must be a valid hostname, FQDN, or IP address.
+# Must match "host" option, if running Nova. (string value)
 #rpc_zmq_host = localhost
 
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
+# Number of seconds to wait before all pending messages will be sent
+# after closing a socket. The default value of -1 specifies an
+# infinite linger period. The value of 0 specifies no linger period.
+# Pending messages shall be discarded immediately when the socket is
+# closed. Positive values specify an upper bound for the linger
+# period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
 #zmq_linger = -1
 
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
+# The default number of seconds that poll should wait. Poll raises a
+# timeout exception when the timeout expires. (integer value)
 #rpc_poll_timeout = 1
 
-# Expiration timeout in seconds of a name service record about existing target (
-# < 0 means no timeout). (integer value)
+# Expiration timeout in seconds of a name service record about
+# an existing target (< 0 means no timeout). (integer value)
 #zmq_target_expire = 300
 
-# Update period in seconds of a name service record about existing target.
-# (integer value)
+# Update period in seconds of a name service record about existing
+# target. (integer value)
 #zmq_target_update = 180
 
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
+# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
+# (boolean value)
 #use_pub_sub = false
 
 # Use ROUTER remote proxy. (boolean value)
 #use_router_proxy = false
 
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
+# This option makes direct connections dynamic or static. It makes
+# sense only with use_router_proxy=False which means to use direct
+# connections for direct message types (ignored otherwise). (boolean
+# value)
 #use_dynamic_connections = false
 
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
+# How many additional connections to a host will be made for failover
+# reasons. This option applies only in dynamic connections mode.
+# (integer value)
 #zmq_failover_connections = 2
 
 # Minimal port number for random ports range. (port value)
@@ -196,8 +3533,8 @@
 # Maximum value: 65536
 #rpc_zmq_max_port = 65536
 
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
+# Number of retries to find free port number before fail with
+# ZMQBindError. (integer value)
 #rpc_zmq_bind_port_retries = 100
 
 # Default serialization mechanism for serializing/deserializing
@@ -207,77 +3544,83 @@
 # msgpack - <No description provided>
 #rpc_zmq_serialization = json
 
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
+# This option configures round-robin mode in zmq socket. True means
+# not keeping a queue when the server side disconnects. False means to
+# keep the queue and messages even if the server is disconnected; when
+# the server reappears, all accumulated messages are sent to it.
+# (boolean value)
 #zmq_immediate = true
 
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
+# Enable/disable TCP keepalive (KA) mechanism. The default value of -1
+# (or any other negative value) means to skip any overrides and leave
+# it to OS default; 0 and 1 (or any other positive value) mean to
+# disable and enable the option respectively. (integer value)
 #zmq_tcp_keepalive = -1
 
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
+# The duration between two keepalive transmissions in idle condition.
+# The unit is platform dependent, for example, seconds in Linux,
+# milliseconds in Windows etc. The default value of -1 (or any other
+# negative value and 0) means to skip any overrides and leave it to OS
+# default. (integer value)
 #zmq_tcp_keepalive_idle = -1
 
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
+# The number of retransmissions to be carried out before declaring
+# that remote end is not available. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
 #zmq_tcp_keepalive_cnt = -1
 
 # The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
+# acknowledgement to the previous keepalive transmission is not
+# received. The unit is platform dependent, for example, seconds in
+# Linux, milliseconds in Windows etc. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
 #zmq_tcp_keepalive_intvl = -1
 
-# Maximum number of (green) threads to work concurrently. (integer value)
+# Maximum number of (green) threads to work concurrently. (integer
+# value)
 #rpc_thread_pool_size = 100
 
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
+# Expiration timeout in seconds of a sent/received message after which
+# it is not tracked anymore by a client/server. (integer value)
 #rpc_message_ttl = 300
 
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
+# Wait for message acknowledgements from receivers. This mechanism
+# works only via proxy without PUB/SUB. (boolean value)
 #rpc_use_acks = false
 
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
+# Number of seconds to wait for an ack from a cast/call. After each
+# retry attempt this timeout is multiplied by some specified
+# multiplier. (integer value)
 #rpc_ack_timeout_base = 15
 
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
+# Number to multiply base ack timeout by after each retry attempt.
+# (integer value)
 #rpc_ack_timeout_multiplier = 2
 
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
+# Default number of message sending attempts in case any problems
+# occur: a positive value N means at most N retries, 0 means no
+# retries, None or -1 (or any other negative values) mean to retry
+# forever. This option is used only if acknowledgments are enabled.
+# (integer value)
 #rpc_retry_attempts = 3
 
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
+# List of publisher hosts SubConsumer can subscribe on. This option
+# has higher priority than the default publishers list taken from the
+# matchmaker. (list value)
 #subscribe_on =
 
-# Size of executor thread pool when executor is threading or eventlet. (integer
-# value)
+# Size of executor thread pool when executor is threading or eventlet.
+# (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
 #executor_thread_pool_size = 64
 
 # Seconds to wait for a response from a call. (integer value)
 #rpc_response_timeout = 60
 
-# The network address and optional user credentials for connecting to the
-# messaging backend, in URL format. The expected format is:
+# The network address and optional user credentials for connecting to
+# the messaging backend, in URL format. The expected format is:
 #
 # driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
 #
@@ -288,84 +3631,373 @@
 # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
 # (string value)
 #transport_url = <None>
-
-# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
-# include amqp and zmq. (string value)
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
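The transport_url set here follows the format documented above: `driver://[user:pass@]host:port[,userN:passN@hostN:portN]/virtual_host`. In practice oslo.messaging parses this itself; the sketch below only illustrates how the comma-separated host list and the virtual host decompose, with a hand-rolled splitter (not the real parser):

```python
def split_transport_url(url: str):
    """Illustrative decomposition of a transport_url into
    (driver, [(user, host, port), ...], virtual_host)."""
    driver, rest = url.split("://", 1)
    netloc, _, vhost = rest.partition("/")   # vhost keeps any leading '/'
    hosts = []
    for part in netloc.split(","):
        creds, _, hostport = part.rpartition("@")
        user, _, _password = creds.partition(":")
        host, _, port = hostport.partition(":")
        hosts.append((user, host, int(port)))
    return driver, hosts, vhost

d, hosts, vhost = split_transport_url(
    "rabbit://u:p@10.0.0.1:5672,u:p@10.0.0.2:5672//openstack")
print(d, len(hosts), vhost)
```

Note the double slash in the URL above: the RabbitMQ virtual host is itself named `/openstack`, so one slash separates the host list from the vhost and the second is part of the vhost name.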
+
+# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
+# drivers include amqp and zmq. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
 #rpc_backend = rabbit
 
-# The default exchange under which topics are scoped. May be overridden by an
-# exchange name specified in the transport_url option. (string value)
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the transport_url
+# option. (string value)
 #control_exchange = openstack
 
-#
-# From oslo.service.periodic_task
-#
-
-# Some periodic tasks can be run in a separate process. Should we run them here?
-# (boolean value)
-#run_external_periodic_tasks = true
-
-#
-# From oslo.service.service
-#
-
-# Enable eventlet backdoor.  Acceptable values are 0, <port>, and <start>:<end>,
-# where 0 results in listening on a random tcp port number; <port> results in
-# listening on the specified port number (and not enabling backdoor if that port
-# is in use); and <start>:<end> results in listening on the smallest unused port
-# number within the specified range of port numbers.  The chosen port is
-# displayed in the service's log file. (string value)
-#backdoor_port = <None>
-
-# Enable eventlet backdoor, using the provided path as a unix socket that can
-# receive connections. This option is mutually exclusive with 'backdoor_port' in
-# that only one should be provided. If both are provided then the existence of
-# this option overrides the usage of that option. (string value)
-#backdoor_socket = <None>
-
-# Enables or disables logging values of all registered options when starting a
-# service (at DEBUG level). (boolean value)
-#log_options = true
-
-# Specify a timeout after which a gracefully shutdown server will exit. Zero
-# value means endless wait. (integer value)
-#graceful_shutdown_timeout = 60
-
-
-[cors]
-
-#
-# From oslo.middleware
-#
-
-# Indicate whether this resource may be shared with the domain received in the
-# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
-# slash. Example: https://horizon.example.com (list value)
-#allowed_origin = <None>
-
-# Indicate that the actual request can include user credentials (boolean value)
-#allow_credentials = true
-
-# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
-# Headers. (list value)
-#expose_headers =
-
-# Maximum cache age of CORS preflight requests. (integer value)
-#max_age = 3600
-
-# Indicate which methods can be used during the actual request. (list value)
-#allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH
-
-# Indicate which header field names may be used during the actual request. (list
-# value)
-#allow_headers =
-
-
-[database]
-
+
+[api]
+#
+# Options under this group are used to define Nova API.
+
+#
+# From nova.conf
+#
+
+#
+# This determines the strategy to use for authentication: keystone or
+# noauth2.
+# 'noauth2' is designed for testing only, as it does no actual
+# credential
+# checking. 'noauth2' provides administrative credentials only if
+# 'admin' is
+# specified as the username.
+#  (string value)
+# Possible values:
+# keystone - <No description provided>
+# noauth2 - <No description provided>
+auth_strategy = keystone
+
+#
+# When True, the 'X-Forwarded-For' header is treated as the canonical
+# remote
+# address. When False (the default), the 'remote_address' header is
+# used.
+#
+# You should only enable this if you have an HTML sanitizing proxy.
+#  (boolean value)
+#use_forwarded_for = false
+
+#
+# When gathering the existing metadata for a config drive, the
+# EC2-style
+# metadata is returned for all versions that don't appear in this
+# option.
+# As of the Liberty release, the available versions are:
+#
+# * 1.0
+# * 2007-01-19
+# * 2007-03-01
+# * 2007-08-29
+# * 2007-10-10
+# * 2007-12-15
+# * 2008-02-01
+# * 2008-09-01
+# * 2009-04-04
+#
+# The option is in the format of a single string, with each version
+# separated
+# by a space.
+#
+# Possible values:
+#
+# * Any string that represents zero or more versions, separated by
+# spaces.
+#  (string value)
+#config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
+
+#
+# A list of vendordata providers.
+#
+# vendordata providers are how deployers can provide metadata via
+# configdrive
+# and metadata that is specific to their deployment. There are
+# currently two
+# supported providers: StaticJSON and DynamicJSON.
+#
+# StaticJSON reads a JSON file configured by the flag
+# vendordata_jsonfile_path
+# and places the JSON from that file into vendor_data.json and
+# vendor_data2.json.
+#
+# DynamicJSON is configured via the vendordata_dynamic_targets flag,
+# which is
+# documented separately. For each of the endpoints specified in that
+# flag, a
+# section is added to the vendor_data2.json.
+#
+# For more information on the requirements for implementing a
+# vendordata
+# dynamic endpoint, please see the vendordata.rst file in the nova
+# developer
+# reference.
+#
+# Possible values:
+#
+# * A list of vendordata providers, with StaticJSON and DynamicJSON
+# being
+#   current options.
+#
+# Related options:
+#
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_ssl_certfile
+# * vendordata_dynamic_connect_timeout
+# * vendordata_dynamic_read_timeout
+# * vendordata_dynamic_failure_fatal
+#  (list value)
+#vendordata_providers = StaticJSON
+
+#
+# A list of targets for the dynamic vendordata provider. These targets
+# are of
+# the form <name>@<url>.
+#
+# The dynamic vendordata provider collects metadata by contacting
+# external REST
+# services and querying them for information about the instance. This
+# behaviour
+# is documented in the vendordata.rst file in the nova developer
+# reference.
+#  (list value)
+#vendordata_dynamic_targets =
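Each entry in this list takes the form `<name>@<url>`. A hedged parsing sketch (not Nova's implementation; the target string below is a made-up example):

```python
# Illustrative parser for the <name>@<url> target format described above.
def parse_dynamic_target(target: str):
    """Split a dynamic vendordata target into its name and URL parts."""
    name, sep, url = target.partition("@")
    if not sep or not name or not url:
        raise ValueError("expected <name>@<url>, got %r" % (target,))
    return name, url
```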
+
+#
+# Path to an optional certificate file or CA bundle to verify dynamic
+# vendordata REST services ssl certificates against.
+#
+# Possible values:
+#
+# * An empty string, or a path to a valid certificate file
+#
+# Related options:
+#
+# * vendordata_providers
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_connect_timeout
+# * vendordata_dynamic_read_timeout
+# * vendordata_dynamic_failure_fatal
+#  (string value)
+#vendordata_dynamic_ssl_certfile =
+
+#
+# Maximum wait time for an external REST service to connect.
+#
+# Possible values:
+#
+# * Any integer with a value greater than three (the TCP packet
+# retransmission
+#   timeout). Note that instance start may be blocked during this wait
+# time,
+#   so this value should be kept small.
+#
+# Related options:
+#
+# * vendordata_providers
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_ssl_certfile
+# * vendordata_dynamic_read_timeout
+# * vendordata_dynamic_failure_fatal
+#  (integer value)
+# Minimum value: 3
+#vendordata_dynamic_connect_timeout = 5
+
+#
+# Maximum wait time for an external REST service to return data once
+# connected.
+#
+# Possible values:
+#
+# * Any integer. Note that instance start is blocked during this wait
+# time,
+#   so this value should be kept small.
+#
+# Related options:
+#
+# * vendordata_providers
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_ssl_certfile
+# * vendordata_dynamic_connect_timeout
+# * vendordata_dynamic_failure_fatal
+#  (integer value)
+# Minimum value: 0
+#vendordata_dynamic_read_timeout = 5
+
+#
+# Should failures to fetch dynamic vendordata be fatal to instance
+# boot?
+#
+# Related options:
+#
+# * vendordata_providers
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_ssl_certfile
+# * vendordata_dynamic_connect_timeout
+# * vendordata_dynamic_read_timeout
+#  (boolean value)
+#vendordata_dynamic_failure_fatal = false
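Taken together, the timeout, certificate, and failure options above shape the HTTP call made to each dynamic target. A hedged sketch of how they might map onto a typical HTTP client call; the function names and the kwargs dict shape are illustrative, not Nova's internals:

```python
def build_request_kwargs(connect_timeout=5, read_timeout=5, ssl_certfile=""):
    """Map the vendordata_dynamic_* options onto common HTTP-client kwargs."""
    kwargs = {"timeout": (connect_timeout, read_timeout)}
    if ssl_certfile:
        # CA bundle used to validate the REST service's certificate
        kwargs["verify"] = ssl_certfile
    return kwargs

def fetch_vendordata(call, failure_fatal=False):
    """Run `call`; re-raise on failure only when failure_fatal is True."""
    try:
        return call()
    except Exception:
        if failure_fatal:
            raise  # instance boot fails
        return {}  # boot proceeds with empty vendordata
```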
+
+#
+# This option is the time (in seconds) to cache metadata. When set to
+# 0,
+# metadata caching is disabled entirely; this is generally not
+# recommended for
+# performance reasons. Increasing this setting should improve response
+# times
+# of the metadata API when under heavy load. Higher values may
+# increase memory
+# usage, and result in longer times for host metadata changes to take
+# effect.
+#  (integer value)
+# Minimum value: 0
+#metadata_cache_expiration = 15
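A toy TTL cache illustrating the semantics described above, assuming (per the comment) that a value of 0 disables caching entirely; this is a sketch, not the metadata API's actual cache:

```python
import time

class TTLCache:
    """Minimal time-to-live cache; expiration 0 disables caching."""

    def __init__(self, expiration):
        self.expiration = expiration
        self._store = {}  # key -> (value, stored_at)

    def get(self, key, compute):
        if self.expiration > 0:
            hit = self._store.get(key)
            if hit and time.monotonic() - hit[1] < self.expiration:
                return hit[0]  # still fresh, skip recomputation
        value = compute()
        if self.expiration > 0:
            self._store[key] = (value, time.monotonic())
        return value
```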
+
+#
+# Cloud providers may store custom data in a vendor data file that will
+# then be
+# available to the instances via the metadata service, and to the
+# rendering of
+# config-drive. The default class for this, JsonFileVendorData, loads
+# this
+# information from a JSON file, whose path is configured by this
+# option. If
+# there is no path set by this option, the class returns an empty
+# dictionary.
+#
+# Possible values:
+#
+# * Any string representing the path to the data file, or an empty
+# string
+#     (default).
+#  (string value)
+#vendordata_jsonfile_path = <None>
+
+#
+# As a query can potentially return many thousands of items, you can
+# limit the
+# maximum number of items in a single response by setting this option.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/osapi_max_limit
+#max_limit = 1000
+
+#
+# This string is prepended to the normal URL that is returned in links
+# to the
+# OpenStack Compute API. If it is empty (the default), the URLs are
+# returned
+# unchanged.
+#
+# Possible values:
+#
+# * Any string, including an empty string (the default).
+#  (string value)
+# Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
+#compute_link_prefix = <None>
+
+#
+# This string is prepended to the normal URL that is returned in links
+# to
+# Glance resources. If it is empty (the default), the URLs are
+# returned
+# unchanged.
+#
+# Possible values:
+#
+# * Any string, including an empty string (the default).
+#  (string value)
+# Deprecated group/name - [DEFAULT]/osapi_glance_link_prefix
+#glance_link_prefix = <None>
+
+# DEPRECATED:
+# Operators can turn off the ability for a user to take snapshots of
+# their instances by setting this option to False. When disabled, any
+# attempt to take a snapshot will result in an HTTP 400 response
+# ("Bad Request").
+#  (boolean value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: This option disables the createImage server action API in a
+# non-discoverable way and is thus a barrier to interoperability.
+# Also, it is not used for other APIs that create snapshots like
+# shelve or createBackup. Disabling snapshots should be done via
+# policy if so desired.
+#allow_instance_snapshots = true
+
+# DEPRECATED:
+# This option is a list of all instance states for which network
+# address
+# information should not be returned from the API.
+#
+# Possible values:
+#
+#   A list of strings, where each string is a valid VM state, as
+# defined in
+#   nova/compute/vm_states.py. As of the Newton release, they are:
+#
+# * "active"
+# * "building"
+# * "paused"
+# * "suspended"
+# * "stopped"
+# * "rescued"
+# * "resized"
+# * "soft-delete"
+# * "deleted"
+# * "error"
+# * "shelved"
+# * "shelved_offloaded"
+#  (list value)
+# Deprecated group/name - [DEFAULT]/osapi_hide_server_address_states
+# This option is deprecated for removal since 17.0.0.
+# Its value may be silently ignored in the future.
+# Reason: This option hides the server address in the server
+# representation for the configured server states, which makes the
+# behavior of the GET server API controlled by this config option.
+# As a result, users cannot discover the API behavior across
+# different clouds, which leads to interoperability issues.
+#hide_server_address_states = building
+
+# The full path to the fping binary. (string value)
+#fping_path = /usr/sbin/fping
+
+#
+# When True, the TenantNetworkController will query the Neutron API to
+# get the
+# default networks to use.
+#
+# Related options:
+#
+# * neutron_default_tenant_id
+#  (boolean value)
+#use_neutron_default_nets = false
+
+#
+# Tenant ID for getting the default network from Neutron API (also
+# referred in
+# some places as the 'project ID') to use.
+#
+# Related options:
+#
+# * use_neutron_default_nets
+#  (string value)
+#neutron_default_tenant_id = default
+
+#
+# Enables returning of the instance password by the relevant server
+# API calls such as create, rebuild, evacuate, or rescue. If the
+# hypervisor does not support password injection, the returned
+# password will not be correct, so set this option to False in that
+# case.
+#  (boolean value)
+#enable_instance_password = true
+
+
+[api_database]
 #
 # From oslo.db
 #
@@ -377,30 +4009,30 @@
 # Deprecated group/name - [DEFAULT]/db_backend
 #backend = sqlalchemy
 
-# The SQLAlchemy connection string to use to connect to the database. (string
-# value)
+# The SQLAlchemy connection string to use to connect to the database.
+# (string value)
 # Deprecated group/name - [DEFAULT]/sql_connection
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
 #connection = <None>
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
+connection = mysql+pymysql://nova:opnfv_secret@10.167.4.23/nova_api?charset=utf8
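The connection value set above follows the standard SQLAlchemy URL form, `dialect+driver://user:password@host/dbname?options`. SQLAlchemy parses this itself; the stdlib sketch below only illustrates the anatomy of the string:

```python
from urllib.parse import urlsplit

# Decompose the SQLAlchemy-style URL set above (illustration only;
# SQLAlchemy does its own parsing in Nova).
url = urlsplit("mysql+pymysql://nova:opnfv_secret@10.167.4.23/nova_api?charset=utf8")
assert url.scheme == "mysql+pymysql"          # dialect+driver
assert url.username == "nova"                 # DB account
assert url.hostname == "10.167.4.23"          # DB host
assert url.path.lstrip("/") == "nova_api"     # database name
assert url.query == "charset=utf8"            # driver options
```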
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
 #slave_connection = <None>
 
-# The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
-# the server configuration, set this to no value. Example: mysql_sql_mode=
-# (string value)
+# The SQL mode to be used for MySQL sessions. This option, including
+# the default, overrides any server-set SQL mode. To use whatever SQL
+# mode is set by the server configuration, set this to no value.
+# Example: mysql_sql_mode= (string value)
 #mysql_sql_mode = TRADITIONAL
 
-# If True, transparently enables support for handling MySQL Cluster (NDB).
-# (boolean value)
+# If True, transparently enables support for handling MySQL Cluster
+# (NDB). (boolean value)
 #mysql_enable_ndb = false
 
-# Connections which have been present in the connection pool longer than this
-# number of seconds will be replaced with a new one the next time they are
-# checked out from the pool. (integer value)
+# Connections which have been present in the connection pool longer
+# than this number of seconds will be replaced with a new one the next
+# time they are checked out from the pool. (integer value)
 # Deprecated group/name - [DATABASE]/idle_timeout
 # Deprecated group/name - [database]/idle_timeout
 # Deprecated group/name - [DEFAULT]/sql_idle_timeout
@@ -408,35 +4040,41 @@
 # Deprecated group/name - [sql]/idle_timeout
 #connection_recycle_time = 3600
 
-# Minimum number of SQL connections to keep open in a pool. (integer value)
+# Minimum number of SQL connections to keep open in a pool. (integer
+# value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
 # Deprecated group/name - [DATABASE]/sql_min_pool_size
 #min_pool_size = 1
 
-# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
-# indicates no limit. (integer value)
+# Maximum number of SQL connections to keep open in a pool. Setting a
+# value of 0 indicates no limit. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
 #max_pool_size = 5
-
-# Maximum number of database connection retries during startup. Set to -1 to
-# specify an infinite retry count. (integer value)
+max_pool_size = 10
+
+# Maximum number of database connection retries during startup. Set to
+# -1 to specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
 #max_retries = 10
-
-# Interval between retries of opening a SQL connection. (integer value)
+max_retries = -1
+
+# Interval between retries of opening a SQL connection. (integer
+# value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
 # Deprecated group/name - [DATABASE]/reconnect_interval
 #retry_interval = 10
 
-# If set, use this value for max_overflow with SQLAlchemy. (integer value)
+# If set, use this value for max_overflow with SQLAlchemy. (integer
+# value)
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
 #max_overflow = 50
-
-# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
-# value)
+max_overflow = 30
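With the values set in this hunk, the effective connection ceiling per worker is `max_pool_size` plus `max_overflow`: SQLAlchemy allows the pool to temporarily exceed its persistent size by the overflow allowance. In numbers:

```python
max_pool_size = 10   # persistent connections kept in the pool (set above)
max_overflow = 30    # extra connections allowed beyond the pool size (set above)

# Hard ceiling on simultaneous connections this pool will open:
max_connections = max_pool_size + max_overflow
assert max_connections == 40
```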
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything.
+# (integer value)
 # Minimum value: 0
 # Maximum value: 100
 # Deprecated group/name - [DEFAULT]/sql_connection_debug
@@ -446,82 +4084,7426 @@
 # Deprecated group/name - [DEFAULT]/sql_connection_trace
 #connection_trace = false
 
-# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
+# If set, use this value for pool_timeout with SQLAlchemy. (integer
+# value)
 # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
 #pool_timeout = <None>
 
-# Enable the experimental use of database reconnect on connection lost. (boolean
-# value)
+# Enable the experimental use of database reconnect on connection
+# lost. (boolean value)
 #use_db_reconnect = false
 
 # Seconds between retries of a database transaction. (integer value)
 #db_retry_interval = 1
 
-# If True, increases the interval between retries of a database operation up to
-# db_max_retry_interval. (boolean value)
+# If True, increases the interval between retries of a database
+# operation up to db_max_retry_interval. (boolean value)
 #db_inc_retry_interval = true
 
-# If db_inc_retry_interval is set, the maximum seconds between retries of a
-# database operation. (integer value)
+# If db_inc_retry_interval is set, the maximum seconds between retries
+# of a database operation. (integer value)
 #db_max_retry_interval = 10
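`db_retry_interval`, `db_inc_retry_interval`, and `db_max_retry_interval` combine into a capped back-off between retries. One plausible doubling scheme, shown as an illustration rather than oslo.db's verbatim code:

```python
def retry_intervals(base=1, inc=True, cap=10, attempts=6):
    """Seconds to wait before each retry; doubles up to `cap` when inc is set."""
    out = []
    interval = base
    for _ in range(attempts):
        out.append(interval)
        if inc:
            interval = min(interval * 2, cap)  # grow, but never past the cap
    return out
```

With the defaults above this yields `[1, 2, 4, 8, 10, 10]`; with `db_inc_retry_interval = false` every wait stays at `db_retry_interval`.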
 
-# Maximum retries in case of connection error or deadlock error before error is
-# raised. Set to -1 to specify an infinite retry count. (integer value)
+# Maximum retries in case of connection error or deadlock error before
+# error is raised. Set to -1 to specify an infinite retry count.
+# (integer value)
 #db_max_retries = 20
 
 #
 # From oslo.db.concurrency
 #
 
-# Enable the experimental use of thread pooling for all DB API calls (boolean
-# value)
+# Enable the experimental use of thread pooling for all DB API calls
+# (boolean value)
 # Deprecated group/name - [DEFAULT]/dbapi_use_tpool
 #use_tpool = false
-
-
-[healthcheck]
-
+[barbican]
+#
+# From castellan.config
+#
+
+# Use this endpoint to connect to Barbican, for example:
+# "http://localhost:9311/" (string value)
+#barbican_endpoint = <None>
+
+# Version of the Barbican API, for example: "v1" (string value)
+#barbican_api_version = <None>
+
+# Use this endpoint to connect to Keystone (string value)
+# Deprecated group/name - [key_manager]/auth_url
+#auth_endpoint = http://localhost/identity/v3
+auth_endpoint = http://10.167.4.35:35357/v3
+
+# Number of seconds to wait before retrying poll for key creation completion
+# (integer value)
+#retry_delay = 1
+
+# Number of times to retry poll for key creation completion (integer value)
+#number_of_retries = 60
+
+# Specifies whether to verify TLS (https) requests. If False, the server's
+# certificate will not be validated (boolean value)
+#verify_ssl = true
+
+# Specifies the type of endpoint. Allowed values are: public, internal, and
+# admin (string value)
+# Possible values:
+# public - <No description provided>
+# internal - <No description provided>
+# admin - <No description provided>
+#barbican_endpoint_type = public
+
+
+[cache]
+
+#
+# From nova.conf
+#
+backend = oslo_cache.memcache_pool
+enabled = true
+memcache_servers=10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
+
+# Prefix for building the configuration dictionary for the cache
+# region. This should not need to be changed unless there is another
+# dogpile.cache region with the same configuration name. (string
+# value)
+#config_prefix = cache.oslo
+
+# Default TTL, in seconds, for any cached item in the dogpile.cache
+# region. This applies to any cached method that doesn't have an
+# explicit cache expiration time defined for it. (integer value)
+#expiration_time = 600
+
+# Cache backend module. For eventlet-based or environments with
+# hundreds of threaded servers, Memcache with pooling
+# (oslo_cache.memcache_pool) is recommended. For environments with
+# less than 100 threaded servers, Memcached (dogpile.cache.memcached)
+# or Redis (dogpile.cache.redis) is recommended. Test environments
+# with a single instance of the server can use the
+# dogpile.cache.memory backend. (string value)
+# Possible values:
+# oslo_cache.memcache_pool - <No description provided>
+# oslo_cache.dict - <No description provided>
+# oslo_cache.mongo - <No description provided>
+# oslo_cache.etcd3gw - <No description provided>
+# dogpile.cache.memcached - <No description provided>
+# dogpile.cache.pylibmc - <No description provided>
+# dogpile.cache.bmemcached - <No description provided>
+# dogpile.cache.dbm - <No description provided>
+# dogpile.cache.redis - <No description provided>
+# dogpile.cache.memory - <No description provided>
+# dogpile.cache.memory_pickle - <No description provided>
+# dogpile.cache.null - <No description provided>
+#backend = dogpile.cache.null
+
+# Arguments supplied to the backend module. Specify this option once
+# per argument to be passed to the dogpile.cache backend. Example
+# format: "<argname>:<value>". (multi valued)
+#backend_argument =
+
+# Proxy classes to import that will affect the way the dogpile.cache
+# backend functions. See the dogpile.cache documentation on changing-
+# backend-behavior. (list value)
+#proxies =
+
+# Global toggle for caching. (boolean value)
+#enabled = false
+
+# Extra debugging from the cache backend (cache keys,
+# get/set/delete/etc calls). This is only really useful if you need to
+# see the specific cache-backend get/set/delete calls with the
+# keys/values.  Typically this should be left set to false. (boolean
+# value)
+#debug_cache_backend = false
+
+# Memcache servers in the format of "host:port".
+# (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
+# (list value)
+#memcache_servers = localhost:11211
+
+# Number of seconds memcached server is considered dead before it is
+# tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool
+# backends only). (integer value)
+#memcache_dead_retry = 300
+
+# Timeout in seconds for every call to a server.
+# (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
+# (integer value)
+#memcache_socket_timeout = 3
+
+# Max total number of open connections to every memcached server.
+# (oslo_cache.memcache_pool backend only). (integer value)
+#memcache_pool_maxsize = 10
+
+# Number of seconds a connection to memcached is held unused in the
+# pool before it is closed. (oslo_cache.memcache_pool backend only).
+# (integer value)
+#memcache_pool_unused_timeout = 60
+
+# Number of seconds that an operation will wait to get a memcache
+# client connection. (integer value)
+#memcache_pool_connection_get_timeout = 10
+
+
+[cells]
+#
+# DEPRECATED: Cells options allow you to use cells v1 functionality in
+# an
+# OpenStack deployment.
+#
+# Note that the options in this group are only for cells v1
+# functionality, which
+# is considered experimental and not recommended for new deployments.
+# Cells v1
+# is being replaced with cells v2, which starting in the 15.0.0 Ocata
+# release is
+# required and all Nova deployments will be at least a cells v2 cell
+# of one.
+#
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# Enable cell v1 functionality.
+#
+# Note that cells v1 is considered experimental and not recommended
+# for new
+# Nova deployments. Cells v1 is being replaced by cells v2 which
+# starting in
+# the 15.0.0 Ocata release, all Nova deployments are at least a cells
+# v2 cell
+# of one. Setting this option, or any other options in the [cells]
+# group, is
+# not required for cells v2.
+#
+# When this functionality is enabled, it lets you scale an
+# OpenStack
+# Compute cloud in a more distributed fashion without having to use
+# complicated technologies like database and message queue clustering.
+# Cells are configured as a tree. The top-level cell should have a
+# host
+# that runs a nova-api service, but no nova-compute services. Each
+# child cell should run all of the typical nova-* services in a
+# regular
+# Compute cloud except for nova-api. You can think of cells as a
+# normal
+# Compute deployment in that each cell has its own database server and
+# message queue broker.
+#
+# Related options:
+#
+# * name: A unique cell name must be given when this functionality
+#   is enabled.
+# * cell_type: Cell type should be defined for all cells.
+#  (boolean value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#enable = false
+
+# DEPRECATED:
+# Name of the current cell.
+#
+# This value must be unique for each cell. The name of a cell is used
+# as its id, so leaving this option unset or setting the same name
+# for two or more cells may cause unexpected behaviour.
+#
+# Related options:
+#
+# * enabled: This option is meaningful only when cells service
+#   is enabled
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#name = nova
+
+# DEPRECATED:
+# Cell capabilities.
+#
+# List of arbitrary key=value pairs defining capabilities of the
+# current cell to be sent to the parent cells. These capabilities
+# are intended to be used in cells scheduler filters/weighers.
+#
+# Possible values:
+#
+# * key=value pairs list for example;
+#   ``hypervisor=xenserver;kvm,os=linux;windows``
+#  (list value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#capabilities = hypervisor=xenserver;kvm,os=linux;windows
+
+# DEPRECATED:
+# Call timeout.
+#
+# Cell messaging module waits for response(s) to be put into the
+# eventlet queue. This option defines the seconds waited for
+# response from a call to a cell.
+#
+# Possible values:
+#
+# * An integer, corresponding to the interval time in seconds.
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#call_timeout = 60
+
+# DEPRECATED:
+# Reserve percentage
+#
+# Percentage of cell capacity to hold in reserve, so the minimum
+# amount of free resource is considered to be;
+#
+#     min_free = total * (reserve_percent / 100.0)
+#
+# This option affects both memory and disk utilization.
+#
+# The primary purpose of this reserve is to ensure some space is
+# available for users who want to resize their instance to be larger.
+# Note that currently once the capacity expands into this reserve
+# space this option is ignored.
+#
+# Possible values:
+#
+# * An integer or float, corresponding to the percentage of cell
+# capacity to
+#   be held in reserve.
+#  (floating point value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#reserve_percent = 10.0
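The `min_free` formula quoted above, in concrete numbers (pure illustration):

```python
def min_free(total, reserve_percent=10.0):
    """min_free = total * (reserve_percent / 100.0), as described above."""
    return total * (reserve_percent / 100.0)
```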
+
+# DEPRECATED:
+# Type of cell.
+#
+# When the cells feature is enabled, the hosts in the OpenStack Compute
+# cloud are partitioned into groups. Cells are configured as a tree.
+# The top-level cell's cell_type must be set to ``api``. All other
+# cells are defined as a ``compute cell`` by default.
+#
+# Related option:
+#
+# * quota_driver: Disable quota checking for the child cells.
+#   (nova.quota.NoopQuotaDriver)
+#  (string value)
+# Possible values:
+# api - <No description provided>
+# compute - <No description provided>
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#cell_type = compute
+
+# DEPRECATED:
+# Mute child interval.
+#
+# Number of seconds without a capability and capacity update from a
+# child cell after which it is treated as a mute cell. A mute child
+# cell is then weighted so that it is strongly recommended to be
+# skipped.
+#
+# Possible values:
+#
+# * An integer, corresponding to the interval time in seconds.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#mute_child_interval = 300
+
+# DEPRECATED:
+# Bandwidth update interval.
+#
+# Seconds between bandwidth usage cache updates for cells.
+#
+# Possible values:
+#
+# * An integer, corresponding to the interval time in seconds.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#bandwidth_update_interval = 600
+
+# DEPRECATED:
+# Instance update sync database limit.
+#
+# Number of instances to pull from the database at one time for
+# a sync. If there are more instances to update the results will
+# be paged through.
+#
+# Possible values:
+#
+# * An integer, corresponding to a number of instances.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#instance_update_sync_database_limit = 100
+
+# DEPRECATED:
+# Mute weight multiplier.
+#
+# Multiplier used to weigh mute children. Mute children cells are
+# recommended to be skipped so their weight is multiplied by this
+# negative value.
+#
+# Possible values:
+#
+# * Negative numeric number
+#  (floating point value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#mute_weight_multiplier = -10000.0
+
+# DEPRECATED:
+# Ram weight multiplier.
+#
+# Multiplier used for weighing ram. Negative numbers indicate that
+# Compute should stack VMs on one host instead of spreading out new
+# VMs to more hosts in the cell.
+#
+# Possible values:
+#
+# * Numeric multiplier
+#  (floating point value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#ram_weight_multiplier = 10.0
+
+# DEPRECATED:
+# Offset weight multiplier
+#
+# Multiplier used to weigh offset weigher. Cells with higher
+# weight_offsets in the DB will be preferred. The weight_offset
+# is a property of a cell stored in the database. It can be used
+# by a deployer to have scheduling decisions favor or disfavor
+# cells based on the setting.
+#
+# Possible values:
+#
+# * Numeric multiplier
+#  (floating point value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#offset_weight_multiplier = 1.0
+
+# DEPRECATED:
+# Instance updated at threshold
+#
+# Number of seconds after an instance was updated or deleted during
+# which cells continue to be updated. This option lets the cells
+# manager only attempt to sync instances that have been updated
+# recently, e.g., a threshold of 3600 means only instances modified
+# in the last hour are updated.
+#
+# Possible values:
+#
+# * Threshold in seconds
+#
+# Related options:
+#
+# * This value is used with the ``instance_update_num_instances``
+#   value in a periodic task run.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#instance_updated_at_threshold = 3600
+
+# DEPRECATED:
+# Instance update num instances
+#
+# On every run of the periodic task, the nova cells manager attempts
+# to sync instance_update_num_instances instances. When the manager
+# gets the list of instances, it shuffles them so that multiple
+# nova-cells services do not attempt to sync the same instances in
+# lockstep.
+#
+# Possible values:
+#
+# * Positive integer number
+#
+# Related options:
+#
+# * This value is used with the ``instance_updated_at_threshold``
+#   value in a periodic task run.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#instance_update_num_instances = 1
+
+# DEPRECATED:
+# Maximum hop count
+#
+# When processing a targeted message, if the local cell is not the
+# target, a route is defined between neighbouring cells, and the
+# message is processed across the whole routing path. This option
+# defines the maximum hop counts until reaching the target.
+#
+# Possible values:
+#
+# * Positive integer value
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#max_hop_count = 10
+
+# DEPRECATED:
+# Cells scheduler.
+#
+# The class of the driver used by the cells scheduler. This should be
+# the full Python path to the class to be used. If nothing is
+# specified
+# in this option, the CellsScheduler is used.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#scheduler = nova.cells.scheduler.CellsScheduler
+
+# DEPRECATED:
+# RPC driver queue base.
+#
+# When sending a message to another cell by JSON-ifying the message
+# and making an RPC cast to 'process_message', a base queue is used.
+# This option defines the base queue name to be used when
+# communicating
+# between cells. Various topics by message type will be appended to
+# this.
+#
+# Possible values:
+#
+# * The base queue name to be used when communicating between cells.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#rpc_driver_queue_base = cells.intercell
+
+# DEPRECATED:
+# Scheduler filter classes.
+#
+# Filter classes the cells scheduler should use. An entry of
+# "nova.cells.filters.all_filters" maps to all cells filters
+# included with nova. As of the Mitaka release the following
+# filter classes are available:
+#
+# Different cell filter: A scheduler hint of 'different_cell'
+# with a value of a full cell name may be specified to route
+# a build away from a particular cell.
+#
+# Image properties filter: Image metadata named
+# 'hypervisor_version_requires' with a version specification
+# may be specified to ensure the build goes to a cell which
+# has hypervisors of the required version. If either the version
+# requirement on the image or the hypervisor capability of the
+# cell is not present, this filter returns without filtering out
+# the cells.
+#
+# Target cell filter: A scheduler hint of 'target_cell' with a
+# value of a full cell name may be specified to route a build to
+# a particular cell. No error handling is done as there's no way
+# to know whether the full path is valid.
+#
+# As an admin user, you can also add a filter that directs builds
+# to a particular cell.
+#
+#  (list value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#scheduler_filter_classes = nova.cells.filters.all_filters
+
+# DEPRECATED:
+# Scheduler weight classes.
+#
+# Weigher classes the cells scheduler should use. An entry of
+# "nova.cells.weights.all_weighers" maps to all cell weighers
+# included with nova. As of the Mitaka release the following
+# weight classes are available:
+#
+# mute_child: Downgrades the likelihood of choosing child cells
+# that have not sent capacity or capability updates in a while.
+# Options include
+# mute_weight_multiplier (multiplier for mute children; value
+# should be negative).
+#
+# ram_by_instance_type: Select cells with the most RAM capacity
+# for the instance type being requested. Because higher weights
+# win, Compute returns the number of available units for the
+# instance type requested. The ram_weight_multiplier option defaults
+# to 10.0, which scales the weight by a factor of 10. Use a negative
+# number to stack VMs on one host instead of spreading out new VMs
+# to more hosts in the cell.
+#
+# weight_offset: Allows modifying the database to weight a particular
+# cell. The highest weight will be the first cell to be scheduled for
+# launching an instance. When the weight_offset of a cell is set to 0,
+# it is unlikely to be picked but it could be picked if other cells
+# have a lower weight, like if they're full. And when the
+# weight_offset
+# is set to a very high value (for example, '999999999999999'), it is
+# likely to be picked if another cell do not have a higher weight.
+#  (list value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#scheduler_weight_classes = nova.cells.weights.all_weighers
+
+# DEPRECATED:
+# Scheduler retries.
+#
+# How many retries when no cells are available. Specifies how many
+# times the scheduler tries to launch a new instance when no cells
+# are available.
+#
+# Possible values:
+#
+# * Positive integer value
+#
+# Related options:
+#
+# * This value is used with the ``scheduler_retry_delay`` value
+#   while retrying to find a suitable cell.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#scheduler_retries = 10
+
+# DEPRECATED:
+# Scheduler retry delay.
+#
+# Specifies the delay (in seconds) between scheduling retries when
+# no cell can be found to place the new instance on. If the
+# instance could not be scheduled to a cell after
+# ``scheduler_retries`` attempts in combination with
+# ``scheduler_retry_delay``, then the scheduling of the instance
+# fails.
+#
+# Possible values:
+#
+# * Time in seconds.
+#
+# Related options:
+#
+# * This value is used with the ``scheduler_retries`` value
+#   while retrying to find a suitable cell.
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#scheduler_retry_delay = 2
+
+# DEPRECATED:
+# DB check interval.
+#
+# The cell state manager updates cell status for all cells from the
+# DB only after this interval has passed. Otherwise cached statuses
+# are used. If this value is 0 or negative, all cell statuses are
+# updated from the DB whenever a state is needed.
+#
+# Possible values:
+#
+# * Interval time, in seconds.
+#
+#  (integer value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#db_check_interval = 60
+
+# DEPRECATED:
+# Optional cells configuration.
+#
+# Configuration file from which to read cells configuration. If given,
+# overrides reading cells from the database.
+#
+# Cells store all inter-cell communication data, including user
+# names and passwords, in the database. Because the cells data is
+# not updated very frequently, use this option to specify a JSON
+# file to store cells data. With this configuration, the database
+# is no longer consulted when reloading the cells data. The file
+# must have columns present in the Cell model (excluding common
+# database fields and the id column). You must specify the queue
+# connection information through a transport_url field, instead of
+# username, password, and so on.
+#
+# The transport_url has the following form:
+# rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
+#
+# Possible values:
+#
+# The scheme can be either qpid or rabbit. The following sample
+# shows this optional configuration:
+#
+#     {
+#         "parent": {
+#             "name": "parent",
+#             "api_url": "http://api.example.com:8774",
+#             "transport_url": "rabbit://rabbit.example.com",
+#             "weight_offset": 0.0,
+#             "weight_scale": 1.0,
+#             "is_parent": true
+#         },
+#         "cell1": {
+#             "name": "cell1",
+#             "api_url": "http://api.example.com:8774",
+#             "transport_url": "rabbit://rabbit1.example.com",
+#             "weight_offset": 0.0,
+#             "weight_scale": 1.0,
+#             "is_parent": false
+#         },
+#         "cell2": {
+#             "name": "cell2",
+#             "api_url": "http://api.example.com:8774",
+#             "transport_url": "rabbit://rabbit2.example.com",
+#             "weight_offset": 0.0,
+#             "weight_scale": 1.0,
+#             "is_parent": false
+#         }
+#     }
+#
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Cells v1 is being replaced with Cells v2.
+#cells_config = <None>
+
+
+[cinder]
+
+#
+# From nova.conf
+#
+
+#
+# Info to match when looking for cinder in the service catalog.
+#
+# Possible values:
+#
+# * Format is separated values of the form:
+#   <service_type>:<service_name>:<endpoint_type>
+#
+# Note: Nova does not support the Cinder v2 API since the Nova
+# 17.0.0 Queens release.
+#
+# Related options:
+#
+# * endpoint_template - Setting this option will override catalog_info
+#  (string value)
+#catalog_info = volumev3:cinderv3:publicURL
+catalog_info = volumev3:cinderv3:internalURL
+
+#
+# If this option is set, it will override the service catalog
+# lookup with this template for the cinder endpoint.
+#
+# Possible values:
+#
+# * URL for cinder endpoint API
+#   e.g. http://localhost:8776/v3/%(project_id)s
+#
+# Note: Nova does not support the Cinder v2 API since the Nova
+# 17.0.0 Queens release.
+#
+# Related options:
+#
+# * catalog_info - If endpoint_template is not set, catalog_info
+#   will be used.
+#  (string value)
+#endpoint_template = <None>
+
+#
+# Region name of this node. This is used when picking the URL in
+# the service catalog.
+#
+# Possible values:
+#
+# * Any string representing region name
+#  (string value)
+#os_region_name = <None>
+os_region_name = RegionOne
+
+#
+# Number of times cinderclient should retry on any failed http
+# call. 0 means the connection is attempted only once. Setting it
+# to any positive integer means that on failure the connection is
+# retried that many times, e.g. setting it to 3 means there will be
+# a total of 4 connection attempts.
+#
+# Possible values:
+#
+# * Any integer value. 0 means connection is attempted only once
+#  (integer value)
+# Minimum value: 0
+#http_retries = 3
+
+#
+# Allow attach between instance and volume in different availability
+# zones.
+#
+# If False, volumes attached to an instance must be in the same
+# availability zone in Cinder as the instance availability zone in
+# Nova. This also means care should be taken when booting an
+# instance from a volume where the source is not "volume", because
+# Nova will attempt to create a volume using the same availability
+# zone as the one assigned to the instance. If that AZ is not in
+# Cinder (or allow_availability_zone_fallback=False in
+# cinder.conf), the volume create request will fail and the
+# instance will fail the build request. By default there is no
+# availability zone restriction on volume attach.
+#  (boolean value)
+#cross_az_attach = true
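+#
+# For example, to require an instance and its volumes to share an
+# availability zone (an illustrative sketch; the default places no
+# restriction):
+#
+#     [cinder]
+#     cross_az_attach = false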
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internal
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = nova
+
+
+[compute]
+
+#
+# From nova.conf
+#
+
+#
+# Number of consecutive failed builds that result in disabling a
+# compute service.
+#
+# This option will cause nova-compute to set itself to a disabled
+# state if a certain number of consecutive build failures occur.
+# This will prevent the scheduler from continuing to send builds to
+# a compute node that is consistently failing. Note that all
+# failures qualify and count towards this score, including
+# reschedules that may have been due to racy scheduler behavior.
+# Since the failures must be consecutive, it is unlikely that
+# occasional expected reschedules will actually disable a compute
+# node.
+#
+# Possible values:
+#
+# * Any positive integer representing a build failure count.
+# * Zero to never auto-disable.
+#  (integer value)
+#consecutive_build_service_disable_threshold = 10
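+#
+# For example, to never auto-disable a compute service on build
+# failures (illustrative value):
+#
+#     [compute]
+#     consecutive_build_service_disable_threshold = 0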
+
+#
+# Interval for updating the nova-compute-side cache of the compute
+# node resource provider's aggregates and traits info.
+#
+# This option specifies the number of seconds between attempts to
+# update a provider's aggregates and traits information in the
+# local cache of the compute node.
+#
+# Possible values:
+#
+# * Any positive integer in seconds.
+#  (integer value)
+# Minimum value: 1
+#resource_provider_association_refresh = 300
+
+
+[conductor]
+#
+# Options under this group are used to define the Conductor's
+# communication, which manager should act as a proxy between
+# computes and the database, and finally, how many worker processes
+# will be used.
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# Topic exchange name on which conductor nodes listen.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services
+# - there
+# is little gain from this. Furthermore, it makes it really easy to
+# break Nova
+# by using this option.
+#topic = conductor
+
+#
+# Number of workers for the OpenStack Conductor service. The
+# default will be the number of CPUs available.
+#  (integer value)
+#workers = <None>
+
+
+[console]
+#
+# Options under this group allow tuning the configuration of the
+# console proxy service.
+#
+# Note: the configuration of every compute service contains a
+# ``console_host`` option, which selects the console proxy service
+# to connect to.
+
+#
+# From nova.conf
+#
+
+#
+# Adds a list of allowed origins to the console websocket proxy, to
+# allow connections from other origin hostnames.
+# The websocket proxy matches the host header with the origin
+# header to prevent cross-site requests. This list specifies which
+# values, other than the host, are allowed in the origin header.
+#
+# Possible values:
+#
+# * A list where each element is an allowed origin hostname, else
+#   an empty list
+#  (list value)
+# Deprecated group/name - [DEFAULT]/console_allowed_origins
+#allowed_origins =
+
+
+[consoleauth]
+
+#
+# From nova.conf
+#
+
+#
+# The lifetime of a console auth token (in seconds).
+#
+# A console auth token is used in authorizing console access for a
+# user. Once the auth token's time to live has elapsed, the token
+# is considered expired. Expired tokens are then deleted.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/console_token_ttl
+#token_ttl = 600
+
+
+[crypto]
+
+#
+# From nova.conf
+#
+
+#
+# Filename of root CA (Certificate Authority). This is a container
+# format
+# and includes root certificates.
+#
+# Possible values:
+#
+# * Any file name containing root CA, cacert.pem is default
+#
+# Related options:
+#
+# * ca_path
+#  (string value)
+#ca_file = cacert.pem
+
+#
+# Filename of a private key.
+#
+# Related options:
+#
+# * keys_path
+#  (string value)
+#key_file = private/cakey.pem
+
+#
+# Filename of root Certificate Revocation List (CRL). This is a list
+# of
+# certificates that have been revoked, and therefore, entities
+# presenting
+# those (revoked) certificates should no longer be trusted.
+#
+# Related options:
+#
+# * ca_path
+#  (string value)
+#crl_file = crl.pem
+
+#
+# Directory path where keys are located.
+#
+# Related options:
+#
+# * key_file
+#  (string value)
+#keys_path = $state_path/keys
+
+#
+# Directory path where root CA is located.
+#
+# Related options:
+#
+# * ca_file
+#  (string value)
+#ca_path = $state_path/CA
+
+# Option to enable/disable use of CA for each project. (boolean value)
+#use_project_ca = false
+
+#
+# Subject for certificate for users, %s for
+# project, user, timestamp
+#  (string value)
+#user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s
+
+#
+# Subject for certificate for projects, %s for
+# project, timestamp
+#  (string value)
+#project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s
+
+
+[devices]
+
+#
+# From nova.conf
+#
+
+#
+# A list of the vGPU types enabled in the compute node.
+#
+# Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types.
+# Users can use this option to specify a list of enabled vGPU types
+# that may be assigned to a guest instance. Note, however, that
+# Nova only supports a single type in the Queens release. If more
+# than one vGPU type is specified (as a comma-separated list), only
+# the first one will be used. For example:
+#     [devices]
+#     enabled_vgpu_types = GRID K100,Intel GVT-g,MxGPU.2,nvidia-11
+#  (list value)
+#enabled_vgpu_types =
+
+
+[ephemeral_storage_encryption]
+
+#
+# From nova.conf
+#
+
+#
+# Enables/disables LVM ephemeral storage encryption.
+#  (boolean value)
+#enabled = false
+
+#
+# Cipher-mode string to be used.
+#
+# The cipher and mode to be used to encrypt ephemeral storage. The set
+# of
+# cipher-mode combinations available depends on kernel support.
+# According
+# to the dm-crypt documentation, the cipher is expected to be in the
+# format:
+# "<cipher>-<chainmode>-<ivmode>".
+#
+# Possible values:
+#
+# * Any crypto option listed in ``/proc/crypto``.
+#  (string value)
+#cipher = aes-xts-plain64
+
+#
+# Encryption key length in bits.
+#
+# The bit length of the encryption key to be used to encrypt ephemeral
+# storage.
+# In XTS mode only half of the bits are used for the encryption
+# key.
+#  (integer value)
+# Minimum value: 1
+#key_size = 512
+
+
+[filter_scheduler]
+
+#
+# From nova.conf
+#
+
+#
+# Size of subset of best hosts selected by scheduler.
+#
+# New instances will be scheduled on a host chosen randomly from a
+# subset of the
+# N best hosts, where N is the value set by this option.
+#
+# Setting this to a value greater than 1 will reduce the chance that
+# multiple
+# scheduler processes handling similar requests will select the same
+# host,
+# creating a potential race condition. By selecting a host randomly
+# from the N
+# hosts that best fit the request, the chance of a conflict is
+# reduced. However,
+# the higher you set this value, the less optimal the chosen host may
+# be for a
+# given request.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * An integer, where the integer corresponds to the size of a host
+# subset. Any
+#   integer is valid, although any value less than 1 will be treated
+# as 1
+#  (integer value)
+# Minimum value: 1
+# Deprecated group/name - [DEFAULT]/scheduler_host_subset_size
+#host_subset_size = 1
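+#
+# For example, with several scheduler workers running, picking
+# randomly among the 5 best hosts reduces the chance of two workers
+# choosing the same host (illustrative value):
+#
+#     [filter_scheduler]
+#     host_subset_size = 5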
+
+#
+# The number of instances that can be actively performing IO on a
+# host.
+#
+# Instances performing IO include those in the following states:
+# build, resize, snapshot, migrate, rescue, unshelve.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect. Also note that
+# this setting
+# only affects scheduling if the 'io_ops_filter' filter is enabled.
+#
+# Possible values:
+#
+# * An integer, where the integer corresponds to the max number of
+# instances
+#   that can be actively performing IO on any given host.
+#  (integer value)
+#max_io_ops_per_host = 8
+
+#
+# Maximum number of instances that can be active on a host.
+#
+# If you need to limit the number of instances on any given host, set
+# this option
+# to the maximum number of instances you want to allow. The
+# num_instances_filter
+# will reject any host that has at least as many instances as this
+# option's
+# value.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect. Also note that
+# this setting
+# only affects scheduling if the 'num_instances_filter' filter is
+# enabled.
+#
+# Possible values:
+#
+# * An integer, where the integer corresponds to the max instances
+# that can be
+#   scheduled on a host.
+#  (integer value)
+# Minimum value: 1
+#max_instances_per_host = 50
+
+#
+# Enable querying of individual hosts for instance information.
+#
+# The scheduler may need information about the instances on a host in
+# order to
+# evaluate its filters and weighers. The most common need for this
+# information is
+# for the (anti-)affinity filters, which need to choose a host based
+# on the
+# instances already running on a host.
+#
+# If the configured filters and weighers do not need this information,
+# disabling
+# this option will improve performance. It may also be disabled when
+# the tracking
+# overhead proves too heavy, although this will cause classes
+# requiring host
+# usage data to query the database on each request instead.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# NOTE: In a multi-cell (v2) setup where the cell MQ is separated from
+# the
+# top-level, computes cannot directly communicate with the scheduler.
+# Thus,
+# this option cannot be enabled in that scenario. See also the
+# [workarounds]/disable_group_policy_check_upcall option.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/scheduler_tracks_instance_changes
+#track_instance_changes = true
+
+#
+# Filters that the scheduler can use.
+#
+# An unordered list of the filter classes the nova scheduler may
+# apply.  Only the
+# filters specified in the 'enabled_filters' option will be used, but
+# any filter appearing in that option must also be included in this
+# list.
+#
+# By default, this is set to all filters that are included with nova.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more strings, where each string corresponds to
+# the name of
+#   a filter that may be used for selecting a host
+#
+# Related options:
+#
+# * enabled_filters
+#  (multi valued)
+# Deprecated group/name - [DEFAULT]/scheduler_available_filters
+#available_filters = nova.scheduler.filters.all_filters
+
+#
+# Filters that the scheduler will use.
+#
+# An ordered list of filter class names that will be used for
+# filtering
+# hosts. These filters will be applied in the order they are
+# listed, so place your most restrictive filters first to make the
+# filtering process more efficient.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more strings, where each string corresponds to
+# the name of
+#   a filter to be used for selecting a host
+#
+# Related options:
+#
+# * All of the filters in this option *must* be present in the
+#   'scheduler_available_filters' option, or a
+# SchedulerHostFilterNotFound
+#   exception will be raised.
+#  (list value)
+# Deprecated group/name - [DEFAULT]/scheduler_default_filters
+#enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
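+#
+# For example, to append the NUMATopologyFilter to the default set
+# (an illustrative sketch; any filter enabled here must also appear
+# in 'available_filters'):
+#
+#     [filter_scheduler]
+#     enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter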
+
+# DEPRECATED:
+# Filters used for filtering baremetal hosts.
+#
+# Filters are applied in order, so place your most restrictive filters
+# first to
+# make the filtering process more efficient.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more strings, where each string corresponds to
+# the name of
+#   a filter to be used for selecting a baremetal host
+#
+# Related options:
+#
+# * If the 'scheduler_use_baremetal_filters' option is False, this
+# option has
+#   no effect.
+#  (list value)
+# Deprecated group/name - [DEFAULT]/baremetal_scheduler_default_filters
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason:
+# These filters were used to overcome some of the baremetal scheduling
+# limitations in Nova prior to the use of the Placement API. Now
+# scheduling will
+# use the custom resource class defined for each baremetal node to
+# make its
+# selection.
+#baremetal_enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
+
+# DEPRECATED:
+# Enable baremetal filters.
+#
+# Set this to True to tell the nova scheduler that it should use the
+# filters
+# specified in the 'baremetal_enabled_filters' option. If you are not
+# scheduling baremetal nodes, leave this at the default setting of
+# False.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Related options:
+#
+# * If this option is set to True, then the filters specified in the
+#   'baremetal_enabled_filters' are used instead of the filters
+#   specified in 'enabled_filters'.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/scheduler_use_baremetal_filters
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason:
+# These filters were used to overcome some of the baremetal scheduling
+# limitations in Nova prior to the use of the Placement API. Now
+# scheduling will
+# use the custom resource class defined for each baremetal node to
+# make its
+# selection.
+#use_baremetal_filters = false
+
+#
+# Weighers that the scheduler will use.
+#
+# Only hosts which pass the filters are weighed. The weight for any
+# host starts
+# at 0, and the weighers order these hosts by adding to or subtracting
+# from the
+# weight assigned by the previous weigher. Weights may become
+# negative. An
+# instance will be scheduled to one of the N most-weighted hosts,
+# where N is
+# 'scheduler_host_subset_size'.
+#
+# By default, this is set to all weighers that are included with Nova.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more strings, where each string corresponds to
+# the name of
+#   a weigher that will be used for selecting a host
+#  (list value)
+# Deprecated group/name - [DEFAULT]/scheduler_weight_classes
+#weight_classes = nova.scheduler.weights.all_weighers
+
+#
+# Ram weight multiplier ratio.
+#
+# This option determines how hosts with more or less available RAM are
+# weighed. A
+# positive value will result in the scheduler preferring hosts with
+# more
+# available RAM, and a negative number will result in the scheduler
+# preferring
+# hosts with less available RAM. Another way to look at it is that
+# positive
+# values for this option will tend to spread instances across many
+# hosts, while
+# negative values will tend to fill up (stack) hosts as much as
+# possible before
+# scheduling to a less-used host. The absolute value, whether positive
+# or
+# negative, controls how strong the RAM weigher is relative to other
+# weighers.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect. Also note that
+# this setting
+# only affects scheduling if the 'ram' weigher is enabled.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the
+#   multiplier ratio for this weigher.
+#  (floating point value)
+#ram_weight_multiplier = 1.0
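+#
+# For example, to pack (stack) instances onto the fewest hosts
+# rather than spread them, a negative multiplier can be used
+# (illustrative value, not a recommendation):
+#
+#     [filter_scheduler]
+#     ram_weight_multiplier = -1.0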
+
+#
+# Disk weight multiplier ratio.
+#
+# Multiplier used for weighing free disk space. Negative numbers mean
+# to
+# stack vs spread.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect. Also note that
+# this setting
+# only affects scheduling if the 'disk' weigher is enabled.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the
+#   multiplier ratio for this weigher.
+#  (floating point value)
+#disk_weight_multiplier = 1.0
+
+#
+# IO operations weight multiplier ratio.
+#
+# This option determines how hosts with differing workloads are
+# weighed. Negative
+# values, such as the default, will result in the scheduler preferring
+# hosts with
+# lighter workloads whereas positive values will prefer hosts with
+# heavier
+# workloads. Another way to look at it is that positive values for
+# this option
+# will tend to schedule instances onto hosts that are already busy,
+# while
+# negative values will tend to distribute the workload across more
+# hosts. The
+# absolute value, whether positive or negative, controls how strong
+# the io_ops
+# weigher is relative to other weighers.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect. Also note that
+# this setting
+# only affects scheduling if the 'io_ops' weigher is enabled.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the
+#   multiplier ratio for this weigher.
+#  (floating point value)
+#io_ops_weight_multiplier = -1.0
+
+#
+# PCI device affinity weight multiplier.
+#
+# The PCI device affinity weigher computes a weighting based on the
+# number of
+# PCI devices on the host and the number of PCI devices requested by
+# the
+# instance. The ``NUMATopologyFilter`` filter must be enabled for this
+# to have
+# any significance. For more information, refer to the filter
+# documentation:
+#
+#     https://docs.openstack.org/nova/latest/user/filter-scheduler.html
+#
+# Possible values:
+#
+# * A positive integer or float value, where the value corresponds to
+# the
+#   multiplier ratio for this weigher.
+#  (floating point value)
+# Minimum value: 0
+#pci_weight_multiplier = 1.0
+
+#
+# Multiplier used for weighing hosts for group soft-affinity.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the
+#   weight multiplier for hosts with group soft affinity. Only
+#   positive values are meaningful, as negative values would make
+#   this behave as a soft anti-affinity weigher.
+#  (floating point value)
+#soft_affinity_weight_multiplier = 1.0
+
+#
+# Multiplier used for weighing hosts for group soft-anti-affinity.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the
+#   weight multiplier for hosts with group soft anti-affinity. Only
+#   positive values are meaningful, as negative values would make
+#   this behave as a soft affinity weigher.
+#  (floating point value)
+#soft_anti_affinity_weight_multiplier = 1.0
+
+#
+# Enable spreading the instances between hosts with the same best
+# weight.
+#
+# Enabling it is beneficial for cases when host_subset_size is 1
+# (default), but there is a large number of hosts with the same
+# maximal weight. This scenario is common in Ironic deployments,
+# where there are typically many baremetal nodes with identical
+# weights returned to the scheduler. In such cases, enabling this
+# option will reduce contention and the chances of rescheduling
+# events. At the same time it will make the instance packing less
+# dense (even in the unweighed case).
+#  (boolean value)
+#shuffle_best_same_weighed_hosts = false
+
+#
+# The default architecture to be used when using the image properties
+# filter.
+#
+# When using the ImagePropertiesFilter, it is possible that you
+# want to define a default architecture to make the user experience
+# easier and avoid having something like x86_64 images landing on
+# aarch64 compute nodes because the user did not specify the
+# 'hw_architecture' property in Glance.
+#
+# Possible values:
+#
+# * CPU Architectures such as x86_64, aarch64, s390x.
+#  (string value)
+# Possible values:
+# alpha - <No description provided>
+# armv6 - <No description provided>
+# armv7l - <No description provided>
+# armv7b - <No description provided>
+# aarch64 - <No description provided>
+# cris - <No description provided>
+# i686 - <No description provided>
+# ia64 - <No description provided>
+# lm32 - <No description provided>
+# m68k - <No description provided>
+# microblaze - <No description provided>
+# microblazeel - <No description provided>
+# mips - <No description provided>
+# mipsel - <No description provided>
+# mips64 - <No description provided>
+# mips64el - <No description provided>
+# openrisc - <No description provided>
+# parisc - <No description provided>
+# parisc64 - <No description provided>
+# ppc - <No description provided>
+# ppcle - <No description provided>
+# ppc64 - <No description provided>
+# ppc64le - <No description provided>
+# ppcemb - <No description provided>
+# s390 - <No description provided>
+# s390x - <No description provided>
+# sh4 - <No description provided>
+# sh4eb - <No description provided>
+# sparc - <No description provided>
+# sparc64 - <No description provided>
+# unicore32 - <No description provided>
+# x86_64 - <No description provided>
+# xtensa - <No description provided>
+# xtensaeb - <No description provided>
+#image_properties_default_architecture = <None>
+
+#
+# List of UUIDs for images that can only be run on certain hosts.
+#
+# If there is a need to restrict some images to only run on certain
+# designated hosts, list those image UUIDs here.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use a different scheduler, this option has no effect. Also
+# note that this setting only affects scheduling if the
+# 'IsolatedHostsFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A list of UUID strings, where each string corresponds to the UUID
+#   of an image
+#
+# Related options:
+#
+# * scheduler/isolated_hosts
+# * scheduler/restrict_isolated_hosts_to_isolated_images
+#  (list value)
+#isolated_images =
+
+#
+# List of hosts that can only run certain images.
+#
+# If there is a need to restrict some images to only run on certain
+# designated hosts, list those host names here.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use a different scheduler, this option has no effect. Also
+# note that this setting only affects scheduling if the
+# 'IsolatedHostsFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A list of strings, where each string corresponds to the name of
+#   a host
+#
+# Related options:
+#
+# * scheduler/isolated_images
+# * scheduler/restrict_isolated_hosts_to_isolated_images
+#  (list value)
+#isolated_hosts =
+
+#
+# Prevent non-isolated images from being built on isolated hosts.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use a different scheduler, this option has no effect. Also
+# note that this setting only affects scheduling if the
+# 'IsolatedHostsFilter' filter is enabled. Even then, this option
+# doesn't affect the behavior of requests for isolated images, which
+# will *always* be restricted to isolated hosts.
+#
+# Related options:
+#
+# * scheduler/isolated_images
+# * scheduler/isolated_hosts
+#  (boolean value)
+#restrict_isolated_hosts_to_isolated_images = true
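+# As a sketch of how the three options above combine (the image UUID
+# and host names below are hypothetical), pinning one image to two
+# dedicated hosts that accept nothing else would look like:
+#
+#     isolated_images = 1bea47ed-f6a9-463b-b423-14b9cca9ad27
+#     isolated_hosts = cmp001.example.org,cmp002.example.org
+#     restrict_isolated_hosts_to_isolated_images = true
+#
+# with 'IsolatedHostsFilter' added to the scheduler's enabled
+# filters.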
+
+#
+# Image property namespace for use in the host aggregate.
+#
+# Images and hosts can be configured so that certain images can only
+# be scheduled to hosts in a particular aggregate. This is done with
+# metadata values set on the host aggregate that are identified by
+# beginning with the value of this option. If the host is part of an
+# aggregate with such a metadata key, the image in the request spec
+# must have the value of that metadata in its properties in order for
+# the scheduler to consider the host as acceptable.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use a different scheduler, this option has no effect. Also
+# note that this setting only affects scheduling if the
+# 'aggregate_image_properties_isolation' filter is enabled.
+#
+# Possible values:
+#
+# * A string, where the string corresponds to an image property
+#   namespace
+#
+# Related options:
+#
+# * aggregate_image_properties_isolation_separator
+#  (string value)
+#aggregate_image_properties_isolation_namespace = <None>
+
+#
+# Separator character(s) for image property namespace and name.
+#
+# When using the aggregate_image_properties_isolation filter, the
+# relevant metadata keys are prefixed with the namespace defined in
+# the aggregate_image_properties_isolation_namespace configuration
+# option plus a separator. This option defines the separator to be
+# used.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use a different scheduler, this option has no effect. Also
+# note that this setting only affects scheduling if the
+# 'aggregate_image_properties_isolation' filter is enabled.
+#
+# Possible values:
+#
+# * A string, where the string corresponds to an image property
+#   namespace separator character
+#
+# Related options:
+#
+# * aggregate_image_properties_isolation_namespace
+#  (string value)
+#aggregate_image_properties_isolation_separator = .
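+# As a hypothetical illustration of the two options together: with
+#
+#     aggregate_image_properties_isolation_namespace = nova
+#     aggregate_image_properties_isolation_separator = .
+#
+# only aggregate metadata keys beginning with 'nova.' (e.g. a key
+# named 'nova.hw_architecture') are considered by the
+# 'aggregate_image_properties_isolation' filter when matching image
+# properties against host aggregates.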
+
+
+[glance]
+# Configuration options for the Image service
+
+#
+# From nova.conf
+#
+
+#
+# List of glance api servers endpoints available to nova.
+#
+# https is used for ssl-based glance api servers.
+#
+# NOTE: The preferred mechanism for endpoint discovery is via
+# keystoneauth1 loading options. Only use api_servers if you need
+# multiple endpoints and are unable to use a load balancer for some
+# reason.
+#
+# Possible values:
+#
+# * A list of any fully qualified url of the form
+#   "scheme://hostname:port[/path]"
+#   (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image").
+#  (list value)
+#api_servers = <None>
+api_servers = http://10.167.4.35:9292
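+# Since this is a list value, multiple endpoints (the second host
+# here is hypothetical) could be given comma-separated, e.g.:
+#
+#     api_servers = http://10.167.4.35:9292,http://10.167.4.36:9292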
+
+
+#
+# Enable glance operation retries.
+#
+# Specifies the number of retries when uploading / downloading
+# an image to / from glance. 0 means no retries.
+#  (integer value)
+# Minimum value: 0
+#num_retries = 0
+
+# DEPRECATED:
+# List of url schemes that can be directly accessed.
+#
+# This option specifies a list of url schemes that can be downloaded
+# directly via the direct_url. This direct_url can be fetched from
+# image metadata and used by nova to get the image more efficiently.
+# nova-compute could benefit from this by invoking a copy when it has
+# access to the same file system as glance.
+#
+# Possible values:
+#
+# * [file], Empty list (default)
+#  (list value)
+# This option is deprecated for removal since 17.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This was originally added for the 'nova.image.download.file'
+# FileTransfer extension which was removed in the 16.0.0 Pike
+# release. The 'nova.image.download.modules' extension point is not
+# maintained and there is no indication of its use in production
+# clouds.
+#allowed_direct_url_schemes =
+
+#
+# Enable image signature verification.
+#
+# nova uses the image signature metadata from glance and verifies the
+# signature of a signed image while downloading that image. If the
+# image signature cannot be verified or if the image signature
+# metadata is either incomplete or unavailable, then nova will not
+# boot the image and instead will place the instance into an error
+# state. This provides end users with stronger assurances of the
+# integrity of the image data they are using to create servers.
+#
+# Related options:
+#
+# * The options in the `key_manager` group, as the key_manager is used
+#   for the signature validation.
+# * Both enable_certificate_validation and
+#   default_trusted_certificate_ids below depend on this option
+#   being enabled.
+#  (boolean value)
+verify_glance_signatures=False
+
+# DEPRECATED:
+# Enable certificate validation for image signature verification.
+#
+# During image signature verification nova will first verify the
+# validity of the image's signing certificate using the set of
+# trusted certificates associated with the instance. If certificate
+# validation fails, signature verification will not be performed and
+# the image will be placed into an error state. This provides end
+# users with stronger assurances that the image data is unmodified
+# and trustworthy. If left disabled, image signature verification can
+# still occur but the end user will not have any assurance that the
+# signing certificate used to generate the image signature is still
+# trustworthy.
+#
+# Related options:
+#
+# * This option only takes effect if verify_glance_signatures is
+#   enabled.
+# * The value of default_trusted_certificate_ids may be used when
+#   this option is enabled.
+#  (boolean value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This option is intended to ease the transition for deployments
+# leveraging image signature verification. The intended state
+# long-term is for signature verification and certificate validation
+# to always happen together.
+#enable_certificate_validation = false
+
+#
+# List of certificate IDs for certificates that should be trusted.
+#
+# May be used as a default list of trusted certificate IDs for
+# certificate validation. The value of this option will be ignored if
+# the user provides a list of trusted certificate IDs with an
+# instance API request. The value of this option will be persisted
+# with the instance data if signature verification and certificate
+# validation are enabled and if the user did not provide an
+# alternative list. If left empty when certificate validation is
+# enabled, the user must provide a list of trusted certificate IDs,
+# otherwise certificate validation will fail.
+#
+# Related options:
+#
+# * The value of this option may be used if both
+#   verify_glance_signatures and enable_certificate_validation are
+#   enabled.
+#  (list value)
+#default_trusted_certificate_ids =
+
+# Enable or disable debug logging with glanceclient. (boolean value)
+#debug = false
+
+# PEM encoded Certificate Authority to use when verifying HTTPs
+# connections. (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# The default service_type for endpoint URL discovery. (string value)
+#service_type = image
+
+# The default service_name for endpoint URL discovery. (string value)
+#service_name = <None>
+
+# List of interfaces, in order of preference, for endpoint URL. (list
+# value)
+#valid_interfaces = internal,public
+
+# The default region_name for endpoint URL discovery. (string value)
+#region_name = <None>
+
+# Always use this endpoint URL for requests for this client. NOTE: The
+# unversioned endpoint should be specified here; to request a
+# particular API version, use the `version`, `min-version`, and/or
+# `max-version` options. (string value)
+#endpoint_override = <None>
+
+
+[guestfs]
+#
+# libguestfs is a set of tools for accessing and modifying virtual
+# machine (VM) disk images. You can use this for viewing and editing
+# files inside guests, scripting changes to VMs, monitoring disk
+# used/free statistics, creating guests, P2V, V2V, performing backups,
+# cloning VMs, building VMs, formatting disks and resizing disks.
+
+#
+# From nova.conf
+#
+
+#
+# Enables/disables guestfs logging.
+#
+# This configures guestfs to emit debug messages and push them to the
+# OpenStack logging system. When set to True, it traces libguestfs
+# API calls and enables verbose debug messages. In order to use this
+# feature, the "libguestfs" package must be installed.
+#
+# Related options:
+# Since libguestfs accesses and modifies VMs managed by libvirt, the
+# options below should be set to give access to those VMs.
+#     * libvirt.inject_key
+#     * libvirt.inject_partition
+#     * libvirt.inject_password
+#  (boolean value)
+#debug = false
+
+
+[hyperv]
+#
+# The hyperv feature allows you to configure the Hyper-V hypervisor
+# driver to be used within an OpenStack deployment.
+
+#
+# From nova.conf
+#
+
+#
+# Dynamic memory ratio
+#
+# Enables dynamic memory allocation (ballooning) when set to a value
+# greater than 1. The value expresses the ratio between the total RAM
+# assigned to an instance and its startup RAM amount. For example a
+# ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of
+# RAM allocated at startup.
+#
+# Possible values:
+#
+# * 1.0: Disables dynamic memory allocation (Default).
+# * Float values greater than 1.0: Enables allocation of total implied
+#   RAM divided by this value for startup.
+#  (floating point value)
+#dynamic_memory_ratio = 1.0
+
+#
+# Enable instance metrics collection
+#
+# Enables metrics collections for an instance by using Hyper-V's
+# metric APIs. Collected data can be retrieved by other apps and
+# services, e.g.: Ceilometer.
+#  (boolean value)
+#enable_instance_metrics_collection = false
+
+#
+# Instances path share
+#
+# The name of a Windows share mapped to the "instances_path" dir
+# and used by the resize feature to copy files to the target host.
+# If left blank, an administrative share (hidden network share) will
+# be used, looking for the same "instances_path" used locally.
+#
+# Possible values:
+#
+# * "": An administrative share will be used (Default).
+# * Name of a Windows share.
+#
+# Related options:
+#
+# * "instances_path": The directory which will be used if this option
+#   here is left blank.
+#  (string value)
+#instances_path_share =
+
+#
+# Limit CPU features
+#
+# This flag is needed to support live migration to hosts with
+# different CPU features and checked during instance creation
+# in order to limit the CPU features used by the instance.
+#  (boolean value)
+#limit_cpu_features = false
+
+#
+# Mounted disk query retry count
+#
+# The number of times to retry checking for a mounted disk.
+# The query runs until the device can be found or the retry
+# count is reached.
+#
+# Possible values:
+#
+# * Positive integer values. Values greater than 1 are recommended
+#   (Default: 10).
+#
+# Related options:
+#
+# * Time interval between disk mount retries is declared with
+#   "mounted_disk_query_retry_interval" option.
+#  (integer value)
+# Minimum value: 0
+#mounted_disk_query_retry_count = 10
+
+#
+# Mounted disk query retry interval
+#
+# Interval between checks for a mounted disk, in seconds.
+#
+# Possible values:
+#
+# * Time in seconds (Default: 5).
+#
+# Related options:
+#
+# * This option is meaningful when the mounted_disk_query_retry_count
+#   is greater than 1.
+# * The retry loop runs with mounted_disk_query_retry_count and
+#   mounted_disk_query_retry_interval configuration options.
+#  (integer value)
+# Minimum value: 0
+#mounted_disk_query_retry_interval = 5
+
+#
+# Power state check timeframe
+#
+# The timeframe to be checked for instance power state changes.
+# This option is used to fetch the state of the instance from Hyper-V
+# through the WMI interface, within the specified timeframe.
+#
+# Possible values:
+#
+# * Timeframe in seconds (Default: 60).
+#  (integer value)
+# Minimum value: 0
+#power_state_check_timeframe = 60
+
+#
+# Power state event polling interval
+#
+# Instance power state change event polling frequency. Sets the
+# listener interval for power state events to the given value.
+# This option enhances the internal lifecycle notifications of
+# instances that reboot themselves. It is unlikely that an operator
+# has to change this value.
+#
+# Possible values:
+#
+# * Time in seconds (Default: 2).
+#  (integer value)
+# Minimum value: 0
+#power_state_event_polling_interval = 2
+
+#
+# qemu-img command
+#
+# qemu-img is required for some of the image related operations
+# like converting between different image types. You can get it
+# from here: (http://qemu.weilnetz.de/) or you can install the
+# Cloudbase OpenStack Hyper-V Compute Driver
+# (https://cloudbase.it/openstack-hyperv-driver/) which automatically
+# sets the proper path for this config option. You can either give the
+# full path of qemu-img.exe or set its path in the PATH environment
+# variable and leave this option to the default value.
+#
+# Possible values:
+#
+# * Name of the qemu-img executable, in case it is in the same
+#   directory as the nova-compute service or its path is in the
+#   PATH environment variable (Default).
+# * Path of qemu-img command (DRIVELETTER:\PATH\TO\QEMU-IMG\COMMAND).
+#
+# Related options:
+#
+# * If the config_drive_cdrom option is False, qemu-img will be used
+#   to convert the ISO to a VHD, otherwise the configuration drive
+#   will remain an ISO. To use configuration drive with Hyper-V, you
+#   must set the mkisofs_cmd value to the full path to an mkisofs.exe
+#   installation.
+#  (string value)
+#qemu_img_cmd = qemu-img.exe
+
+#
+# External virtual switch name
+#
+# The Hyper-V Virtual Switch is a software-based layer-2 Ethernet
+# network switch that is available with the installation of the
+# Hyper-V server role. The switch includes programmatically managed
+# and extensible capabilities to connect virtual machines to both
+# virtual networks and the physical network. In addition, Hyper-V
+# Virtual Switch provides policy enforcement for security, isolation,
+# and service levels. The vSwitch represented by this config option
+# must be an external one (not internal or private).
+#
+# Possible values:
+#
+# * If not provided, the first of a list of available vswitches
+#   is used. This list is queried using WQL.
+# * Virtual switch name.
+#  (string value)
+#vswitch_name = <None>
+
+#
+# Wait soft reboot seconds
+#
+# Number of seconds to wait for instance to shut down after soft
+# reboot request is made. We fall back to hard reboot if instance
+# does not shutdown within this window.
+#
+# Possible values:
+#
+# * Time in seconds (Default: 60).
+#  (integer value)
+# Minimum value: 0
+#wait_soft_reboot_seconds = 60
+
+#
+# Configuration drive cdrom
+#
+# OpenStack can be configured to write instance metadata to
+# a configuration drive, which is then attached to the
+# instance before it boots. The configuration drive can be
+# attached as a disk drive (default) or as a CD drive.
+#
+# Possible values:
+#
+# * True: Attach the configuration drive image as a CD drive.
+# * False: Attach the configuration drive image as a disk drive
+#   (Default).
+#
+# Related options:
+#
+# * This option is meaningful with the force_config_drive option set
+#   to 'True' or when the REST API call to create an instance has the
+#   '--config-drive=True' flag.
+# * The config_drive_format option must be set to 'iso9660' in order
+#   to use a CD drive as the configuration drive image.
+# * To use configuration drive with Hyper-V, you must set the
+#   mkisofs_cmd value to the full path to an mkisofs.exe installation.
+#   Additionally, you must set the qemu_img_cmd value to the full path
+#   to a qemu-img command installation.
+# * You can configure the Compute service to always create a
+#   configuration drive by setting the force_config_drive option to
+#   'True'.
+#  (boolean value)
+#config_drive_cdrom = false
+config_drive_cdrom = false
+
+#
+# Configuration drive inject password
+#
+# Enables setting the admin password in the configuration drive image.
+#
+# Related options:
+#
+# * This option is meaningful when used with other options that
+#   enable configuration drive usage with Hyper-V, such as
+#   force_config_drive.
+# * Currently, the only accepted config_drive_format is 'iso9660'.
+#  (boolean value)
+#config_drive_inject_password = false
+config_drive_inject_password = false
+
+#
+# Volume attach retry count
+#
+# The number of times to retry attaching a volume. Volume attachment
+# is retried until success or the given retry count is reached.
+#
+# Possible values:
+#
+# * Positive integer values (Default: 10).
+#
+# Related options:
+#
+# * Time interval between attachment attempts is declared with
+#   volume_attach_retry_interval option.
+#  (integer value)
+# Minimum value: 0
+#volume_attach_retry_count = 10
+
+#
+# Volume attach retry interval
+#
+# Interval between volume attachment attempts, in seconds.
+#
+# Possible values:
+#
+# * Time in seconds (Default: 5).
+#
+# Related options:
+#
+# * This option is meaningful when volume_attach_retry_count
+#   is greater than 1.
+# * The retry loop runs with volume_attach_retry_count and
+#   volume_attach_retry_interval configuration options.
+#  (integer value)
+# Minimum value: 0
+#volume_attach_retry_interval = 5
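+# With the defaults above, a failing attachment is retried up to 10
+# times at 5-second intervals, i.e. roughly 10 * 5 = 50 seconds of
+# waiting before the attach attempt is given up on.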
+
+#
+# Enable RemoteFX feature
+#
+# This requires at least one DirectX 11 capable graphics adapter for
+# Windows / Hyper-V Server 2012 R2 or newer and RDS-Virtualization
+# feature has to be enabled.
+#
+# Instances with RemoteFX can be requested with the following flavor
+# extra specs:
+#
+# **os:resolution**. Guest VM screen resolution size. Acceptable
+# values::
+#
+#     1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160
+#
+# ``3840x2160`` is only available on Windows / Hyper-V Server 2016.
+#
+# **os:monitors**. Guest VM number of monitors. Acceptable values::
+#
+#     [1, 4] - Windows / Hyper-V Server 2012 R2
+#     [1, 8] - Windows / Hyper-V Server 2016
+#
+# **os:vram**. Guest VM VRAM amount. Only available on
+# Windows / Hyper-V Server 2016. Acceptable values::
+#
+#     64, 128, 256, 512, 1024
+#  (boolean value)
+#enable_remotefx = false
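+# As an illustrative sketch (the flavor name is hypothetical), a
+# RemoteFX-capable flavor could be tagged with these extra specs via
+# the openstack CLI:
+#
+#     openstack flavor set remotefx.medium \
+#         --property os:resolution=1920x1200 \
+#         --property os:monitors=2 \
+#         --property os:vram=1024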
+
+#
+# Use multipath connections when attaching iSCSI or FC disks.
+#
+# This requires the Multipath IO Windows feature to be enabled. MPIO
+# must be configured to claim such devices.
+#  (boolean value)
+#use_multipath_io = false
+
+#
+# List of iSCSI initiators that will be used for establishing iSCSI
+# sessions.
+#
+# If none are specified, the Microsoft iSCSI initiator service will
+# choose the initiator.
+#  (list value)
+#iscsi_initiator_list =
+
+
+[key_manager]
+
+#
+# From nova.conf
+#
+
+#
+# Fixed key returned by key manager, specified in hex.
+#
+# Possible values:
+#
+# * Empty string or a key in hex value
+#  (string value)
+#fixed_key = <None>
+api_class=castellan.key_manager.barbican_key_manager.BarbicanKeyManager
+
+# Specify the key manager implementation. Options are "barbican" and
+# "vault". Default is "barbican". Will support the values earlier
+# set using [key_manager]/api_class for some time. (string value)
+# Deprecated group/name - [key_manager]/api_class
+#backend = barbican
+
+# The type of authentication credential to create. Possible values are
+# 'token', 'password', 'keystone_token', and 'keystone_password'.
+# Required if no context is passed to the credential factory. (string
+# value)
+#auth_type = <None>
+
+# Token for authentication. Required for 'token' and 'keystone_token'
+# auth_type if no context is passed to the credential factory. (string
+# value)
+#token = <None>
+
+# Username for authentication. Required for 'password' auth_type.
+# Optional for the 'keystone_password' auth_type. (string value)
+#username = <None>
+
+# Password for authentication. Required for 'password' and
+# 'keystone_password' auth_type. (string value)
+#password = <None>
+
+# Use this endpoint to connect to Keystone. (string value)
+#auth_url = <None>
+
+# User ID for authentication. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#user_id = <None>
+
+# User's domain ID for authentication. Optional for 'keystone_token'
+# and 'keystone_password' auth_type. (string value)
+#user_domain_id = <None>
+
+# User's domain name for authentication. Optional for 'keystone_token'
+# and 'keystone_password' auth_type. (string value)
+#user_domain_name = <None>
+
+# Trust ID for trust scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#trust_id = <None>
+
+# Domain ID for domain scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#domain_id = <None>
+
+# Domain name for domain scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#domain_name = <None>
+
+# Project ID for project scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#project_id = <None>
+
+# Project name for project scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#project_name = <None>
+
+# Project's domain ID for project. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#project_domain_id = <None>
+
+# Project's domain name for project. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#project_domain_name = <None>
+
+# Allow fetching a new token if the current one is going to expire.
+# Optional for 'keystone_token' and 'keystone_password' auth_type.
+# (boolean value)
+#reauthenticate = true
+
+
+[keystone]
+# Configuration options for the identity service
+
+#
+# From nova.conf
+#
+
+# PEM encoded Certificate Authority to use when verifying HTTPs
+# connections. (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# The default service_type for endpoint URL discovery. (string value)
+#service_type = identity
+
+# The default service_name for endpoint URL discovery. (string value)
+#service_name = <None>
+
+# List of interfaces, in order of preference, for endpoint URL. (list
+# value)
+#valid_interfaces = internal,public
+
+# The default region_name for endpoint URL discovery. (string value)
+#region_name = <None>
+
+# Always use this endpoint URL for requests for this client. NOTE: The
+# unversioned endpoint should be specified here; to request a
+# particular API version, use the `version`, `min-version`, and/or
+# `max-version` options. (string value)
+#endpoint_override = <None>
+
+
+[libvirt]
+#
+# Libvirt options allow a cloud administrator to configure the
+# libvirt hypervisor driver to be used within an OpenStack
+# deployment.
+#
+# Almost all of the libvirt config options are influenced by the
+# ``virt_type`` config option, which describes the virtualization
+# type (or so called domain type) libvirt should use for specific
+# features such as live migration and snapshots.
+
+#
+# From nova.conf
+#
+virt_type = kvm
+
+inject_partition=-2
+inject_password=False
+
+disk_cachemodes="file=directsync,block=none"
+block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC
+live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
+inject_key=True
+vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
+
+#
+# The ID of the image to boot from to rescue data from a corrupted
+# instance.
+#
+# If the rescue REST API operation doesn't provide an ID of an image
+# to use, the image which is referenced by this ID is used. If this
+# option is not set, the image from the instance is used.
+#
+# Possible values:
+#
+# * An ID of an image or nothing. If it points to an *Amazon Machine
+#   Image* (AMI), consider setting the config options
+#   ``rescue_kernel_id`` and ``rescue_ramdisk_id`` too. If nothing is
+#   set, the image of the instance is used.
+#
+# Related options:
+#
+# * ``rescue_kernel_id``: If the chosen rescue image allows the
+#   separate definition of its kernel disk, the value of this option
+#   is used, if specified. This is the case when *Amazon*'s
+#   AMI/AKI/ARI image format is used for the rescue image.
+# * ``rescue_ramdisk_id``: If the chosen rescue image allows the
+#   separate definition of its RAM disk, the value of this option is
+#   used, if specified. This is the case when *Amazon*'s AMI/AKI/ARI
+#   image format is used for the rescue image.
+#  (string value)
+#rescue_image_id = <None>
+
+#
+# The ID of the kernel (AKI) image to use with the rescue image.
+#
+# If the chosen rescue image allows the separate definition of its
+# kernel disk, the value of this option is used, if specified. This
+# is the case when *Amazon*'s AMI/AKI/ARI image format is used for
+# the rescue image.
+#
+# Possible values:
+#
+# * An ID of a kernel image or nothing. If nothing is specified, the
+#   kernel disk from the instance is used if it was launched with
+#   one.
+#
+# Related options:
+#
+# * ``rescue_image_id``: If that option points to an image in
+#   *Amazon*'s AMI/AKI/ARI image format, it's useful to use
+#   ``rescue_kernel_id`` too.
+#  (string value)
+#rescue_kernel_id = <None>
+
+#
+# The ID of the RAM disk (ARI) image to use with the rescue image.
+#
+# If the chosen rescue image allows the separate definition of its
+# RAM disk, the value of this option is used, if specified. This is
+# the case when *Amazon*'s AMI/AKI/ARI image format is used for the
+# rescue image.
+#
+# Possible values:
+#
+# * An ID of a RAM disk image or nothing. If nothing is specified,
+#   the RAM disk from the instance is used if it was launched with
+#   one.
+#
+# Related options:
+#
+# * ``rescue_image_id``: If that option points to an image in
+#   *Amazon*'s AMI/AKI/ARI image format, it's useful to use
+#   ``rescue_ramdisk_id`` too.
+#  (string value)
+#rescue_ramdisk_id = <None>
+
+#
+# Describes the virtualization type (or so called domain type)
+# libvirt should use.
+#
+# The choice of this type must match the underlying virtualization
+# strategy you have chosen for this host.
+#
+# Possible values:
+#
+# * See the predefined set of case-sensitive values.
+#
+# Related options:
+#
+# * ``connection_uri``: depends on this
+# * ``disk_prefix``: depends on this
+# * ``cpu_mode``: depends on this
+# * ``cpu_model``: depends on this
+#  (string value)
+# Possible values:
+# kvm - <No description provided>
+# lxc - <No description provided>
+# qemu - <No description provided>
+# uml - <No description provided>
+# xen - <No description provided>
+# parallels - <No description provided>
+#virt_type = kvm
+
+#
+# Overrides the default libvirt URI of the chosen virtualization type.
+#
+# If set, Nova will use this URI to connect to libvirt.
+#
+# Possible values:
+#
+# * An URI like ``qemu:///system`` or ``xen+ssh://oirase/`` for
+#   example. This is only necessary if the URI differs from the
+#   commonly known URIs for the chosen virtualization type.
+#
+# Related options:
+#
+# * ``virt_type``: Influences what is used as default value here.
+#  (string value)
+#connection_uri =
+
+#
+# Algorithm used to hash the injected password.
+# Note that it must be supported by libc on the compute host
+# _and_ by libc inside *any guest image* that will be booted by this
+# compute host with requested password injection.
+# In case the specified algorithm is not supported by libc on the
+# compute host, a fallback to the DES algorithm will be performed.
+#
+# Related options:
+#
+# * ``inject_password``
+# * ``inject_partition``
+#  (string value)
+# Possible values:
+# SHA-512 - <No description provided>
+# SHA-256 - <No description provided>
+# MD5 - <No description provided>
+#inject_password_algorithm = MD5
+
+#
+# Allow the injection of an admin password for an instance, only
+# during the ``create`` and ``rebuild`` process.
+#
+# There is no agent needed within the image to do this. If
+# *libguestfs* is available on the host, it will be used. Otherwise
+# *nbd* is used. The file system of the image will be mounted and the
+# admin password, which is provided in the REST API call, will be
+# injected as the password for the root user. If no root user is
+# available, the instance won't be launched and an error is thrown.
+# Be aware that the injection is *not* possible when the instance
+# gets launched from a volume.
+#
+# Possible values:
+#
+# * True: Allows the injection.
+# * False (default): Disallows the injection. Any admin password
+#   provided via the REST API will be silently ignored.
+#
+# Related options:
+#
+# * ``inject_partition``: That option decides about the discovery and
+#   usage of the file system. It can also disable the injection
+#   entirely.
+#  (boolean value)
+#inject_password = false
+
+#
+# Allow the injection of an SSH key at boot time.
+#
+# There is no agent needed within the image to do this. If
+# *libguestfs* is available on the host, it will be used. Otherwise
+# *nbd* is used. The file system of the image will be mounted and the
+# SSH key, which is provided in the REST API call, will be injected
+# as the SSH key for the root user and appended to the
+# ``authorized_keys`` of that user. The SELinux context will be set
+# if necessary. Be aware that the injection is *not* possible when
+# the instance gets launched from a volume.
+#
+# This config option enables directly modifying the instance disk and
+# does not affect what cloud-init may do using data from the
+# config_drive option or the metadata service.
+#
+# Related options:
+#
+# * ``inject_partition``: That option decides about the discovery and
+#   usage of the file system. It can also disable the injection
+#   entirely.
+#  (boolean value)
+#inject_key = false
+
+#
+# Determines how the file system is chosen for injecting data into it.
+#
+# *libguestfs* is used as the first solution for injecting data. If it
+# is not available on the host, the image will be locally mounted on
+# the host as a fallback solution. If libguestfs cannot determine the
+# root partition (because there are more or fewer than one root
+# partition) or cannot mount the file system, it will result in an
+# error and the instance won't boot.
+#
+# Possible values:
+#
+# * -2 => disable the injection of data.
+# * -1 => find the root partition with the file system to mount with
+#         libguestfs
+# *  0 => The image is not partitioned
+# * >0 => The number of the partition to use for the injection
+#
+# Related options:
+#
+# * ``inject_key``: If this option allows the injection of an SSH key,
+#   it depends on a value greater than or equal to -1 for
+#   ``inject_partition``.
+# * ``inject_password``: If this option allows the injection of an
+#   admin password, it depends on a value greater than or equal to -1
+#   for ``inject_partition``.
+# * ``guestfs``: You can enable the debug log level of libguestfs with
+#   this config option. A more verbose output will help in debugging
+#   issues.
+# * ``virt_type``: If you use ``lxc`` as virt_type, it will be treated
+#   as a single partition image
+#  (integer value)
+# Minimum value: -2
+#inject_partition = -2
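+#
+# For example (hypothetical values): to inject an SSH key while
+# letting libguestfs locate the root partition automatically, the two
+# related options above might be set together as:
+#
+#     inject_key = true
+#     inject_partition = -1
+#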
+
+# DEPRECATED:
+# Enable a mouse cursor within graphical VNC or SPICE sessions.
+#
+# This will only be taken into account if the VM is fully virtualized
+# and VNC and/or SPICE is enabled. If the node doesn't support a
+# graphical framebuffer, then it is valid to set this to False.
+#
+# Related options:
+# * ``[vnc]enabled``: If VNC is enabled, ``use_usb_tablet`` will have
+#   an effect.
+# * ``[spice]enabled`` + ``[spice].agent_enabled``: If SPICE is
+#   enabled and the spice agent is disabled, the config value of
+#   ``use_usb_tablet`` will have an effect.
+#  (boolean value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: This option is being replaced by the 'pointer_model' option.
+#use_usb_tablet = true
+
+#
+# The IP address or hostname to be used as the target for live
+# migration traffic.
+#
+# If this option is set to None, the hostname of the migration target
+# compute node will be used.
+#
+# This option is useful in environments where the live-migration
+# traffic can impact the network plane significantly. A separate
+# network for live-migration traffic can then use this config option,
+# avoiding the impact on the management network.
+#
+# Possible values:
+#
+# * A valid IP address or hostname, else None.
+#
+# Related options:
+#
+# * ``live_migration_tunnelled``: The live_migration_inbound_addr
+#   value is ignored if tunneling is enabled.
+#  (string value)
+#live_migration_inbound_addr = <None>
+live_migration_inbound_addr = 10.167.4.53
+
+# DEPRECATED:
+# Live migration target URI to use.
+#
+# Override the default libvirt live migration target URI (which is
+# dependent on virt_type). Any included "%s" is replaced with the
+# migration target hostname.
+#
+# If this option is set to None (which is the default), Nova will
+# automatically generate the `live_migration_uri` value based on the
+# only 4 supported `virt_type` values in the following list:
+#
+# * 'kvm': 'qemu+tcp://%s/system'
+# * 'qemu': 'qemu+tcp://%s/system'
+# * 'xen': 'xenmigr://%s/system'
+# * 'parallels': 'parallels+tcp://%s/system'
+#
+# Related options:
+#
+# * ``live_migration_inbound_addr``: If the
+#   ``live_migration_inbound_addr`` value is not None and
+#   ``live_migration_tunnelled`` is False, the ip/hostname address of
+#   the target compute node is used instead of ``live_migration_uri``
+#   as the uri for live migration.
+# * ``live_migration_scheme``: If ``live_migration_uri`` is not set,
+#   the scheme used for live migration is taken from
+#   ``live_migration_scheme`` instead.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# live_migration_uri is deprecated for removal in favor of two other
+# options that allow changing the live migration scheme and target
+# URI: ``live_migration_scheme`` and ``live_migration_inbound_addr``
+# respectively.
+#live_migration_uri = <None>
+
+#
+# URI scheme used for live migration.
+#
+# Override the default libvirt live migration scheme (which is
+# dependent on virt_type). If this option is set to None, nova will
+# automatically choose a sensible default based on the hypervisor. It
+# is not recommended that you change this unless you are very sure
+# that the hypervisor supports a particular scheme.
+#
+# Related options:
+#
+# * ``virt_type``: This option is meaningful only when ``virt_type``
+#   is set to `kvm` or `qemu`.
+# * ``live_migration_uri``: If the ``live_migration_uri`` value is not
+#   None, the scheme used for live migration is taken from
+#   ``live_migration_uri`` instead.
+#  (string value)
+#live_migration_scheme = <None>
+
+#
+# Enable tunnelled migration.
+#
+# This option enables the tunnelled migration feature, where migration
+# data is transported over the libvirtd connection. If enabled, we use
+# the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to
+# configure the network to allow direct hypervisor-to-hypervisor
+# communication. If False, use the native transport. If not set, Nova
+# will choose a sensible default based on, for example, the
+# availability of native encryption support in the hypervisor. Be
+# aware that enabling this option will significantly impact
+# performance.
+#
+# Note that this option is NOT compatible with use of block migration.
+#
+# Related options:
+#
+# * ``live_migration_inbound_addr``: The live_migration_inbound_addr
+#   value is ignored if tunneling is enabled.
+#  (boolean value)
+#live_migration_tunnelled = false
+
+#
+# Maximum bandwidth (in MiB/s) to be used during migration.
+#
+# If set to 0, the hypervisor will choose a suitable default. Some
+# hypervisors
+# do not support this feature and will return an error if bandwidth is
+# not 0.
+# Please refer to the libvirt documentation for further details.
+#  (integer value)
+#live_migration_bandwidth = 0
+
+#
+# Maximum permitted downtime, in milliseconds, for live migration
+# switchover.
+#
+# Will be rounded up to a minimum of 100ms. You can increase this
+# value if you want to allow live-migrations to complete faster, or
+# avoid live-migration timeout errors by allowing the guest to be
+# paused for longer during the live-migration switchover.
+#
+# Related options:
+#
+# * live_migration_completion_timeout
+#  (integer value)
+# Minimum value: 100
+#live_migration_downtime = 500
+
+#
+# Number of incremental steps to reach max downtime value.
+#
+# Will be rounded up to a minimum of 3 steps.
+#  (integer value)
+# Minimum value: 3
+#live_migration_downtime_steps = 10
+
+#
+# Time to wait, in seconds, between each step increase of the
+# migration downtime.
+#
+# The minimum delay is 3 seconds. The value is per GiB of guest RAM
+# plus disk to be transferred, with a lower bound of 2 GiB per device.
+#  (integer value)
+# Minimum value: 3
+#live_migration_downtime_delay = 75
+
+#
+# Time to wait, in seconds, for migration to successfully complete
+# transferring data before aborting the operation.
+#
+# The value is per GiB of guest RAM plus disk to be transferred, with
+# a lower bound of 2 GiB. It should usually be larger than
+# downtime delay * downtime steps. Set to 0 to disable timeouts.
+#
+# Related options:
+#
+# * live_migration_downtime
+# * live_migration_downtime_steps
+# * live_migration_downtime_delay
+#  (integer value)
+# Note: This option can be changed without restarting.
+#live_migration_completion_timeout = 800
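+#
+# As a worked example with the default values above: a guest with a
+# hypothetical 8 GiB of RAM plus disk to transfer gets a timeout of
+# 8 * 800 = 6400 seconds, which comfortably exceeds the product of the
+# defaults live_migration_downtime_delay (75) *
+# live_migration_downtime_steps (10) = 750.
+#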
+
+# DEPRECATED:
+# Time to wait, in seconds, for migration to make forward progress in
+# transferring data before aborting the operation.
+#
+# Set to 0 to disable timeouts.
+#
+# This is deprecated, and now disabled by default, because serious
+# bugs were found in this feature that caused false live-migration
+# timeout failures. This feature will be removed or replaced in a
+# future release.
+#  (integer value)
+# Note: This option can be changed without restarting.
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Serious bugs found in this feature.
+#live_migration_progress_timeout = 0
+
+#
+# This option allows nova to switch an on-going live migration to
+# post-copy mode, i.e., switch the active VM to the one on the
+# destination node before the migration is complete, therefore
+# ensuring an upper bound on the memory that needs to be transferred.
+# Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.
+#
+# When permitted, post-copy mode will be automatically activated if a
+# live-migration memory copy iteration does not make a percentage
+# increase of at least 10% over the last iteration.
+#
+# The live-migration force complete API also uses post-copy when
+# permitted. If post-copy mode is not available, force complete falls
+# back to pausing the VM to ensure the live-migration operation will
+# complete.
+#
+# When using post-copy mode, if the source and destination hosts lose
+# network connectivity, the VM being live-migrated will need to be
+# rebooted. For more details, please see the Administration guide.
+#
+# Related options:
+#
+#     * live_migration_permit_auto_converge
+#  (boolean value)
+#live_migration_permit_post_copy = false
+
+#
+# This option allows nova to start live migration with auto converge
+# on.
+#
+# Auto converge throttles down CPU if the progress of an on-going live
+# migration is slow. Auto converge will only be used if this flag is
+# set to True and post copy is not permitted, or post copy is
+# unavailable due to the version of libvirt and QEMU in use.
+#
+# Related options:
+#
+#     * live_migration_permit_post_copy
+#  (boolean value)
+#live_migration_permit_auto_converge = false
+
+#
+# Determine the snapshot image format when sending to the image
+# service.
+#
+# If set, this decides what format is used when sending the snapshot
+# to the image service. If not set, it defaults to the same type as
+# the source image.
+#
+# Possible values:
+#
+# * ``raw``: RAW disk format
+# * ``qcow2``: KVM default disk format
+# * ``vmdk``: VMware default disk format
+# * ``vdi``: VirtualBox default disk format
+#  (string value)
+# Possible values:
+# raw - <No description provided>
+# qcow2 - <No description provided>
+# vmdk - <No description provided>
+# vdi - <No description provided>
+#snapshot_image_format = <None>
+
+#
+# Override the default disk prefix for the devices attached to an
+# instance.
+#
+# If set, this is used to identify a free disk device name for a bus.
+#
+# Possible values:
+#
+# * Any prefix which will result in a valid disk device name, like
+#   'sda' or 'hda' for example. This is only necessary if the device
+#   names differ from the commonly known device name prefixes for a
+#   virtualization type such as: sd, xvd, uvd, vd.
+#
+# Related options:
+#
+# * ``virt_type``: Influences which device type is used, which
+#   determines the default disk prefix.
+#  (string value)
+#disk_prefix = <None>
+
+# Number of seconds to wait for an instance to shut down after a soft
+# reboot request is made. We fall back to hard reboot if the instance
+# does not shut down within this window. (integer value)
+#wait_soft_reboot_seconds = 120
+
+#
+# Sets the CPU mode an instance should have.
+#
+# If virt_type="kvm|qemu", it will default to "host-model"; otherwise
+# it will default to "none".
+#
+# Possible values:
+#
+# * ``host-model``: Clones the host CPU feature flags
+# * ``host-passthrough``: Use the host CPU model exactly
+# * ``custom``: Use a named CPU model
+# * ``none``: Don't set a specific CPU model. For instances with
+#   ``virt_type`` as KVM/QEMU, the default CPU model from QEMU will be
+#   used, which provides a basic set of CPU features that are
+#   compatible with most hosts.
+#
+# Related options:
+#
+# * ``cpu_model``: This should be set ONLY when ``cpu_mode`` is set to
+#   ``custom``. Otherwise, it would result in an error and the
+#   instance launch will fail.
+#
+#  (string value)
+# Possible values:
+# host-model - <No description provided>
+# host-passthrough - <No description provided>
+# custom - <No description provided>
+# none - <No description provided>
+#cpu_mode = <None>
+cpu_mode = host-passthrough
+
+#
+# Set the name of the libvirt CPU model the instance should use.
+#
+# Possible values:
+#
+# * The named CPU models listed in ``/usr/share/libvirt/cpu_map.xml``
+#
+# Related options:
+#
+# * ``cpu_mode``: This should be set to ``custom`` ONLY when you want
+#   to configure (via ``cpu_model``) a specific named CPU model.
+#   Otherwise, it would result in an error and the instance launch
+#   will fail.
+#
+# * ``virt_type``: Only the virtualization types ``kvm`` and ``qemu``
+#   use this.
+#  (string value)
+#cpu_model = <None>
+
+#
+# This allows specifying granular CPU feature flags when specifying
+# CPU models.  For example, to explicitly specify the ``pcid``
+# (Process-Context ID, an Intel processor feature) flag to the
+# "IvyBridge" virtual CPU model::
+#
+#     [libvirt]
+#     cpu_mode = custom
+#     cpu_model = IvyBridge
+#     cpu_model_extra_flags = pcid
+#
+# Currently, the choice is restricted to only one option: ``pcid``
+# (the option is case-insensitive, so ``PCID`` is also valid).  This
+# flag is now required to address the guest performance degradation
+# resulting from applying the "Meltdown" CVE fixes on certain Intel
+# CPU models.
+#
+# Note that when using this config attribute to set the 'PCID' CPU
+# flag, not all virtual (i.e. libvirt / QEMU) CPU models need it:
+#
+# * The only virtual CPU models that include the 'PCID' capability are
+#   Intel "Haswell", "Broadwell", and "Skylake" variants.
+#
+# * The libvirt / QEMU CPU models "Nehalem", "Westmere",
+#   "SandyBridge", and "IvyBridge" will _not_ expose the 'PCID'
+#   capability by default, even if the host CPUs by the same name
+#   include it.  I.e. 'PCID' needs to be explicitly specified when
+#   using the said virtual CPU models.
+#
+# For now, the ``cpu_model_extra_flags`` config attribute is valid
+# only in combination with the ``cpu_mode`` + ``cpu_model`` options.
+#
+# Besides ``custom``, the libvirt driver has two other CPU modes: The
+# default, ``host-model``, tells it to do the right thing with respect
+# to handling the 'PCID' CPU flag for the guest -- *assuming* you are
+# running updated processor microcode, host and guest kernel, libvirt,
+# and QEMU.  The other mode, ``host-passthrough``, checks if 'PCID' is
+# available in the hardware, and if so directly passes it through to
+# the Nova guests.  Thus, in the context of 'PCID', with either of
+# these CPU modes (``host-model`` or ``host-passthrough``), there is
+# no need to use ``cpu_model_extra_flags``.
+#
+# Related options:
+#
+# * cpu_mode
+# * cpu_model
+#  (list value)
+#cpu_model_extra_flags =
+
+# Location where the libvirt driver will store snapshots before
+# uploading them to the image service (string value)
+#snapshots_directory = $instances_path/snapshots
+
+# Location where the Xen hvmloader is kept (string value)
+#xen_hvmloader_path = /usr/lib/xen/boot/hvmloader
+
+#
+# Specific cache modes to use for different disk types.
+#
+# For example: file=directsync,block=none,network=writeback
+#
+# For local or direct-attached storage, it is recommended that you use
+# writethrough (default) mode, as it ensures data integrity and has
+# acceptable I/O performance for applications running in the guest,
+# especially for read operations. However, caching mode none is
+# recommended for remote NFS storage, because direct I/O operations
+# (O_DIRECT) perform better than synchronous I/O operations (with
+# O_SYNC). Caching mode none effectively turns all guest I/O
+# operations into direct I/O operations on the host, which is the NFS
+# client in this environment.
+#
+# Possible cache modes:
+#
+# * default: Same as writethrough.
+# * none: With caching mode set to none, the host page cache is
+#   disabled, but the disk write cache is enabled for the guest. In
+#   this mode, the write performance in the guest is optimal because
+#   write operations bypass the host page cache and go directly to the
+#   disk write cache. If the disk write cache is battery-backed, or if
+#   the applications or storage stack in the guest transfer data
+#   properly (either through fsync operations or file system
+#   barriers), then data integrity can be ensured. However, because
+#   the host page cache is disabled, the read performance in the guest
+#   would not be as good as in the modes where the host page cache is
+#   enabled, such as writethrough mode. Shareable disk devices, like
+#   for a multi-attachable block storage volume, will have their cache
+#   mode set to 'none' regardless of configuration.
+# * writethrough: writethrough mode is the default caching mode. With
+#   caching set to writethrough mode, the host page cache is enabled,
+#   but the disk write cache is disabled for the guest. Consequently,
+#   this caching mode ensures data integrity even if the applications
+#   and storage stack in the guest do not transfer data to permanent
+#   storage properly (either through fsync operations or file system
+#   barriers). Because the host page cache is enabled in this mode,
+#   the read performance for applications running in the guest is
+#   generally better. However, the write performance might be reduced
+#   because the disk write cache is disabled.
+# * writeback: With caching set to writeback mode, both the host page
+#   cache and the disk write cache are enabled for the guest. Because
+#   of this, the I/O performance for applications running in the guest
+#   is good, but the data is not protected in a power failure. As a
+#   result, this caching mode is recommended only for temporary data
+#   where potential data loss is not a concern.
+# * directsync: Like "writethrough", but it bypasses the host page
+#   cache.
+# * unsafe: Caching mode of unsafe ignores cache transfer operations
+#   completely. As its name implies, this caching mode should be used
+#   only for temporary data where data loss is not a concern. This
+#   mode can be useful for speeding up guest installations, but you
+#   should switch to another caching mode in production environments.
+#  (list value)
+#disk_cachemodes =
+
+# A path to a device that will be used as source of entropy on the
+# host. Permitted options are: /dev/random or /dev/hwrng (string
+# value)
+#rng_dev_path = <None>
+
+# For qemu or KVM guests, set this option to specify a default machine
+# type per host architecture. You can find a list of supported machine
+# types in your environment by checking the output of the "virsh
+# capabilities" command. The format of the value for this config
+# option is host-arch=machine-type. For example:
+# x86_64=machinetype1,armv7l=machinetype2 (list value)
+#hw_machine_type = <None>
+
+# The data source used to populate the host "serial" UUID exposed to
+# the guest in the virtual BIOS. (string value)
+# Possible values:
+# none - <No description provided>
+# os - <No description provided>
+# hardware - <No description provided>
+# auto - <No description provided>
+#sysinfo_serial = auto
+
+# The period, in seconds, for gathering memory usage statistics. A
+# zero or negative value disables memory usage statistics. (integer
+# value)
+#mem_stats_period_seconds = 10
+
+# List of uid targets and ranges. Syntax is
+# guest-uid:host-uid:count. Maximum of 5 allowed. (list value)
+#uid_maps =
+
+# List of gid targets and ranges. Syntax is
+# guest-gid:host-gid:count. Maximum of 5 allowed. (list value)
+#gid_maps =
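+#
+# For example (hypothetical values), to map guest root (uid/gid 0) to
+# host uid/gid 1000 for a single id each:
+#
+#     uid_maps = 0:1000:1
+#     gid_maps = 0:1000:1
+#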
+
+# In a realtime host context, vCPUs for the guest will run at this
+# scheduling priority. The priority range depends on the host kernel
+# (usually 1-99) (integer value)
+#realtime_scheduler_priority = 1
+
+#
+# This is a list of performance events that can be monitored. These
+# events will be passed to the libvirt domain xml when creating new
+# instances. Event statistics data can then be collected from libvirt.
+# The minimum libvirt version is 2.0.0. For more information about
+# `Performance monitoring events`, refer to
+# https://libvirt.org/formatdomain.html#elementsPerf .
+#
+# Possible values:
+# * A string list. For example: ``enabled_perf_events = cmt, mbml,
+#   mbmt``
+#   The supported events list can be found in
+#   https://libvirt.org/html/libvirt-libvirt-domain.html ,
+#   where you may need to search for the key words ``VIR_PERF_PARAM_*``
+#  (list value)
+#enabled_perf_events =
+
+#
+# VM Images format.
+#
+# If ``default`` is specified, then the ``use_cow_images`` flag is
+# used instead of this one.
+#
+# Related options:
+#
+# * virt.use_cow_images
+# * images_volume_group
+#  (string value)
+# Possible values:
+# raw - <No description provided>
+# flat - <No description provided>
+# qcow2 - <No description provided>
+# lvm - <No description provided>
+# rbd - <No description provided>
+# ploop - <No description provided>
+# default - <No description provided>
+#images_type = default
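+#
+# For example (hypothetical values), backing instance disks with Ceph
+# RBD is typically configured together with the related rbd options:
+#
+#     images_type = rbd
+#     images_rbd_pool = vms
+#     images_rbd_ceph_conf = /etc/ceph/ceph.conf
+#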
+
+#
+# LVM Volume Group that is used for VM images, when you specify
+# images_type=lvm
+#
+# Related options:
+#
+# * images_type
+#  (string value)
+#images_volume_group = <None>
+
+#
+# Create sparse logical volumes (with virtualsize) if this flag is set
+# to True.
+#  (boolean value)
+#sparse_logical_volumes = false
+
+# The RADOS pool in which rbd volumes are stored (string value)
+#images_rbd_pool = rbd
+
+# Path to the ceph configuration file to use (string value)
+#images_rbd_ceph_conf =
+
+#
+# Discard option for nova managed disks.
+#
+# Requires:
+#
+# * Libvirt >= 1.0.6
+# * Qemu >= 1.5 (raw format)
+# * Qemu >= 1.6 (qcow2 format)
+#  (string value)
+# Possible values:
+# ignore - <No description provided>
+# unmap - <No description provided>
+#hw_disk_discard = <None>
+
+# DEPRECATED: Allows image information files to be stored in
+# non-standard locations (string value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Image info files are no longer used by the image cache
+#image_info_filename_pattern = $instances_path/$image_cache_subdirectory_name/%(image)s.info
+
+# Unused resized base images younger than this will not be removed
+# (integer value)
+#remove_unused_resized_minimum_age_seconds = 3600
+
+# DEPRECATED: Write a checksum for files in _base to disk (boolean
+# value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: The image cache no longer periodically calculates checksums
+# of stored images. Data integrity can be checked at the block or
+# filesystem level.
+#checksum_base_images = false
+
+# DEPRECATED: How frequently to checksum base images (integer value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: The image cache no longer periodically calculates checksums
+# of stored images. Data integrity can be checked at the block or
+# filesystem level.
+#checksum_interval_seconds = 3600
+
+#
+# Method used to wipe ephemeral disks when they are deleted. Only
+# takes effect
+# if LVM is set as backing storage.
+#
+# Possible values:
+#
+# * none - do not wipe deleted volumes
+# * zero - overwrite volumes with zeroes
+# * shred - overwrite volume repeatedly
+#
+# Related options:
+#
+# * images_type - must be set to ``lvm``
+# * volume_clear_size
+#  (string value)
+# Possible values:
+# none - <No description provided>
+# zero - <No description provided>
+# shred - <No description provided>
+#volume_clear = zero
+
+#
+# Size of area in MiB, counting from the beginning of the allocated
+# volume,
+# that will be cleared using method set in ``volume_clear`` option.
+#
+# Possible values:
+#
+# * 0 - clear whole volume
+# * >0 - clear specified amount of MiB
+#
+# Related options:
+#
+# * images_type - must be set to ``lvm``
+# * volume_clear - must be set and the value must be different than
+# ``none``
+#   for this option to have any impact
+#  (integer value)
+# Minimum value: 0
+#volume_clear_size = 0
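+#
+# For example (hypothetical values), to zero only the first 100 MiB of
+# each deleted LVM-backed disk, the related options above combine as:
+#
+#     images_type = lvm
+#     volume_clear = zero
+#     volume_clear_size = 100
+#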
+
+#
+# Enable snapshot compression for ``qcow2`` images.
+#
+# Note: you can set ``snapshot_image_format`` to ``qcow2`` to force
+# all
+# snapshots to be in ``qcow2`` format, independently from their
+# original image
+# type.
+#
+# Related options:
+#
+# * snapshot_image_format
+#  (boolean value)
+#snapshot_compression = false
+
+# Use virtio for bridge interfaces with KVM/QEMU (boolean value)
+#use_virtio_for_bridges = true
+
+#
+# Use multipath connection of the iSCSI or FC volume
+#
+# Volumes can be connected in libvirt as multipath devices. This will
+# provide high availability and fault tolerance.
+#  (boolean value)
+# Deprecated group/name - [libvirt]/iscsi_use_multipath
+#volume_use_multipath = false
+
+#
+# Number of times to scan the given storage protocol to find a volume.
+#  (integer value)
+# Deprecated group/name - [libvirt]/num_iscsi_scan_tries
+#num_volume_scan_tries = 5
+
+#
+# Number of times to rediscover AoE target to find volume.
+#
+# Nova provides support for block storage attaching to hosts via AOE
+# (ATA over
+# Ethernet). This option allows the user to specify the maximum number
+# of retry
+# attempts that can be made to discover the AoE device.
+#  (integer value)
+#num_aoe_discover_tries = 3
+
+#
+# The iSCSI transport iface to use to connect to the target in case
+# offload support is desired.
+#
+# The default format is of the form <transport_name>.<hwaddress>,
+# where <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i,
+# qla4xxx, ocs) and <hwaddress> is the MAC address of the interface,
+# which can be generated via the iscsiadm -m iface command. Do not
+# confuse the iscsi_iface parameter provided here with the actual
+# transport name.
+#  (string value)
+# Deprecated group/name - [libvirt]/iscsi_transport
+#iscsi_iface = <None>
+
+#
+# Number of times to scan an iSER target to find a volume.
+#
+# iSER is a server network protocol that extends the iSCSI protocol to
+# use Remote Direct Memory Access (RDMA). This option allows the user
+# to specify the maximum number of scan attempts that can be made to
+# find an iSER volume.
+#  (integer value)
+#num_iser_scan_tries = 5
+
+#
+# Use multipath connection of the iSER volume.
+#
+# iSER volumes can be connected as multipath devices. This will
+# provide high
+# availability and fault tolerance.
+#  (boolean value)
+#iser_use_multipath = false
+
+#
+# The RADOS client name for accessing rbd (RADOS Block Devices)
+# volumes.
+#
+# Libvirt will refer to this user when connecting and authenticating
+# with the Ceph RBD server.
+#  (string value)
+#rbd_user = <None>
+
+#
+# The libvirt UUID of the secret for the rbd_user volumes.
+#  (string value)
+#rbd_secret_uuid = <None>
+
+#
+# Directory where the NFS volume is mounted on the compute node.
+# The default is the 'mnt' directory of the location where nova's
+# Python module is installed.
+#
+# NFS provides shared storage for the OpenStack Block Storage service.
+#
+# Possible values:
+#
+# * A string representing the absolute path of the mount point.
+#  (string value)
+#nfs_mount_point_base = $state_path/mnt
+
+#
+# Mount options passed to the NFS client. See the nfs man page for
+# details.
+#
+# Mount options control the way the filesystem is mounted and how the
+# NFS client behaves when accessing files on this mount point.
+#
+# Possible values:
+#
+# * Any string representing mount options separated by commas.
+# * Example string: vers=3,lookupcache=pos
+#  (string value)
+#nfs_mount_options = <None>
+
+#
+# Directory where the Quobyte volume is mounted on the compute node.
+#
+# Nova supports the Quobyte volume driver, which enables storing Block
+# Storage service volumes on a Quobyte storage back end. This option
+# specifies the path of the directory where the Quobyte volume is
+# mounted.
+#
+# Possible values:
+#
+# * A string representing the absolute path of the mount point.
+#  (string value)
+#quobyte_mount_point_base = $state_path/mnt
+
+# Path to a Quobyte Client configuration file. (string value)
+#quobyte_client_cfg = <None>
+
+#
+# Directory where the SMBFS shares are mounted on the compute node.
+#  (string value)
+#smbfs_mount_point_base = $state_path/mnt
+
+#
+# Mount options passed to the SMBFS client.
+#
+# Provide SMBFS options as a single string containing all parameters.
+# See mount.cifs man page for details. Note that the libvirt-qemu
+# ``uid``
+# and ``gid`` must be specified.
+#  (string value)
+#smbfs_mount_options =
+
+#
+# libvirt's transport method for remote file operations.
+#
+# Because libvirt cannot use RPC to copy files over the network
+# to/from other compute nodes, another method must be used for:
+#
+# * creating a directory on the remote host
+# * creating a file on the remote host
+# * removing a file from the remote host
+# * copying a file to the remote host
+#  (string value)
+# Possible values:
+# ssh - <No description provided>
+# rsync - <No description provided>
+#remote_filesystem_transport = ssh
+
+#
+# Directory where the Virtuozzo Storage clusters are mounted on the
+# compute
+# node.
+#
+# This option defines non-standard mountpoint for Vzstorage cluster.
+#
+# Related options:
+#
+# * vzstorage_mount_* group of parameters
+#  (string value)
+#vzstorage_mount_point_base = $state_path/mnt
+
+#
+# Mount owner user name.
+#
+# This option defines the owner user of Vzstorage cluster mountpoint.
+#
+# Related options:
+#
+# * vzstorage_mount_* group of parameters
+#  (string value)
+#vzstorage_mount_user = stack
+
+#
+# Mount owner group name.
+#
+# This option defines the owner group of Vzstorage cluster mountpoint.
+#
+# Related options:
+#
+# * vzstorage_mount_* group of parameters
+#  (string value)
+#vzstorage_mount_group = qemu
+
+#
+# Mount access mode.
+#
+# This option defines the access bits of the Vzstorage cluster
+# mountpoint, in a format similar to that of the chmod(1) utility,
+# e.g. 0770. It consists of one to four digits ranging from 0 to 7,
+# with missing lead digits assumed to be 0's.
+#
+# Related options:
+#
+# * vzstorage_mount_* group of parameters
+#  (string value)
+#vzstorage_mount_perms = 0770
+
+#
+# Path to vzstorage client log.
+#
+# This option defines the log of cluster operations;
+# it should include the "%(cluster_name)s" template to separate
+# logs from multiple shares.
+#
+# Related options:
+#
+# * vzstorage_mount_opts may include more detailed logging options.
+#  (string value)
+#vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz
+
+#
+# Path to the SSD cache file.
+#
+# You can attach an SSD drive to a client and configure the drive to
+# store a local cache of frequently accessed data. By having a local
+# cache on a client's SSD drive, you can increase the overall cluster
+# performance by up to 10 times or more.
+# WARNING! There are many SSD models which are not server grade and
+# may lose an arbitrary set of data changes on power loss.
+# Such SSDs should not be used in Vstorage and are dangerous, as they
+# may lead to data corruption and inconsistencies. Please consult the
+# manual on which SSD models are known to be safe, or verify it using
+# the vstorage-hwflush-check(1) utility.
+#
+# This option defines the path, which should include the
+# "%(cluster_name)s" template to separate caches from multiple shares.
+#
+# Related options:
+#
+# * vzstorage_mount_opts may include more detailed cache options.
+#  (string value)
+#vzstorage_cache_path = <None>
+
+#
+# Extra mount options for pstorage-mount
+#
+# For full description of them, see
+# https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html
+# The format is a Python string representation of an argument list,
+# like: "['-v', '-R', '500']"
+# Shouldn't include -c, -l, -C, -u, -g and -m as those have
+# explicit vzstorage_* options.
+#
+# Related options:
+#
+# * All other vzstorage_* options
+#  (list value)
+#vzstorage_mount_opts =
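+# As an illustrative (hypothetical) combination of the vzstorage_*
+# options above, a deployment might set:
+#
+#     vzstorage_mount_point_base = /var/lib/nova/mnt
+#     vzstorage_mount_user = stack
+#     vzstorage_mount_group = qemu
+#     vzstorage_mount_perms = 0770
+#     vzstorage_mount_opts = ['-v', '-R', '500']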
+
+
+[metrics]
+#
+# Configuration options for metrics
+#
+# Options under this group allow you to adjust how the values assigned
+# to metrics are calculated.
+
+#
+# From nova.conf
+#
+
+#
+# When using metrics to weight the suitability of a host, you can use
+# this option
+# to change how the calculated weight influences the weight assigned
+# to a host as
+# follows:
+#
+# * >1.0: increases the effect of the metric on overall weight
+# * 1.0: no change to the calculated weight
+# * >0.0,<1.0: reduces the effect of the metric on overall weight
+# * 0.0: the metric value is ignored, and the value of the
+#   'weight_of_unavailable' option is returned instead
+# * >-1.0,<0.0: the effect is reduced and reversed
+# * -1.0: the effect is reversed
+# * <-1.0: the effect is increased proportionally and reversed
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the
+#   multiplier ratio for this weigher.
+#
+# Related options:
+#
+# * weight_of_unavailable
+#  (floating point value)
+#weight_multiplier = 1.0
+
+#
+# This setting specifies the metrics to be weighed and the relative
+# ratios for
+# each metric. This should be a single string value, consisting of a
+# series of
+# one or more 'name=ratio' pairs, separated by commas, where 'name' is
+# the name
+# of the metric to be weighed, and 'ratio' is the relative weight for
+# that
+# metric.
+#
+# Note that if the ratio is set to 0, the metric value is ignored, and
+# instead
+# the weight will be set to the value of the 'weight_of_unavailable'
+# option.
+#
+# As an example, let's consider the case where this option is set to:
+#
+#     ``name1=1.0, name2=-1.3``
+#
+# The final weight will be:
+#
+#     ``(name1.value * 1.0) + (name2.value * -1.3)``
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more key/value pairs separated by commas, where
+# the key is
+#   a string representing the name of a metric and the value is a
+# numeric weight
+#   for that metric. If any value is set to 0, the value is ignored
+# and the
+#   weight will be set to the value of the 'weight_of_unavailable'
+# option.
+#
+# Related options:
+#
+# * weight_of_unavailable
+#  (list value)
+#weight_setting =
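+# As a worked (illustrative) example of the formula above: with
+# weight_setting = name1=1.0, name2=-1.3, a host reporting
+# name1.value = 2.0 and name2.value = 1.0 gets a weight of
+# (2.0 * 1.0) + (1.0 * -1.3) = 0.7, before 'weight_multiplier' is
+# applied.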
+
+#
+# This setting determines how any unavailable metrics are treated. If
+# this option
+# is set to True, any hosts for which a metric is unavailable will
+# raise an
+# exception, so it is recommended to also use the MetricFilter to
+# filter out
+# those hosts before weighing.
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * True or False, where False ensures any metric being unavailable
+# for a host
+#   will set the host weight to 'weight_of_unavailable'.
+#
+# Related options:
+#
+# * weight_of_unavailable
+#  (boolean value)
+#required = true
+
+#
+# When any of the following conditions are met, this value will be
+# used in place
+# of any actual metric value:
+#
+# * One of the metrics named in 'weight_setting' is not available for
+# a host,
+#   and the value of 'required' is False
+# * The ratio specified for a metric in 'weight_setting' is 0
+# * The 'weight_multiplier' option is set to 0
+#
+# This option is only used by the FilterScheduler and its subclasses;
+# if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the
+#   multiplier ratio for this weigher.
+#
+# Related options:
+#
+# * weight_setting
+# * required
+# * weight_multiplier
+#  (floating point value)
+#weight_of_unavailable = -10000.0
+
+
+[mks]
+#
+# The Nova compute node uses WebMKS, a desktop sharing protocol, to
+# provide console access to VMs created by VMware hypervisors.
+#
+# Related options:
+# Following options must be set to provide console access.
+# * mksproxy_base_url
+# * enabled
+
+#
+# From nova.conf
+#
+
+#
+# Location of MKS web console proxy
+#
+# The URL in the response points to a WebMKS proxy which
+# starts proxying between the client and the corresponding vCenter
+# server where the instance runs. To use web-based console access,
+# a WebMKS proxy must be installed and configured.
+#
+# Possible values:
+#
+# * Must be a valid URL of the form:``http://host:port/`` or
+#   ``https://host:port/``
+#  (uri value)
+#mksproxy_base_url = http://127.0.0.1:6090/
+
+#
+# Enables graphical console access for virtual machines.
+#  (boolean value)
+#enabled = false
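+# For example, to enable WebMKS console access (the proxy host and
+# port below are hypothetical):
+#
+#     enabled = true
+#     mksproxy_base_url = https://mks-proxy.example.com:6090/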
+
+
+[neutron]
+#
+# Configuration options for neutron (network connectivity as a
+# service).
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# This option specifies the URL for connecting to Neutron.
+#
+# Possible values:
+#
+# * Any valid URL that points to the Neutron API service is
+# appropriate here.
+#   This typically matches the URL returned for the 'network' service
+# type
+#   from the Keystone service catalog.
+#  (uri value)
+# This option is deprecated for removal since 17.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Endpoint lookup uses the service catalog via common
+# keystoneauth1 Adapter configuration options. In the current release,
+# "url" will override this behavior, but will be ignored and/or
+# removed in a future release. To achieve the same result, use the
+# endpoint_override option instead.
+#url = http://127.0.0.1:9696
+
+#
+# Default name for the Open vSwitch integration bridge.
+#
+# Specifies the name of the integration bridge interface used by
+# Open vSwitch. This option is only used if Neutron does not specify
+# the OVS bridge name in port binding responses.
+#  (string value)
+#ovs_bridge = br-int
+
+#
+# Default name for the floating IP pool.
+#
+# Specifies the name of the floating IP pool used for allocating
+# floating IPs. This option is only used if Neutron does not specify
+# the floating IP pool name in port binding responses.
+#  (string value)
+#default_floating_pool = nova
+
+#
+# Integer value representing the number of seconds to wait before
+# querying
+# Neutron for extensions.  After this number of seconds the next time
+# Nova
+# needs to create a resource in Neutron it will requery Neutron for
+# the
+# extensions that it has loaded.  Setting value to 0 will refresh the
+# extensions with no wait.
+#  (integer value)
+# Minimum value: 0
+#extension_sync_interval = 600
+extension_sync_interval=600
+
+#
+# When set to True, this option indicates that Neutron will be used to
+# proxy
+# metadata requests and resolve instance ids. Otherwise, the instance
+# ID must be
+# passed to the metadata request in the 'X-Instance-ID' header.
+#
+# Related options:
+#
+# * metadata_proxy_shared_secret
+#  (boolean value)
+#service_metadata_proxy = false
+
+#
+# This option holds the shared secret string used to validate proxied
+# Neutron metadata requests. To be used, the
+# 'X-Metadata-Provider-Signature' header must be supplied in the
+# request.
+#
+# Related options:
+#
+# * service_metadata_proxy
+#  (string value)
+#metadata_proxy_shared_secret =
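+# An illustrative pairing of the two options above (the secret value
+# is hypothetical and must match the one configured for Neutron's
+# metadata proxy):
+#
+#     service_metadata_proxy = true
+#     metadata_proxy_shared_secret = s3cr3t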
+
+# PEM encoded Certificate Authority to use when verifying HTTPS
+# connections. (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+timeout=300
+
+# Authentication type to load (string value)
+# Deprecated group/name - [neutron]/auth_plugin
+#auth_type = <None>
+auth_type = v3password
+
+# Config Section from which to load plugin specific options (string
+# value)
+#auth_section = <None>
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357/v3
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name=service
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+project_domain_name = Default
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used
+# for both the user and project domain in v3 and ignored in v2
+# authentication. (string value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will
+# be used for both the user and project domain in v3 and ignored in v2
+# authentication. (string value)
+#default_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username=neutron
+
+# User's domain id (string value)
+#user_domain_id = <None>
+
+# User's domain name (string value)
+#user_domain_name = <None>
+user_domain_name = Default
+
+# User's password (string value)
+#password = <None>
+password=opnfv_secret
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# The default service_type for endpoint URL discovery. (string value)
+#service_type = network
+
+# The default service_name for endpoint URL discovery. (string value)
+#service_name = <None>
+
+# List of interfaces, in order of preference, for endpoint URL. (list
+# value)
+#valid_interfaces = internal,public
+
+# The default region_name for endpoint URL discovery. (string value)
+#region_name = <None>
+region_name= RegionOne
+
+# Always use this endpoint URL for requests for this client. NOTE: The
+# unversioned endpoint should be specified here; to request a
+# particular API version, use the `version`, `min-version`, and/or
+# `max-version` options. (string value)
+#endpoint_override = <None>
+
+
+[notifications]
+#
+# Most of the actions in Nova which manipulate the system state
+# generate notifications which are posted to the messaging component
+# (e.g. RabbitMQ) and can be consumed by any service outside of
+# OpenStack. More technical details at
+# https://docs.openstack.org/nova/latest/reference/notifications.html
+
+#
+# From nova.conf
+#
+
+#
+# If set, send compute.instance.update notifications on
+# instance state changes.
+#
+# Please refer to
+# https://docs.openstack.org/nova/latest/reference/notifications.html
+# for
+# additional information on notifications.
+#
+# Possible values:
+#
+# * None - no notifications
+# * "vm_state" - notifications are sent with VM state transition
+# information in
+#   the ``old_state`` and ``state`` fields. The ``old_task_state`` and
+#   ``new_task_state`` fields will be set to the current task_state of
+# the
+#   instance.
+# * "vm_and_task_state" - notifications are sent with VM and task
+# state
+#   transition information.
+#  (string value)
+# Possible values:
+# <None> - <No description provided>
+# vm_state - <No description provided>
+# vm_and_task_state - <No description provided>
+#notify_on_state_change = <None>
+notify_on_state_change = vm_and_task_state
+
+# Default notification level for outgoing notifications. (string
+# value)
+# Possible values:
+# DEBUG - <No description provided>
+# INFO - <No description provided>
+# WARN - <No description provided>
+# ERROR - <No description provided>
+# CRITICAL - <No description provided>
+# Deprecated group/name - [DEFAULT]/default_notification_level
+#default_level = INFO
+
+# DEPRECATED:
+# Default publisher_id for outgoing notifications. If you consider
+# routing
+# notifications using different publisher, change this value
+# accordingly.
+#
+# Possible values:
+#
+# * Defaults to the current hostname of this host, but it can be any
+# valid
+#   oslo.messaging publisher_id
+#
+# Related options:
+#
+# *  host - Hostname, FQDN or IP address of this host.
+#  (string value)
+# This option is deprecated for removal since 17.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This option is only used when ``monkey_patch=True`` and
+# ``monkey_patch_modules`` is configured to specify the legacy
+# notify_decorator.
+# Since the monkey_patch and monkey_patch_modules options are
+# deprecated, this
+# option is also deprecated.
+#default_publisher_id = $host
+
+#
+# Specifies which notification format shall be used by nova.
+#
+# The default value is fine for most deployments and rarely needs to
+# be changed.
+# This value can be set to 'versioned' once the infrastructure moves
+# closer to
+# consuming the newer format of notifications. After this occurs, this
+# option
+# will be removed.
+#
+# Note that notifications can be completely disabled by setting
+# ``driver=noop``
+# in the ``[oslo_messaging_notifications]`` group.
+#
+# Possible values:
+# * unversioned: Only the legacy unversioned notifications are
+# emitted.
+# * versioned: Only the new versioned notifications are emitted.
+# * both: Both the legacy unversioned and the new versioned
+# notifications are
+#   emitted. (Default)
+#
+# The list of versioned notifications is visible in
+# https://docs.openstack.org/nova/latest/reference/notifications.html
+#  (string value)
+# Possible values:
+# unversioned - <No description provided>
+# versioned - <No description provided>
+# both - <No description provided>
+#notification_format = both
+
+#
+# Specifies the topics for the versioned notifications issued by nova.
+#
+# The default value is fine for most deployments and rarely needs to
+# be changed.
+# However, if you have a third-party service that consumes versioned
+# notifications, it might be worth getting a topic for that service.
+# Nova will send a message containing a versioned notification payload
+# to each
+# topic queue in this list.
+#
+# The list of versioned notifications is visible in
+# https://docs.openstack.org/nova/latest/reference/notifications.html
+#  (list value)
+#versioned_notifications_topics = versioned_notifications
+
+#
+# If enabled, include block device information in the versioned
+# notification
+# payload. Sending block device information is disabled by default as
+# providing
+# that information can incur some overhead on the system since the
+# information
+# may need to be loaded from the database.
+#  (boolean value)
+#bdms_in_notifications = false
+
+
+[osapi_v21]
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# This option is a string representing a regular expression (regex)
+# that matches
+# the project_id as contained in URLs. If not set, it will match
+# normal UUIDs
+# created by keystone.
+#
+# Possible values:
+#
+# * A string representing any legal regular expression
+#  (string value)
+# This option is deprecated for removal since 13.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# Recent versions of nova constrain project IDs to hexadecimal
+# characters and
+# dashes. If your installation uses IDs outside of this range, you
+# should use
+# this option to provide your own regex and give you time to migrate
+# offending
+# projects to valid IDs before the next release.
+#project_id_regex = <None>
+
+
+[pci]
+
+#
+# From nova.conf
+#
+
+#
+# An alias for a PCI passthrough device requirement.
+#
+# This allows users to specify the alias in the extra specs for a
+# flavor, without
+# needing to repeat all the PCI property requirements.
+#
+# Possible Values:
+#
+# * A list of JSON values which describe the aliases. For example::
+#
+#     alias = {
+#       "name": "QuickAssist",
+#       "product_id": "0443",
+#       "vendor_id": "8086",
+#       "device_type": "type-PCI",
+#       "numa_policy": "required"
+#     }
+#
+#   This defines an alias for the Intel QuickAssist card (multi
+#   valued). Valid key values are:
+#
+#   ``name``
+#     Name of the PCI alias.
+#
+#   ``product_id``
+#     Product ID of the device in hexadecimal.
+#
+#   ``vendor_id``
+#     Vendor ID of the device in hexadecimal.
+#
+#   ``device_type``
+#     Type of PCI device. Valid values are: ``type-PCI``, ``type-PF``
+# and
+#     ``type-VF``.
+#
+#   ``numa_policy``
+#     Required NUMA affinity of device. Valid values are: ``legacy``,
+#     ``preferred`` and ``required``.
+#  (multi valued)
+# Deprecated group/name - [DEFAULT]/pci_alias
+#alias =
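+# Once an alias is defined, it can be referenced from a flavor's extra
+# specs (illustrative example; the alias name matches the "name" key
+# above, and ":1" requests one such device):
+#
+#     pci_passthrough:alias = QuickAssist:1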
+
+#
+# White list of PCI devices available to VMs.
+#
+# Possible values:
+#
+# * A JSON dictionary which describes a whitelisted PCI device. It
+#   should take the following format:
+#
+#     ["vendor_id": "<id>",] ["product_id": "<id>",]
+#     ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
+#      "devname": "<name>",]
+#     {"<tag>": "<tag_value>",}
+#
+#   Where '[' indicates zero or one occurrences, '{' indicates zero or
+#   multiple occurrences, and '|' indicates mutually exclusive
+#   options. Note that any missing fields are automatically
+#   wildcarded.
+#
+#   Valid key values are:
+#
+#   * "vendor_id": Vendor ID of the device in hexadecimal.
+#   * "product_id": Product ID of the device in hexadecimal.
+#   * "address": PCI address of the device.
+#   * "devname": Device name of the device (for e.g. interface name).
+# Not all
+#     PCI devices have a name.
+#   * "<tag>": Additional <tag> and <tag_value> used for matching PCI
+# devices.
+#     Supported <tag>: "physical_network".
+#
+#   The address key supports traditional glob style and regular
+# expression
+#   syntax. Valid examples are:
+#
+#     passthrough_whitelist = {"devname":"eth0",
+#                              "physical_network":"physnet"}
+#     passthrough_whitelist = {"address":"*:0a:00.*"}
+#     passthrough_whitelist = {"address":":0a:00.",
+#                              "physical_network":"physnet1"}
+#     passthrough_whitelist = {"vendor_id":"1137",
+#                              "product_id":"0071"}
+#     passthrough_whitelist = {"vendor_id":"1137",
+#                              "product_id":"0071",
+#                              "address": "0000:0a:00.1",
+#                              "physical_network":"physnet1"}
+#     passthrough_whitelist = {"address":{"domain": ".*",
+#                                         "bus": "02", "slot": "01",
+#                                         "function": "[2-7]"},
+#                              "physical_network":"physnet1"}
+#     passthrough_whitelist = {"address":{"domain": ".*",
+#                                         "bus": "02", "slot":
+# "0[1-2]",
+#                                         "function": ".*"},
+#                              "physical_network":"physnet1"}
+#
+#   The following are invalid, as they specify mutually exclusive
+# options:
+#
+#     passthrough_whitelist = {"devname":"eth0",
+#                              "physical_network":"physnet",
+#                              "address":"*:0a:00.*"}
+#
+# * A JSON list of JSON dictionaries corresponding to the above
+# format. For
+#   example:
+#
+#     passthrough_whitelist = [{"product_id":"0001",
+# "vendor_id":"8086"},
+#                              {"product_id":"0002",
+# "vendor_id":"8086"}]
+#  (multi valued)
+# Deprecated group/name - [DEFAULT]/pci_passthrough_whitelist
+#passthrough_whitelist =
+
+[placement]
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# Region name of this node. This is used when picking the URL in the
+# service
+# catalog.
+#
+# Possible values:
+#
+# * Any string representing region name
+#  (string value)
+# This option is deprecated for removal since 17.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Endpoint lookup uses the service catalog via common
+# keystoneauth1 Adapter configuration options.  Use the region_name
+# option instead.
+os_region_name = RegionOne
+
+# DEPRECATED:
+# Endpoint interface for this node. This is used when picking the URL
+# in the
+# service catalog.
+#  (string value)
+# This option is deprecated for removal since 17.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Endpoint lookup uses the service catalog via common
+# keystoneauth1 Adapter configuration options.  Use the
+# valid_interfaces option instead.
+#os_interface = <None>
+
+#
+# If True, when limiting allocation candidate results, the results
+# will be
+# a random sampling of the full result set. If False, allocation
+# candidates
+# are returned in a deterministic but undefined order. That is, all
+# things
+# being equal, two requests for allocation candidates will return the
+# same
+# results in the same order; but no guarantees are made as to how that
+# order
+# is determined.
+#  (boolean value)
+#randomize_allocation_candidates = false
+
+# PEM encoded Certificate Authority to use when verifying HTTPS
+# connections. (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [placement]/auth_plugin
+auth_type = password
+
+# Config Section from which to load plugin specific options (string
+# value)
+#auth_section = <None>
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url=http://10.167.4.35:35357/v3
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+project_name = service
+
+# Domain ID containing project (string value)
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used
+# for both the user and project domain in v3 and ignored in v2
+# authentication. (string value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will
+# be used for both the user and project domain in v3 and ignored in v2
+# authentication. (string value)
+#default_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [placement]/user_name
+username = nova
+
+# User's domain id (string value)
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User's password (string value)
+password = opnfv_secret
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# The default service_type for endpoint URL discovery. (string value)
+#service_type = placement
+
+# The default service_name for endpoint URL discovery. (string value)
+#service_name = <None>
+
+# List of interfaces, in order of preference, for endpoint URL. (list
+# value)
+# Deprecated group/name - [placement]/os_interface
+valid_interfaces = internal
+
+# The default region_name for endpoint URL discovery. (string value)
+# Deprecated group/name - [placement]/os_region_name
+#region_name = <None>
+
+# Always use this endpoint URL for requests for this client. NOTE: The
+# unversioned endpoint should be specified here; to request a
+# particular API version, use the `version`, `min-version`, and/or
+# `max-version` options. (string value)
+#endpoint_override = <None>
+
+
+[quota]
+#
+# Quota options allow you to manage quotas in an OpenStack deployment.
+
+#
+# From nova.conf
+#
+
+#
+# The number of instances allowed per project.
+#
+# Possible Values
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_instances
+#instances = 10
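+# For example, to disable the per-project instance quota entirely:
+#
+#     instances = -1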
+
+#
+# The number of instance cores or vCPUs allowed per project.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_cores
+#cores = 20
+
+#
+# The number of megabytes of instance RAM allowed per project.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_ram
+#ram = 51200
+
+# DEPRECATED:
+# The number of floating IPs allowed per project.
+#
+# Floating IPs are not allocated to instances by default. Users need
+# to select
+# them from the pool configured by the OpenStack administrator to
+# attach to their
+# instances.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_floating_ips
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#floating_ips = 10
+
+# DEPRECATED:
+# The number of fixed IPs allowed per project.
+#
+# Unlike floating IPs, fixed IPs are allocated dynamically by the
+# network component when instances boot up. This quota value should be
+# at least the number of instances allowed.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_fixed_ips
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#fixed_ips = -1
+
+#
+# The number of metadata items allowed per instance.
+#
+# Users can associate metadata with an instance during instance
+# creation. This
+# metadata takes the form of key-value pairs.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_metadata_items
+#metadata_items = 128
+
+#
+# The number of injected files allowed.
+#
+# File injection allows users to customize the personality of an
+# instance by
+# injecting data into it upon boot. Only text file injection is
+# permitted: binary
+# or ZIP files are not accepted. During file injection, any existing
+# files that match specified files are renamed to include a ``.bak``
+# extension appended with a timestamp.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_injected_files
+#injected_files = 5
+
+#
+# The number of bytes allowed per injected file.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_injected_file_content_bytes
+#injected_file_content_bytes = 10240
+
+#
+# The maximum allowed injected file path length.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_injected_file_path_length
+#injected_file_path_length = 255
+
+# DEPRECATED:
+# The number of security groups per project.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_security_groups
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#security_groups = 10
+
+# DEPRECATED:
+# The number of security rules per security group.
+#
+# The associated rules in each security group control the traffic to
+# instances in
+# the group.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_security_group_rules
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration
+# options.
+#security_group_rules = 20
+
+#
+# The maximum number of key pairs allowed per user.
+#
+# Users can create at least one key pair for each project and use the
+# key pair
+# for multiple instances that belong to that project.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_key_pairs
+#key_pairs = 100
+
+#
+# The maximum number of server groups per project.
+#
+# Server groups are used to control the affinity and anti-affinity
+# scheduling
+# policy for a group of servers or instances. Reducing the quota will
+# not affect
+# any existing group, but new servers will not be allowed into groups
+# that have
+# become over quota.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_server_groups
+#server_groups = 10
+
+#
+# The maximum number of servers per server group.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_server_group_members
+#server_group_members = 10
+
+#
+# The number of seconds until a reservation expires.
+#
+# This quota represents the time period for invalidating quota
+# reservations.
+#  (integer value)
+#reservation_expire = 86400
+
+#
+# The count of reservations until usage is refreshed.
+#
+# This defaults to 0 (off) to avoid additional load but it is useful
+# to turn on
+# to help keep quota usage up-to-date and reduce the impact of out of
+# sync usage
+# issues.
+#  (integer value)
+# Minimum value: 0
+#until_refresh = 0
+
+#
+# The number of seconds between subsequent usage refreshes.
+#
+# This defaults to 0 (off) to avoid additional load but it is useful
+# to turn on
+# to help keep quota usage up-to-date and reduce the impact of out of
+# sync usage
+# issues. Note that quotas are not updated on a periodic task, they
+# will update
+# on a new reservation if max_age has passed since the last
+# reservation.
+#  (integer value)
+# Minimum value: 0
+#max_age = 0
+
+# DEPRECATED:
+# The quota enforcer driver.
+#
+# Provides abstraction for quota checks. Users can configure a
+# specific
+# driver to use for quota checks.
+#
+# Possible values:
+#
+# * nova.quota.DbQuotaDriver (default) or any string representing a
+#   fully qualified class name.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/quota_driver
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+#driver = nova.quota.DbQuotaDriver
+
+#
+# Recheck quota after resource creation to prevent allowing quota to
+# be exceeded.
+#
+# This defaults to True (recheck quota after resource creation) but
+# can be set to
+# False to avoid additional load if allowing quota to be exceeded
+# because of
+# racing requests is considered acceptable. For example, when set to
+# False, if a
+# user makes highly parallel REST API requests to create servers, it
+# will be
+# possible for them to create more servers than their allowed quota
+# during the
+# race. If their quota is 10 servers, they might be able to create 50
+# during the
+# burst. After the burst, they will not be able to create any more
+# servers but
+# they will be able to keep their 50 servers until they delete them.
+#
+# The initial quota check is done before resources are created, so if
+# multiple
+# parallel requests arrive at the same time, all could pass the quota
+# check and
+# create resources, potentially exceeding quota. When recheck_quota is
+# True,
+# quota will be checked a second time after resources have been
+# created and if
+# the resource is over quota, it will be deleted and OverQuota will be
+# raised,
+# usually resulting in a 403 response to the REST API user. This makes
+# it
+# impossible for a user to exceed their quota with the caveat that it
+# will,
+# however, be possible for a REST API user to be rejected with a 403
+# response in
+# the event of a collision close to reaching their quota limit, even
+# if the user
+# has enough quota available when they made the request.
+#  (boolean value)
+#recheck_quota = true
+
+
+[rdp]
+#
+# Options under this group enable and configure Remote Desktop
+# Protocol (RDP) related features.
+#
+# This group is only relevant to Hyper-V users.
+
+#
+# From nova.conf
+#
+
+#
+# Enable Remote Desktop Protocol (RDP) related features.
+#
+# Hyper-V, unlike the majority of the hypervisors employed on Nova
+# compute
+# nodes, uses RDP instead of VNC and SPICE as a desktop sharing
+# protocol to
+# provide instance console access. This option enables RDP for
+# graphical
+# console access for virtual machines created by Hyper-V.
+#
+# **Note:** RDP should only be enabled on compute nodes that support
+# the Hyper-V
+# virtualization platform.
+#
+# Related options:
+#
+# * ``compute_driver``: Must be hyperv.
+#
+#  (boolean value)
+#enabled = false
+
+#
+# The URL an end user would use to connect to the RDP HTML5 console
+# proxy.
+# The console proxy service is called with this token-embedded URL and
+# establishes the connection to the proper instance.
+#
+# An RDP HTML5 console proxy service will need to be configured to
+# listen on the
+# address configured here. Typically the console proxy service would
+# be run on a
+# controller node. The localhost address used as default would only
+# work in a
+# single node environment i.e. devstack.
+#
+# An RDP HTML5 proxy allows a user to access the text or graphical
+# console of any Windows server or workstation via the web using RDP.
+# RDP HTML5 console proxy services include FreeRDP and wsgate.
+# See https://github.com/FreeRDP/FreeRDP-WebConnect
+#
+# Possible values:
+#
+# * <scheme>://<ip-address>:<port-number>/
+#
+#   The scheme must be identical to the scheme configured for the RDP
+# HTML5
+#   console proxy service. It is ``http`` or ``https``.
+#
+#   The IP address must be identical to the address on which the RDP
+# HTML5
+#   console proxy service is listening.
+#
+#   The port must be identical to the port on which the RDP HTML5
+# console proxy
+#   service is listening.
+#
+# Related options:
+#
+# * ``rdp.enabled``: Must be set to ``True`` for
+# ``html5_proxy_base_url`` to be
+#   effective.
+#  (uri value)
+#html5_proxy_base_url = http://127.0.0.1:6083/
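+
+# Example (hypothetical deployment): an RDP HTML5 console proxy such
+# as FreeRDP-WebConnect listening over HTTPS on a controller at
+# 192.0.2.20:
+#
+#   [rdp]
+#   enabled = true
+#   html5_proxy_base_url = https://192.0.2.20:6083/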
+
+
+[remote_debug]
+
+#
+# From nova.conf
+#
+
+#
+# Debug host (IP or name) to connect to. This command line parameter
+# is used when
+# you want to connect to a nova service via a debugger running on a
+# different
+# host.
+#
+# Note that using the remote debug option changes how Nova uses the
+# eventlet
+# library to support async IO. This could result in failures that do
+# not occur
+# under normal operation. Use at your own risk.
+#
+# Possible Values:
+#
+#    * IP address of a remote host as a command line parameter
+#      to a nova service. For Example:
+#
+#     /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
+#     --remote_debug-host <IP address where the debugger is running>
+#  (unknown value)
+#host = <None>
+
+#
+# Debug port to connect to. This command line parameter allows you to
+# specify the port you want to use to connect to a nova service via a
+# debugger running on a different host.
+#
+# Note that using the remote debug option changes how Nova uses the
+# eventlet
+# library to support async IO. This could result in failures that do
+# not occur
+# under normal operation. Use at your own risk.
+#
+# Possible Values:
+#
+#    * Port number you want to use as a command line parameter
+#      to a nova service. For Example:
+#
+#     /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
+#     --remote_debug-host <IP address where the debugger is running>
+#     --remote_debug-port <port the debugger is listening on>
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#port = <None>
+
+
+[scheduler]
+
+#
+# From nova.conf
+#
+
+#
+# The scheduler host manager to use.
+#
+# The host manager manages the in-memory picture of the hosts that the
+# scheduler
+# uses. The options values are chosen from the entry points under the
+# namespace
+# 'nova.scheduler.host_manager' in 'setup.cfg'.
+#
+# NOTE: The "ironic_host_manager" option is deprecated as of the
+# 17.0.0 Queens
+# release.
+#  (string value)
+# Possible values:
+# host_manager - <No description provided>
+# ironic_host_manager - <No description provided>
+# Deprecated group/name - [DEFAULT]/scheduler_host_manager
+#host_manager = host_manager
+
+#
+# The class of the driver used by the scheduler. This should be chosen
+# from one
+# of the entrypoints under the namespace 'nova.scheduler.driver' of
+# file
+# 'setup.cfg'. If nothing is specified in this option, the
+# 'filter_scheduler' is
+# used.
+#
+# Other options are:
+#
+# * 'caching_scheduler' which aggressively caches the system state for
+# better
+#   individual scheduler performance at the risk of more retries when
+# running
+#   multiple schedulers. [DEPRECATED]
+# * 'chance_scheduler' which simply picks a host at random.
+# [DEPRECATED]
+# * 'fake_scheduler' which is used for testing.
+#
+# Possible values:
+#
+# * Any of the drivers included in Nova:
+# ** filter_scheduler
+# ** caching_scheduler
+# ** chance_scheduler
+# ** fake_scheduler
+# * You may also set this to the entry point name of a custom
+# scheduler driver,
+#   but you will be responsible for creating and maintaining it in
+# your setup.cfg
+#   file.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/scheduler_driver
+#driver = filter_scheduler
+
+#
+# Periodic task interval.
+#
+# This value controls how often (in seconds) to run periodic tasks in
+# the
+# scheduler. The specific tasks that are run for each period are
+# determined by
+# the particular scheduler being used.
+#
+# If this is larger than the nova-service 'service_down_time' setting,
+# Nova may
+# report the scheduler service as down. This is because the scheduler
+# driver is
+# responsible for sending a heartbeat and it will only do that as
+# often as this
+# option allows. As each scheduler can work a little differently than
+# the others,
+# be sure to test this with your selected scheduler.
+#
+# Possible values:
+#
+# * An integer, where the integer corresponds to periodic task
+# interval in
+#   seconds. 0 uses the default interval (60 seconds). A negative
+# value disables
+#   periodic tasks.
+#
+# Related options:
+#
+# * ``nova-service service_down_time``
+#  (integer value)
+# Deprecated group/name - [DEFAULT]/scheduler_driver_task_period
+#periodic_task_interval = 60
+
+#
+# This is the maximum number of attempts that will be made for a given
+# instance
+# build/move operation. It limits the number of alternate hosts
+# returned by the
+# scheduler. When that list of hosts is exhausted, a
+# MaxRetriesExceeded
+# exception is raised and the instance is set to an error state.
+#
+# Possible values:
+#
+# * A positive integer, where the integer corresponds to the max
+# number of
+#   attempts that can be made when building or moving an instance.
+#  (integer value)
+# Minimum value: 1
+# Deprecated group/name - [DEFAULT]/scheduler_max_attempts
+#max_attempts = 3
+
+#
+# Interval for discovering new hosts added to cells.
+#
+# This value controls how often (in seconds) the scheduler should
+# attempt
+# to discover new hosts that have been added to cells. If negative
+# (the
+# default), no automatic discovery will occur.
+#
+# Deployments where compute nodes come and go frequently may want this
+# enabled, while others may prefer to manually discover hosts when one
+# is added, to avoid any overhead from constantly checking. If
+# enabled, each run will select any unmapped hosts out of each cell
+# database.
+#  (integer value)
+# Minimum value: -1
+#discover_hosts_in_cells_interval = -1
+
+#
+# This setting determines the maximum limit on results received from
+# the
+# placement service during a scheduling operation. It effectively
+# limits
+# the number of hosts that may be considered for scheduling requests
+# that
+# match a large number of candidates.
+#
+# A value of 1 (the minimum) will effectively defer scheduling to the
+# placement
+# service strictly on "will it fit" grounds. A higher value will put
+# an upper
+# cap on the number of results the scheduler will consider during the
+# filtering
+# and weighing process. Large deployments may need to set this lower
+# than the
+# total number of hosts available to limit memory consumption, network
+# traffic,
+# etc. of the scheduler.
+#
+# This option is only used by the FilterScheduler; if you use a
+# different
+# scheduler, this option has no effect.
+#  (integer value)
+# Minimum value: 1
+#max_placement_results = 1000
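+
+# Example (hypothetical large deployment): cap the number of placement
+# candidates considered per request and allow a couple of extra build
+# retries:
+#
+#   [scheduler]
+#   max_placement_results = 500
+#   max_attempts = 5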
+
+
+[serial_console]
+#
+# The serial console feature allows you to connect to a guest in case
+# a graphical console like VNC, RDP or SPICE is not available. This is
+# currently only supported for the libvirt, Ironic and Hyper-V
+# drivers.
+
+#
+# From nova.conf
+#
+
+#
+# Enable the serial console feature.
+#
+# In order to use this feature, the service ``nova-serialproxy`` needs
+# to run.
+# This service is typically executed on the controller node.
+#  (boolean value)
+#enabled = false
+
+#
+# A range of TCP ports a guest can use for its backend.
+#
+# Each instance which gets created will use one port out of this
+# range. If the range is not big enough to provide another port for a
+# new instance, this instance won't get launched.
+#
+# Possible values:
+#
+# * Each string which passes the regex ``\d+:\d+`` For example
+# ``10000:20000``.
+#   Be sure that the first port number is lower than the second port
+# number
+#   and that both are in range from 0 to 65535.
+#  (string value)
+#port_range = 10000:20000
+
+#
+# The URL an end user would use to connect to the ``nova-serialproxy``
+# service.
+#
+# The ``nova-serialproxy`` service is called with this token-enriched
+# URL and establishes the connection to the proper instance.
+#
+# Related options:
+#
+# * The IP address must be identical to the address to which the
+#   ``nova-serialproxy`` service is listening (see option
+# ``serialproxy_host``
+#   in this section).
+# * The port must be the same as in the option ``serialproxy_port`` of
+# this
+#   section.
+# * If you choose to use a secured websocket connection, then start
+# this option
+#   with ``wss://`` instead of the unsecured ``ws://``. The options
+# ``cert``
+#   and ``key`` in the ``[DEFAULT]`` section have to be set for that.
+#  (uri value)
+#base_url = ws://127.0.0.1:6083/
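+
+# Example (hypothetical deployment): serial consoles proxied through a
+# controller at 192.0.2.10 over secured websockets, with ``cert`` and
+# ``key`` set in ``[DEFAULT]``:
+#
+#   [serial_console]
+#   enabled = true
+#   port_range = 10000:10099
+#   base_url = wss://192.0.2.10:6083/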
+
+#
+# The IP address to which proxy clients (like ``nova-serialproxy``)
+# should
+# connect to get the serial console of an instance.
+#
+# This is typically the IP address of the host of a ``nova-compute``
+# service.
+#  (string value)
+#proxyclient_address = 127.0.0.1
+
+#
+# The IP address which is used by the ``nova-serialproxy`` service to
+# listen
+# for incoming requests.
+#
+# The ``nova-serialproxy`` service listens on this IP address for
+# incoming
+# connection requests to instances which expose serial console.
+#
+# Related options:
+#
+# * Ensure that this is the same IP address which is defined in the
+# option
+#   ``base_url`` of this section or use ``0.0.0.0`` to listen on all
+# addresses.
+#  (string value)
+#serialproxy_host = 0.0.0.0
+
+#
+# The port number which is used by the ``nova-serialproxy`` service to
+# listen
+# for incoming requests.
+#
+# The ``nova-serialproxy`` service listens on this port number for
+# incoming
+# connection requests to instances which expose serial console.
+#
+# Related options:
+#
+# * Ensure that this is the same port number which is defined in the
+# option
+#   ``base_url`` of this section.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#serialproxy_port = 6083
+
+
+[service_user]
+#
+# Configuration options for service to service authentication using a
+# service
+# token. These options allow sending a service token along with the
+# user's token
+# when contacting external REST APIs.
+
+#
+# From nova.conf
+#
+
+#
+# When True, if sending a user token to a REST API, also send a
+# service token.
+#
+# Nova often reuses the user token provided to the nova-api to talk to
+# other REST
+# APIs, such as Cinder, Glance and Neutron. It is possible that while
+# the user
+# token was valid when the request was made to Nova, the token may
+# expire before
+# it reaches the other service. To avoid any failures, and to make it
+# clear it is
+# Nova calling the service on the user's behalf, we include a service
+# token along
+# with the user token. Should the user's token have expired, a valid
+# service
+# token ensures the REST API request will still be accepted by the
+# keystone
+# middleware.
+#  (boolean value)
+#send_service_user_token = false
+
+# PEM encoded Certificate Authority to use when verifying HTTPs
+# connections. (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [service_user]/auth_plugin
+#auth_type = <None>
+
+# Config Section from which to load plugin specific options (string
+# value)
+#auth_section = <None>
+
+# Authentication URL (string value)
+#auth_url = <None>
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used
+# for both the user and project domain in v3 and ignored in v2
+# authentication. (string value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will
+# be used for both the user and project domain in v3 and ignored in v2
+# authentication. (string value)
+#default_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [service_user]/user_name
+#username = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User's password (string value)
+#password = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
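+
+# Example (hypothetical credentials): send a service token alongside
+# the user token using a dedicated Keystone service user:
+#
+#   [service_user]
+#   send_service_user_token = true
+#   auth_type = password
+#   auth_url = https://keystone.example.com:5000/v3
+#   username = nova
+#   password = <service user password>
+#   user_domain_name = Default
+#   project_name = service
+#   project_domain_name = Default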
+
+
+[spice]
+#
+# The SPICE console feature allows you to connect to a guest virtual
+# machine. SPICE is a replacement for the fairly limited VNC protocol.
+#
+# Following requirements must be met in order to use SPICE:
+#
+# * Virtualization driver must be libvirt
+# * spice.enabled set to True
+# * vnc.enabled set to False
+# * update html5proxy_base_url
+# * update server_proxyclient_address
+
+#
+# From nova.conf
+#
+
+#
+# Enable SPICE related features.
+#
+# Related options:
+#
+# * VNC must be explicitly disabled to get access to the SPICE
+# console. Set the
+#   enabled option to False in the [vnc] section to disable the VNC
+# console.
+#  (boolean value)
+#enabled = false
+enabled = false
+
+#
+# Enable the SPICE guest agent support on the instances.
+#
+# The Spice agent works with the Spice protocol to offer a better
+# guest console
+# experience. However, the Spice console can still be used without the
+# Spice
+# Agent. With the Spice agent installed the following features are
+# enabled:
+#
+# * Copy & Paste of text and images between the guest and client
+# machine
+# * Automatic adjustment of resolution when the client screen changes
+# - e.g.
+#   if you make the Spice console full screen the guest resolution
+# will adjust to
+#   match it rather than letterboxing.
+# * Better mouse integration - The mouse can be captured and released
+# without
+#   needing to click inside the console or press keys to release it.
+# The
+#   performance of mouse movement is also improved.
+#  (boolean value)
+#agent_enabled = true
+
+#
+# Location of the SPICE HTML5 console proxy.
+#
+# An end user would use this URL to connect to the
+# ``nova-spicehtml5proxy`` service. This service will forward the
+# request to the console of an instance.
+#
+# In order to use SPICE console, the service ``nova-spicehtml5proxy``
+# should be
+# running. This service is typically launched on the controller node.
+#
+# Possible values:
+#
+# * Must be a valid URL of the form
+#   ``http://host:port/spice_auto.html``, where host is the node
+#   running ``nova-spicehtml5proxy`` and the port is typically 6082.
+#   Consider not using the default value, as it is not well defined
+#   for any real deployment.
+#
+# Related options:
+#
+# * This option depends on ``html5proxy_host`` and ``html5proxy_port``
+# options.
+#   The access URL returned by the compute node must have the host
+#   and port where the ``nova-spicehtml5proxy`` service is listening.
+#  (uri value)
+#html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html
+html5proxy_base_url = https://172.30.10.101:6080/spice_auto.html
+
+#
+# The address where the SPICE server running on the instances should
+# listen.
+#
+# Typically, the ``nova-spicehtml5proxy`` proxy client runs on the
+# controller
+# node and connects over the private network to this address on the
+# compute
+# node(s).
+#
+# Possible values:
+#
+# * IP address to listen on.
+#  (string value)
+#server_listen = 127.0.0.1
+
+#
+# The address used by ``nova-spicehtml5proxy`` client to connect to
+# instance
+# console.
+#
+# Typically, the ``nova-spicehtml5proxy`` proxy client runs on the
+# controller node and connects over the private network to this
+# address on the
+# compute node(s).
+#
+# Possible values:
+#
+# * Any valid IP address on the compute node.
+#
+# Related options:
+#
+# * This option depends on the ``server_listen`` option.
+#   The proxy client must be able to access the address specified in
+#   ``server_listen`` using the value of this option.
+#  (string value)
+#server_proxyclient_address = 127.0.0.1
+
+#
+# A keyboard layout which is supported by the underlying hypervisor on
+# this
+# node.
+#
+# Possible values:
+# * This is usually an 'IETF language tag' (default is 'en-us'). If
+#   you use QEMU as hypervisor, you should find the list of supported
+#   keyboard layouts at /usr/share/qemu/keymaps.
+#  (string value)
+#keymap = en-us
+
+#
+# IP address or a hostname on which the ``nova-spicehtml5proxy``
+# service
+# listens for incoming requests.
+#
+# Related options:
+#
+# * This option depends on the ``html5proxy_base_url`` option.
+#   The ``nova-spicehtml5proxy`` service must be listening on a host
+# that is
+#   accessible from the HTML5 client.
+#  (unknown value)
+#html5proxy_host = 0.0.0.0
+
+#
+# Port on which the ``nova-spicehtml5proxy`` service listens for
+# incoming
+# requests.
+#
+# Related options:
+#
+# * This option depends on the ``html5proxy_base_url`` option.
+#   The ``nova-spicehtml5proxy`` service must be listening on a port
+# that is
+#   accessible from the HTML5 client.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#html5proxy_port = 6082
+
+
+[upgrade_levels]
+#
+# upgrade_levels options are used to set a version cap for RPC
+# messages sent between different nova services.
+#
+# By default all services send messages using the latest version
+# they know about.
+#
+# The compute upgrade level is an important part of rolling upgrades
+# where old and new nova-compute services run side by side.
+#
+# The other options can largely be ignored, and are only kept to
+# help with a possible future backport issue.
+
+#
+# From nova.conf
+#
+
+#
+# Compute RPC API version cap.
+#
+# By default, we always send messages using the most recent version
+# the client knows about.
+#
+# Where you have old and new compute services running, you should set
+# this to the lowest deployed version. This is to guarantee that all
+# services never send messages that one of the compute nodes can't
+# understand. Note that we only support upgrading from release N to
+# release N+1.
+#
+# Set this option to "auto" if you want to let the compute RPC module
+# automatically determine what version to use based on the service
+# versions in the deployment.
+#
+# Possible values:
+#
+# * By default send the latest version the client knows about
+# * 'auto': Automatically determines what version to use based on
+#   the service versions in the deployment.
+# * A string representing a version number in the format 'N.N';
+#   for example, possible values might be '1.12' or '2.0'.
+# * An OpenStack release name, in lower case, such as 'mitaka' or
+#   'liberty'.
+#  (string value)
+#compute = <None>
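+
+# Example (hypothetical rolling upgrade): while old and new
+# nova-compute services coexist, let Nova negotiate the RPC version
+# from the reported service versions instead of pinning manually:
+#
+#   [upgrade_levels]
+#   compute = auto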
+
+# Cells RPC API version cap (string value)
+#cells = <None>
+
+# Intercell RPC API version cap (string value)
+#intercell = <None>
+
+# Cert RPC API version cap (string value)
+#cert = <None>
+
+# Scheduler RPC API version cap (string value)
+#scheduler = <None>
+
+# Conductor RPC API version cap (string value)
+#conductor = <None>
+
+# Console RPC API version cap (string value)
+#console = <None>
+
+# Consoleauth RPC API version cap (string value)
+#consoleauth = <None>
+
+# Network RPC API version cap (string value)
+#network = <None>
+
+# Base API RPC API version cap (string value)
+#baseapi = <None>
+
+
+[vault]
+
+#
+# From nova.conf
+#
+
+# root token for vault (string value)
+#root_token_id = <None>
+
+# Use this endpoint to connect to Vault, for example:
+# "http://127.0.0.1:8200" (string value)
+#vault_url = http://127.0.0.1:8200
+
+# Absolute path to ca cert file (string value)
+#ssl_ca_crt_file = <None>
+
+# SSL Enabled/Disabled (boolean value)
+#use_ssl = false
+
+
+[vendordata_dynamic_auth]
+#
+# Options within this group control the authentication of the
+# vendordata
+# subsystem of the metadata API server (and config drive) with
+# external systems.
+
+#
+# From nova.conf
+#
+
+# PEM encoded Certificate Authority to use when verifying HTTPs
+# connections. (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [vendordata_dynamic_auth]/auth_plugin
+#auth_type = <None>
+
+# Config Section from which to load plugin specific options (string
+# value)
+#auth_section = <None>
+
+# Authentication URL (string value)
+#auth_url = <None>
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used
+# for both the user and project domain in v3 and ignored in v2
+# authentication. (string value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will
+# be used for both the user and project domain in v3 and ignored in v2
+# authentication. (string value)
+#default_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [vendordata_dynamic_auth]/user_name
+#username = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User's password (string value)
+#password = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+[vnc]
+#
+# Virtual Network Computing (VNC) can be used to provide remote
+# desktop console access to instances for tenants and/or
+# administrators.
+
+#
+# From nova.conf
+#
+
+#
+# Enable VNC related features.
+#
+# Guests will get created with graphical devices to support this.
+# Clients
+# (for example Horizon) can then establish a VNC connection to the
+# guest.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/vnc_enabled
+enabled = true
+novncproxy_base_url=https://172.30.10.101:6080/vnc_auto.html
+novncproxy_port=6080
+vncserver_listen=10.167.4.53
+vncserver_proxyclient_address=10.167.4.53
+
+#
+# Keymap for VNC.
+#
+# The keyboard mapping (keymap) determines which keyboard layout a VNC
+# session should use by default.
+#
+# Possible values:
+#
+# * A keyboard layout which is supported by the underlying hypervisor
+# on
+#   this node. This is usually an 'IETF language tag' (for example
+#   'en-us').  If you use QEMU as hypervisor, you should find the
+# list
+#   of supported keyboard layouts at ``/usr/share/qemu/keymaps``.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vnc_keymap
+keymap = en-us
+
+#
+# The IP address or hostname on which an instance should listen to for
+# incoming VNC connection requests on this node.
+#  (unknown value)
+# Deprecated group/name - [DEFAULT]/vncserver_listen
+# Deprecated group/name - [vnc]/vncserver_listen
+#server_listen = 127.0.0.1
+
+#
+# Private, internal IP address or hostname of VNC console proxy.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients.
+#
+# This option sets the private address to which proxy clients, such as
+# ``nova-xvpvncproxy``, should connect.
+#  (unknown value)
+# Deprecated group/name - [DEFAULT]/vncserver_proxyclient_address
+# Deprecated group/name - [vnc]/vncserver_proxyclient_address
+#server_proxyclient_address = 127.0.0.1
+
+#
+# Public address of noVNC VNC console proxy.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. noVNC provides
+# VNC support through a websocket-based client.
+#
+# This option sets the public base URL to which client systems will
+# connect. noVNC clients can use this address to connect to the noVNC
+# instance and, by extension, the VNC sessions.
+#
+# Related options:
+#
+# * novncproxy_host
+# * novncproxy_port
+#  (uri value)
+#novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html
+
+#
+# IP address or hostname that the XVP VNC console proxy should bind
+# to.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. Xen provides
+# the Xenserver VNC Proxy, or XVP, as an alternative to the
+# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
+# XVP clients are Java-based.
+#
+# This option sets the private address to which the XVP VNC console
+# proxy service should bind.
+#
+# Related options:
+#
+# * xvpvncproxy_port
+# * xvpvncproxy_base_url
+#  (unknown value)
+#xvpvncproxy_host = 0.0.0.0
+
+#
+# Port that the XVP VNC console proxy should bind to.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. Xen provides
+# the Xenserver VNC Proxy, or XVP, as an alternative to the
+# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
+# XVP clients are Java-based.
+#
+# This option sets the private port to which the XVP VNC console
+# proxy service should bind.
+#
+# Related options:
+#
+# * xvpvncproxy_host
+# * xvpvncproxy_base_url
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#xvpvncproxy_port = 6081
+
+#
+# Public URL address of XVP VNC console proxy.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. Xen provides
+# the Xenserver VNC Proxy, or XVP, as an alternative to the
+# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
+# XVP clients are Java-based.
+#
+# This option sets the public base URL to which client systems will
+# connect. XVP clients can use this address to connect to the XVP
+# instance and, by extension, the VNC sessions.
+#
+# Related options:
+#
+# * xvpvncproxy_host
+# * xvpvncproxy_port
+#  (uri value)
+#xvpvncproxy_base_url = http://127.0.0.1:6081/console
+
+#
+# IP address that the noVNC console proxy should bind to.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. noVNC provides
+# VNC support through a websocket-based client.
+#
+# This option sets the private address to which the noVNC console
+# proxy service should bind.
+#
+# Related options:
+#
+# * novncproxy_port
+# * novncproxy_base_url
+#  (string value)
+#novncproxy_host = 0.0.0.0
+
+#
+# Port that the noVNC console proxy should bind to.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. noVNC provides
+# VNC support through a websocket-based client.
+#
+# This option sets the private port to which the noVNC console proxy
+# service should bind.
+#
+# Related options:
+#
+# * novncproxy_host
+# * novncproxy_base_url
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#novncproxy_port = 6080
+
+#
+# The authentication schemes to use with the compute node.
+#
+# Control what RFB authentication schemes are permitted for
+# connections between the proxy and the compute host. If multiple
+# schemes are enabled, the first matching scheme will be used, thus
+# the strongest schemes should be listed first.
+#
+# Possible values:
+#
+# * ``none``: allow connection without authentication
+# * ``vencrypt``: use VeNCrypt authentication scheme
+#
+# Related options:
+#
+# * ``[vnc]vencrypt_client_key``, ``[vnc]vencrypt_client_cert``: must
+# also be set
+#  (list value)
+#auth_schemes = none
+
+# The path to the client key PEM file (for x509)
+#
+# The fully qualified path to a PEM file containing the private key
+# which the VNC proxy server presents to the compute node during VNC
+# authentication.
+#
+# Related options:
+#
+# * ``vnc.auth_schemes``: must include ``vencrypt``
+# * ``vnc.vencrypt_client_cert``: must also be set
+#  (string value)
+#vencrypt_client_key = <None>
+
+# The path to the client certificate PEM file (for x509)
+#
+# The fully qualified path to a PEM file containing the x509
+# certificate which the VNC proxy server presents to the compute node
+# during VNC authentication.
+#
+# Related options:
+#
+# * ``vnc.auth_schemes``: must include ``vencrypt``
+# * ``vnc.vencrypt_client_key``: must also be set
+#  (string value)
+#vencrypt_client_cert = <None>
+
+# The path to the CA certificate PEM file
+#
+# The fully qualified path to a PEM file containing one or more x509
+# certificates for the certificate authorities used by the compute
+# node VNC server.
+#
+# Related options:
+#
+# * ``vnc.auth_schemes``: must include ``vencrypt``
+#  (string value)
+#vencrypt_ca_certs = <None>
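+
+# A hedged example of a full VeNCrypt setup combining the options
+# above (the file paths are hypothetical placeholders):
+#
+#   [vnc]
+#   auth_schemes = vencrypt,none
+#   vencrypt_client_key = /etc/pki/qemu-vnc/client-key.pem
+#   vencrypt_client_cert = /etc/pki/qemu-vnc/client-cert.pem
+#   vencrypt_ca_certs = /etc/pki/qemu-vnc/ca-cert.pem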
+
+
+[workarounds]
+#
+# A collection of workarounds used to mitigate bugs or issues found
+# in system tools (e.g. Libvirt or QEMU) or Nova itself under certain
+# conditions. These should only be enabled in exceptional
+# circumstances. All options are linked against bug IDs, where more
+# information on the issue can be found.
+
+#
+# From nova.conf
+#
+
+#
+# Use sudo instead of rootwrap.
+#
+# Allow fallback to sudo for performance reasons.
+#
+# For more information, refer to the bug report:
+#
+#   https://bugs.launchpad.net/nova/+bug/1415106
+#
+# Possible values:
+#
+# * True: Use sudo instead of rootwrap
+# * False: Use rootwrap as usual
+#
+# Interdependencies to other options:
+#
+# * Any options that affect 'rootwrap' will be ignored.
+#  (boolean value)
+#disable_rootwrap = false
+
+#
+# Disable live snapshots when using the libvirt driver.
+#
+# Live snapshots allow the snapshot of the disk to happen without an
+# interruption to the guest, using coordination with a guest agent to
+# quiesce the filesystem.
+#
+# When using libvirt 1.2.2, live snapshots fail intermittently under
+# load (likely related to concurrent libvirt/qemu operations). This
+# config option provides a mechanism to disable live snapshot, in
+# favor of cold snapshot, while this is resolved. Cold snapshot causes
+# an instance outage while the guest is going through the snapshotting
+# process.
+#
+# For more information, refer to the bug report:
+#
+#   https://bugs.launchpad.net/nova/+bug/1334398
+#
+# Possible values:
+#
+# * True: Live snapshot is disabled when using libvirt
+# * False: Live snapshots are always used when snapshotting (as long
+#   as there is a new enough libvirt and the backend storage supports
+#   it)
+#  (boolean value)
+#disable_libvirt_livesnapshot = false
+disable_libvirt_livesnapshot = true
+
+#
+# Enable handling of events emitted from compute drivers.
+#
+# Many compute drivers emit lifecycle events, which are events that
+# occur when, for example, an instance is starting or stopping. If the
+# instance is going through task state changes due to an API
+# operation, like resize, the events are ignored.
+#
+# This is an advanced feature which allows the hypervisor to signal to
+# the compute service that an unexpected state change has occurred in
+# an instance and that the instance can be shut down automatically.
+# Unfortunately, this can race in some conditions, for example in
+# reboot operations or when the compute service or the host is
+# rebooted (planned or due to an outage). If such races are common,
+# then it is advisable to disable this feature.
+#
+# Care should be taken when this feature is disabled and
+# 'sync_power_state_interval' is set to a negative value. In this
+# case, any instances that get out of sync between the hypervisor and
+# the Nova database will have to be synchronized manually.
+#
+# For more information, refer to the bug report:
+#
+#   https://bugs.launchpad.net/bugs/1444630
+#
+# Interdependencies to other options:
+#
+# * If ``sync_power_state_interval`` is negative and this feature is
+#   disabled, then instances that get out of sync between the
+#   hypervisor and the Nova database will have to be synchronized
+#   manually.
+#  (boolean value)
+#handle_virt_lifecycle_events = true
+
+#
+# Disable the server group policy check upcall in compute.
+#
+# In order to detect races with server group affinity policy, the
+# compute service attempts to validate that the policy was not
+# violated by the scheduler. It does this by making an upcall to the
+# API database to list the instances in the server group for the one
+# that it is booting, which violates our api/cell isolation goals.
+# Eventually this will be solved by proper affinity guarantees in the
+# scheduler and placement service, but until then, this late check is
+# needed to ensure proper affinity policy.
+#
+# Operators that desire api/cell isolation over this check should
+# enable this flag, which will avoid making that upcall from compute.
+#
+# Related options:
+#
+# * [filter_scheduler]/track_instance_changes also relies on upcalls
+#   from the compute service to the scheduler service.
+#  (boolean value)
+#disable_group_policy_check_upcall = false
+
+
+[wsgi]
+#
+# Options under this group are used to configure WSGI (Web Server
+# Gateway Interface). WSGI is used to serve API requests.
+
+#
+# From nova.conf
+#
+
+#
+# This option represents a file name for the paste.deploy config for
+# nova-api.
+#
+# Possible values:
+#
+# * A string representing file name for the paste.deploy config.
+#  (string value)
+api_paste_config = /etc/nova/api-paste.ini
+
+# DEPRECATED:
+# It represents a python format string that is used as the template to
+# generate log lines. The following values can be formatted into it:
+# client_ip, date_time, request_line, status_code, body_length,
+# wall_seconds.
+#
+# This option is used for building custom request loglines when
+# running nova-api under eventlet. If used under uwsgi or apache, this
+# option has no effect.
+#
+# Possible values:
+#
+# * '%(client_ip)s "%(request_line)s" status: %(status_code)s'
+#   'len: %(body_length)s time: %(wall_seconds).7f' (default)
+# * Any formatted string formed by specific values.
+#  (string value)
+# This option is deprecated for removal since 16.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This option only works when running nova-api under eventlet, and
+# encodes very eventlet specific pieces of information. Starting in
+# Pike the preferred model for running nova-api is under uwsgi or
+# apache mod_wsgi.
+#wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
+
+#
+# This option specifies the HTTP header used to determine the protocol
+# scheme for the original request, even if it was removed by an SSL
+# terminating proxy.
+#
+# Possible values:
+#
+# * None (default) - the request scheme is not influenced by any HTTP
+#   headers
+# * Valid HTTP header, like HTTP_X_FORWARDED_PROTO
+#
+# WARNING: Do not set this unless you know what you are doing.
+#
+# Make sure ALL of the following are true before setting this
+# (assuming the values from the example above):
+# * Your API is behind a proxy.
+# * Your proxy strips the X-Forwarded-Proto header from all incoming
+#   requests. In other words, if end users include that header in
+#   their requests, the proxy will discard it.
+# * Your proxy sets the X-Forwarded-Proto header and sends it to the
+#   API, but only for requests that originally come in via HTTPS.
+#
+# If any of those are not true, you should keep this setting set to
+# None.
+#
+#  (string value)
+#secure_proxy_ssl_header = <None>
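+
+# Illustrative sketch, assuming a TLS-terminating proxy that sets and
+# strips X-Forwarded-Proto exactly as described above:
+#
+#   [wsgi]
+#   secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO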
+
+#
+# This option allows setting the path to the CA certificate file that
+# should be used to verify connecting clients.
+#
+# Possible values:
+#
+# * String representing path to the CA certificate file.
+#
+# Related options:
+#
+# * enabled_ssl_apis
+#  (string value)
+#ssl_ca_file = <None>
+
+#
+# This option allows setting the path to the SSL certificate of the
+# API server.
+#
+# Possible values:
+#
+# * String representing path to the SSL certificate.
+#
+# Related options:
+#
+# * enabled_ssl_apis
+#  (string value)
+#ssl_cert_file = <None>
+
+#
+# This option specifies the path to the file where the SSL private key
+# of the API server is stored when SSL is in effect.
+#
+# Possible values:
+#
+# * String representing path to the SSL private key.
+#
+# Related options:
+#
+# * enabled_ssl_apis
+#  (string value)
+#ssl_key_file = <None>
+
+#
+# This option sets the value of TCP_KEEPIDLE in seconds for each
+# server socket. It specifies the duration of time to keep a
+# connection active. TCP generates a KEEPALIVE transmission for an
+# application that requests to keep a connection active. Not supported
+# on OS X.
+#
+# Related options:
+#
+# * keep_alive
+#  (integer value)
+# Minimum value: 0
+#tcp_keepidle = 600
+
+#
+# This option specifies the size of the pool of greenthreads used by
+# wsgi. It is possible to limit the number of concurrent connections
+# using this option.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/wsgi_default_pool_size
+#default_pool_size = 1000
+
+#
+# This option specifies the maximum line size of message headers to be
+# accepted. max_header_line may need to be increased when using large
+# tokens (typically those generated by the Keystone v3 API with big
+# service catalogs).
+#
+# Since TCP is a stream based protocol, in order to reuse a
+# connection, HTTP has to have a way to indicate the end of the
+# previous response and the beginning of the next. Hence, in a
+# keep_alive case, all messages must have a self-defined message
+# length.
+#  (integer value)
+# Minimum value: 0
+#max_header_line = 16384
+
+#
+# This option allows using the same TCP connection to send and receive
+# multiple HTTP requests/responses, as opposed to opening a new one
+# for every single request/response pair. HTTP keep-alive indicates
+# HTTP connection reuse.
+#
+# Possible values:
+#
+# * True : reuse HTTP connection.
+# * False : closes the client socket connection explicitly.
+#
+# Related options:
+#
+# * tcp_keepidle
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/wsgi_keep_alive
+#keep_alive = true
+
+#
+# This option specifies the timeout for client connections' socket
+# operations. If an incoming connection is idle for this number of
+# seconds it will be closed. It indicates the timeout on individual
+# read/writes on the socket connection. To wait forever, set to 0.
+#  (integer value)
+# Minimum value: 0
+#client_socket_timeout = 900
+
+
+[xenserver]
+#
+# XenServer options are used when the compute_driver is set to use
+# XenServer (compute_driver=xenapi.XenAPIDriver).
+#
+# Must specify connection_url, connection_password and
+# ovs_integration_bridge to use compute_driver=xenapi.XenAPIDriver.
+
+#
+# From nova.conf
+#
+
+#
+# Number of seconds to wait for agent's reply to a request.
+#
+# Nova configures/performs certain administrative actions on a server
+# with the help of an agent that's installed on the server. The
+# communication between Nova and the agent is achieved via sharing
+# messages, called records, over xenstore, a shared storage across all
+# the domains on a XenServer host. Operations performed by the agent
+# on behalf of nova are: 'version', 'key_init', 'password',
+# 'resetnetwork', 'inject_file', and 'agentupdate'.
+#
+# To perform one of the above operations, the xapi 'agent' plugin
+# writes the command and its associated parameters to a certain
+# location known to the domain and awaits a response. On being
+# notified of the message, the agent performs appropriate actions on
+# the server and writes the result back to xenstore. This result is
+# then read by the xapi 'agent' plugin to determine the
+# success/failure of the operation.
+#
+# This config option determines how long the xapi 'agent' plugin shall
+# wait to read the response off of xenstore for a given
+# request/command. If the agent on the instance fails to write the
+# result in this time period, the operation is considered to have
+# timed out.
+#
+# Related options:
+#
+# * ``agent_version_timeout``
+# * ``agent_resetnetwork_timeout``
+#
+#  (integer value)
+# Minimum value: 0
+#agent_timeout = 30
+
+#
+# Number of seconds to wait for agent's reply to version request.
+#
+# This indicates the amount of time the xapi 'agent' plugin waits for
+# the agent to respond to the 'version' request specifically. The
+# generic timeout for agent communication ``agent_timeout`` is ignored
+# in this case.
+#
+# During the build process the 'version' request is used to determine
+# if the agent is available/operational to perform other requests such
+# as 'resetnetwork', 'password', 'key_init' and 'inject_file'. If the
+# 'version' call fails, the other configuration is skipped. So, this
+# configuration option can also be interpreted as the time in which
+# the agent is expected to be fully operational.
+#  (integer value)
+# Minimum value: 0
+#agent_version_timeout = 300
+
+#
+# Number of seconds to wait for agent's reply to resetnetwork request.
+#
+# This indicates the amount of time the xapi 'agent' plugin waits for
+# the agent to respond to the 'resetnetwork' request specifically. The
+# generic timeout for agent communication ``agent_timeout`` is ignored
+# in this case.
+#  (integer value)
+# Minimum value: 0
+#agent_resetnetwork_timeout = 60
+
+#
+# Path to locate guest agent on the server.
+#
+# Specifies the path in which the XenAPI guest agent should be
+# located. If the agent is present, network configuration is not
+# injected into the image.
+#
+# Related options:
+#
+# For this option to have an effect:
+# * ``flat_injected`` should be set to ``True``
+# * ``compute_driver`` should be set to ``xenapi.XenAPIDriver``
+#
+#  (string value)
+#agent_path = usr/sbin/xe-update-networking
+
+#
+# Disables the use of XenAPI agent.
+#
+# This configuration option determines whether the use of the agent
+# should be enabled or not, regardless of what image properties are
+# present. Image properties have an effect only when this is set to
+# ``True``. Read the description of the config option
+# ``use_agent_default`` for more information.
+#
+# Related options:
+#
+# * ``use_agent_default``
+#
+#  (boolean value)
+#disable_agent = false
+
+#
+# Whether or not to use the agent by default when its usage is enabled
+# but not indicated by the image.
+#
+# The use of the XenAPI agent can be disabled altogether using the
+# configuration option ``disable_agent``. However, if it is not
+# disabled, the use of an agent can still be controlled by the image
+# in use through one of its properties, ``xenapi_use_agent``. If this
+# property is either not present or specified incorrectly on the
+# image, the use of the agent is determined by this configuration
+# option.
+#
+# Note that if this configuration is set to ``True`` when the agent is
+# not present, the boot times will increase significantly.
+#
+# Related options:
+#
+# * ``disable_agent``
+#
+#  (boolean value)
+#use_agent_default = false
+
+# Timeout in seconds for XenAPI login. (integer value)
+# Minimum value: 0
+#login_timeout = 10
+
+#
+# Maximum number of concurrent XenAPI connections.
+#
+# In nova, multiple XenAPI requests can happen at a time.
+# Configuring this option will parallelize access to the XenAPI
+# session, which allows you to make concurrent XenAPI connections.
+#  (integer value)
+# Minimum value: 1
+#connection_concurrent = 5
+
+#
+# Cache glance images locally.
+#
+# The value for this option must be chosen from the choices listed
+# here. Configuring a value other than these will default to 'all'.
+#
+# Note: There is nothing that deletes these images.
+#
+# Possible values:
+#
+# * `all`: will cache all images.
+# * `some`: will only cache images that have the
+#   image_property `cache_in_nova=True`.
+# * `none`: turns off caching entirely.
+#  (string value)
+# Possible values:
+# all - <No description provided>
+# some - <No description provided>
+# none - <No description provided>
+#cache_images = all
+
+#
+# Compression level for images.
+#
+# By setting this option we can configure the gzip compression level.
+# This option sets GZIP environment variable before spawning tar -cz
+# to force the compression level. It defaults to none, which means the
+# GZIP environment variable is not set and the default (usually -6)
+# is used.
+#
+# Possible values:
+#
+# * Range is 1-9, e.g., 9 for gzip -9, 9 being most
+#   compressed but most CPU intensive on dom0.
+# * Any values out of this range will default to None.
+#  (integer value)
+# Minimum value: 1
+# Maximum value: 9
+#image_compression_level = <None>
+
+# Default OS type used when uploading an image to glance (string
+# value)
+#default_os_type = linux
+
+# Time in secs to wait for a block device to be created (integer
+# value)
+# Minimum value: 1
+#block_device_creation_timeout = 10
+
+#
+# Maximum size in bytes of kernel or ramdisk images.
+#
+# Specifying the maximum size of kernel or ramdisk will avoid copying
+# large files to dom0 and filling up /boot/guest.
+#  (integer value)
+#max_kernel_ramdisk_size = 16777216
+
+#
+# Filter for finding the SR to be used to install guest instances on.
+#
+# Possible values:
+#
+# * To use the Local Storage in default XenServer/XCP installations
+#   set this flag to other-config:i18n-key=local-storage.
+# * To select an SR with a different matching criteria, you could
+#   set it to other-config:my_favorite_sr=true.
+# * To fall back on the Default SR, as displayed by XenCenter,
+#   set this flag to: default-sr:true.
+#  (string value)
+#sr_matching_filter = default-sr:true
+
+#
+# Whether to use sparse_copy for copying data on a resize down.
+# (False will use standard dd). This speeds up resizes down
+# considerably since large runs of zeros won't have to be rsynced.
+#  (boolean value)
+#sparse_copy = true
+
+#
+# Maximum number of retries to unplug VBD.
+# If set to 0, should try once, no retries.
+#  (integer value)
+# Minimum value: 0
+#num_vbd_unplug_retries = 10
+
+#
+# Name of network to use for booting iPXE ISOs.
+#
+# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
+# This feature gives a means to roll your own image.
+#
+# By default this option is not set. Enable this option to
+# boot an iPXE ISO.
+#
+# Related Options:
+#
+# * `ipxe_boot_menu_url`
+# * `ipxe_mkisofs_cmd`
+#  (string value)
+#ipxe_network_name = <None>
+
+#
+# URL to the iPXE boot menu.
+#
+# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
+# This feature gives a means to roll your own image.
+#
+# By default this option is not set. Enable this option to
+# boot an iPXE ISO.
+#
+# Related Options:
+#
+# * `ipxe_network_name`
+# * `ipxe_mkisofs_cmd`
+#  (string value)
+#ipxe_boot_menu_url = <None>
+
+#
+# Name and optionally path of the tool used for ISO image creation.
+#
+# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
+# This feature gives a means to roll your own image.
+#
+# Note: By default `mkisofs` is not present in the Dom0, so the
+# package can either be manually added to Dom0 or include the
+# `mkisofs` binary in the image itself.
+#
+# Related Options:
+#
+# * `ipxe_network_name`
+# * `ipxe_boot_menu_url`
+#  (string value)
+#ipxe_mkisofs_cmd = mkisofs
+
+#
+# URL for connection to XenServer/Xen Cloud Platform. A special value
+# of unix://local can be used to connect to the local unix socket.
+#
+# Possible values:
+#
+# * Any string that represents a URL. The connection_url is
+#   generally the management network IP address of the XenServer.
+# * This option must be set if you chose the XenServer driver.
+#  (string value)
+#connection_url = <None>
+
+# Username for connection to XenServer/Xen Cloud Platform (string
+# value)
+#connection_username = root
+
+# Password for connection to XenServer/Xen Cloud Platform (string
+# value)
+#connection_password = <None>
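+
+# A minimal sketch of the options needed to enable the XenServer
+# driver (the URL, password and bridge name are placeholders, not
+# values from this deployment):
+#
+#   [DEFAULT]
+#   compute_driver = xenapi.XenAPIDriver
+#
+#   [xenserver]
+#   connection_url = http://198.51.100.5
+#   connection_username = root
+#   connection_password = CHANGEME
+#   ovs_integration_bridge = xapi1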
+
+#
+# The interval used for polling of coalescing vhds.
+#
+# This is the interval after which the task of coalescing the VHD is
+# performed, until it reaches the max attempts that is set by
+# vhd_coalesce_max_attempts.
+#
+# Related options:
+#
+# * `vhd_coalesce_max_attempts`
+#  (floating point value)
+# Minimum value: 0
+#vhd_coalesce_poll_interval = 5.0
+
+#
+# Ensure compute service is running on host XenAPI connects to.
+# This option must be set to false if the 'independent_compute'
+# option is set to true.
+#
+# Possible values:
+#
+# * Setting this option to true will make sure that compute service
+#   is running on the same host that is specified by connection_url.
+# * Setting this option to false skips the check.
+#
+# Related options:
+#
+# * `independent_compute`
+#  (boolean value)
+#check_host = true
+
+#
+# Max number of times to poll for VHD to coalesce.
+#
+# This option determines the maximum number of attempts that can be
+# made for coalescing the VHD before giving up.
+#
+# Related options:
+#
+# * `vhd_coalesce_poll_interval`
+#  (integer value)
+# Minimum value: 0
+#vhd_coalesce_max_attempts = 20
+
+# Base path to the storage repository on the XenServer host. (string
+# value)
+#sr_base_path = /var/run/sr-mount
+
+#
+# The iSCSI Target Host.
+#
+# This option represents the hostname or ip of the iSCSI Target.
+# If the target host is not present in the connection information from
+# the volume provider then the value from this option is taken.
+#
+# Possible values:
+#
+# * Any string that represents hostname/ip of Target.
+#  (unknown value)
+#target_host = <None>
+
+#
+# The iSCSI Target Port.
+#
+# This option represents the port of the iSCSI Target. If the
+# target port is not present in the connection information from the
+# volume provider then the value from this option is taken.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#target_port = 3260
+
+#
+# Used to prevent attempts to attach VBDs locally, so Nova can
+# be run in a VM on a different host.
+#
+# Related options:
+#
+# * ``CONF.flat_injected`` (Must be False)
+# * ``CONF.xenserver.check_host`` (Must be False)
+# * ``CONF.default_ephemeral_format`` (Must be unset or 'ext3')
+# * Joining host aggregates (will error if attempted)
+# * Swap disks for Windows VMs (will error if attempted)
+# * Nova-based auto_configure_disk (will error if attempted)
+#  (boolean value)
+#independent_compute = false
+
+#
+# Wait time for instances to go to running state.
+#
+# Provide an integer value representing time in seconds to set the
+# wait time for an instance to go to running state.
+#
+# When a request to create an instance is received by nova-api and
+# communicated to nova-compute, the creation of the instance occurs
+# through interaction with Xen via XenAPI in the compute node. Once
+# the node on which the instance(s) are to be launched is decided by
+# the nova-scheduler and the launch is triggered, a certain amount of
+# wait time is involved until the instance(s) can become available and
+# 'running'. This wait time is defined by running_timeout. If the
+# instances do not go to running state within this specified wait
+# time, the launch expires and the instance(s) are set to 'error'
+# state.
+#  (integer value)
+# Minimum value: 0
+#running_timeout = 60
+
+# DEPRECATED:
+# The XenAPI VIF driver using XenServer Network APIs.
+#
+# Provide a string value representing the XenAPI VIF driver to use for
+# plugging virtual network interfaces.
+#
+# Xen configuration uses bridging within the backend domain to allow
+# all VMs to appear on the network as individual hosts. Bridge
+# interfaces are used to create a XenServer VLAN network in which
+# the VIFs for the VM instances are plugged. If no VIF bridge driver
+# is plugged, the bridge is not made available. This configuration
+# option takes in a value for the VIF driver.
+#
+# Possible values:
+#
+# * nova.virt.xenapi.vif.XenAPIOpenVswitchDriver (default)
+# * nova.virt.xenapi.vif.XenAPIBridgeDriver (deprecated)
+#
+# Related options:
+#
+# * ``vlan_interface``
+# * ``ovs_integration_bridge``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There are only two in-tree vif drivers for XenServer.
+# XenAPIBridgeDriver is for nova-network which is deprecated and
+# XenAPIOpenVswitchDriver is for Neutron which is the default
+# configuration for Nova since the 15.0.0 Ocata release. In the future
+# the "use_neutron" configuration option will be used to determine
+# which vif driver to use.
+#vif_driver = nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
+
+#
+# Dom0 plugin driver used to handle image uploads.
+#
+# Provide a string value representing a plugin driver required to
+# handle the image uploading to GlanceStore.
+#
+# Images and snapshots from XenServer need to be uploaded to the data
+# store for use. image_upload_handler takes in a value for the Dom0
+# plugin driver. This driver is then called to upload images to the
+# GlanceStore.
+#  (string value)
+#image_upload_handler = nova.virt.xenapi.image.glance.GlanceStore
+
+#
+# Number of seconds to wait for SR to settle if the VDI
+# does not exist when first introduced.
+#
+# Some SRs, particularly iSCSI connections, are slow to see the VDIs
+# right after they get introduced. Setting this option to a time
+# interval will make the SR wait for that time period before raising a
+# VDI not found exception.
+#  (integer value)
+# Minimum value: 0
+#introduce_vdi_retry_wait = 20
+
+#
+# The name of the integration bridge that is used with xenapi
+# when connecting with Open vSwitch.
+#
+# Note: The value of this config option is dependent on the
+# environment, therefore this configuration value must be set
+# accordingly if you are using XenAPI.
+#
+# Possible values:
+#
+# * Any string that represents a bridge name.
+#  (string value)
+#ovs_integration_bridge = <None>
+
+#
+# When adding a new host to a pool, this will append a --force flag to
+# the command, forcing hosts to join a pool, even if they have
+# different CPUs.
+#
+# Since XenServer version 5.6 it is possible to create a pool of hosts
+# that have different CPU capabilities. To accommodate CPU
+# differences, XenServer limited the features it uses to determine CPU
+# compatibility to only the ones that are exposed by the CPU, and
+# support for CPU masking was added. Despite this effort to level
+# differences between CPUs, it is still possible that adding a new
+# host will fail, thus the option to force the join was introduced.
+#  (boolean value)
+#use_join_force = true
+
+#
+# Publicly visible name for this console host.
+#
+# Possible values:
+#
+# * Current hostname (default) or any string representing hostname.
+#  (string value)
+#console_public_hostname = <current_hostname>
+
+
+[xvp]
+#
+# Configuration options for XVP.
+#
+# xvp (Xen VNC Proxy) is a proxy server providing password-protected
+# VNC-based access to the consoles of virtual machines hosted on
+# Citrix XenServer.
+
+#
+# From nova.conf
+#
+
+# XVP conf template (string value)
+#console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template
+
+# Generated XVP conf file (string value)
+#console_xvp_conf = /etc/xvp.conf
+
+# XVP master process pid file (string value)
+#console_xvp_pid = /var/run/xvp.pid
+
+# XVP log file (string value)
+#console_xvp_log = /var/log/xvp.log
+
+# Port for XVP to multiplex VNC connections on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#console_xvp_multiplex_port = 5900
+
+[matchmaker_redis]
+
+[oslo_messaging_notifications]
+#
+# From oslo.messaging
+#
+
+# The driver(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+driver = messagingv2
+
+# A URL representing the messaging driver to use for notifications. If
+# not set, we fall back to the same configuration used for RPC.
+# (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+
+# The maximum number of attempts to re-send a notification message
+# which failed to be delivered due to a recoverable error. 0 - No
+# retry, -1 - indefinite (integer value)
+#retry = -1
+
+[oslo_messaging_rabbit]
+#
+# From oslo.messaging
+#
+
+# Use durable queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/amqp_durable_queues
+# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
+#amqp_durable_queues = false
+
+# Auto-delete queues in AMQP. (boolean value)
+#amqp_auto_delete = false
+
+# Enable SSL (boolean value)
+#ssl = <None>
+
+
+# How long to wait before reconnecting in response to an AMQP consumer
+# cancel notification. (floating point value)
+#kombu_reconnect_delay = 1.0
+
+# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression
+# will not be used. This option may not be available in future
+# versions. (string value)
+#kombu_compression = <None>
+
+# How long to wait for a missing client before abandoning sending it
+# its replies. This value should not be longer than
+# rpc_response_timeout. (integer value)
+# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
+#kombu_missing_consumer_retry_timeout = 60
+
+# Determines how the next RabbitMQ node is chosen in case the one we
+# are currently connected to becomes unavailable. Takes effect only if
+# more than one RabbitMQ node is provided in config. (string value)
+# Possible values:
+# round-robin - <No description provided>
+# shuffle - <No description provided>
+#kombu_failover_strategy = round-robin
+
+# DEPRECATED: The RabbitMQ broker address where a single node is used.
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_host = localhost
+
+# DEPRECATED: The RabbitMQ broker port where a single node is used.
+# (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_port = 5672
+
+# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_hosts = $rabbit_host:$rabbit_port
+
+# DEPRECATED: The RabbitMQ userid. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_userid = guest
+
+# DEPRECATED: The RabbitMQ password. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_password = guest
+
+# The RabbitMQ login method. (string value)
+# Possible values:
+# PLAIN - <No description provided>
+# AMQPLAIN - <No description provided>
+# RABBIT-CR-DEMO - <No description provided>
+#rabbit_login_method = AMQPLAIN
+
+# DEPRECATED: The RabbitMQ virtual host. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_virtual_host = /
+
+# How frequently to retry connecting with RabbitMQ. (integer value)
+#rabbit_retry_interval = 1
+
+# How long to back off between retries when connecting to RabbitMQ.
+# (integer value)
+#rabbit_retry_backoff = 2
+
+# Maximum interval of RabbitMQ connection retries. Default is 30
+# seconds. (integer value)
+#rabbit_interval_max = 30
+
+# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
+# is 0 (infinite retry count). (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#rabbit_max_retries = 0
+
+# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
+# queue mirroring is no longer controlled by the x-ha-policy argument
+# when declaring a queue. If you just want to make sure that all
+# queues (except those with auto-generated names) are mirrored across
+# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
+# mode": "all"}' " (boolean value)
+#rabbit_ha_queues = false
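The `rabbitmqctl set_policy` command quoted above keys on a negative-lookahead regex. A minimal sketch of which queue names that pattern selects (the pattern is copied from the comment; the queue names are invented for illustration):

```python
import re

# Pattern from the rabbitmqctl set_policy example above: match every
# queue name except auto-generated ones, which start with "amq.".
HA_PATTERN = re.compile(r"^(?!amq\.).*")

def is_mirrored(queue_name):
    """True if the HA policy pattern would apply to this queue name."""
    return HA_PATTERN.match(queue_name) is not None

print(is_mirrored("conductor"))      # ordinary queue: mirrored
print(is_mirrored("amq.gen-JzTY2"))  # auto-generated: skipped
```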
+
+# Positive integer representing duration in seconds for queue TTL
+# (x-expires). Queues which are unused for the duration of the TTL are
+# automatically deleted. The parameter affects only reply and fanout
+# queues. (integer value)
+# Minimum value: 1
+#rabbit_transient_queues_ttl = 1800
+
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
+#rabbit_qos_prefetch_count = 64
+
+# Number of seconds after which the RabbitMQ broker is considered down
+# if the heartbeat keep-alive fails (0 disables the heartbeat).
+# EXPERIMENTAL (integer value)
+#heartbeat_timeout_threshold = 60
+
+# How many times during the heartbeat_timeout_threshold the heartbeat
+# is checked. (integer value)
+#heartbeat_rate = 2
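Taken together, the two options above mean the heartbeat is checked `heartbeat_rate` times per `heartbeat_timeout_threshold` window. With the defaults, that works out to:

```python
# Defaults from the two options above.
heartbeat_timeout_threshold = 60  # seconds before the broker is considered down
heartbeat_rate = 2                # checks per timeout window

# Effective interval between heartbeat checks.
check_interval = heartbeat_timeout_threshold / heartbeat_rate
print(check_interval)  # 30.0 seconds
```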
+
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
+#fake_rabbit = false
+
+# Maximum number of channels to allow (integer value)
+#channel_max = <None>
+
+# The maximum byte size for an AMQP frame (integer value)
+#frame_max = <None>
+
+# How often to send heartbeats for consumers' connections (integer
+# value)
+#heartbeat_interval = 3
+
+# Arguments passed to ssl.wrap_socket (dict value)
+#ssl_options = <None>
+
+# Set socket timeout in seconds for connection's socket (floating
+# point value)
+#socket_timeout = 0.25
+
+# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
+# point value)
+#tcp_user_timeout = 0.25
+
+# Set the delay for reconnecting to a host after a connection error.
+# (floating point value)
+#host_connection_reconnect_delay = 0.25
+
+# Connection factory implementation (string value)
+# Possible values:
+# new - <No description provided>
+# single - <No description provided>
+# read_write - <No description provided>
+#connection_factory = single
+
+# Maximum number of connections to keep queued. (integer value)
+#pool_max_size = 30
+
+# Maximum number of connections to create above `pool_max_size`.
+# (integer value)
+#pool_max_overflow = 0
+
+# Default number of seconds to wait for a connection to become available.
+# (integer value)
+#pool_timeout = 30
+
+# Lifetime of a connection (since creation) in seconds or None for no
+# recycling. Expired connections are closed on acquire. (integer
+# value)
+#pool_recycle = 600
+
+# Threshold at which inactive (since release) connections are
+# considered stale in seconds or None for no staleness. Stale
+# connections are closed on acquire. (integer value)
+#pool_stale = 60
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#default_serializer_type = json
+
+# Persist notification messages. (boolean value)
+#notification_persistence = false
+
+# Exchange name for sending notifications (string value)
+#default_notification_exchange = ${control_exchange}_notification
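The `${control_exchange}` placeholder above is expanded by oslo.config's own substitution engine; as an illustration only, Python's `configparser` resolves the same syntax with `ExtendedInterpolation` (the `control_exchange = nova` value is an assumption, not something set in this file):

```python
from configparser import ConfigParser, ExtendedInterpolation

cfg = ConfigParser(interpolation=ExtendedInterpolation())
cfg.read_string("""
[DEFAULT]
control_exchange = nova

[oslo_messaging_rabbit]
default_notification_exchange = ${control_exchange}_notification
default_rpc_exchange = ${control_exchange}_rpc
""")

section = cfg["oslo_messaging_rabbit"]
print(section["default_notification_exchange"])  # nova_notification
print(section["default_rpc_exchange"])           # nova_rpc
```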
+
+# Maximum number of unacknowledged messages which RabbitMQ can send to
+# the notification listener. (integer value)
+#notification_listener_prefetch_count = 100
+
+# Reconnect retry count in case of a connectivity problem while sending
+# a notification; -1 means infinite retries. (integer value)
+#default_notification_retry_attempts = -1
+
+# Reconnect retry delay in case of a connectivity problem while sending
+# a notification message. (floating point value)
+#notification_retry_delay = 0.25
+
+# Time to live for rpc queues without consumers in seconds. (integer
+# value)
+#rpc_queue_expiration = 60
+
+# Exchange name for sending RPC messages (string value)
+#default_rpc_exchange = ${control_exchange}_rpc
+
+# Exchange name for receiving RPC replies (string value)
+#rpc_reply_exchange = ${control_exchange}_rpc_reply
+
+# Maximum number of unacknowledged messages which RabbitMQ can send to
+# the RPC listener. (integer value)
+#rpc_listener_prefetch_count = 100
+
+# Maximum number of unacknowledged messages which RabbitMQ can send to
+# the RPC reply listener. (integer value)
+#rpc_reply_listener_prefetch_count = 100
+
+# Reconnect retry count in case of a connectivity problem while sending
+# a reply; -1 means infinite retries within rpc_timeout. (integer
+# value)
+#rpc_reply_retry_attempts = -1
+
+# Reconnect retry delay in case of a connectivity problem while sending
+# a reply. (floating point value)
+#rpc_reply_retry_delay = 0.25
+
+# Reconnect retry count in case of a connectivity problem while sending
+# an RPC message; -1 means infinite retries. If the actual number of
+# retry attempts is not 0, the RPC request could be processed more than
+# once. (integer value)
+#default_rpc_retry_attempts = -1
+
+# Reconnect retry delay in case of a connectivity problem while sending
+# an RPC message. (floating point value)
+#rpc_retry_delay = 0.25
+
+
+[oslo_policy]
+
+[database]
+#
+# From oslo.db
+#
+
+# If True, SQLite uses synchronous mode. (boolean value)
+#sqlite_synchronous = true
+
+# The back end to use for the database. (string value)
+# Deprecated group/name - [DEFAULT]/db_backend
+#backend = sqlalchemy
+
+# The SQLAlchemy connection string to use to connect to the database.
+# (string value)
+# Deprecated group/name - [DEFAULT]/sql_connection
+# Deprecated group/name - [DATABASE]/sql_connection
+# Deprecated group/name - [sql]/connection
+#connection = <None>
+connection = mysql+pymysql://nova:opnfv_secret@10.167.4.23/nova?charset=utf8
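The connection string set above follows the standard SQLAlchemy URL form, `dialect+driver://user:password@host/database?options`. Its pieces can be pulled apart with the standard library:

```python
from urllib.parse import urlsplit

# The exact URL set in this file.
url = "mysql+pymysql://nova:opnfv_secret@10.167.4.23/nova?charset=utf8"
parts = urlsplit(url)

print(parts.scheme)    # mysql+pymysql -> MySQL dialect via the PyMySQL driver
print(parts.username)  # nova
print(parts.hostname)  # 10.167.4.23
print(parts.path)      # /nova -> the database name
print(parts.query)     # charset=utf8
```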
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
+#slave_connection = <None>
+
+# The SQL mode to be used for MySQL sessions. This option, including
+# the default, overrides any server-set SQL mode. To use whatever SQL
+# mode is set by the server configuration, set this to no value.
+# Example: mysql_sql_mode= (string value)
+#mysql_sql_mode = TRADITIONAL
+
+# If True, transparently enables support for handling MySQL Cluster
+# (NDB). (boolean value)
+#mysql_enable_ndb = false
+
+# Connections which have been present in the connection pool longer
+# than this number of seconds will be replaced with a new one the next
+# time they are checked out from the pool. (integer value)
+# Deprecated group/name - [DATABASE]/idle_timeout
+# Deprecated group/name - [database]/idle_timeout
+# Deprecated group/name - [DEFAULT]/sql_idle_timeout
+# Deprecated group/name - [DATABASE]/sql_idle_timeout
+# Deprecated group/name - [sql]/idle_timeout
+#connection_recycle_time = 3600
+
+# Minimum number of SQL connections to keep open in a pool. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_min_pool_size
+# Deprecated group/name - [DATABASE]/sql_min_pool_size
+#min_pool_size = 1
+
+# Maximum number of SQL connections to keep open in a pool. Setting a
+# value of 0 indicates no limit. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_pool_size
+# Deprecated group/name - [DATABASE]/sql_max_pool_size
+#max_pool_size = 5
+max_pool_size = 10
+
+# Maximum number of database connection retries during startup. Set to
+# -1 to specify an infinite retry count. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_retries
+# Deprecated group/name - [DATABASE]/sql_max_retries
+#max_retries = 10
+max_retries = -1
+
+# Interval between retries of opening a SQL connection. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_retry_interval
+# Deprecated group/name - [DATABASE]/reconnect_interval
+#retry_interval = 10
+
+# If set, use this value for max_overflow with SQLAlchemy. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_max_overflow
+# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
+#max_overflow = 50
+max_overflow = 30
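Combined with `max_pool_size = 10` above, this caps the total connections one SQLAlchemy pool can hold. Assuming one pool per worker process (a deployment detail, not stated in this file), that works out to:

```python
max_pool_size = 10  # persistent pooled connections (set above)
max_overflow = 30   # extra burst connections above the pool size (set above)

# SQLAlchemy allows pool_size + max_overflow simultaneous connections.
ceiling = max_pool_size + max_overflow
print(ceiling)  # 40 connections per pool
```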
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything.
+# (integer value)
+# Minimum value: 0
+# Maximum value: 100
+# Deprecated group/name - [DEFAULT]/sql_connection_debug
+#connection_debug = 0
+
+# Add Python stack traces to SQL as comment strings. (boolean value)
+# Deprecated group/name - [DEFAULT]/sql_connection_trace
+#connection_trace = false
+
+# If set, use this value for pool_timeout with SQLAlchemy. (integer
+# value)
+# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
+#pool_timeout = <None>
+
+# Enable the experimental use of database reconnect on connection
+# lost. (boolean value)
+#use_db_reconnect = false
+
+# Seconds between retries of a database transaction. (integer value)
+#db_retry_interval = 1
+
+# If True, increases the interval between retries of a database
+# operation up to db_max_retry_interval. (boolean value)
+#db_inc_retry_interval = true
+
+# If db_inc_retry_interval is set, the maximum seconds between retries
+# of a database operation. (integer value)
+#db_max_retry_interval = 10
+
+# Maximum retries in case of connection error or deadlock error before
+# error is raised. Set to -1 to specify an infinite retry count.
+# (integer value)
+#db_max_retries = 20
+
+#
+# From oslo.db.concurrency
+#
+
+# Enable the experimental use of thread pooling for all DB API calls
+# (boolean value)
+# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
+#use_tpool = false
+
+[oslo_middleware]
 #
 # From oslo.middleware
 #
 
-# DEPRECATED: The path to respond to healtcheck requests on. (string value)
+# The maximum body size for each request, in bytes. (integer value)
+# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
+# Deprecated group/name - [DEFAULT]/max_request_body_size
+#max_request_body_size = 114688
+
+# DEPRECATED: The HTTP Header that will be used to determine what the
+# original request protocol scheme was, even if it was hidden by an SSL
+# termination proxy. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
-#path = /healthcheck
-
-# Show more detailed information as part of the response (boolean value)
-#detailed = false
-
-# Additional backends that can perform health checks and report that information
-# back as part of a request. (list value)
-#backends =
-
-# Check the presence of a file to determine if an application is running on a
-# port. Used by DisableByFileHealthcheck plugin. (string value)
-#disable_by_file_path = <None>
-
-# Check the presence of a file based on a port to determine if an application is
-# running on a port. Expects a "port:path" list of strings. Used by
-# DisableByFilesPortsHealthcheck plugin. (list value)
-#disable_by_file_paths =
-
+#secure_proxy_ssl_header = X-Forwarded-Proto
+
+# Whether the application is behind a proxy or not. This determines if
+# the middleware should parse the headers or not. (boolean value)
+enable_proxy_headers_parsing = True
 
 [keystone_authtoken]
-
 #
 # From keystonemiddleware.auth_token
 #
 
 # Complete "public" Identity API endpoint. This endpoint should not be an
-# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
-# clients are redirected to this endpoint to authenticate. Although this
-# endpoint should ideally be unversioned, client support in the wild varies. If
-# you're using a versioned v2 endpoint here, then this should *not* be the same
-# endpoint the service user utilizes for validating tokens, because normal end
-# users may not be able to reach that endpoint. (string value)
+# "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. (string
+# value)
 # Deprecated group/name - [keystone_authtoken]/auth_uri
 #www_authenticate_uri = <None>
+www_authenticate_uri = http://10.167.4.35:5000
 
 # DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
 # be an "admin" endpoint, as it should be accessible by all end users.
@@ -534,9 +11516,10 @@
 # release. (string value)
 # This option is deprecated for removal since Queens.
 # Its value may be silently ignored in the future.
-# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and
-# will be removed in the S  release.
+# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
+# and will be removed in the S release.
 #auth_uri = <None>
+auth_uri = http://10.167.4.35:5000
 
 # API version of the admin Identity API endpoint. (string value)
 #auth_version = <None>
@@ -549,8 +11532,8 @@
 # value)
 #http_connect_timeout = <None>
 
-# How many times are we trying to reconnect when communicating with Identity API
-# Server. (integer value)
+# How many times to retry reconnecting when communicating with the
+# Identity API server. (integer value)
 #http_request_max_retries = 3
 
 # Request environment key where the Swift cache object is stored. When
@@ -574,10 +11557,11 @@
 
 # The region in which the identity server can be found. (string value)
 #region_name = <None>
+region_name = RegionOne
 
 # DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P release.
-# (string value)
+# has been deprecated in the Ocata release and will be removed in the P
+# release. (string value)
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
@@ -589,15 +11573,15 @@
 #memcached_servers = <None>
 
 # In order to prevent excessive effort spent validating tokens, the middleware
-# caches previously-seen tokens for a configurable duration (in seconds). Set to
-# -1 to disable caching completely. (integer value)
+# caches previously-seen tokens for a configurable duration (in seconds). Set
+# to -1 to disable caching completely. (integer value)
 #token_cache_time = 300
 
 # DEPRECATED: Determines the frequency at which the list of revoked tokens is
 # retrieved from the Identity service (in seconds). A high number of revocation
 # events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in the
-# Ocata release and will be removed in the P release. (integer value)
+# performance. Only valid for PKI tokens. This option has been deprecated in
+# the Ocata release and will be removed in the P release. (integer value)
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
@@ -622,8 +11606,8 @@
 # tried again. (integer value)
 #memcache_pool_dead_retry = 300
 
-# (Optional) Maximum total number of open connections to every memcached server.
-# (integer value)
+# (Optional) Maximum total number of open connections to every memcached
+# server. (integer value)
 #memcache_pool_maxsize = 10
 
 # (Optional) Socket timeout in seconds for communicating with a memcached
@@ -649,11 +11633,11 @@
 
 # Used to control the use and type of token binding. Can be set to: "disabled"
 # to not check token binding. "permissive" (default) to validate binding
-# information if the bind type is of a form known to the server and ignore it if
-# not. "strict" like "permissive" but if the bind type is unknown the token will
-# be rejected. "required" any form of token binding is needed to be allowed.
-# Finally the name of a binding method that must be present in tokens. (string
-# value)
+# information if the bind type is of a form known to the server and ignore it
+# if not. "strict" like "permissive" but if the bind type is unknown the token
+# will be rejected. "required" any form of token binding is needed to be
+# allowed. Finally the name of a binding method that must be present in tokens.
+# (string value)
 #enforce_token_bind = permissive
 
 # DEPRECATED: If true, the revocation list will be checked for cached tokens.
@@ -680,886 +11664,127 @@
 # A choice of roles that must be present in a service token. Service tokens are
 # allowed to request that an expired token can be used and so this check should
 # tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present. For
-# backwards compatibility reasons this currently only affects the allow_expired
-# check. (list value)
+# here are applied as an ANY check so any role in this list must be present.
+# For backwards compatibility reasons this currently only affects the
+# allow_expired check. (list value)
 #service_token_roles = service
 
-# For backwards compatibility reasons we must let valid service tokens pass that
-# don't pass the service_token_roles check as valid. Setting this true will
-# become the default in a future release and should be enabled if possible.
-# (boolean value)
+# For backwards compatibility reasons we must let valid service tokens pass
+# that don't pass the service_token_roles check as valid. Setting this true
+# will become the default in a future release and should be enabled if
+# possible. (boolean value)
 #service_token_roles_required = false
-
-# Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
-# (string value)
-#auth_admin_prefix =
-
-# Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
-# (string value)
-#auth_host = 127.0.0.1
-
-# Port of the admin Identity API endpoint. Deprecated, use identity_uri.
-# (integer value)
-#auth_port = 35357
-
-# Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
-# (string value)
-# Possible values:
-# http - <No description provided>
-# https - <No description provided>
-#auth_protocol = https
-
-# Complete admin Identity API endpoint. This should specify the unversioned root
-# endpoint e.g. https://localhost:35357/ (string value)
-#identity_uri = <None>
-
-# This option is deprecated and may be removed in a future release. Single
-# shared secret with the Keystone configuration used for bootstrapping a
-# Keystone installation, or otherwise bypassing the normal authentication
-# process. This option should not be used, use `admin_user` and `admin_password`
-# instead. (string value)
-#admin_token = <None>
-
-# Service username. (string value)
-#admin_user = <None>
-
-# Service user password. (string value)
-#admin_password = <None>
-
-# Service tenant name. (string value)
-#admin_tenant_name = admin
 
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
 #auth_type = <None>
+auth_type = password
 
 # Config Section from which to load plugin specific options (string value)
 #auth_section = <None>
 
-
-[matchmaker_redis]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Host to locate redis. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
-
-# DEPRECATED: Use this port to connect to redis host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
-
-# DEPRECATED: Password for Redis server (optional). (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#password =
-
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
-# [host:port, host1:port ... ] (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#sentinel_hosts =
-
-# Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
-
-# Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
-
-# Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
-
-# Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
-
-
-[oslo_concurrency]
-
-#
-# From oslo.concurrency
-#
-
-# Enables or disables inter-process locks. (boolean value)
-#disable_process_locking = false
-
-# Directory to use for lock files.  For security, the specified directory should
-# only be writable by the user running the processes that need locking. Defaults
-# to environment variable OSLO_LOCK_PATH. If OSLO_LOCK_PATH is not set in the
-# environment, use the Python tempfile.gettempdir function to find a suitable
-# location. If external locks are used, a lock path must be set. (string value)
-#lock_path = /tmp
-
-
-[oslo_messaging_amqp]
-
-#
-# From oslo.messaging
-#
-
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use. This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
 # value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional) (string
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internal
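Endpoint selection with `endpoint_type` and `region_name` amounts to filtering the Keystone service catalog by region and interface. A toy sketch (the catalog entries and URLs here are invented; the real lookup is performed by keystoneauth):

```python
# Invented catalog entries for illustration only.
catalog = [
    {"region": "RegionOne", "interface": "public",
     "url": "http://public.example:8774/v2.1"},
    {"region": "RegionOne", "interface": "internal",
     "url": "http://internal.example:8774/v2.1"},
]

def pick_endpoint(entries, region, interface):
    """Return the first catalog URL matching the region and interface."""
    for entry in entries:
        if entry["region"] == region and entry["interface"] == interface:
            return entry["url"]
    raise LookupError(f"no {interface} endpoint in region {region}")

print(pick_endpoint(catalog, "RegionOne", "internal"))
```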
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
 # value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-robin
-# fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating point
-# value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are messaging,
-# messagingv2, routing, log, test, noop (multi valued)
-# Deprecated group/name - [DEFAULT]/notification_driver
-#driver =
-
-# A URL representing the messaging driver to use for notifications. If not set,
-# we fall back to the same configuration used for RPC. (string value)
-# Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
-
-# AMQP topic used for OpenStack notifications. (list value)
-# Deprecated group/name - [rpc_notifier2]/topics
-# Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
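Taken together, an uncommented version of this section might look like the sketch below; whether notifications share the RPC bus or get a dedicated one is a deployment choice, and the broker host in the URL is a placeholder:

```ini
[oslo_messaging_notifications]
# messagingv2 emits the 2.0 notification message format over the
# messaging driver.
driver = messagingv2
# If transport_url is left unset, notifications reuse the RPC transport.
#transport_url = rabbit://guest:guest@broker:5672//
topics = notifications
retry = -1
```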
-
-[oslo_messaging_rabbit]
-
-#
-# From oslo.messaging
-#
-
-# Use durable queues in AMQP. (boolean value)
-# Deprecated group/name - [DEFAULT]/amqp_durable_queues
-# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
-#amqp_durable_queues = false
-
-# Auto-delete queues in AMQP. (boolean value)
-#amqp_auto_delete = false
-
-# Enable SSL (boolean value)
-#ssl = <None>
-
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
-
-# How long to wait before reconnecting in response to an AMQP consumer cancel
-# notification. (floating point value)
-#kombu_reconnect_delay = 1.0
-
-# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
-# be used. This option may not be available in future versions. (string value)
-#kombu_compression = <None>
-
-# How long to wait for a missing client before abandoning sending it its
-# replies. This value should not be longer than rpc_response_timeout.
-# (integer value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
-#kombu_missing_consumer_retry_timeout = 60
-
-# Determines how the next RabbitMQ node is chosen in case the one we are
-# currently connected to becomes unavailable. Takes effect only if more than one
-# RabbitMQ node is provided in config. (string value)
-# Possible values:
-# round-robin - <No description provided>
-# shuffle - <No description provided>
-#kombu_failover_strategy = round-robin
-
-# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_host = localhost
-
-# DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_port = 5672
-
-# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_hosts = $rabbit_host:$rabbit_port
-
-# DEPRECATED: The RabbitMQ userid. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_userid = guest
-
-# DEPRECATED: The RabbitMQ password. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_password = guest
-
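The deprecated rabbit_host, rabbit_port, rabbit_hosts, rabbit_userid and rabbit_password options above all collapse into a single [DEFAULT]/transport_url. A minimal sketch of the replacement, using placeholder hostnames and the sample guest credentials:

```ini
[DEFAULT]
# user:password@host:port pairs, comma-separated for a cluster; the
# trailing "//" selects the default "/" virtual host.
transport_url = rabbit://guest:guest@rabbit01:5672,guest:guest@rabbit02:5672//
```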
-# The RabbitMQ login method. (string value)
-# Possible values:
-# PLAIN - <No description provided>
-# AMQPLAIN - <No description provided>
-# RABBIT-CR-DEMO - <No description provided>
-#rabbit_login_method = AMQPLAIN
-
-# DEPRECATED: The RabbitMQ virtual host. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_virtual_host = /
-
-# How frequently to retry connecting with RabbitMQ. (integer value)
-#rabbit_retry_interval = 1
-
-# How long to backoff for between retries when connecting to RabbitMQ. (integer
-# value)
-#rabbit_retry_backoff = 2
-
-# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
-# (integer value)
-#rabbit_interval_max = 30
-
-# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
-# (infinite retry count). (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#rabbit_max_retries = 0
-
-# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
-# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
-# is no longer controlled by the x-ha-policy argument when declaring a queue. If
-# you just want to make sure that all queues (except those with auto-generated
-# names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
-# '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
-#rabbit_ha_queues = false
-
-# Positive integer representing duration in seconds for queue TTL (x-expires).
-# Queues which are unused for the duration of the TTL are automatically deleted.
-# The parameter affects only reply and fanout queues. (integer value)
-# Minimum value: 1
-#rabbit_transient_queues_ttl = 1800
-
-# Specifies the number of messages to prefetch. Setting to zero allows unlimited
-# messages. (integer value)
-#rabbit_qos_prefetch_count = 64
-
-# Number of seconds after which the Rabbit broker is considered down if
-# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
-# (integer value)
-#heartbeat_timeout_threshold = 60
-
-# How many times during the heartbeat_timeout_threshold to check the
-# heartbeat. (integer value)
-#heartbeat_rate = 2
-
-# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
-#fake_rabbit = false
-
-# Maximum number of channels to allow (integer value)
-#channel_max = <None>
-
-# The maximum byte size for an AMQP frame (integer value)
-#frame_max = <None>
-
-# How often to send heartbeats for consumer's connections (integer value)
-#heartbeat_interval = 3
-
-# Arguments passed to ssl.wrap_socket (dict value)
-#ssl_options = <None>
-
-# Set socket timeout in seconds for connection's socket (floating point value)
-#socket_timeout = 0.25
-
-# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value)
-#tcp_user_timeout = 0.25
-
-# Set delay for reconnection to some host which has connection error (floating
-# point value)
-#host_connection_reconnect_delay = 0.25
-
-# Connection factory implementation (string value)
-# Possible values:
-# new - <No description provided>
-# single - <No description provided>
-# read_write - <No description provided>
-#connection_factory = single
-
-# Maximum number of connections to keep queued. (integer value)
-#pool_max_size = 30
-
-# Maximum number of connections to create above `pool_max_size`. (integer value)
-#pool_max_overflow = 0
-
-# Default number of seconds to wait for a connection to become available
-# (integer value)
-#pool_timeout = 30
-
-# Lifetime of a connection (since creation) in seconds or None for no recycling.
-# Expired connections are closed on acquire. (integer value)
-#pool_recycle = 600
-
-# Threshold at which inactive (since release) connections are considered stale
-# in seconds or None for no staleness. Stale connections are closed on acquire.
-# (integer value)
-#pool_stale = 60
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#default_serializer_type = json
-
-# Persist notification messages. (boolean value)
-#notification_persistence = false
-
-# Exchange name for sending notifications (string value)
-#default_notification_exchange = ${control_exchange}_notification
-
-# Max number of not acknowledged messages which RabbitMQ can send to the
-# notification listener. (integer value)
-#notification_listener_prefetch_count = 100
-
-# Reconnecting retry count in case of connectivity problem during sending
-# notification, -1 means infinite retry. (integer value)
-#default_notification_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problem during sending
-# notification message (floating point value)
-#notification_retry_delay = 0.25
-
-# Time to live for rpc queues without consumers in seconds. (integer value)
-#rpc_queue_expiration = 60
-
-# Exchange name for sending RPC messages (string value)
-#default_rpc_exchange = ${control_exchange}_rpc
-
-# Exchange name for receiving RPC replies (string value)
-#rpc_reply_exchange = ${control_exchange}_rpc_reply
-
-# Max number of not acknowledged messages which RabbitMQ can send to the rpc
-# listener. (integer value)
-#rpc_listener_prefetch_count = 100
-
-# Max number of not acknowledged messages which RabbitMQ can send to the rpc
-# reply listener. (integer value)
-#rpc_reply_listener_prefetch_count = 100
-
-# Reconnecting retry count in case of connectivity problem during sending reply.
-# -1 means infinite retry during rpc_timeout (integer value)
-#rpc_reply_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problem during sending reply.
-# (floating point value)
-#rpc_reply_retry_delay = 0.25
-
-# Reconnecting retry count in case of connectivity problem during sending RPC
-# message, -1 means infinite retry. If the actual number of retry attempts is
-# not 0, the RPC request could be processed more than once (integer value)
-#default_rpc_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problem during sending RPC
-# message (floating point value)
-#rpc_retry_delay = 0.25
-
-
-[oslo_messaging_zmq]
-
-#
-# From oslo.messaging
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target (
-# < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option takes effect only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means the queue
-# is not kept when the server side disconnects. False means the queue and
-# messages are kept even if the server is disconnected; when the server
-# reappears, all accumulated messages are sent to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case any problems occur:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
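With the defaults above, the acknowledgement wait grows geometrically: attempt 1 waits rpc_ack_timeout_base = 15 s, attempt 2 waits 15 × 2 = 30 s, attempt 3 waits 60 s, after which the three allowed retries are exhausted. Spelled out as explicit settings:

```ini
[oslo_messaging_zmq]
# Acknowledgements only apply when enabled (and only via proxy, per the
# rpc_use_acks help text above).
rpc_use_acks = true
# Successive ack waits: 15 s, 30 s, 60 s (base * multiplier ** attempt).
rpc_ack_timeout_base = 15
rpc_ack_timeout_multiplier = 2
rpc_retry_attempts = 3
```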
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority than the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-
-[oslo_middleware]
-
-#
-# From oslo.middleware
-#
-
-# The maximum body size for each request, in bytes. (integer value)
-# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
-# Deprecated group/name - [DEFAULT]/max_request_body_size
-#max_request_body_size = 114688
-
-# DEPRECATED: The HTTP Header that will be used to determine what the original
-# request protocol scheme was, even if it was hidden by a SSL termination proxy.
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
 # (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#secure_proxy_ssl_header = X-Forwarded-Proto
-
-# Whether the application is behind a proxy or not. This determines if the
-# middleware should parse the headers or not. (boolean value)
-#enable_proxy_headers_parsing = false
-
-
-[oslo_policy]
-
-#
-# From oslo.policy
-#
-
-# This option controls whether or not to enforce scope when evaluating policies.
-# If ``True``, the scope of the token used in the request is compared to the
-# ``scope_types`` of the policy being enforced. If the scopes do not match, an
-# ``InvalidScope`` exception will be raised. If ``False``, a message will be
-# logged informing operators that policies are being invoked with mismatching
-# scope. (boolean value)
-#enforce_scope = false
-
-# The file that defines policies. (string value)
-#policy_file = policy.json
-
-# Default rule. Enforced when a requested rule is not found. (string value)
-#policy_default_rule = default
-
-# Directories where policy configuration files are stored. They can be relative
-# to any directory in the search path defined by the config_dir option, or
-# absolute paths. The file defined by policy_file must exist for these
-# directories to be searched.  Missing or empty directories are ignored. (multi
-# valued)
-#policy_dirs = policy.d
-
-# Content Type to send and receive data for REST based policy check (string
-# value)
-# Possible values:
-# application/x-www-form-urlencoded - <No description provided>
-# application/json - <No description provided>
-#remote_content_type = application/x-www-form-urlencoded
-
-# server identity verification for REST based policy check (boolean value)
-#remote_ssl_verify_server_crt = false
-
-# Absolute path to ca cert file for REST based policy check (string value)
-#remote_ssl_ca_crt_file = <None>
-
-# Absolute path to client cert for REST based policy check (string value)
-#remote_ssl_client_crt_file = <None>
-
-# Absolute path client key file REST based policy check (string value)
-#remote_ssl_client_key_file = <None>
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = nova
+
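The uncommented values in the fragment above (together with the "[neutron]/user_name" deprecation note, which suggests this is nova.conf's [neutron] section) amount to the following effective authentication block; auth_type and the auth_url endpoint are assumptions not visible in this fragment:

```ini
[neutron]
# Assumed: password auth against a v3 Keystone endpoint; the URL below
# is a placeholder.
auth_type = password
auth_url = http://keystone.local:5000/v3
username = nova
password = opnfv_secret
project_name = service
project_domain_id = default
user_domain_id = default
```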

2018-09-01 23:07:16,127 [salt.state       :1941][INFO    ][18151] Completed state [/etc/nova/nova.conf] at time 23:07:16.127170 duration_in_ms=365.025
2018-09-01 23:07:16,127 [salt.state       :1770][INFO    ][18151] Running state [/etc/default/nova-compute] at time 23:07:16.127595
2018-09-01 23:07:16,127 [salt.state       :1803][INFO    ][18151] Executing state file.managed for [/etc/default/nova-compute]
2018-09-01 23:07:16,141 [salt.fileclient  :1215][INFO    ][18151] Fetching file from saltenv 'base', ** done ** 'nova/files/default'
2018-09-01 23:07:16,146 [salt.state       :290 ][INFO    ][18151] File changed:
New file
2018-09-01 23:07:16,146 [salt.state       :1941][INFO    ][18151] Completed state [/etc/default/nova-compute] at time 23:07:16.146514 duration_in_ms=18.918
2018-09-01 23:07:16,147 [salt.state       :1770][INFO    ][18151] Running state [nova-compute] at time 23:07:16.147625
2018-09-01 23:07:16,147 [salt.state       :1803][INFO    ][18151] Executing state service.running for [nova-compute]
2018-09-01 23:07:16,148 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', 'status', 'nova-compute.service', '-n', '0'] in directory '/root'
2018-09-01 23:07:16,159 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', 'is-active', 'nova-compute.service'] in directory '/root'
2018-09-01 23:07:16,167 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', 'is-enabled', 'nova-compute.service'] in directory '/root'
2018-09-01 23:07:16,173 [salt.state       :290 ][INFO    ][18151] The service nova-compute is already running
2018-09-01 23:07:16,174 [salt.state       :1941][INFO    ][18151] Completed state [nova-compute] at time 23:07:16.174055 duration_in_ms=26.43
2018-09-01 23:07:16,174 [salt.state       :1770][INFO    ][18151] Running state [nova-compute] at time 23:07:16.174204
2018-09-01 23:07:16,174 [salt.state       :1803][INFO    ][18151] Executing state service.mod_watch for [nova-compute]
2018-09-01 23:07:16,174 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', 'is-active', 'nova-compute.service'] in directory '/root'
2018-09-01 23:07:16,180 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'nova-compute.service'] in directory '/root'
2018-09-01 23:07:16,199 [salt.state       :290 ][INFO    ][18151] {'nova-compute': True}
2018-09-01 23:07:16,200 [salt.state       :1941][INFO    ][18151] Completed state [nova-compute] at time 23:07:16.200122 duration_in_ms=25.917
2018-09-01 23:07:16,200 [salt.state       :1770][INFO    ][18151] Running state [/etc/default/libvirtd] at time 23:07:16.200632
2018-09-01 23:07:16,200 [salt.state       :1803][INFO    ][18151] Executing state file.managed for [/etc/default/libvirtd]
2018-09-01 23:07:16,215 [salt.fileclient  :1215][INFO    ][18151] Fetching file from saltenv 'base', ** done ** 'nova/files/queens/libvirt.Debian'
2018-09-01 23:07:16,222 [salt.state       :290 ][INFO    ][18151] File changed:
--- 
+++ 
@@ -1,17 +1,13 @@
-# Defaults for libvirtd initscript (/etc/init.d/libvirtd)
+# Defaults for libvirt-bin initscript (/etc/init.d/libvirt-bin)
 # This is a POSIX shell fragment
 
 # Start libvirtd to handle qemu/kvm:
 start_libvirtd="yes"
 
 # options passed to libvirtd, add "-l" to listen on tcp
-#libvirtd_opts=""
-
+# Don't use "-d" option with systemd
+libvirtd_opts="-l"
+LIBVIRTD_ARGS="--listen"
 # pass in location of kerberos keytab
 #export KRB5_KTNAME=/etc/libvirt/libvirt.keytab
 
-# Whether to mount a systemd like cgroup layout (only
-# useful when not running systemd)
-#mount_cgroups=yes
-# Which cgroups to mount
-#cgroups="memory devices"

2018-09-01 23:07:16,222 [salt.state       :1941][INFO    ][18151] Completed state [/etc/default/libvirtd] at time 23:07:16.222746 duration_in_ms=22.113
2018-09-01 23:07:16,223 [salt.state       :1770][INFO    ][18151] Running state [service.systemctl_reload] at time 23:07:16.223575
2018-09-01 23:07:16,223 [salt.state       :1803][INFO    ][18151] Executing state module.wait for [service.systemctl_reload]
2018-09-01 23:07:16,223 [salt.state       :290 ][INFO    ][18151] No changes made for service.systemctl_reload
2018-09-01 23:07:16,224 [salt.state       :1941][INFO    ][18151] Completed state [service.systemctl_reload] at time 23:07:16.224040 duration_in_ms=0.465
2018-09-01 23:07:16,224 [salt.state       :1770][INFO    ][18151] Running state [service.systemctl_reload] at time 23:07:16.224158
2018-09-01 23:07:16,224 [salt.state       :1803][INFO    ][18151] Executing state module.mod_watch for [service.systemctl_reload]
2018-09-01 23:07:16,224 [salt.utils.decorators:613 ][WARNING ][18151] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2018-09-01 23:07:16,224 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', '--system', 'daemon-reload'] in directory '/root'
2018-09-01 23:07:16,298 [salt.state       :290 ][INFO    ][18151] {'ret': True}
2018-09-01 23:07:16,298 [salt.state       :1941][INFO    ][18151] Completed state [service.systemctl_reload] at time 23:07:16.298604 duration_in_ms=74.445
2018-09-01 23:07:16,299 [salt.state       :1770][INFO    ][18151] Running state [/etc/libvirt/qemu.conf] at time 23:07:16.299080
2018-09-01 23:07:16,299 [salt.state       :1803][INFO    ][18151] Executing state file.managed for [/etc/libvirt/qemu.conf]
2018-09-01 23:07:16,315 [salt.fileclient  :1215][INFO    ][18151] Fetching file from saltenv 'base', ** done ** 'nova/files/queens/qemu.conf.Debian'
2018-09-01 23:07:16,410 [salt.state       :290 ][INFO    ][18151] File changed:
--- 
+++ 
@@ -1,61 +1,8 @@
+
 # Master configuration file for the QEMU driver.
 # All settings described here are optional - if omitted, sensible
 # defaults are used.
 
-# Use of TLS requires that x509 certificates be issued. The default is
-# to keep them in /etc/pki/qemu. This directory must contain
-#
-#  ca-cert.pem - the CA master certificate
-#  server-cert.pem - the server certificate signed with ca-cert.pem
-#  server-key.pem  - the server private key
-#
-# and optionally may contain
-#
-#  dh-params.pem - the DH params configuration file
-#
-# If the directory does not exist, libvirtd will fail to start. If the
-# directory doesn't contain the necessary files, QEMU domains will fail
-# to start if they are configured to use TLS.
-#
-# In order to overwrite the default path alter the following. This path
-# definition will be used as the default path for other *_tls_x509_cert_dir
-# configuration settings if their default path does not exist or is not
-# specifically set.
-#
-#default_tls_x509_cert_dir = "/etc/pki/qemu"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client who does not have a
-# certificate signed by the CA in /etc/pki/qemu/ca-cert.pem
-#
-# The default_tls_x509_cert_dir directory must also contain
-#
-#  client-cert.pem - the client certificate signed with the ca-cert.pem
-#  client-key.pem - the client private key
-#
-#default_tls_x509_verify = 1
-
-#
-# Libvirt assumes the server-key.pem file is unencrypted by default.
-# To use an encrypted server-key.pem file, the password to decrypt
-# the PEM file is required. This can be provided by creating a secret
-# object in libvirt and then to uncomment this setting to set the UUID
-# of the secret.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#default_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
-
-
 # VNC is configured to listen on 127.0.0.1 by default.
 # To make it listen on all public interfaces, uncomment
 # this next option.
@@ -69,9 +16,9 @@
 # unix socket. This prevents unprivileged access from users on the
 # host machine, though most VNC clients do not support it.
 #
-# This will only be enabled for VNC configurations that have listen
-# type=address but without any address specified. This setting takes
-# preference over vnc_listen.
+# This will only be enabled for VNC configurations that do not have
+# a hardcoded 'listen' or 'socket' value. This setting takes preference
+# over vnc_listen.
 #
 #vnc_auto_unix_socket = 1
 
@@ -85,12 +32,15 @@
 #
 #vnc_tls = 1
 
-
-# In order to override the default TLS certificate location for
-# vnc certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but vnc_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
+# Use of TLS requires that x509 certificates be issued. The
+# default is to keep them in /etc/pki/libvirt-vnc. This directory
+# must contain
+#
+#  ca-cert.pem - the CA master certificate
+#  server-cert.pem - the server certificate signed with ca-cert.pem
+#  server-key.pem  - the server private key
+#
+# This option allows the certificate directory to be changed
 #
 #vnc_tls_x509_cert_dir = "/etc/pki/libvirt-vnc"
 
@@ -100,15 +50,10 @@
 # an encrypted channel.
 #
 # It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the vnc_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
+# issuing an x509 certificate to every client who needs to connect.
+#
+# Enabling this option will reject any client who does not have a
+# certificate signed by the CA in /etc/pki/libvirt-vnc/ca-cert.pem
 #
 #vnc_tls_x509_verify = 1
 
@@ -172,24 +117,17 @@
 #spice_tls = 1
 
 
-# In order to override the default TLS certificate location for
-# spice certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but spice_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
+# Use of TLS requires that x509 certificates be issued. The
+# default is to keep them in /etc/pki/libvirt-spice. This directory
+# must contain
+#
+#  ca-cert.pem - the CA master certificate
+#  server-cert.pem - the server certificate signed with ca-cert.pem
+#  server-key.pem  - the server private key
+#
+# This option allows the certificate directory to be changed.
 #
 #spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
-
-
-# Enable this option to have SPICE served over an automatically created
-# unix socket. This prevents unprivileged access from users on the
-# host machine.
-#
-# This will only be enabled for SPICE configurations that have listen
-# type=address but without any address specified. This setting takes
-# preference over spice_listen.
-#
-#spice_auto_unix_socket = 1
 
 
 # The default SPICE password. This parameter is only used if the
@@ -216,123 +154,6 @@
 # point to the directory, and create a qemu.conf in that location
 #
 #spice_sasl_dir = "/some/directory/sasl2"
-
-# Enable use of TLS encryption on the chardev TCP transports.
-#
-# It is necessary to setup CA and issue a server certificate
-# before enabling this.
-#
-#chardev_tls = 1
-
-
-# In order to override the default TLS certificate location for character
-# device TCP certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but chardev_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
-#
-#chardev_tls_x509_cert_dir = "/etc/pki/libvirt-chardev"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the chardev_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
-#
-#chardev_tls_x509_verify = 1
-
-
-# Uncomment and use the following option to override the default secret
-# UUID provided in the default_tls_x509_secret_uuid parameter.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#chardev_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
-
-
-# Enable use of TLS encryption for all VxHS network block devices that
-# don't specifically disable.
-#
-# When the VxHS network block device server is set up appropriately,
-# x509 certificates are required for authentication between the clients
-# (qemu processes) and the remote VxHS server.
-#
-# It is necessary to setup CA and issue the client certificate before
-# enabling this.
-#
-#vxhs_tls = 1
-
-
-# In order to override the default TLS certificate location for VxHS
-# backed storage, supply a valid path to the certificate directory.
-# This is used to authenticate the VxHS block device clients to the VxHS
-# server.
-#
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but vxhs_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
-#
-# VxHS block device clients expect the client certificate and key to be
-# present in the certificate directory along with the CA master certificate.
-# If using the default environment, default_tls_x509_verify must be configured.
-# Since this is only a client the server-key.pem certificate is not needed.
-# Thus a VxHS directory must contain the following:
-#
-#  ca-cert.pem - the CA master certificate
-#  client-cert.pem - the client certificate signed with the ca-cert.pem
-#  client-key.pem - the client private key
-#
-#vxhs_tls_x509_cert_dir = "/etc/pki/libvirt-vxhs"
-
-
-# In order to override the default TLS certificate location for migration
-# certificates, supply a valid path to the certificate directory. If the
-# provided path does not exist, libvirtd will fail to start. If the path is
-# not provided, but migrate_tls = 1, then the default_tls_x509_cert_dir path
-# will be used. Once/if a default certificate is enabled/defined, migration
-# will then be able to use the certificate via migration API flags.
-#
-#migrate_tls_x509_cert_dir = "/etc/pki/libvirt-migrate"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the migrate_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
-#
-#migrate_tls_x509_verify = 1
-
-
-# Uncomment and use the following option to override the default secret
-# UUID provided in the default_tls_x509_secret_uuid parameter.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#migrate_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
 
 
 # By default, if no graphical front end is configured, libvirt will disable
@@ -416,10 +237,9 @@
 # Set to 0 to disable file ownership changes.
 #dynamic_ownership = 1
 
-
 # What cgroup controllers to make use of with QEMU guests
 #
-#  - 'cpu' - use for scheduler tunables
+#  - 'cpu' - use for schedular tunables
 #  - 'devices' - use for device whitelisting
 #  - 'memory' - use for memory tunables
 #  - 'blkio' - use for block devices I/O tunables
@@ -451,19 +271,11 @@
 #    "/dev/null", "/dev/full", "/dev/zero",
 #    "/dev/random", "/dev/urandom",
 #    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
-#    "/dev/rtc","/dev/hpet"
+#    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio"
 #]
-#
-# RDMA migration requires the following extra files to be added to the list:
-#   "/dev/infiniband/rdma_cm",
-#   "/dev/infiniband/issm0",
-#   "/dev/infiniband/issm1",
-#   "/dev/infiniband/umad0",
-#   "/dev/infiniband/umad1",
-#   "/dev/infiniband/uverbs0"
-
-
-# The default format for QEMU/KVM guest save images is raw; that is, the
+
+
+# The default format for Qemu/KVM guest save images is raw; that is, the
 # memory from the domain is dumped out directly to a file.  If you have
 # guests with a large amount of memory, however, this can take up quite
 # a bit of space.  If you would like to compress the images while they
@@ -517,20 +329,15 @@
 # unspecified here, determination of a host mount point in /proc/mounts
 # will be attempted.  Specifying an explicit mount overrides detection
 # of the same in /proc/mounts.  Setting the mount point to "" will
-# disable guest hugepage backing. If desired, multiple mount points can
-# be specified at once, separated by comma and enclosed in square
-# brackets, for example:
-#
-#     hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]
-#
-# The size of huge page served by specific mount point is determined by
-# libvirt at the daemon startup.
-#
-# NB, within these mount points, guests will create memory backing
-# files in a location of $MOUNTPOINT/libvirt/qemu
+# disable guest hugepage backing.
+#
+# NB, within this mount point, guests will create memory backing files
+# in a location of $MOUNTPOINT/libvirt/qemu
 #
 #hugetlbfs_mount = "/dev/hugepages"
-
+#hugetlbfs_mount = ["/run/hugepages/kvm", "/mnt/hugepages_1GB"]
+hugetlbfs_mount = ["/mnt/hugepages_1G"]
+security_driver="none"
 
 # Path to the setuid helper for creating tap devices.  This executable
 # is used to create <source type='bridge'> interfaces when libvirtd is
@@ -566,42 +373,6 @@
 # The same applies to max_files which sets the limit on the maximum
 # number of opened files.
 #
-#max_processes = 0
-#max_files = 0
-
-# If max_core is set to a non-zero integer, then QEMU will be
-# permitted to create core dumps when it crashes, provided its
-# RAM size is smaller than the limit set.
-#
-# Be warned that the core dump will include a full copy of the
-# guest RAM, if the 'dump_guest_core' setting has been enabled,
-# or if the guest XML contains
-#
-#   <memory dumpcore="on">...guest ram...</memory>
-#
-# If guest RAM is to be included, ensure the max_core limit
-# is set to at least the size of the largest expected guest
-# plus another 1GB for any QEMU host side memory mappings.
-#
-# As a special case it can be set to the string "unlimited" to
-# to allow arbitrarily sized core dumps.
-#
-# By default the core dump size is set to 0 disabling all dumps
-#
-# Size is a positive integer specifying bytes or the
-# string "unlimited"
-#
-#max_core = "unlimited"
-
-# Determine if guest RAM is included in QEMU core dumps. By
-# default guest RAM will be excluded if a new enough QEMU is
-# present. Setting this to '1' will force guest RAM to always
-# be included in QEMU core dumps.
-#
-# This setting will be ignored if the guest XML has set the
-# dumpcore attribute on the <memory> element.
-#
-#dump_guest_core = 1
 
 # mac_filter enables MAC addressed based filtering on bridge ports.
 # This currently requires ebtables to be installed.
@@ -628,13 +399,11 @@
 #allow_disk_format_probing = 1
 
 
-# In order to prevent accidentally starting two domains that
-# share one writable disk, libvirt offers two approaches for
-# locking files. The first one is sanlock, the other one,
-# virtlockd, is then our own implementation. Accepted values
-# are "sanlock" and "lockd".
-#
-#lock_manager = "lockd"
+# To enable 'Sanlock' project based locking of the file
+# content (to prevent two VMs writing to the same
+# disk), uncomment this
+#
+#lock_manager = "sanlock"
 
 
 
@@ -676,17 +445,10 @@
 #seccomp_sandbox = 1
 
 
+
 # Override the listen address for all incoming migrations. Defaults to
 # 0.0.0.0, or :: if both host and qemu are capable of IPv6.
-#migration_address = "0.0.0.0"
-
-
-# The default hostname or IP address which will be used by a migration
-# source for transferring migration data to this host.  The migration
-# source has to be able to resolve this hostname and connect to it so
-# setting "localhost" will not work.  By default, the host's configured
-# hostname is used.
-#migration_host = "host.example.com"
+#migration_address = "127.0.0.1"
 
 
 # Override the port range used for incoming migrations.
@@ -698,36 +460,12 @@
 #
 #migration_port_min = 49152
 #migration_port_max = 49215
-
-
-
-# Timestamp QEMU's log messages (if QEMU supports it)
-#
-# Defaults to 1.
-#
-#log_timestamp = 0
-
-
-# Location of master nvram file
-#
-# When a domain is configured to use UEFI instead of standard
-# BIOS it may use a separate storage for UEFI variables. If
-# that's the case libvirt creates the variable store per domain
-# using this master file as image. Each UEFI firmware can,
-# however, have different variables store. Therefore the nvram is
-# a list of strings when a single item is in form of:
-#   ${PATH_TO_UEFI_FW}:${PATH_TO_UEFI_VARS}.
-# Later, when libvirt creates per domain variable store, this list is
-# searched for the master image. The UEFI firmware can be called
-# differently for different guest architectures. For instance, it's OVMF
-# for x86_64 and i686, but it's AAVMF for aarch64. The libvirt default
-# follows this scheme.
-#nvram = [
-#   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
-#   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
-#   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
-#   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
-#]
+cgroup_device_acl = [
+    "/dev/null", "/dev/full", "/dev/zero",
+    "/dev/random", "/dev/urandom",
+    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
+    "/dev/rtc", "/dev/hpet","/dev/net/tun",
+]
 
 # The backend to use for handling stdout/stderr output from
 # QEMU processes.
@@ -743,41 +481,3 @@
 #          rollover when a size limit is hit.
 #
 #stdio_handler = "logd"
-
-# QEMU gluster libgfapi log level, debug levels are 0-9, with 9 being the
-# most verbose, and 0 representing no debugging output.
-#
-# The current logging levels defined in the gluster GFAPI are:
-#
-#    0 - None
-#    1 - Emergency
-#    2 - Alert
-#    3 - Critical
-#    4 - Error
-#    5 - Warning
-#    6 - Notice
-#    7 - Info
-#    8 - Debug
-#    9 - Trace
-#
-# Defaults to 4
-#
-#gluster_debug_level = 9
-
-# To enhance security, QEMU driver is capable of creating private namespaces
-# for each domain started. Well, so far only "mount" namespace is supported. If
-# enabled it means qemu process is unable to see all the devices on the system,
-# only those configured for the domain in question. Libvirt then manages
-# devices entries throughout the domain lifetime. This namespace is turned on
-# by default.
-#namespaces = [ "mount" ]
-
-# This directory is used for memoryBacking source if configured as file.
-# NOTE: big files will be stored here
-#memory_backing_dir = "/var/lib/libvirt/qemu/ram"
-
-# The following two values set the default RX/TX ring buffer size for virtio
-# interfaces. These values are taken unless overridden in domain XML. For more
-# info consult docs to corresponding attributes from domain XML.
-#rx_queue_size = 1024
-#tx_queue_size = 1024

2018-09-01 23:07:16,410 [salt.state       :1941][INFO    ][18151] Completed state [/etc/libvirt/qemu.conf] at time 23:07:16.410527 duration_in_ms=111.445
2018-09-01 23:07:16,410 [salt.state       :1770][INFO    ][18151] Running state [/etc/libvirt/libvirtd.conf] at time 23:07:16.410861
2018-09-01 23:07:16,411 [salt.state       :1803][INFO    ][18151] Executing state file.managed for [/etc/libvirt/libvirtd.conf]
2018-09-01 23:07:16,425 [salt.fileclient  :1215][INFO    ][18151] Fetching file from saltenv 'base', ** done ** 'nova/files/queens/libvirtd.conf.Debian'
2018-09-01 23:07:16,516 [salt.state       :290 ][INFO    ][18151] File changed:
--- 
+++ 
@@ -1,6 +1,7 @@
+
 # Master libvirt daemon configuration file
 #
-# For further information consult https://libvirt.org/format.html
+# For further information consult http://libvirt.org/format.html
 #
 # NOTE: the tests/daemon-conf regression test script requires
 # that each "PARAMETER = VALUE" line in this file have the parameter
@@ -20,6 +21,10 @@
 #
 # This is enabled by default, uncomment this to disable it
 #listen_tls = 0
+listen_tls = 0
+listen_tcp = 1
+auth_tcp = "none"
+
 
 # Listen for unencrypted TCP connections on the public TCP/IP port.
 # NB, must pass the --listen flag to the libvirtd process for this to
@@ -48,10 +53,6 @@
 # Override the default configuration which binds to all network
 # interfaces. This can be a numeric IPv4/6 address, or hostname
 #
-# If the libvirtd service is started in parallel with network
-# startup (e.g. with systemd), binding to addresses other than
-# the wildcards (0.0.0.0/::) might not be available yet.
-#
 #listen_addr = "192.168.0.1"
 
 
@@ -67,7 +68,7 @@
 # unique on the immediate broadcast network.
 #
 # The default is "Virtualization Host HOSTNAME", where HOSTNAME
-# is substituted for the short hostname of the machine (without domain)
+# is subsituted for the short hostname of the machine (without domain)
 #
 #mdns_name = "Virtualization Host Joe Demo"
 
@@ -82,14 +83,14 @@
 # without becoming root.
 #
 # This is restricted to 'root' by default.
-unix_sock_group = "libvirt"
+unix_sock_group = "libvirtd"
 
 # Set the UNIX socket permissions for the R/O socket. This is used
 # for monitoring VM status only
 #
-# Default allows any user. If setting group ownership, you may want to
-# restrict this too.
-unix_sock_ro_perms = "0777"
+# Default allows any user. If setting group ownership may want to
+# restrict this to:
+#unix_sock_ro_perms = "0777"
 
 # Set the UNIX socket permissions for the R/W socket. This is used
 # for full management of VMs
@@ -98,19 +99,11 @@
 # the default will change to allow everyone (eg, 0777)
 #
 # If not using PolicyKit and setting group ownership for access
-# control, then you may want to relax this too.
+# control then you may want to relax this to:
 unix_sock_rw_perms = "0770"
-
-# Set the UNIX socket permissions for the admin interface socket.
-#
-# Default allows only owner (root), do not change it unless you are
-# sure to whom you are exposing the access to.
-#unix_sock_admin_perms = "0700"
 
 # Set the name of the directory in which sockets will be found/created.
 #unix_sock_dir = "/var/run/libvirt"
-
-
 
 #################################################################
 #
@@ -125,7 +118,7 @@
 #  - sasl: use SASL infrastructure. The actual auth scheme is then
 #          controlled from /etc/sasl2/libvirt.conf. For the TCP
 #          socket only GSSAPI & DIGEST-MD5 mechanisms will be used.
-#          For non-TCP or TLS sockets, any scheme is allowed.
+#          For non-TCP or TLS sockets,  any scheme is allowed.
 #
 #  - polkit: use PolicyKit to authenticate. This is only suitable
 #            for use on the UNIX sockets. The default policy will
@@ -156,6 +149,7 @@
 # use, always enable SASL and use the GSSAPI or DIGEST-MD5
 # mechanism in /etc/sasl2/libvirt.conf
 #auth_tcp = "sasl"
+#auth_tcp = "none"
 
 # Change the authentication scheme for TLS sockets.
 #
@@ -167,15 +161,6 @@
 #auth_tls = "none"
 
 
-# Change the API access control scheme
-#
-# By default an authenticated user is allowed access
-# to all APIs. Access drivers can place restrictions
-# on this. By default the 'nop' driver is enabled,
-# meaning no access control checks are done once a
-# client has authenticated with libvirtd
-#
-#access_drivers = [ "polkit" ]
 
 #################################################################
 #
@@ -228,7 +213,7 @@
 #tls_no_verify_certificate = 1
 
 
-# A whitelist of allowed x509 Distinguished Names
+# A whitelist of allowed x509  Distinguished Names
 # This list may contain wildcards such as
 #
 #    "C=GB,ST=London,L=London,O=Red Hat,CN=*"
@@ -242,7 +227,7 @@
 #tls_allowed_dn_list = ["DN1", "DN2"]
 
 
-# A whitelist of allowed SASL usernames. The format for username
+# A whitelist of allowed SASL usernames. The format for usernames
 # depends on the SASL authentication mechanism. Kerberos usernames
 # look like username@REALM
 #
@@ -259,13 +244,6 @@
 #sasl_allowed_username_list = ["joe@EXAMPLE.COM", "fred@EXAMPLE.COM" ]
 
 
-# Override the compile time default TLS priority string. The
-# default is usually "NORMAL" unless overridden at build time.
-# Only set this is it is desired for libvirt to deviate from
-# the global default settings.
-#
-#tls_priority="NORMAL"
-
 
 #################################################################
 #
@@ -274,22 +252,12 @@
 
 # The maximum number of concurrent client connections to allow
 # over all sockets combined.
-#max_clients = 5000
-
-# The maximum length of queue of connections waiting to be
-# accepted by the daemon. Note, that some protocols supporting
-# retransmission may obey this so that a later reattempt at
-# connection succeeds.
-#max_queued_clients = 1000
-
-# The maximum length of queue of accepted but not yet
-# authenticated clients. The default value is 20. Set this to
-# zero to turn this feature off.
-#max_anonymous_clients = 20
+#max_clients = 20
+
 
 # The minimum limit sets the number of workers to start up
 # initially. If the number of active clients exceeds this,
-# then more threads are spawned, up to max_workers limit.
+# then more threads are spawned, upto max_workers limit.
 # Typically you'd want max_workers to equal maximum number
 # of clients allowed
 #min_workers = 5
@@ -297,25 +265,25 @@
 
 
 # The number of priority workers. If all workers from above
-# pool are stuck, some calls marked as high priority
+# pool will stuck, some calls marked as high priority
 # (notably domainDestroy) can be executed in this pool.
 #prio_workers = 5
 
+# Total global limit on concurrent RPC calls. Should be
+# at least as large as max_workers. Beyond this, RPC requests
+# will be read into memory and queued. This directly impact
+# memory usage, currently each request requires 256 KB of
+# memory. So by default upto 5 MB of memory is used
+#
+# XXX this isn't actually enforced yet, only the per-client
+# limit is used so far
+#max_requests = 20
+
 # Limit on concurrent requests from a single client
 # connection. To avoid one client monopolizing the server
-# this should be a small fraction of the global max_workers
-# parameter.
+# this should be a small fraction of the global max_requests
+# and max_workers parameter
 #max_client_requests = 5
-
-# Same processing controls, but this time for the admin interface.
-# For description of each option, be so kind to scroll few lines
-# upwards.
-
-#admin_min_workers = 1
-#admin_max_workers = 5
-#admin_max_clients = 5
-#admin_max_queued_clients = 5
-#admin_max_client_requests = 5
 
 #################################################################
 #
@@ -324,34 +292,23 @@
 
 # Logging level: 4 errors, 3 warnings, 2 information, 1 debug
 # basically 1 will log everything possible
-# Note: Journald may employ rate limiting of the messages logged
-# and thus lock up the libvirt daemon. To use the debug level with
-# journald you have to specify it explicitly in 'log_outputs', otherwise
-# only information level messages will be logged.
 #log_level = 3
-
 # Logging filters:
 # A filter allows to select a different logging level for a given category
 # of logs
 # The format for a filter is one of:
 #    x:name
 #    x:+name
-
-#      where name is a string which is matched against the category
-#      given in the VIR_LOG_INIT() at the top of each libvirt source
-#      file, e.g., "remote", "qemu", or "util.json" (the name in the
-#      filter can be a substring of the full category name, in order
-#      to match multiple similar categories), the optional "+" prefix
-#      tells libvirt to log stack trace for each message matching
-#      name, and x is the minimal level where matching messages should
-#      be logged:
-
+#      where name is a string which is matched against source file name,
+#      e.g., "remote", "qemu", or "util/json", the optional "+" prefix
+#      tells libvirt to log stack trace for each message matching name,
+#      and x is the minimal level where matching messages should be logged:
 #    1: DEBUG
 #    2: INFO
 #    3: WARNING
 #    4: ERROR
 #
-# Multiple filters can be defined in a single @filters, they just need to be
+# Multiple filter can be defined in a single @filters, they just need to be
 # separated by spaces.
 #
 # e.g. to only get warning or errors from the remote layer and only errors
@@ -367,26 +324,23 @@
 #      use syslog for the output and use the given name as the ident
 #    x:file:file_path
 #      output to a file, with the given filepath
-#    x:journald
-#      output to journald logging system
 # In all case the x prefix is the minimal level, acting as a filter
 #    1: DEBUG
 #    2: INFO
 #    3: WARNING
 #    4: ERROR
 #
-# Multiple outputs can be defined, they just need to be separated by spaces.
+# Multiple output can be defined, they just need to be separated by spaces.
 # e.g. to log all warnings and errors to syslog under the libvirtd ident:
 #log_outputs="3:syslog:libvirtd"
 #
 
-# Log debug buffer size:
-#
-# This configuration option is no longer used, since the global
-# log buffer functionality has been removed. Please configure
-# suitable log_outputs/log_filters settings to obtain logs.
+# Log debug buffer size: default 64
+# The daemon keeps an internal debug log buffer which will be dumped in case
+# of crash or upon receiving a SIGUSR2 signal. This setting allows to override
+# the default buffer size in kilobytes.
+# If value is 0 or less the debug log buffer is deactivated
 #log_buffer_size = 64
-
 
 ##################################################################
 #
@@ -407,16 +361,10 @@
 
 ###################################################################
 # UUID of the host:
-# Host UUID is read from one of the sources specified in host_uuid_source.
-#
-# - 'smbios': fetch the UUID from 'dmidecode -s system-uuid'
-# - 'machine-id': fetch the UUID from /etc/machine-id
-#
-# The host_uuid_source default is 'smbios'. If 'dmidecode' does not provide
-# a valid UUID a temporary UUID will be generated.
-#
-# Another option is to specify host UUID in host_uuid.
-#
+# Provide the UUID of the host here in case the command
+# 'dmidecode -s system-uuid' does not provide a valid uuid. In case
+# 'dmidecode' does not provide a valid UUID and none is provided here, a
+# temporary UUID will be generated.
 # Keep the format of the example UUID below. UUID must not have all digits
 # be the same.
 
@@ -424,12 +372,11 @@
 # it with the output of the 'uuidgen' command and then
 # uncomment this entry
 #host_uuid = "00000000-0000-0000-0000-000000000000"
-#host_uuid_source = "smbios"
 
 ###################################################################
 # Keepalive protocol:
 # This allows libvirtd to detect broken client connections or even
-# dead clients.  A keepalive message is sent to a client after
+# dead client.  A keepalive message is sent to a client after
 # keepalive_interval seconds of inactivity to check if the client is
 # still responding; keepalive_count is a maximum number of keepalive
 # messages that are allowed to be sent to the client without getting
@@ -438,31 +385,15 @@
 # keepalive_interval * (keepalive_count + 1) seconds since the last
 # message received from the client.  If keepalive_interval is set to
 # -1, libvirtd will never send keepalive requests; however clients
-# can still send them and the daemon will send responses.  When
+# can still send them and the deamon will send responses.  When
 # keepalive_count is set to 0, connections will be automatically
 # closed after keepalive_interval seconds of inactivity without
 # sending any keepalive messages.
 #
 #keepalive_interval = 5
 #keepalive_count = 5
-
-#
-# These configuration options are no longer used.  There is no way to
-# restrict such clients from connecting since they first need to
-# connect in order to ask for keepalive.
+#
+# If set to 1, libvirtd will refuse to talk to clients that do not
+# support keepalive protocol.  Defaults to 0.
 #
 #keepalive_required = 1
-#admin_keepalive_required = 1
-
-# Keepalive settings for the admin interface
-#admin_keepalive_interval = 5
-#admin_keepalive_count = 5
-
-###################################################################
-# Open vSwitch:
-# This allows to specify a timeout for openvswitch calls made by
-# libvirt. The ovs-vsctl utility is used for the configuration and
-# its timeout option is set by default to 5 seconds to avoid
-# potential infinite waits blocking libvirt.
-#
-#ovs_timeout = 5

2018-09-01 23:07:16,516 [salt.state       :1941][INFO    ][18151] Completed state [/etc/libvirt/libvirtd.conf] at time 23:07:16.516865 duration_in_ms=106.004
2018-09-01 23:07:16,517 [salt.state       :1770][INFO    ][18151] Running state [virsh net-destroy default] at time 23:07:16.517889
2018-09-01 23:07:16,518 [salt.state       :1803][INFO    ][18151] Executing state cmd.run for [virsh net-destroy default]
2018-09-01 23:07:16,518 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command 'virsh net-list | grep default' in directory '/root'
2018-09-01 23:07:16,535 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command 'virsh net-destroy default' in directory '/root'
2018-09-01 23:07:16,734 [salt.state       :290 ][INFO    ][18151] {'pid': 7659, 'retcode': 0, 'stderr': '', 'stdout': 'Network default destroyed'}
2018-09-01 23:07:16,734 [salt.state       :1941][INFO    ][18151] Completed state [virsh net-destroy default] at time 23:07:16.734295 duration_in_ms=216.405
2018-09-01 23:07:16,735 [salt.state       :1770][INFO    ][18151] Running state [libvirtd] at time 23:07:16.735905
2018-09-01 23:07:16,736 [salt.state       :1803][INFO    ][18151] Executing state service.running for [libvirtd]
2018-09-01 23:07:16,736 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', 'status', 'libvirtd.service', '-n', '0'] in directory '/root'
2018-09-01 23:07:16,746 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', 'is-active', 'libvirtd.service'] in directory '/root'
2018-09-01 23:07:16,756 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', 'is-enabled', 'libvirtd.service'] in directory '/root'
2018-09-01 23:07:16,765 [salt.state       :290 ][INFO    ][18151] The service libvirtd is already running
2018-09-01 23:07:16,766 [salt.state       :1941][INFO    ][18151] Completed state [libvirtd] at time 23:07:16.766051 duration_in_ms=30.146
2018-09-01 23:07:16,766 [salt.state       :1770][INFO    ][18151] Running state [libvirtd] at time 23:07:16.766207
2018-09-01 23:07:16,766 [salt.state       :1803][INFO    ][18151] Executing state service.mod_watch for [libvirtd]
2018-09-01 23:07:16,766 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemctl', 'is-active', 'libvirtd.service'] in directory '/root'
2018-09-01 23:07:16,776 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18151] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'libvirtd.service'] in directory '/root'
2018-09-01 23:07:16,840 [salt.state       :290 ][INFO    ][18151] {'libvirtd': True}
2018-09-01 23:07:16,840 [salt.state       :1941][INFO    ][18151] Completed state [libvirtd] at time 23:07:16.840867 duration_in_ms=74.658
2018-09-01 23:07:16,843 [salt.minion      :1708][INFO    ][18151] Returning information for job: 20180901230445633919
2018-09-01 23:07:17,785 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command test.ping with jid 20180901230717776430
2018-09-01 23:07:17,792 [salt.minion      :1431][INFO    ][7958] Starting a new job with PID 7958
2018-09-01 23:07:17,802 [salt.minion      :1708][INFO    ][7958] Returning information for job: 20180901230717776430
2018-09-01 23:07:17,926 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command match.grain with jid 20180901230717915983
2018-09-01 23:07:17,933 [salt.minion      :1431][INFO    ][7963] Starting a new job with PID 7963
2018-09-01 23:07:17,936 [salt.minion      :1708][INFO    ][7963] Returning information for job: 20180901230717915983
2018-09-01 23:09:57,864 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.sls with jid 20180901230957855152
2018-09-01 23:09:57,875 [salt.minion      :1431][INFO    ][8039] Starting a new job with PID 8039
2018-09-01 23:10:02,454 [salt.state       :905 ][INFO    ][8039] Loading fresh modules for state activity
2018-09-01 23:10:02,483 [salt.fileclient  :1215][INFO    ][8039] Fetching file from saltenv 'base', ** done ** 'barbican/init.sls'
2018-09-01 23:10:02,504 [salt.fileclient  :1215][INFO    ][8039] Fetching file from saltenv 'base', ** done ** 'barbican/client/init.sls'
2018-09-01 23:10:02,517 [salt.fileclient  :1215][INFO    ][8039] Fetching file from saltenv 'base', ** done ** 'barbican/client/service.sls'
2018-09-01 23:10:02,529 [salt.fileclient  :1215][INFO    ][8039] Fetching file from saltenv 'base', ** done ** 'barbican/map.jinja'
2018-09-01 23:10:02,553 [salt.fileclient  :1215][INFO    ][8039] Fetching file from saltenv 'base', ** done ** 'barbican/client/resources/init.sls'
2018-09-01 23:10:02,567 [salt.fileclient  :1215][INFO    ][8039] Fetching file from saltenv 'base', ** done ** 'barbican/client/resources/v1.sls'
2018-09-01 23:10:02,911 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901231002899392
2018-09-01 23:10:02,920 [salt.minion      :1431][INFO    ][8057] Starting a new job with PID 8057
2018-09-01 23:10:02,935 [salt.minion      :1708][INFO    ][8057] Returning information for job: 20180901231002899392
2018-09-01 23:10:03,626 [salt.state       :1770][INFO    ][8039] Running state [python-barbicanclient] at time 23:10:03.626036
2018-09-01 23:10:03,626 [salt.state       :1803][INFO    ][8039] Executing state pkg.installed for [python-barbicanclient]
2018-09-01 23:10:03,626 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8039] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:10:03,926 [salt.state       :290 ][INFO    ][8039] All specified packages are already installed
2018-09-01 23:10:03,926 [salt.state       :1941][INFO    ][8039] Completed state [python-barbicanclient] at time 23:10:03.926491 duration_in_ms=300.456
2018-09-01 23:10:03,927 [salt.minion      :1708][INFO    ][8039] Returning information for job: 20180901230957855152
2018-09-01 23:25:39,354 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command state.sls with jid 20180901232539345009
2018-09-01 23:25:39,365 [salt.minion      :1431][INFO    ][8332] Starting a new job with PID 8332
2018-09-01 23:25:43,961 [salt.state       :905 ][INFO    ][8332] Loading fresh modules for state activity
2018-09-01 23:25:43,992 [salt.fileclient  :1215][INFO    ][8332] Fetching file from saltenv 'base', ** done ** 'ceilometer/init.sls'
2018-09-01 23:25:44,009 [salt.fileclient  :1215][INFO    ][8332] Fetching file from saltenv 'base', ** done ** 'ceilometer/agent.sls'
2018-09-01 23:25:44,416 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901232544403486
2018-09-01 23:25:44,423 [salt.minion      :1431][INFO    ][8360] Starting a new job with PID 8360
2018-09-01 23:25:44,431 [salt.minion      :1708][INFO    ][8360] Returning information for job: 20180901232544403486
2018-09-01 23:25:45,114 [salt.state       :1770][INFO    ][8332] Running state [ceilometer-agent-compute] at time 23:25:45.114584
2018-09-01 23:25:45,114 [salt.state       :1803][INFO    ][8332] Executing state pkg.installed for [ceilometer-agent-compute]
2018-09-01 23:25:45,115 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:25:45,400 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['apt-cache', '-q', 'policy', 'ceilometer-agent-compute'] in directory '/root'
2018-09-01 23:25:45,467 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-09-01 23:25:47,157 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-09-01 23:25:47,178 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'ceilometer-agent-compute'] in directory '/root'
2018-09-01 23:25:54,590 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command saltutil.find_job with jid 20180901232554578337
2018-09-01 23:25:54,601 [salt.minion      :1431][INFO    ][9278] Starting a new job with PID 9278
2018-09-01 23:25:54,618 [salt.minion      :1708][INFO    ][9278] Returning information for job: 20180901232554578337
2018-09-01 23:26:02,015 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2018-09-01 23:26:02,044 [salt.state       :290 ][INFO    ][8332] Made the following changes:
'python-pysnmp4' changed from 'absent' to '4.2.5-1'
'python-pysnmp4-mibs' changed from 'absent' to '0.1.3-1'
'python-pysnmp2' changed from 'absent' to '1'
'python-pysnmp-common' changed from 'absent' to '1'
'ceilometer-agent-compute' changed from 'absent' to '1:10.0.1-2~u16.04+mcp1'
'python-pam' changed from 'absent' to '0.4.2-13.2ubuntu2'
'python-cotyledon' changed from 'absent' to '1.6.3-1.1~u16.04+mcp2'
'python2.7-twisted-core' changed from 'absent' to '1'
'python-twisted' changed from 'absent' to '16.0.0-1ubuntu0.2'
'python-ceilometer' changed from 'absent' to '1:10.0.1-2~u16.04+mcp1'
'ceilometer-common' changed from 'absent' to '1:10.0.1-2~u16.04+mcp1'
'libsmi2ldbl' changed from 'absent' to '0.4.8+dfsg2-11'
'python2.7-twisted' changed from 'absent' to '1'
'python-croniter' changed from 'absent' to '0.3.8-1'
'python-setproctitle' changed from 'absent' to '1.1.8-1build2'
'python-twisted-core' changed from 'absent' to '16.0.0-1ubuntu0.2'
'python-jsonpath-rw' changed from 'absent' to '1.4.0-1'
'python-attr' changed from 'absent' to '17.4.0-2~cloud0'
'python-service-identity' changed from 'absent' to '16.0.0-2'
'python-serial' changed from 'absent' to '3.0.1-1'
'smitools' changed from 'absent' to '0.4.8+dfsg2-11'
'python-jsonpath-rw-ext' changed from 'absent' to '1.1.3-1~cloud0'
'python2.7-twisted-bin' changed from 'absent' to '1'
'python-pysnmp4-apps' changed from 'absent' to '0.3.2-1'
'python-twisted-bin' changed from 'absent' to '16.0.0-1ubuntu0.2'

2018-09-01 23:26:02,058 [salt.state       :905 ][INFO    ][8332] Loading fresh modules for state activity
2018-09-01 23:26:02,170 [salt.state       :1941][INFO    ][8332] Completed state [ceilometer-agent-compute] at time 23:26:02.170651 duration_in_ms=17056.067
2018-09-01 23:26:02,173 [salt.state       :1770][INFO    ][8332] Running state [/etc/ceilometer/ceilometer.conf] at time 23:26:02.173234
2018-09-01 23:26:02,173 [salt.state       :1803][INFO    ][8332] Executing state file.managed for [/etc/ceilometer/ceilometer.conf]
2018-09-01 23:26:02,196 [salt.fileclient  :1215][INFO    ][8332] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/queens/ceilometer-agent.conf.Debian'
2018-09-01 23:26:02,307 [salt.state       :290 ][INFO    ][8332] File changed:
--- 
+++ 
@@ -1,4 +1,321 @@
 [DEFAULT]
+#
+# From oslo.messaging
+#
+
+# Size of RPC connection pool. (integer value)
+#rpc_conn_pool_size = 30
+
+# The pool size limit for connections expiration policy (integer
+# value)
+#conn_pool_min_size = 2
+
+# The time-to-live in sec of idle connections in the pool (integer
+# value)
+#conn_pool_ttl = 1200
+
+# ZeroMQ bind address. Should be a wildcard (*), an ethernet
+# interface, or IP. The "host" option should point or resolve to this
+# address. (string value)
+#rpc_zmq_bind_address = *
+
+# MatchMaker driver. (string value)
+# Possible values:
+# redis - <No description provided>
+# sentinel - <No description provided>
+# dummy - <No description provided>
+#rpc_zmq_matchmaker = redis
+
+# Number of ZeroMQ contexts, defaults to 1. (integer value)
+#rpc_zmq_contexts = 1
+
+# Maximum number of ingress messages to locally buffer per topic.
+# Default is unlimited. (integer value)
+#rpc_zmq_topic_backlog = <None>
+
+# Directory for holding IPC sockets. (string value)
+#rpc_zmq_ipc_dir = /var/run/openstack
+
+# Name of this node. Must be a valid hostname, FQDN, or IP address.
+# Must match "host" option, if running Nova. (string value)
+#rpc_zmq_host = localhost
+
+# Number of seconds to wait before all pending messages will be sent
+# after closing a socket. The default value of -1 specifies an
+# infinite linger period. The value of 0 specifies no linger period.
+# Pending messages shall be discarded immediately when the socket is
+# closed. Positive values specify an upper bound for the linger
+# period. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
+#zmq_linger = -1
+
+# The default number of seconds that poll should wait. Poll raises
+# timeout exception when timeout expired. (integer value)
+#rpc_poll_timeout = 1
+
+# Expiration timeout in seconds of a name service record about
+# existing target ( < 0 means no timeout). (integer value)
+#zmq_target_expire = 300
+
+# Update period in seconds of a name service record about existing
+# target. (integer value)
+#zmq_target_update = 180
+
+# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
+# (boolean value)
+#use_pub_sub = false
+
+# Use ROUTER remote proxy. (boolean value)
+#use_router_proxy = false
+
+# This option makes direct connections dynamic or static. It makes
+# sense only with use_router_proxy=False which means to use direct
+# connections for direct message types (ignored otherwise). (boolean
+# value)
+#use_dynamic_connections = false
+
+# How many additional connections to a host will be made for failover
+# reasons. This option is actual only in dynamic connections mode.
+# (integer value)
+#zmq_failover_connections = 2
+
+# Minimal port number for random ports range. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#rpc_zmq_min_port = 49153
+
+# Maximal port number for random ports range. (integer value)
+# Minimum value: 1
+# Maximum value: 65536
+#rpc_zmq_max_port = 65536
+
+# Number of retries to find free port number before fail with
+# ZMQBindError. (integer value)
+#rpc_zmq_bind_port_retries = 100
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#rpc_zmq_serialization = json
+
+# This option configures round-robin mode in zmq socket. True means
+# not keeping a queue when server side disconnects. False means to
+# keep queue and messages even if server is disconnected, when the
+# server appears we send all accumulated messages to it. (boolean
+# value)
+#zmq_immediate = true
+
+# Enable/disable TCP keepalive (KA) mechanism. The default value of -1
+# (or any other negative value) means to skip any overrides and leave
+# it to OS default; 0 and 1 (or any other positive value) mean to
+# disable and enable the option respectively. (integer value)
+#zmq_tcp_keepalive = -1
+
+# The duration between two keepalive transmissions in idle condition.
+# The unit is platform dependent, for example, seconds in Linux,
+# milliseconds in Windows etc. The default value of -1 (or any other
+# negative value and 0) means to skip any overrides and leave it to OS
+# default. (integer value)
+#zmq_tcp_keepalive_idle = -1
+
+# The number of retransmissions to be carried out before declaring
+# that remote end is not available. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_cnt = -1
+
+# The duration between two successive keepalive retransmissions, if
+# acknowledgement to the previous keepalive transmission is not
+# received. The unit is platform dependent, for example, seconds in
+# Linux, milliseconds in Windows etc. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_intvl = -1
+
+# Maximum number of (green) threads to work concurrently. (integer
+# value)
+#rpc_thread_pool_size = 100
+
+# Expiration timeout in seconds of a sent/received message after which
+# it is not tracked anymore by a client/server. (integer value)
+#rpc_message_ttl = 300
+
+# Wait for message acknowledgements from receivers. This mechanism
+# works only via proxy without PUB/SUB. (boolean value)
+#rpc_use_acks = false
+
+# Number of seconds to wait for an ack from a cast/call. After each
+# retry attempt this timeout is multiplied by some specified
+# multiplier. (integer value)
+#rpc_ack_timeout_base = 15
+
+# Number to multiply base ack timeout by after each retry attempt.
+# (integer value)
+#rpc_ack_timeout_multiplier = 2
+
+# Default number of message sending attempts in case of any problems
+# occurred: positive value N means at most N retries, 0 means no
+# retries, None or -1 (or any other negative values) mean to retry
+# forever. This option is used only if acknowledgments are enabled.
+# (integer value)
+#rpc_retry_attempts = 3
+
+# List of publisher hosts SubConsumer can subscribe on. This option
+# has higher priority then the default publishers list taken from the
+# matchmaker. (list value)
+#subscribe_on =
+
+# Size of executor thread pool when executor is threading or eventlet.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
+#executor_thread_pool_size = 64
+
+# Seconds to wait for a response from a call. (integer value)
+#rpc_response_timeout = 60
+
+# The network address and optional user credentials for connecting to
+# the messaging backend, in URL format. The expected format is:
+#
+# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
+#
+# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
+#
+# For full details on the fields in the URL see the documentation of
+# oslo_messaging.TransportURL at
+# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
+# (string value)
+#transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
+
+# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
+# drivers include amqp and zmq. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rpc_backend = rabbit
+
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the transport_url
+# option. (string value)
+#control_exchange = openstack
+
+#
+# From oslo.log
+#
+
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
+# Note: This option can be changed without restarting.
+#debug = false
+
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
+# Note: This option can be changed without restarting.
+# Deprecated group/name - [DEFAULT]/log_config
+
+# Defines the format string for %%(asctime)s in log records. Default:
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
+#log_date_format = %Y-%m-%d %H:%M:%S
+
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logfile
+#log_file = <None>
+
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logdir
+#log_dir = <None>
+
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages.This option is ignored if log_config_append is set.
+# (boolean value)
+#use_journal = false
+
+# Syslog facility to receive log lines. This option is ignored if
+# log_config_append is set. (string value)
+#syslog_log_facility = LOG_USER
+
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_json = false
+
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_stderr = false
+
+# Format string to use for log messages with context. (string value)
+#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
+# value)
+#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+
+# Defines the format string for %(user_identity)s that is used in
+# logging_context_format_string. (string value)
+#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+
+# Enables or disables publication of error events. (boolean value)
+#publish_errors = false
+
+# The format for an instance that is passed with the log message.
+# (string value)
+#instance_format = "[instance: %(uuid)s] "
+
+# The format for an instance UUID that is passed with the log message.
+# (string value)
+#instance_uuid_format = "[instance: %(uuid)s] "
+
+# Interval, number of seconds, of log rate limiting. (integer value)
+#rate_limit_interval = 0
+
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
+#rate_limit_burst = 0
+
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
+#rate_limit_except_level = CRITICAL
+
+# Enables or disables fatal status of deprecations. (boolean value)
+#fatal_deprecations = false
 
 #
 # From ceilometer
@@ -27,7 +344,7 @@
 #libvirt_uri =
 
 # Swift reseller prefix. Must be on par with reseller_prefix in proxy-
-# server.conf. (string value)
+# agent.conf. (string value)
 #reseller_prefix = AUTH_
 
 # Configuration file for pipeline definition. (string value)
@@ -55,7 +372,7 @@
 
 # Name of this node, which must be valid in an AMQP key. Can be an opaque
 # identifier. For ZeroMQ only, must be a valid host name, FQDN, or IP address.
-# (unknown value)
+# (host address value)
 #host = <your_hostname>
 
 # Timeout seconds for HTTP requests. Set it to None to disable timeout.
@@ -66,334 +383,6 @@
 # (integer value)
 # Minimum value: 1
 #max_parallel_requests = 64
-
-#
-# From oslo.log
-#
-
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
-# Note: This option can be changed without restarting.
-#debug = false
-
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
-# Note: This option can be changed without restarting.
-# Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
-
-# Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
-#log_date_format = %Y-%m-%d %H:%M:%S
-
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
-# Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
-
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
-# Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
-
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
-# (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
-# Syslog facility to receive log lines. This option is ignored if
-# log_config_append is set. (string value)
-#syslog_log_facility = LOG_USER
-
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_json = false
-
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
-#use_stderr = false
-
-# Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
-
-# Format string to use for log messages when context is undefined. (string
-# value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
-
-# Defines the format string for %(user_identity)s that is used in
-# logging_context_format_string. (string value)
-#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
-
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
-
-# Enables or disables publication of error events. (boolean value)
-#publish_errors = false
-
-# The format for an instance that is passed with the log message. (string
-# value)
-#instance_format = "[instance: %(uuid)s] "
-
-# The format for an instance UUID that is passed with the log message. (string
-# value)
-#instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
-
-# Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
-
-#
-# From oslo.messaging
-#
-
-# Size of RPC connection pool. (integer value)
-#rpc_conn_pool_size = 30
-
-# The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
-
-# The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-# Size of executor thread pool when executor is threading or eventlet. (integer
-# value)
-# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
-#executor_thread_pool_size = 64
-
-# Seconds to wait for a response from a call. (integer value)
-#rpc_response_timeout = 60
-
-# The network address and optional user credentials for connecting to the
-# messaging backend, in URL format. The expected format is:
-#
-# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
-#
-# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
-#
-# For full details on the fields in the URL see the documentation of
-# oslo_messaging.TransportURL at
-# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
-# (string value)
-#transport_url = <None>
-
-# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
-# include amqp and zmq. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rpc_backend = rabbit
-
-# The default exchange under which topics are scoped. May be overridden by an
-# exchange name specified in the transport_url option. (string value)
-#control_exchange = openstack
-
-#
-# From oslo.service.service
-#
-
-# Enable eventlet backdoor.  Acceptable values are 0, <port>, and
-# <start>:<end>, where 0 results in listening on a random tcp port number;
-# <port> results in listening on the specified port number (and not enabling
-# backdoor if that port is in use); and <start>:<end> results in listening on
-# the smallest unused port number within the specified range of port numbers.
-# The chosen port is displayed in the service's log file. (string value)
-#backdoor_port = <None>
-
-# Enable eventlet backdoor, using the provided path as a unix socket that can
-# receive connections. This option is mutually exclusive with 'backdoor_port'
-# in that only one should be provided. If both are provided then the existence
-# of this option overrides the usage of that option. (string value)
-#backdoor_socket = <None>
-
-# Enables or disables logging values of all registered options when starting a
-# service (at DEBUG level). (boolean value)
-#log_options = true
-
-# Specify a timeout after which a gracefully shutdown server will exit. Zero
-# value means endless wait. (integer value)
-#graceful_shutdown_timeout = 60
 
 
 [compute]
@@ -413,6 +402,7 @@
 # workload_partitioning - <No description provided>
 # libvirt_metadata - <No description provided>
 #instance_discovery_method = libvirt_metadata
+instance_discovery_method = libvirt_metadata
 
 # New instances will be discovered periodically based on this option (in
 # seconds). By default, the agent discovers instances according to pipeline
@@ -434,55 +424,6 @@
 #resource_cache_expiry = 3600
 
 
-[coordination]
-
-#
-# From ceilometer
-#
-
-# The backend URL to use for distributed coordination. If left empty, per-
-# deployment central agent and per-host compute agent won't do workload
-# partitioning and will only function correctly if a single instance of that
-# service is running. (string value)
-#backend_url = <None>
-
-# Number of seconds between checks to see if group membership has changed
-# (floating point value)
-#check_watchers = 10.0
-
-
-[dispatcher_gnocchi]
-
-#
-# From ceilometer
-#
-
-# DEPRECATED: Gnocchi project used to filter out samples generated by Gnocchi
-# service activity (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#filter_project = gnocchi
-
-# DEPRECATED: The archive policy to use when the dispatcher creates a new
-# metric. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#archive_policy = <None>
-
-# DEPRECATED: The Yaml file that defines mapping between samples and gnocchi
-# resources/metrics (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#resources_definition_file = gnocchi_resources.yaml
-
-# DEPRECATED: Number of seconds before request to gnocchi times out (floating
-# point value)
-# Minimum value: 0
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#request_timeout = 6.05
-
-
 [event]
 
 #
@@ -558,52 +499,6 @@
 # Tolerance of IPMI/NM polling failures before disabling this pollster. A
 # negative value indicates retrying forever. (integer value)
 #polling_retry = 3
-
-
-[matchmaker_redis]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Host to locate redis. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
-
-# DEPRECATED: Use this port to connect to redis host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
-
-# DEPRECATED: Password for Redis server (optional). (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#password =
-
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
-# [host:port, host1:port ... ] (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#sentinel_hosts =
-
-# Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
-
-# Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
-
-# Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
-
-# Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
 
 
 [meter]
@@ -622,7 +517,7 @@
 
 # List directory to find files of defining meter notifications. (multi valued)
 #meter_definitions_dirs = /etc/ceilometer/meters.d
-#meter_definitions_dirs = /build/ceilometer-XKjd36/ceilometer-10.0.1/ceilometer/data/meters.d
+#meter_definitions_dirs = /usr/src/git/ceilometer/ceilometer/data/meters.d
 
 
 [notification]
@@ -631,17 +526,22 @@
 # From ceilometer
 #
 
-# Number of queues to parallelize workload across. This value should be larger
-# than the number of active notification agents for optimal results. WARNING:
-# Once set, lowering this value may result in lost data. (integer value)
+# DEPRECATED: Number of queues to parallelize workload across. This value
+# should be larger than the number of active notification agents for optimal
+# results. WARNING: Once set, lowering this value may result in lost data.
+# (integer value)
 # Minimum value: 1
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
 #pipeline_processing_queues = 10
 
 # Acknowledge message when event persistence fails. (boolean value)
 #ack_on_event_error = true
 
-# Enable workload partitioning, allowing multiple notification agents to be run
-# simultaneously. (boolean value)
+# DEPRECATED: Enable workload partitioning, allowing multiple notification
+# agents to be run simultaneously. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
 #workload_partitioning = false
 
 # Messaging URLs to listen for notifications. Example:
@@ -690,726 +590,6 @@
 #notification_control_exchanges = aodh
 
 
-[oslo_concurrency]
-
-#
-# From oslo.concurrency
-#
-
-# Enables or disables inter-process locks. (boolean value)
-#disable_process_locking = false
-
-# Directory to use for lock files.  For security, the specified directory
-# should only be writable by the user running the processes that need locking.
-# Defaults to environment variable OSLO_LOCK_PATH. If OSLO_LOCK_PATH is not set
-# in the environment, use the Python tempfile.gettempdir function to find a
-# suitable location. If external locks are used, a lock path must be set.
-# (string value)
-#lock_path = /tmp
-
-
-[oslo_messaging_amqp]
-
-#
-# From oslo.messaging
-#
-
-# Name for the AMQP container. Must be globally unique. Defaults to a
-# generated UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string
-# value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string
-# value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional)
-# (string value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Links are detached
-# after expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-
-# robin fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply' - send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating
-# point value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The driver(s) to handle sending notifications. Possible values are
-# messaging, messagingv2, routing, log, test, noop (multi valued)
-# Deprecated group/name - [DEFAULT]/notification_driver
-#driver =
-
-# A URL representing the messaging driver to use for notifications. If not set,
-# we fall back to the same configuration used for RPC. (string value)
-# Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
-
-# AMQP topic used for OpenStack notifications. (list value)
-# Deprecated group/name - [rpc_notifier2]/topics
-# Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
-
-[oslo_messaging_rabbit]
-
-#
-# From oslo.messaging
-#
-
-# Use durable queues in AMQP. (boolean value)
-# Deprecated group/name - [DEFAULT]/amqp_durable_queues
-# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
-#amqp_durable_queues = false
-
-# Auto-delete queues in AMQP. (boolean value)
-#amqp_auto_delete = false
-
-# Enable SSL (boolean value)
-#ssl = <None>
-
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
-
-# How long to wait before reconnecting in response to an AMQP consumer cancel
-# notification. (floating point value)
-#kombu_reconnect_delay = 1.0
-
-# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
-# be used. This option may not be available in future versions. (string value)
-#kombu_compression = <None>
-
-# How long to wait for a missing client before abandoning sending its replies.
-# This value should not be longer than rpc_response_timeout. (integer value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
-#kombu_missing_consumer_retry_timeout = 60
-
-# Determines how the next RabbitMQ node is chosen in case the one we are
-# currently connected to becomes unavailable. Takes effect only if more than
-# one RabbitMQ node is provided in config. (string value)
-# Possible values:
-# round-robin - <No description provided>
-# shuffle - <No description provided>
-#kombu_failover_strategy = round-robin
-
-# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_host = localhost
-
-# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
-# value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_port = 5672
-
-# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_hosts = $rabbit_host:$rabbit_port
-
-# DEPRECATED: The RabbitMQ userid. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_userid = guest
-
-# DEPRECATED: The RabbitMQ password. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_password = guest
-
-# The RabbitMQ login method. (string value)
-# Possible values:
-# PLAIN - <No description provided>
-# AMQPLAIN - <No description provided>
-# RABBIT-CR-DEMO - <No description provided>
-#rabbit_login_method = AMQPLAIN
-
-# DEPRECATED: The RabbitMQ virtual host. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_virtual_host = /
-
-# How frequently to retry connecting with RabbitMQ. (integer value)
-#rabbit_retry_interval = 1
-
-# How long to backoff for between retries when connecting to RabbitMQ. (integer
-# value)
-#rabbit_retry_backoff = 2
-
-# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
-# (integer value)
-#rabbit_interval_max = 30
-
-# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
-# (infinite retry count). (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#rabbit_max_retries = 0
-
-# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
-# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
-# is no longer controlled by the x-ha-policy argument when declaring a queue.
-# If you just want to make sure that all queues (except those with auto-
-# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
-# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
-#rabbit_ha_queues = false
-
-# Positive integer representing duration in seconds for queue TTL (x-expires).
-# Queues which are unused for the duration of the TTL are automatically
-# deleted. The parameter affects only reply and fanout queues. (integer value)
-# Minimum value: 1
-#rabbit_transient_queues_ttl = 1800
-
-# Specifies the number of messages to prefetch. Setting to zero allows
-# unlimited messages. (integer value)
-#rabbit_qos_prefetch_count = 64
-
-# Number of seconds after which the Rabbit broker is considered down if
-# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
-# (integer value)
-#heartbeat_timeout_threshold = 60
-
-# How many times during the heartbeat_timeout_threshold we check the
-# heartbeat. (integer value)
-#heartbeat_rate = 2
-
-# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
-#fake_rabbit = false
-
-# Maximum number of channels to allow (integer value)
-#channel_max = <None>
-
-# The maximum byte size for an AMQP frame (integer value)
-#frame_max = <None>
-
-# How often to send heartbeats for consumer's connections (integer value)
-#heartbeat_interval = 3
-
-# Arguments passed to ssl.wrap_socket (dict value)
-#ssl_options = <None>
-
-# Set socket timeout in seconds for connection's socket (floating point value)
-#socket_timeout = 0.25
-
-# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
-# value)
-#tcp_user_timeout = 0.25
-
-# Set delay for reconnection to some host which has connection error (floating
-# point value)
-#host_connection_reconnect_delay = 0.25
-
-# Connection factory implementation (string value)
-# Possible values:
-# new - <No description provided>
-# single - <No description provided>
-# read_write - <No description provided>
-#connection_factory = single
-
-# Maximum number of connections to keep queued. (integer value)
-#pool_max_size = 30
-
-# Maximum number of connections to create above `pool_max_size`. (integer
-# value)
-#pool_max_overflow = 0
-
-# Default number of seconds to wait for a connection to become available
-# (integer value)
-#pool_timeout = 30
-
-# Lifetime of a connection (since creation) in seconds or None for no
-# recycling. Expired connections are closed on acquire. (integer value)
-#pool_recycle = 600
-
-# Threshold at which inactive (since release) connections are considered stale
-# in seconds or None for no staleness. Stale connections are closed on acquire.
-# (integer value)
-#pool_stale = 60
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#default_serializer_type = json
-
-# Persist notification messages. (boolean value)
-#notification_persistence = false
-
-# Exchange name for sending notifications (string value)
-#default_notification_exchange = ${control_exchange}_notification
-
-# Max number of unacknowledged messages which RabbitMQ can send to the
-# notification listener. (integer value)
-#notification_listener_prefetch_count = 100
-
-# Reconnecting retry count in case of connectivity problems while sending a
-# notification; -1 means infinite retry. (integer value)
-#default_notification_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problems while sending a
-# notification message (floating point value)
-#notification_retry_delay = 0.25
-
-# Time to live for rpc queues without consumers in seconds. (integer value)
-#rpc_queue_expiration = 60
-
-# Exchange name for sending RPC messages (string value)
-#default_rpc_exchange = ${control_exchange}_rpc
-
-# Exchange name for receiving RPC replies (string value)
-#rpc_reply_exchange = ${control_exchange}_rpc_reply
-
-# Max number of unacknowledged messages which RabbitMQ can send to the rpc
-# listener. (integer value)
-#rpc_listener_prefetch_count = 100
-
-# Max number of unacknowledged messages which RabbitMQ can send to the rpc
-# reply listener. (integer value)
-#rpc_reply_listener_prefetch_count = 100
-
-# Reconnecting retry count in case of connectivity problems while sending a
-# reply; -1 means infinite retry during rpc_timeout (integer value)
-#rpc_reply_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problems while sending a
-# reply. (floating point value)
-#rpc_reply_retry_delay = 0.25
-
-# Reconnecting retry count in case of connectivity problems while sending an
-# RPC message; -1 means infinite retry. If the actual retry attempt count is
-# not 0, the rpc request could be processed more than once (integer value)
-#default_rpc_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problems while sending an
-# RPC message (floating point value)
-#rpc_retry_delay = 0.25
-
-
-[oslo_messaging_zmq]
-
-#
-# From oslo.messaging
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option only applies in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in the zmq socket. True means the
-# queue is not kept when the server side disconnects. False means the queue
-# and messages are kept even if the server is disconnected; when the server
-# reappears, all accumulated messages are sent to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case any problems occur: a
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative value) means to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority than the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-
 [polling]
 
 #
@@ -1438,6 +618,7 @@
 # Deprecated group/name - [publisher_rpc]/metering_secret
 # Deprecated group/name - [publisher]/metering_secret
 #telemetry_secret = change this for valid signing
+telemetry_secret=opnfv_secret
 
 
 [publisher_notifier]
@@ -1475,72 +656,109 @@
 #
 # From ceilometer-auth
 #
-
-# Authentication type to load (string value)
-# Deprecated group/name - [service_credentials]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
 
 # Authentication URL (string value)
 #auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
 
 # Scope for system operations (string value)
 #system_scope = <None>
 
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-# Deprecated group/name - [service_credentials]/tenant_id
-#project_id = <None>
-
-# Project name to scope to (string value)
-# Deprecated group/name - [service_credentials]/tenant_name
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
 
 # Trust ID (string value)
 #trust_id = <None>
 
-# Optional domain ID to use with v3 and v2 parameters. It will be used for both
-# the user and project domain in v3 and ignored in v2 authentication. (string
-# value)
-#default_domain_id = <None>
-
-# Optional domain name to use with v3 API and v2 parameters. It will be used
-# for both the user and project domain in v3 and ignored in v2 authentication.
-# (string value)
-#default_domain_name = <None>
-
-# User id (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [service_credentials]/user_name
-#username = <None>
-
 # User's domain id (string value)
 #user_domain_id = <None>
+user_domain_id = default
 
 # User's domain name (string value)
 #user_domain_name = <None>
 
-# User's password (string value)
-#password = <None>
-
-# Region name to use for OpenStack service endpoints. (string value)
-# Deprecated group/name - [DEFAULT]/os_region_name
-#region_name = <None>
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = ceilometer
 
 # Type of endpoint in Identity service catalog to use for communication with
 # OpenStack services. (string value)
@@ -1553,7 +771,7 @@
 # internalURL - <No description provided>
 # adminURL - <No description provided>
 # Deprecated group/name - [service_credentials]/os_endpoint_type
-#interface = public
+interface = internal
 
 
 [service_types]
@@ -1587,59 +805,299 @@
 # Deprecated group/name - [service_types]/cinderv2
 #cinder = volumev3
 
-
-[vmware]
-
-#
-# From ceilometer
-#
-
-# IP address of the VMware vSphere host. (unknown value)
-#host_ip = 127.0.0.1
-
-# Port of the VMware vSphere host. (port value)
+[xenapi]
+
+#
+# From ceilometer
+#
+
+# URL for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_url = <None>
+
+# Username for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_username = root
+
+# Password for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_password = <None>
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+#
+# From oslo.messaging
+#
+
+# The driver(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+driver = messagingv2
+
+# A URL representing the messaging driver to use for notifications. If
+# not set, we fall back to the same configuration used for RPC.
+# (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+topics = notifications
+
+# The maximum number of attempts to re-send a notification message
+# which failed to be delivered due to a recoverable error. 0 - No
+# retry, -1 - indefinite (integer value)
+#retry = -1
+[oslo_messaging_rabbit]
+#
+# From oslo.messaging
+#
+
+# Use durable queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/amqp_durable_queues
+# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
+#amqp_durable_queues = false
+
+# Auto-delete queues in AMQP. (boolean value)
+#amqp_auto_delete = false
+
+# Enable SSL (boolean value)
+#ssl = <None>
+
+
+# How long to wait before reconnecting in response to an AMQP consumer
+# cancel notification. (floating point value)
+#kombu_reconnect_delay = 1.0
+
+# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression
+# will not be used. This option may not be available in future
+# versions. (string value)
+#kombu_compression = <None>
+
+# How long to wait for a missing client before abandoning the attempt to send
+# it its replies. This value should not be longer than rpc_response_timeout.
+# (integer value)
+# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
+#kombu_missing_consumer_retry_timeout = 60
+
+# Determines how the next RabbitMQ node is chosen in case the one we
+# are currently connected to becomes unavailable. Takes effect only if
+# more than one RabbitMQ node is provided in config. (string value)
+# Possible values:
+# round-robin - <No description provided>
+# shuffle - <No description provided>
+#kombu_failover_strategy = round-robin
+
+# DEPRECATED: The RabbitMQ broker address where a single node is used.
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_host = localhost
+
+# DEPRECATED: The RabbitMQ broker port where a single node is used.
+# (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#host_port = 443
-
-# Username of VMware vSphere. (string value)
-#host_username =
-
-# Password of VMware vSphere. (string value)
-#host_password =
-
-# CA bundle file to use in verifying the vCenter server certificate. (string
-# value)
-#ca_file = <None>
-
-# If true, the vCenter server certificate is not verified. If false, then the
-# default CA truststore is used for verification. This option is ignored if
-# "ca_file" is set. (boolean value)
-#insecure = false
-
-# Number of times a VMware vSphere API may be retried. (integer value)
-#api_retry_count = 10
-
-# Sleep time in seconds for polling an ongoing async task. (floating point
-# value)
-#task_poll_interval = 0.5
-
-# Optional vim service WSDL location e.g http://<server>/vimService.wsdl.
-# Optional over-ride to default location for bug work-arounds. (string value)
-#wsdl_location = <None>
-
-
-[xenapi]
-
-#
-# From ceilometer
-#
-
-# URL for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_url = <None>
-
-# Username for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_username = root
-
-# Password for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_password = <None>
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_port = 5672
+
+# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_hosts = $rabbit_host:$rabbit_port
+
+# DEPRECATED: The RabbitMQ userid. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_userid = guest
+
+# DEPRECATED: The RabbitMQ password. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_password = guest
+
+# The RabbitMQ login method. (string value)
+# Possible values:
+# PLAIN - <No description provided>
+# AMQPLAIN - <No description provided>
+# RABBIT-CR-DEMO - <No description provided>
+#rabbit_login_method = AMQPLAIN
+
+# DEPRECATED: The RabbitMQ virtual host. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_virtual_host = /
+
+# How frequently to retry connecting with RabbitMQ. (integer value)
+#rabbit_retry_interval = 1
+
+# How long to backoff for between retries when connecting to RabbitMQ.
+# (integer value)
+#rabbit_retry_backoff = 2
+
+# Maximum interval of RabbitMQ connection retries. Default is 30
+# seconds. (integer value)
+#rabbit_interval_max = 30
+
+# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
+# is 0 (infinite retry count). (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#rabbit_max_retries = 0
+
+# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
+# queue mirroring is no longer controlled by the x-ha-policy argument
+# when declaring a queue. If you just want to make sure that all
+# queues (except those with auto-generated names) are mirrored across
+# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
+# mode": "all"}' " (boolean value)
+#rabbit_ha_queues = false
+
+# Positive integer representing duration in seconds for queue TTL
+# (x-expires). Queues which are unused for the duration of the TTL are
+# automatically deleted. The parameter affects only reply and fanout
+# queues. (integer value)
+# Minimum value: 1
+#rabbit_transient_queues_ttl = 1800
+
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
+#rabbit_qos_prefetch_count = 64
+
+# Number of seconds after which the Rabbit broker is considered down
+# if the heartbeat's keep-alive fails (0 disables the heartbeat).
+# EXPERIMENTAL (integer value)
+#heartbeat_timeout_threshold = 60
+
+# How many times during the heartbeat_timeout_threshold we check the
+# heartbeat. (integer value)
+#heartbeat_rate = 2
+
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
+#fake_rabbit = false
+
+# Maximum number of channels to allow (integer value)
+#channel_max = <None>
+
+# The maximum byte size for an AMQP frame (integer value)
+#frame_max = <None>
+
+# How often to send heartbeats for consumer's connections (integer
+# value)
+#heartbeat_interval = 3
+
+# Arguments passed to ssl.wrap_socket (dict value)
+#ssl_options = <None>
+
+# Set socket timeout in seconds for connection's socket (floating
+# point value)
+#socket_timeout = 0.25
+
+# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
+# point value)
+#tcp_user_timeout = 0.25
+
+# Set delay for reconnection to some host which has connection error
+# (floating point value)
+#host_connection_reconnect_delay = 0.25
+
+# Connection factory implementation (string value)
+# Possible values:
+# new - <No description provided>
+# single - <No description provided>
+# read_write - <No description provided>
+#connection_factory = single
+
+# Maximum number of connections to keep queued. (integer value)
+#pool_max_size = 30
+
+# Maximum number of connections to create above `pool_max_size`.
+# (integer value)
+#pool_max_overflow = 0
+
+# Default number of seconds to wait for a connection to become available
+# (integer value)
+#pool_timeout = 30
+
+# Lifetime of a connection (since creation) in seconds or None for no
+# recycling. Expired connections are closed on acquire. (integer
+# value)
+#pool_recycle = 600
+
+# Threshold at which inactive (since release) connections are
+# considered stale in seconds or None for no staleness. Stale
+# connections are closed on acquire. (integer value)
+#pool_stale = 60
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#default_serializer_type = json
+
+# Persist notification messages. (boolean value)
+#notification_persistence = false
+
+# Exchange name for sending notifications (string value)
+#default_notification_exchange = ${control_exchange}_notification
+
+# Max number of unacknowledged messages which RabbitMQ can send to the
+# notification listener. (integer value)
+#notification_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending notification, -1 means infinite retry. (integer value)
+#default_notification_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending notification message (floating point value)
+#notification_retry_delay = 0.25
+
+# Time to live for rpc queues without consumers in seconds. (integer
+# value)
+#rpc_queue_expiration = 60
+
+# Exchange name for sending RPC messages (string value)
+#default_rpc_exchange = ${control_exchange}_rpc
+
+# Exchange name for receiving RPC replies (string value)
+#rpc_reply_exchange = ${control_exchange}_rpc_reply
+
+# Max number of unacknowledged messages which RabbitMQ can send to the
+# rpc listener. (integer value)
+#rpc_listener_prefetch_count = 100
+
+# Max number of unacknowledged messages which RabbitMQ can send to the
+# rpc reply listener. (integer value)
+#rpc_reply_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending reply. -1 means infinite retry during rpc_timeout (integer
+# value)
+#rpc_reply_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending reply. (floating point value)
+#rpc_reply_retry_delay = 0.25
+
+# Reconnecting retry count in case of a connectivity problem during
+# sending an RPC message; -1 means infinite retry. If the actual retry
+# count is not 0, the rpc request could be processed more than
+# once. (integer value)
+#default_rpc_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending RPC message (floating point value)
+#rpc_retry_delay = 0.25
+

2018-09-01 23:26:02,308 [salt.state       :1941][INFO    ][8332] Completed state [/etc/ceilometer/ceilometer.conf] at time 23:26:02.307998 duration_in_ms=134.762
2018-09-01 23:26:02,308 [salt.state       :1770][INFO    ][8332] Running state [/etc/default/ceilometer-agent-compute] at time 23:26:02.308215
2018-09-01 23:26:02,308 [salt.state       :1803][INFO    ][8332] Executing state file.managed for [/etc/default/ceilometer-agent-compute]
2018-09-01 23:26:02,321 [salt.fileclient  :1215][INFO    ][8332] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/default'
2018-09-01 23:26:02,326 [salt.state       :290 ][INFO    ][8332] File changed:
New file
2018-09-01 23:26:02,326 [salt.state       :1941][INFO    ][8332] Completed state [/etc/default/ceilometer-agent-compute] at time 23:26:02.326347 duration_in_ms=18.131
2018-09-01 23:26:02,326 [salt.state       :1770][INFO    ][8332] Running state [/etc/ceilometer/pipeline.yaml] at time 23:26:02.326553
2018-09-01 23:26:02,326 [salt.state       :1803][INFO    ][8332] Executing state file.managed for [/etc/ceilometer/pipeline.yaml]
2018-09-01 23:26:02,339 [salt.fileclient  :1215][INFO    ][8332] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/queens/pipeline.yaml'
2018-09-01 23:26:02,377 [salt.state       :290 ][INFO    ][8332] File changed:
New file
2018-09-01 23:26:02,377 [salt.state       :1941][INFO    ][8332] Completed state [/etc/ceilometer/pipeline.yaml] at time 23:26:02.377242 duration_in_ms=50.689
2018-09-01 23:26:02,377 [salt.state       :1770][INFO    ][8332] Running state [/etc/ceilometer/event_pipeline.yaml] at time 23:26:02.377443
2018-09-01 23:26:02,377 [salt.state       :1803][INFO    ][8332] Executing state file.managed for [/etc/ceilometer/event_pipeline.yaml]
2018-09-01 23:26:02,390 [salt.fileclient  :1215][INFO    ][8332] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/queens/event_pipeline.yaml'
2018-09-01 23:26:02,423 [salt.state       :290 ][INFO    ][8332] File changed:
New file
2018-09-01 23:26:02,423 [salt.state       :1941][INFO    ][8332] Completed state [/etc/ceilometer/event_pipeline.yaml] at time 23:26:02.423695 duration_in_ms=46.252
2018-09-01 23:26:02,423 [salt.state       :1770][INFO    ][8332] Running state [/etc/ceilometer/polling.yaml] at time 23:26:02.423900
2018-09-01 23:26:02,424 [salt.state       :1803][INFO    ][8332] Executing state file.managed for [/etc/ceilometer/polling.yaml]
2018-09-01 23:26:02,436 [salt.fileclient  :1215][INFO    ][8332] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/queens/polling.yaml'
2018-09-01 23:26:02,468 [salt.state       :290 ][INFO    ][8332] File changed:
--- 
+++ 
@@ -1,27 +1,7 @@
+
 ---
 sources:
-    - name: some_pollsters
-      interval: 300
+    - name: default_pollsters
+      interval: 180
       meters:
-        - cpu
-        - cpu_l3_cache
-        - memory.usage
-        - network.incoming.bytes
-        - network.incoming.packets
-        - network.outgoing.bytes
-        - network.outgoing.packets
-        - disk.device.read.bytes
-        - disk.device.read.requests
-        - disk.device.write.bytes
-        - disk.device.write.requests
-        - hardware.cpu.util
-        - hardware.memory.used
-        - hardware.memory.total
-        - hardware.memory.buffer
-        - hardware.memory.cached
-        - hardware.memory.swap.avail
-        - hardware.memory.swap.total
-        - hardware.system_stats.io.outgoing.blocks
-        - hardware.system_stats.io.incoming.blocks
-        - hardware.network.ip.incoming.datagrams
-        - hardware.network.ip.outgoing.datagrams
+        - "*"

2018-09-01 23:26:02,468 [salt.state       :1941][INFO    ][8332] Completed state [/etc/ceilometer/polling.yaml] at time 23:26:02.468767 duration_in_ms=44.867
2018-09-01 23:26:02,773 [salt.state       :1770][INFO    ][8332] Running state [ceilometer-agent-compute] at time 23:26:02.773184
2018-09-01 23:26:02,773 [salt.state       :1803][INFO    ][8332] Executing state service.running for [ceilometer-agent-compute]
2018-09-01 23:26:02,774 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['systemctl', 'status', 'ceilometer-agent-compute.service', '-n', '0'] in directory '/root'
2018-09-01 23:26:02,783 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['systemctl', 'is-active', 'ceilometer-agent-compute.service'] in directory '/root'
2018-09-01 23:26:02,788 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['systemctl', 'is-enabled', 'ceilometer-agent-compute.service'] in directory '/root'
2018-09-01 23:26:02,794 [salt.state       :290 ][INFO    ][8332] The service ceilometer-agent-compute is already running
2018-09-01 23:26:02,794 [salt.state       :1941][INFO    ][8332] Completed state [ceilometer-agent-compute] at time 23:26:02.794450 duration_in_ms=21.267
2018-09-01 23:26:02,794 [salt.state       :1770][INFO    ][8332] Running state [ceilometer-agent-compute] at time 23:26:02.794613
2018-09-01 23:26:02,794 [salt.state       :1803][INFO    ][8332] Executing state service.mod_watch for [ceilometer-agent-compute]
2018-09-01 23:26:02,795 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['systemctl', 'is-active', 'ceilometer-agent-compute.service'] in directory '/root'
2018-09-01 23:26:02,802 [salt.loaded.int.module.cmdmod:395 ][INFO    ][8332] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'ceilometer-agent-compute.service'] in directory '/root'
2018-09-01 23:26:02,913 [salt.state       :290 ][INFO    ][8332] {'ceilometer-agent-compute': True}
2018-09-01 23:26:02,913 [salt.state       :1941][INFO    ][8332] Completed state [ceilometer-agent-compute] at time 23:26:02.913334 duration_in_ms=118.719
2018-09-01 23:26:02,914 [salt.minion      :1708][INFO    ][8332] Returning information for job: 20180901232539345009
2018-09-01 23:31:53,880 [salt.minion      :1307][INFO    ][3467] User sudo_ubuntu Executing command cp.push_dir with jid 20180901233153869249
2018-09-01 23:31:53,889 [salt.minion      :1431][INFO    ][9853] Starting a new job with PID 9853
