2017-09-27 08:42:25,710 [salt.cli.daemons ][WARNING ][891] IMPORTANT: Do not use md5 hashing algorithm! Please set "hash_type" to sha256 in Salt Minion config!
2017-09-27 08:42:43,958 [py.warnings      ][WARNING ][1010] /usr/lib/python2.7/dist-packages/salt/utils/templates.py:73: DeprecationWarning: Starting in 2015.5, cmd.run uses python_shell=False by default, which doesn't support shellisms (pipes, env variables, etc). cmd.run is currently aliased to cmd.shell to prevent breakage. Please switch to cmd.shell or set python_shell=True to avoid breakage in the future, when this aliasing is removed.

2017-09-27 08:42:45,468 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:42:45,472 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:42:45,475 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:42:45,478 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:42:51,138 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:42:51,238 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:42:51,371 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:42:51,378 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:42:51,384 [salt.utils.decorators][ERROR   ][1010] Exception encountered when attempting to inspect frame in dependency decorator: list index out of range
2017-09-27 08:43:00,222 [salt.loaded.int.module.cmdmod][ERROR   ][1010] Command 'while true; do salt-call saltutil.running|grep fun: && continue; salt-call --local service.restart salt-minion; break; done' failed with return code: None
2017-09-27 08:44:59,723 [salt.loaded.int.module.cmdmod][INFO    ][13226] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2017-09-27 08:44:59,735 [salt.loaded.int.module.cmdmod][INFO    ][13226] Executing command ['systemctl', 'is-enabled', 'salt-minion.service'] in directory '/root'
2017-09-27 08:44:59,752 [salt.loaded.int.module.cmdmod][INFO    ][13226] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'salt-minion.service'] in directory '/root'
2017-09-27 08:44:59,787 [salt.utils.parsers][WARNING ][891] Minion received a SIGTERM. Exiting.
2017-09-27 08:45:00,207 [salt.cli.daemons ][INFO    ][13281] Setting up the Salt Minion "ctl03.baremetal-mcp-ocata-odl-ha.local"
2017-09-27 08:45:00,771 [salt.minion      ][INFO    ][13281] Creating minion process manager
2017-09-27 08:45:00,772 [salt.cli.daemons ][WARNING ][13281] IMPORTANT: Do not use md5 hashing algorithm! Please set "hash_type" to sha256 in Salt Minion config!
2017-09-27 08:45:00,772 [salt.cli.daemons ][INFO    ][13281] The Salt Minion is starting up
2017-09-27 08:45:00,772 [salt.minion      ][INFO    ][13281] Minion is starting as user 'root'
2017-09-27 08:45:00,772 [salt.utils.event ][INFO    ][13281] Starting pull socket on /var/run/salt/minion/minion_event_c5406b34a5_pull.ipc
2017-09-27 08:45:01,667 [salt.loaded.int.module.cmdmod][INFO    ][13281] Executing command ['date', '+%z'] in directory '/root'
2017-09-27 08:45:01,717 [salt.utils.schedule][INFO    ][13281] Updating job settings for scheduled job: __mine_interval
2017-09-27 08:45:01,720 [salt.minion      ][INFO    ][13281] Added mine.update to scheduler
2017-09-27 08:45:01,751 [salt.minion      ][INFO    ][13281] Minion is ready to receive requests!
2017-09-27 08:45:02,752 [salt.utils.schedule][INFO    ][13281] Running scheduled job: __mine_interval
2017-09-27 08:46:13,931 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command state.apply with jid 20170927084613915687
2017-09-27 08:46:13,960 [salt.minion      ][INFO    ][13542] Starting a new job with PID 13542
2017-09-27 08:46:15,893 [salt.state       ][INFO    ][13542] Loading fresh modules for state activity
2017-09-27 08:46:15,931 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/init.sls'
2017-09-27 08:46:15,960 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/init.sls'
2017-09-27 08:46:16,025 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:16,125 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/env.sls'
2017-09-27 08:46:16,513 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:16,720 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/profile.sls'
2017-09-27 08:46:16,745 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:16,801 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/repo.sls'
2017-09-27 08:46:16,895 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,028 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/package.sls'
2017-09-27 08:46:17,081 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,158 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/timezone.sls'
2017-09-27 08:46:17,191 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,278 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/kernel.sls'
2017-09-27 08:46:17,320 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,404 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/cpu.sls'
2017-09-27 08:46:17,446 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,516 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/sysfs.sls'
2017-09-27 08:46:17,538 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,602 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/locale.sls'
2017-09-27 08:46:17,646 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,710 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/user.sls'
2017-09-27 08:46:17,734 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,825 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/group.sls'
2017-09-27 08:46:17,884 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:17,927 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/limit.sls'
2017-09-27 08:46:17,973 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:18,017 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/system/systemd.sls'
2017-09-27 08:46:18,049 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:18,145 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/network/init.sls'
2017-09-27 08:46:18,174 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:18,238 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/network/hostname.sls'
2017-09-27 08:46:18,256 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:18,315 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/network/host.sls'
2017-09-27 08:46:18,354 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:18,443 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/network/interface.sls'
2017-09-27 08:46:18,502 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:18,550 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/network/proxy.sls'
2017-09-27 08:46:18,564 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:18,609 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/storage/init.sls'
2017-09-27 08:46:18,635 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:18,682 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'ntp/init.sls'
2017-09-27 08:46:18,693 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'ntp/client.sls'
2017-09-27 08:46:18,718 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'ntp/map.jinja'
2017-09-27 08:46:18,749 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'ntp/server.sls'
2017-09-27 08:46:18,773 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'ntp/map.jinja'
2017-09-27 08:46:18,800 [salt.state       ][INFO    ][13542] Running state [/etc/environment] at time 08:46:18.799761
2017-09-27 08:46:18,801 [salt.state       ][INFO    ][13542] Executing state file.blockreplace for /etc/environment
2017-09-27 08:46:18,812 [salt.state       ][INFO    ][13542] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
 PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
 # SALT MANAGED VARIABLES - DO NOT EDIT - START
+# 
 # # SALT MANAGED VARIABLES - END

2017-09-27 08:46:18,813 [salt.state       ][INFO    ][13542] Completed state [/etc/environment] at time 08:46:18.812664 duration_in_ms=12.903
2017-09-27 08:46:18,813 [salt.state       ][INFO    ][13542] Running state [/etc/profile.d] at time 08:46:18.813180
2017-09-27 08:46:18,814 [salt.state       ][INFO    ][13542] Executing state file.directory for /etc/profile.d
2017-09-27 08:46:18,815 [salt.state       ][INFO    ][13542] Directory /etc/profile.d is in the correct state
2017-09-27 08:46:18,816 [salt.state       ][INFO    ][13542] Completed state [/etc/profile.d] at time 08:46:18.815471 duration_in_ms=2.291
2017-09-27 08:46:19,136 [salt.state       ][INFO    ][13542] Running state [linux_repo_prereq_pkgs] at time 08:46:19.136114
2017-09-27 08:46:19,137 [salt.state       ][INFO    ][13542] Executing state pkg.installed for linux_repo_prereq_pkgs
2017-09-27 08:46:19,137 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 08:46:19,601 [salt.state       ][INFO    ][13542] All specified packages are already installed
2017-09-27 08:46:19,601 [salt.state       ][INFO    ][13542] Completed state [linux_repo_prereq_pkgs] at time 08:46:19.601034 duration_in_ms=464.919
2017-09-27 08:46:19,601 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_holdback] at time 08:46:19.601402
2017-09-27 08:46:19,602 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_holdback
2017-09-27 08:46:19,602 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_holdback is not present
2017-09-27 08:46:19,603 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_holdback] at time 08:46:19.602473 duration_in_ms=1.071
2017-09-27 08:46:19,603 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/mirantis_openstack_holdback] at time 08:46:19.602770
2017-09-27 08:46:19,603 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/apt/preferences.d/mirantis_openstack_holdback
2017-09-27 08:46:19,627 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/preferences_repo'
2017-09-27 08:46:19,652 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:19,726 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/mirantis_openstack_holdback is in the correct state
2017-09-27 08:46:19,727 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/mirantis_openstack_holdback] at time 08:46:19.726927 duration_in_ms=124.156
2017-09-27 08:46:19,730 [salt.state       ][INFO    ][13542] Running state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-holdback main] at time 08:46:19.730157
2017-09-27 08:46:19,731 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-holdback main
2017-09-27 08:46:20,153 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'add', '/var/cache/salt/minion/extrn_files/base/mirror.fuel-infra.org/mcp-repos/ocata/xenial/archive-mcpocata.key'] in directory '/root'
2017-09-27 08:46:20,260 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:22,525 [salt.state       ][INFO    ][13542] Configured package repo 'deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-holdback main'
2017-09-27 08:46:22,526 [salt.state       ][INFO    ][13542] Completed state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-holdback main] at time 08:46:22.525645 duration_in_ms=2795.489
2017-09-27 08:46:22,526 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-uca] at time 08:46:22.526262
2017-09-27 08:46:22,527 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-uca
2017-09-27 08:46:22,528 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-uca is not present
2017-09-27 08:46:22,528 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-uca] at time 08:46:22.528049 duration_in_ms=1.787
2017-09-27 08:46:22,529 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/uca] at time 08:46:22.528572
2017-09-27 08:46:22,529 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/preferences.d/uca
2017-09-27 08:46:22,530 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/uca is not present
2017-09-27 08:46:22,530 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/uca] at time 08:46:22.530195 duration_in_ms=1.622
2017-09-27 08:46:22,532 [salt.state       ][INFO    ][13542] Running state [deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/ocata main] at time 08:46:22.531709
2017-09-27 08:46:22,532 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/ocata main
2017-09-27 08:46:22,619 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'export', 'EC4926EA'] in directory '/root'
2017-09-27 08:46:22,699 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:24,038 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084624011098
2017-09-27 08:46:24,057 [salt.minion      ][INFO    ][14814] Starting a new job with PID 14814
2017-09-27 08:46:24,074 [salt.minion      ][INFO    ][14814] Returning information for job: 20170927084624011098
2017-09-27 08:46:24,996 [salt.state       ][INFO    ][13542] Configured package repo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/ocata main'
2017-09-27 08:46:24,997 [salt.state       ][INFO    ][13542] Completed state [deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/ocata main] at time 08:46:24.997207 duration_in_ms=2465.496
2017-09-27 08:46:24,998 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_updates] at time 08:46:24.997862
2017-09-27 08:46:24,998 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_updates
2017-09-27 08:46:24,999 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_updates is not present
2017-09-27 08:46:24,1000 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_updates] at time 08:46:24.999571 duration_in_ms=1.709
2017-09-27 08:46:25,000 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/mirantis_openstack_updates] at time 08:46:25.000061
2017-09-27 08:46:25,001 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/apt/preferences.d/mirantis_openstack_updates
2017-09-27 08:46:25,021 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/preferences_repo'
2017-09-27 08:46:25,045 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:25,112 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/mirantis_openstack_updates is in the correct state
2017-09-27 08:46:25,112 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/mirantis_openstack_updates] at time 08:46:25.112374 duration_in_ms=112.313
2017-09-27 08:46:25,114 [salt.state       ][INFO    ][13542] Running state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-updates main] at time 08:46:25.113812
2017-09-27 08:46:25,114 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-updates main
2017-09-27 08:46:25,198 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'add', '/var/cache/salt/minion/extrn_files/base/mirror.fuel-infra.org/mcp-repos/ocata/xenial/archive-mcpocata.key'] in directory '/root'
2017-09-27 08:46:25,374 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:28,495 [salt.state       ][INFO    ][13542] Configured package repo 'deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-updates main'
2017-09-27 08:46:28,495 [salt.state       ][INFO    ][13542] Completed state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-updates main] at time 08:46:28.495446 duration_in_ms=3381.634
2017-09-27 08:46:28,496 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_security] at time 08:46:28.495771
2017-09-27 08:46:28,496 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_security
2017-09-27 08:46:28,496 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_security is not present
2017-09-27 08:46:28,497 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_security] at time 08:46:28.496723 duration_in_ms=0.953
2017-09-27 08:46:28,497 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/mirantis_openstack_security] at time 08:46:28.497023
2017-09-27 08:46:28,497 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/apt/preferences.d/mirantis_openstack_security
2017-09-27 08:46:28,517 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/preferences_repo'
2017-09-27 08:46:28,534 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:28,567 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/mirantis_openstack_security is in the correct state
2017-09-27 08:46:28,568 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/mirantis_openstack_security] at time 08:46:28.567736 duration_in_ms=70.712
2017-09-27 08:46:28,569 [salt.state       ][INFO    ][13542] Running state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-security main] at time 08:46:28.568529
2017-09-27 08:46:28,569 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-security main
2017-09-27 08:46:28,637 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'add', '/var/cache/salt/minion/extrn_files/base/mirror.fuel-infra.org/mcp-repos/ocata/xenial/archive-mcpocata.key'] in directory '/root'
2017-09-27 08:46:28,753 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:31,399 [salt.state       ][INFO    ][13542] Configured package repo 'deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-security main'
2017-09-27 08:46:31,400 [salt.state       ][INFO    ][13542] Completed state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-security main] at time 08:46:31.400115 duration_in_ms=2831.583
2017-09-27 08:46:31,401 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-mk_openstack] at time 08:46:31.400703
2017-09-27 08:46:31,401 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-mk_openstack
2017-09-27 08:46:31,402 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-mk_openstack is not present
2017-09-27 08:46:31,403 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mk_openstack] at time 08:46:31.402467 duration_in_ms=1.764
2017-09-27 08:46:31,403 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/mk_openstack] at time 08:46:31.402967
2017-09-27 08:46:31,404 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/apt/preferences.d/mk_openstack
2017-09-27 08:46:31,445 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/preferences_repo'
2017-09-27 08:46:31,470 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:31,514 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/mk_openstack is in the correct state
2017-09-27 08:46:31,514 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/mk_openstack] at time 08:46:31.514325 duration_in_ms=111.357
2017-09-27 08:46:31,515 [salt.state       ][INFO    ][13542] Running state [deb [arch=amd64] http://apt-mk.mirantis.com/xenial/ nightly ocata] at time 08:46:31.515443
2017-09-27 08:46:31,516 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb [arch=amd64] http://apt-mk.mirantis.com/xenial/ nightly ocata
2017-09-27 08:46:31,629 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'add', '/var/cache/salt/minion/extrn_files/base/apt-mk.mirantis.com/public.gpg'] in directory '/root'
2017-09-27 08:46:31,793 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:33,946 [salt.state       ][INFO    ][13542] Configured package repo 'deb [arch=amd64] http://apt-mk.mirantis.com/xenial/ nightly ocata'
2017-09-27 08:46:33,947 [salt.state       ][INFO    ][13542] Completed state [deb [arch=amd64] http://apt-mk.mirantis.com/xenial/ nightly ocata] at time 08:46:33.946519 duration_in_ms=2431.076
2017-09-27 08:46:33,947 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-mcp_extra] at time 08:46:33.946728
2017-09-27 08:46:33,947 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-mcp_extra
2017-09-27 08:46:33,947 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-mcp_extra is not present
2017-09-27 08:46:33,947 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mcp_extra] at time 08:46:33.947290 duration_in_ms=0.563
2017-09-27 08:46:33,947 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/mcp_extra] at time 08:46:33.947429
2017-09-27 08:46:33,948 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/apt/preferences.d/mcp_extra
2017-09-27 08:46:33,965 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/preferences_repo'
2017-09-27 08:46:33,983 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:34,015 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/mcp_extra is in the correct state
2017-09-27 08:46:34,015 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/mcp_extra] at time 08:46:34.015246 duration_in_ms=67.817
2017-09-27 08:46:34,016 [salt.state       ][INFO    ][13542] Running state [deb [arch=amd64] http://apt-mk.mirantis.com/xenial/ nightly extra] at time 08:46:34.015921
2017-09-27 08:46:34,016 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb [arch=amd64] http://apt-mk.mirantis.com/xenial/ nightly extra
2017-09-27 08:46:34,074 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'add', '/var/cache/salt/minion/extrn_files/base/apt-mk.mirantis.com/public.gpg'] in directory '/root'
2017-09-27 08:46:34,113 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084634098463
2017-09-27 08:46:34,144 [salt.minion      ][INFO    ][16763] Starting a new job with PID 16763
2017-09-27 08:46:34,158 [salt.minion      ][INFO    ][16763] Returning information for job: 20170927084634098463
2017-09-27 08:46:34,217 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:36,386 [salt.state       ][INFO    ][13542] Configured package repo 'deb [arch=amd64] http://apt-mk.mirantis.com/xenial/ nightly extra'
2017-09-27 08:46:36,387 [salt.state       ][INFO    ][13542] Completed state [deb [arch=amd64] http://apt-mk.mirantis.com/xenial/ nightly extra] at time 08:46:36.387075 duration_in_ms=2371.15
2017-09-27 08:46:36,388 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack] at time 08:46:36.387633
2017-09-27 08:46:36,388 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack
2017-09-27 08:46:36,389 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack is not present
2017-09-27 08:46:36,389 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack] at time 08:46:36.389137 duration_in_ms=1.504
2017-09-27 08:46:36,390 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/mirantis_openstack] at time 08:46:36.389595
2017-09-27 08:46:36,390 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/apt/preferences.d/mirantis_openstack
2017-09-27 08:46:36,415 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/preferences_repo'
2017-09-27 08:46:36,434 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:36,475 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/mirantis_openstack is in the correct state
2017-09-27 08:46:36,476 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/mirantis_openstack] at time 08:46:36.475780 duration_in_ms=86.184
2017-09-27 08:46:36,477 [salt.state       ][INFO    ][13542] Running state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata main] at time 08:46:36.476726
2017-09-27 08:46:36,477 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata main
2017-09-27 08:46:36,652 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'add', '/var/cache/salt/minion/extrn_files/base/mirror.fuel-infra.org/mcp-repos/ocata/xenial/archive-mcpocata.key'] in directory '/root'
2017-09-27 08:46:36,782 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:39,218 [salt.state       ][INFO    ][13542] Configured package repo 'deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata main'
2017-09-27 08:46:39,218 [salt.state       ][INFO    ][13542] Completed state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata main] at time 08:46:39.218328 duration_in_ms=2741.599
2017-09-27 08:46:39,219 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_hotfix] at time 08:46:39.218689
2017-09-27 08:46:39,219 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_hotfix
2017-09-27 08:46:39,219 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_hotfix is not present
2017-09-27 08:46:39,220 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_hotfix] at time 08:46:39.219714 duration_in_ms=1.026
2017-09-27 08:46:39,220 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/mirantis_openstack_hotfix] at time 08:46:39.220029
2017-09-27 08:46:39,220 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/apt/preferences.d/mirantis_openstack_hotfix
2017-09-27 08:46:39,239 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/preferences_repo'
2017-09-27 08:46:39,257 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:39,292 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/mirantis_openstack_hotfix is in the correct state
2017-09-27 08:46:39,292 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/mirantis_openstack_hotfix] at time 08:46:39.292060 duration_in_ms=72.031
2017-09-27 08:46:39,293 [salt.state       ][INFO    ][13542] Running state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-hotfix main] at time 08:46:39.292940
2017-09-27 08:46:39,293 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-hotfix main
2017-09-27 08:46:39,352 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'add', '/var/cache/salt/minion/extrn_files/base/mirror.fuel-infra.org/mcp-repos/ocata/xenial/archive-mcpocata.key'] in directory '/root'
2017-09-27 08:46:39,467 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:41,808 [salt.state       ][INFO    ][13542] Configured package repo 'deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-hotfix main'
2017-09-27 08:46:41,809 [salt.state       ][INFO    ][13542] Completed state [deb http://mirror.fuel-infra.org/mcp-repos/ocata/xenial ocata-hotfix main] at time 08:46:41.808906 duration_in_ms=2515.962
2017-09-27 08:46:41,810 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/99proxies-salt-salt] at time 08:46:41.809604
2017-09-27 08:46:41,810 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-salt
2017-09-27 08:46:41,811 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/99proxies-salt-salt is not present
2017-09-27 08:46:41,811 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/99proxies-salt-salt] at time 08:46:41.811276 duration_in_ms=1.672
2017-09-27 08:46:41,812 [salt.state       ][INFO    ][13542] Running state [/etc/apt/preferences.d/salt] at time 08:46:41.811761
2017-09-27 08:46:41,812 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/preferences.d/salt
2017-09-27 08:46:41,813 [salt.state       ][INFO    ][13542] File /etc/apt/preferences.d/salt is not present
2017-09-27 08:46:41,813 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/preferences.d/salt] at time 08:46:41.813319 duration_in_ms=1.557
2017-09-27 08:46:41,815 [salt.state       ][INFO    ][13542] Running state [deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2016.3 xenial main] at time 08:46:41.814889
2017-09-27 08:46:41,815 [salt.state       ][INFO    ][13542] Executing state pkgrepo.managed for deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2016.3 xenial main
2017-09-27 08:46:42,115 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-key', 'add', '/var/cache/salt/minion/extrn_files/base/repo.saltstack.com/apt/ubuntu/16.04/amd64/2016.3/SALTSTACK-GPG-KEY.pub'] in directory '/root'
2017-09-27 08:46:42,245 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:44,135 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084644119889
2017-09-27 08:46:44,163 [salt.minion      ][INFO    ][19314] Starting a new job with PID 19314
2017-09-27 08:46:44,205 [salt.minion      ][INFO    ][19314] Returning information for job: 20170927084644119889
2017-09-27 08:46:44,945 [salt.state       ][INFO    ][13542] Configured package repo 'deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2016.3 xenial main'
2017-09-27 08:46:44,946 [salt.state       ][INFO    ][13542] Completed state [deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2016.3 xenial main] at time 08:46:44.946054 duration_in_ms=3131.165
2017-09-27 08:46:44,947 [salt.state       ][INFO    ][13542] Running state [linux_extra_packages_purged] at time 08:46:44.946673
2017-09-27 08:46:44,947 [salt.state       ][INFO    ][13542] Executing state pkg.purged for linux_extra_packages_purged
2017-09-27 08:46:44,957 [salt.state       ][INFO    ][13542] All specified packages are already absent
2017-09-27 08:46:44,957 [salt.state       ][INFO    ][13542] Completed state [linux_extra_packages_purged] at time 08:46:44.957326 duration_in_ms=10.654
2017-09-27 08:46:44,958 [salt.state       ][INFO    ][13542] Running state [linux_extra_packages_latest] at time 08:46:44.957827
2017-09-27 08:46:44,958 [salt.state       ][INFO    ][13542] Executing state pkg.latest for linux_extra_packages_latest
2017-09-27 08:46:44,963 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 08:46:48,116 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-cache', '-q', 'policy', 'python-msgpack'] in directory '/root'
2017-09-27 08:46:48,162 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-cache', '-q', 'policy', 'python-pymysql'] in directory '/root'
2017-09-27 08:46:48,224 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['apt-cache', '-q', 'policy', 'mcelog'] in directory '/root'
2017-09-27 08:46:48,316 [salt.state       ][INFO    ][13542] All packages are up-to-date (mcelog, python-msgpack, python-pymysql).
2017-09-27 08:46:48,317 [salt.state       ][INFO    ][13542] Completed state [linux_extra_packages_latest] at time 08:46:48.317132 duration_in_ms=3359.302
2017-09-27 08:46:48,318 [salt.state       ][INFO    ][13542] Running state [UTC] at time 08:46:48.318450
2017-09-27 08:46:48,319 [salt.state       ][INFO    ][13542] Executing state timezone.system for UTC
2017-09-27 08:46:48,320 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['timedatectl'] in directory '/root'
2017-09-27 08:46:48,358 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['timedatectl'] in directory '/root'
2017-09-27 08:46:48,371 [salt.state       ][INFO    ][13542] Timezone UTC already set, UTC already set to UTC
2017-09-27 08:46:48,372 [salt.state       ][INFO    ][13542] Completed state [UTC] at time 08:46:48.371489 duration_in_ms=53.037
2017-09-27 08:46:48,373 [salt.state       ][INFO    ][13542] Running state [nf_conntrack] at time 08:46:48.372647
2017-09-27 08:46:48,373 [salt.state       ][INFO    ][13542] Executing state kmod.present for nf_conntrack
2017-09-27 08:46:48,374 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'lsmod' in directory '/root'
2017-09-27 08:46:48,386 [salt.state       ][INFO    ][13542] Kernel module nf_conntrack is already present
2017-09-27 08:46:48,387 [salt.state       ][INFO    ][13542] Completed state [nf_conntrack] at time 08:46:48.386504 duration_in_ms=13.855
2017-09-27 08:46:48,388 [salt.state       ][INFO    ][13542] Running state [kernel.panic] at time 08:46:48.387522
2017-09-27 08:46:48,388 [salt.state       ][INFO    ][13542] Executing state sysctl.present for kernel.panic
2017-09-27 08:46:48,399 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,422 [salt.state       ][INFO    ][13542] Sysctl value kernel.panic = 60 is already set
2017-09-27 08:46:48,423 [salt.state       ][INFO    ][13542] Completed state [kernel.panic] at time 08:46:48.422967 duration_in_ms=35.443
2017-09-27 08:46:48,424 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_keepalive_probes] at time 08:46:48.423517
2017-09-27 08:46:48,424 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_keepalive_probes
2017-09-27 08:46:48,425 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,447 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_keepalive_probes = 8 is already set
2017-09-27 08:46:48,448 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_keepalive_probes] at time 08:46:48.447902 duration_in_ms=24.381
2017-09-27 08:46:48,449 [salt.state       ][INFO    ][13542] Running state [fs.file-max] at time 08:46:48.448528
2017-09-27 08:46:48,449 [salt.state       ][INFO    ][13542] Executing state sysctl.present for fs.file-max
2017-09-27 08:46:48,450 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,469 [salt.state       ][INFO    ][13542] Sysctl value fs.file-max = 124165 is already set
2017-09-27 08:46:48,470 [salt.state       ][INFO    ][13542] Completed state [fs.file-max] at time 08:46:48.469853 duration_in_ms=21.323
2017-09-27 08:46:48,470 [salt.state       ][INFO    ][13542] Running state [net.core.somaxconn] at time 08:46:48.470447
2017-09-27 08:46:48,471 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.core.somaxconn
2017-09-27 08:46:48,472 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,490 [salt.state       ][INFO    ][13542] Sysctl value net.core.somaxconn = 4096 is already set
2017-09-27 08:46:48,490 [salt.state       ][INFO    ][13542] Completed state [net.core.somaxconn] at time 08:46:48.490333 duration_in_ms=19.884
2017-09-27 08:46:48,491 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_max_syn_backlog] at time 08:46:48.490838
2017-09-27 08:46:48,491 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_max_syn_backlog
2017-09-27 08:46:48,492 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,508 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_max_syn_backlog = 8192 is already set
2017-09-27 08:46:48,509 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_max_syn_backlog] at time 08:46:48.508820 duration_in_ms=17.98
2017-09-27 08:46:48,509 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_tw_reuse] at time 08:46:48.509322
2017-09-27 08:46:48,510 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_tw_reuse
2017-09-27 08:46:48,511 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,529 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_tw_reuse = 1 is already set
2017-09-27 08:46:48,529 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_tw_reuse] at time 08:46:48.529320 duration_in_ms=19.997
2017-09-27 08:46:48,530 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_congestion_control] at time 08:46:48.529682
2017-09-27 08:46:48,530 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_congestion_control
2017-09-27 08:46:48,531 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,549 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_congestion_control = yeah is already set
2017-09-27 08:46:48,550 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_congestion_control] at time 08:46:48.549942 duration_in_ms=20.258
2017-09-27 08:46:48,550 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_retries2] at time 08:46:48.550315
2017-09-27 08:46:48,551 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_retries2
2017-09-27 08:46:48,551 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,567 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_retries2 = 5 is already set
2017-09-27 08:46:48,568 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_retries2] at time 08:46:48.567797 duration_in_ms=17.481
2017-09-27 08:46:48,568 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_fin_timeout] at time 08:46:48.568161
2017-09-27 08:46:48,569 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_fin_timeout
2017-09-27 08:46:48,569 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,587 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_fin_timeout = 30 is already set
2017-09-27 08:46:48,587 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_fin_timeout] at time 08:46:48.587152 duration_in_ms=18.99
2017-09-27 08:46:48,588 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_slow_start_after_idle] at time 08:46:48.587515
2017-09-27 08:46:48,588 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_slow_start_after_idle
2017-09-27 08:46:48,588 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,604 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_slow_start_after_idle = 0 is already set
2017-09-27 08:46:48,604 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_slow_start_after_idle] at time 08:46:48.604069 duration_in_ms=16.552
2017-09-27 08:46:48,604 [salt.state       ][INFO    ][13542] Running state [vm.swappiness] at time 08:46:48.604443
2017-09-27 08:46:48,605 [salt.state       ][INFO    ][13542] Executing state sysctl.present for vm.swappiness
2017-09-27 08:46:48,605 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,622 [salt.state       ][INFO    ][13542] Sysctl value vm.swappiness = 10 is already set
2017-09-27 08:46:48,622 [salt.state       ][INFO    ][13542] Completed state [vm.swappiness] at time 08:46:48.622278 duration_in_ms=17.834
2017-09-27 08:46:48,623 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_keepalive_intvl] at time 08:46:48.622646
2017-09-27 08:46:48,623 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_keepalive_intvl
2017-09-27 08:46:48,624 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,639 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_keepalive_intvl = 3 is already set
2017-09-27 08:46:48,639 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_keepalive_intvl] at time 08:46:48.639015 duration_in_ms=16.368
2017-09-27 08:46:48,639 [salt.state       ][INFO    ][13542] Running state [net.ipv4.neigh.default.gc_thresh1] at time 08:46:48.639376
2017-09-27 08:46:48,640 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.neigh.default.gc_thresh1
2017-09-27 08:46:48,640 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,655 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.neigh.default.gc_thresh1 = 4096 is already set
2017-09-27 08:46:48,655 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.neigh.default.gc_thresh1] at time 08:46:48.655420 duration_in_ms=16.043
2017-09-27 08:46:48,656 [salt.state       ][INFO    ][13542] Running state [net.ipv4.neigh.default.gc_thresh2] at time 08:46:48.655803
2017-09-27 08:46:48,656 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.neigh.default.gc_thresh2
2017-09-27 08:46:48,657 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,672 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.neigh.default.gc_thresh2 = 8192 is already set
2017-09-27 08:46:48,672 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.neigh.default.gc_thresh2] at time 08:46:48.672097 duration_in_ms=16.293
2017-09-27 08:46:48,672 [salt.state       ][INFO    ][13542] Running state [net.ipv4.neigh.default.gc_thresh3] at time 08:46:48.672467
2017-09-27 08:46:48,673 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.neigh.default.gc_thresh3
2017-09-27 08:46:48,673 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,691 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.neigh.default.gc_thresh3 = 16384 is already set
2017-09-27 08:46:48,691 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.neigh.default.gc_thresh3] at time 08:46:48.691147 duration_in_ms=18.679
2017-09-27 08:46:48,692 [salt.state       ][INFO    ][13542] Running state [net.core.netdev_max_backlog] at time 08:46:48.691527
2017-09-27 08:46:48,692 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.core.netdev_max_backlog
2017-09-27 08:46:48,692 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,713 [salt.state       ][INFO    ][13542] Sysctl value net.core.netdev_max_backlog = 261144 is already set
2017-09-27 08:46:48,713 [salt.state       ][INFO    ][13542] Completed state [net.core.netdev_max_backlog] at time 08:46:48.713158 duration_in_ms=21.63
2017-09-27 08:46:48,714 [salt.state       ][INFO    ][13542] Running state [net.ipv4.tcp_keepalive_time] at time 08:46:48.713548
2017-09-27 08:46:48,714 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.ipv4.tcp_keepalive_time
2017-09-27 08:46:48,715 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,733 [salt.state       ][INFO    ][13542] Sysctl value net.ipv4.tcp_keepalive_time = 30 is already set
2017-09-27 08:46:48,733 [salt.state       ][INFO    ][13542] Completed state [net.ipv4.tcp_keepalive_time] at time 08:46:48.733293 duration_in_ms=19.743
2017-09-27 08:46:48,734 [salt.state       ][INFO    ][13542] Running state [net.nf_conntrack_max] at time 08:46:48.733900
2017-09-27 08:46:48,734 [salt.state       ][INFO    ][13542] Executing state sysctl.present for net.nf_conntrack_max
2017-09-27 08:46:48,735 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'sysctl -a' in directory '/root'
2017-09-27 08:46:48,751 [salt.state       ][INFO    ][13542] Sysctl value net.nf_conntrack_max = 1048576 is already set
2017-09-27 08:46:48,752 [salt.state       ][INFO    ][13542] Completed state [net.nf_conntrack_max] at time 08:46:48.752078 duration_in_ms=18.166
2017-09-27 08:46:48,753 [salt.state       ][INFO    ][13542] Running state [linux_sysfs_package] at time 08:46:48.752668
2017-09-27 08:46:48,753 [salt.state       ][INFO    ][13542] Executing state pkg.installed for linux_sysfs_package
2017-09-27 08:46:48,758 [salt.state       ][INFO    ][13542] All specified packages are already installed
2017-09-27 08:46:48,758 [salt.state       ][INFO    ][13542] Completed state [linux_sysfs_package] at time 08:46:48.758293 duration_in_ms=5.625
2017-09-27 08:46:48,760 [salt.state       ][INFO    ][13542] Running state [/etc/sysfs.d] at time 08:46:48.759737
2017-09-27 08:46:48,760 [salt.state       ][INFO    ][13542] Executing state file.directory for /etc/sysfs.d
2017-09-27 08:46:48,761 [salt.state       ][INFO    ][13542] Directory /etc/sysfs.d is in the correct state
2017-09-27 08:46:48,761 [salt.state       ][INFO    ][13542] Completed state [/etc/sysfs.d] at time 08:46:48.761296 duration_in_ms=1.559
2017-09-27 08:46:48,762 [salt.state       ][INFO    ][13542] Running state [ondemand] at time 08:46:48.762145
2017-09-27 08:46:48,763 [salt.state       ][INFO    ][13542] Executing state service.dead for ondemand
2017-09-27 08:46:48,763 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['systemctl', 'status', 'ondemand.service', '-n', '0'] in directory '/root'
2017-09-27 08:46:48,776 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['systemctl', 'is-active', 'ondemand.service'] in directory '/root'
2017-09-27 08:46:48,786 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['systemctl', 'is-enabled', 'ondemand.service'] in directory '/root'
2017-09-27 08:46:48,800 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'runlevel' in directory '/root'
2017-09-27 08:46:48,808 [salt.state       ][INFO    ][13542] The service ondemand is already dead
2017-09-27 08:46:48,808 [salt.state       ][INFO    ][13542] Completed state [ondemand] at time 08:46:48.807902 duration_in_ms=45.756
2017-09-27 08:46:48,809 [salt.state       ][INFO    ][13542] Running state [cs_CZ.UTF-8] at time 08:46:48.808552
2017-09-27 08:46:48,809 [salt.state       ][INFO    ][13542] Executing state locale.present for cs_CZ.UTF-8
2017-09-27 08:46:48,809 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'locale -a' in directory '/root'
2017-09-27 08:46:48,816 [salt.state       ][INFO    ][13542] Locale cs_CZ.UTF-8 is already present
2017-09-27 08:46:48,816 [salt.state       ][INFO    ][13542] Completed state [cs_CZ.UTF-8] at time 08:46:48.816427 duration_in_ms=7.872
2017-09-27 08:46:48,817 [salt.state       ][INFO    ][13542] Running state [en_US.UTF-8] at time 08:46:48.816794
2017-09-27 08:46:48,817 [salt.state       ][INFO    ][13542] Executing state locale.present for en_US.UTF-8
2017-09-27 08:46:48,818 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'locale -a' in directory '/root'
2017-09-27 08:46:48,825 [salt.state       ][INFO    ][13542] Locale en_US.UTF-8 is already present
2017-09-27 08:46:48,825 [salt.state       ][INFO    ][13542] Completed state [en_US.UTF-8] at time 08:46:48.824935 duration_in_ms=8.139
2017-09-27 08:46:48,826 [salt.state       ][INFO    ][13542] Running state [en_US.UTF-8] at time 08:46:48.826112
2017-09-27 08:46:48,826 [salt.state       ][INFO    ][13542] Executing state locale.system for en_US.UTF-8
2017-09-27 08:46:48,827 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'localectl' in directory '/root'
2017-09-27 08:46:48,862 [salt.state       ][INFO    ][13542] System locale en_US.UTF-8 already set
2017-09-27 08:46:48,862 [salt.state       ][INFO    ][13542] Completed state [en_US.UTF-8] at time 08:46:48.862355 duration_in_ms=36.24
2017-09-27 08:46:48,863 [salt.state       ][INFO    ][13542] Running state [root] at time 08:46:48.863352
2017-09-27 08:46:48,864 [salt.state       ][INFO    ][13542] Executing state user.present for root
2017-09-27 08:46:48,864 [salt.state       ][INFO    ][13542] User root is present and up to date
2017-09-27 08:46:48,865 [salt.state       ][INFO    ][13542] Completed state [root] at time 08:46:48.864645 duration_in_ms=1.293
2017-09-27 08:46:48,866 [salt.state       ][INFO    ][13542] Running state [/root] at time 08:46:48.865589
2017-09-27 08:46:48,866 [salt.state       ][INFO    ][13542] Executing state file.directory for /root
2017-09-27 08:46:48,866 [salt.state       ][INFO    ][13542] Directory /root is in the correct state
2017-09-27 08:46:48,867 [salt.state       ][INFO    ][13542] Completed state [/root] at time 08:46:48.866495 duration_in_ms=0.905
2017-09-27 08:46:48,867 [salt.state       ][INFO    ][13542] Running state [/etc/sudoers.d/90-salt-user-root] at time 08:46:48.866721
2017-09-27 08:46:48,867 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/sudoers.d/90-salt-user-root
2017-09-27 08:46:48,867 [salt.state       ][INFO    ][13542] File /etc/sudoers.d/90-salt-user-root is not present
2017-09-27 08:46:48,867 [salt.state       ][INFO    ][13542] Completed state [/etc/sudoers.d/90-salt-user-root] at time 08:46:48.867418 duration_in_ms=0.697
2017-09-27 08:46:48,868 [salt.state       ][INFO    ][13542] Running state [ubuntu] at time 08:46:48.867653
2017-09-27 08:46:48,868 [salt.state       ][INFO    ][13542] Executing state user.present for ubuntu
2017-09-27 08:46:48,869 [salt.state       ][INFO    ][13542] User ubuntu is present and up to date
2017-09-27 08:46:48,869 [salt.state       ][INFO    ][13542] Completed state [ubuntu] at time 08:46:48.868763 duration_in_ms=1.11
2017-09-27 08:46:48,870 [salt.state       ][INFO    ][13542] Running state [/home/ubuntu] at time 08:46:48.869564
2017-09-27 08:46:48,870 [salt.state       ][INFO    ][13542] Executing state file.directory for /home/ubuntu
2017-09-27 08:46:48,870 [salt.state       ][INFO    ][13542] Directory /home/ubuntu is in the correct state
2017-09-27 08:46:48,870 [salt.state       ][INFO    ][13542] Completed state [/home/ubuntu] at time 08:46:48.870434 duration_in_ms=0.87
2017-09-27 08:46:48,871 [salt.state       ][INFO    ][13542] Running state [/etc/sudoers.d/90-salt-user-ubuntu] at time 08:46:48.871119
2017-09-27 08:46:48,871 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/sudoers.d/90-salt-user-ubuntu
2017-09-27 08:46:48,889 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/sudoer'
2017-09-27 08:46:48,892 [salt.state       ][INFO    ][13542] File /etc/sudoers.d/90-salt-user-ubuntu is in the correct state
2017-09-27 08:46:48,892 [salt.state       ][INFO    ][13542] Completed state [/etc/sudoers.d/90-salt-user-ubuntu] at time 08:46:48.892238 duration_in_ms=21.118
2017-09-27 08:46:48,892 [salt.state       ][INFO    ][13542] Running state [/etc/security/limits.d/90-salt-default.conf] at time 08:46:48.892387
2017-09-27 08:46:48,893 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/security/limits.d/90-salt-default.conf
2017-09-27 08:46:48,908 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/limits.conf'
2017-09-27 08:46:48,930 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:48,984 [salt.state       ][INFO    ][13542] File /etc/security/limits.d/90-salt-default.conf is in the correct state
2017-09-27 08:46:48,984 [salt.state       ][INFO    ][13542] Completed state [/etc/security/limits.d/90-salt-default.conf] at time 08:46:48.984174 duration_in_ms=91.786
2017-09-27 08:46:48,984 [salt.state       ][INFO    ][13542] Running state [/etc/systemd/system.conf.d/90-salt.conf] at time 08:46:48.984350
2017-09-27 08:46:48,985 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/systemd/system.conf.d/90-salt.conf
2017-09-27 08:46:49,004 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/systemd.conf'
2017-09-27 08:46:49,024 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/map.jinja'
2017-09-27 08:46:49,073 [salt.state       ][INFO    ][13542] File /etc/systemd/system.conf.d/90-salt.conf is in the correct state
2017-09-27 08:46:49,073 [salt.state       ][INFO    ][13542] Completed state [/etc/systemd/system.conf.d/90-salt.conf] at time 08:46:49.072956 duration_in_ms=88.606
2017-09-27 08:46:49,074 [salt.state       ][INFO    ][13542] Running state [service.systemctl_reload] at time 08:46:49.074003
2017-09-27 08:46:49,074 [salt.state       ][INFO    ][13542] Executing state module.wait for service.systemctl_reload
2017-09-27 08:46:49,074 [salt.state       ][INFO    ][13542] No changes made for service.systemctl_reload
2017-09-27 08:46:49,074 [salt.state       ][INFO    ][13542] Completed state [service.systemctl_reload] at time 08:46:49.074464 duration_in_ms=0.461
2017-09-27 08:46:49,075 [salt.state       ][INFO    ][13542] Running state [/etc/hostname] at time 08:46:49.074595
2017-09-27 08:46:49,075 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/hostname
2017-09-27 08:46:49,092 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'linux/files/hostname'
2017-09-27 08:46:49,093 [salt.state       ][INFO    ][13542] File /etc/hostname is in the correct state
2017-09-27 08:46:49,093 [salt.state       ][INFO    ][13542] Completed state [/etc/hostname] at time 08:46:49.093337 duration_in_ms=18.74
2017-09-27 08:46:49,095 [salt.state       ][INFO    ][13542] Running state [hostname ctl03] at time 08:46:49.094486
2017-09-27 08:46:49,095 [salt.state       ][INFO    ][13542] Executing state cmd.wait for hostname ctl03
2017-09-27 08:46:49,095 [salt.state       ][INFO    ][13542] No changes made for hostname ctl03
2017-09-27 08:46:49,095 [salt.state       ][INFO    ][13542] Completed state [hostname ctl03] at time 08:46:49.094954 duration_in_ms=0.468
2017-09-27 08:46:49,095 [salt.state       ][INFO    ][13542] Running state [mdb02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.095262
2017-09-27 08:46:49,095 [salt.state       ][INFO    ][13542] Executing state host.present for mdb02.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,096 [salt.state       ][INFO    ][13542] Host mdb02.baremetal-mcp-ocata-odl-ha.local (10.167.4.77) already present
2017-09-27 08:46:49,096 [salt.state       ][INFO    ][13542] Completed state [mdb02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.095822 duration_in_ms=0.56
2017-09-27 08:46:49,096 [salt.state       ][INFO    ][13542] Running state [mdb02] at time 08:46:49.095956
2017-09-27 08:46:49,096 [salt.state       ][INFO    ][13542] Executing state host.present for mdb02
2017-09-27 08:46:49,096 [salt.state       ][INFO    ][13542] Host mdb02 (10.167.4.77) already present
2017-09-27 08:46:49,096 [salt.state       ][INFO    ][13542] Completed state [mdb02] at time 08:46:49.096469 duration_in_ms=0.512
2017-09-27 08:46:49,097 [salt.state       ][INFO    ][13542] Running state [mdb03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.096613
2017-09-27 08:46:49,097 [salt.state       ][INFO    ][13542] Executing state host.present for mdb03.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,097 [salt.state       ][INFO    ][13542] Host mdb03.baremetal-mcp-ocata-odl-ha.local (10.167.4.78) already present
2017-09-27 08:46:49,097 [salt.state       ][INFO    ][13542] Completed state [mdb03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.097141 duration_in_ms=0.529
2017-09-27 08:46:49,097 [salt.state       ][INFO    ][13542] Running state [mdb03] at time 08:46:49.097277
2017-09-27 08:46:49,097 [salt.state       ][INFO    ][13542] Executing state host.present for mdb03
2017-09-27 08:46:49,098 [salt.state       ][INFO    ][13542] Host mdb03 (10.167.4.78) already present
2017-09-27 08:46:49,098 [salt.state       ][INFO    ][13542] Completed state [mdb03] at time 08:46:49.097812 duration_in_ms=0.535
2017-09-27 08:46:49,098 [salt.state       ][INFO    ][13542] Running state [mdb01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.097946
2017-09-27 08:46:49,098 [salt.state       ][INFO    ][13542] Executing state host.present for mdb01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,098 [salt.state       ][INFO    ][13542] Host mdb01.baremetal-mcp-ocata-odl-ha.local (10.167.4.76) already present
2017-09-27 08:46:49,098 [salt.state       ][INFO    ][13542] Completed state [mdb01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.098459 duration_in_ms=0.512
2017-09-27 08:46:49,099 [salt.state       ][INFO    ][13542] Running state [mdb01] at time 08:46:49.098595
2017-09-27 08:46:49,099 [salt.state       ][INFO    ][13542] Executing state host.present for mdb01
2017-09-27 08:46:49,099 [salt.state       ][INFO    ][13542] Host mdb01 (10.167.4.76) already present
2017-09-27 08:46:49,099 [salt.state       ][INFO    ][13542] Completed state [mdb01] at time 08:46:49.099101 duration_in_ms=0.506
2017-09-27 08:46:49,099 [salt.state       ][INFO    ][13542] Running state [mdb] at time 08:46:49.099232
2017-09-27 08:46:49,099 [salt.state       ][INFO    ][13542] Executing state host.present for mdb
2017-09-27 08:46:49,100 [salt.state       ][INFO    ][13542] Host mdb (10.167.4.75) already present
2017-09-27 08:46:49,100 [salt.state       ][INFO    ][13542] Completed state [mdb] at time 08:46:49.099744 duration_in_ms=0.511
2017-09-27 08:46:49,100 [salt.state       ][INFO    ][13542] Running state [mdb.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.099876
2017-09-27 08:46:49,100 [salt.state       ][INFO    ][13542] Executing state host.present for mdb.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,100 [salt.state       ][INFO    ][13542] Host mdb.baremetal-mcp-ocata-odl-ha.local (10.167.4.75) already present
2017-09-27 08:46:49,100 [salt.state       ][INFO    ][13542] Completed state [mdb.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.100386 duration_in_ms=0.509
2017-09-27 08:46:49,101 [salt.state       ][INFO    ][13542] Running state [cfg01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.100529
2017-09-27 08:46:49,101 [salt.state       ][INFO    ][13542] Executing state host.present for cfg01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,101 [salt.state       ][INFO    ][13542] Host cfg01.baremetal-mcp-ocata-odl-ha.local (10.167.4.100) already present
2017-09-27 08:46:49,101 [salt.state       ][INFO    ][13542] Completed state [cfg01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.101083 duration_in_ms=0.554
2017-09-27 08:46:49,101 [salt.state       ][INFO    ][13542] Running state [cfg01] at time 08:46:49.101216
2017-09-27 08:46:49,101 [salt.state       ][INFO    ][13542] Executing state host.present for cfg01
2017-09-27 08:46:49,102 [salt.state       ][INFO    ][13542] Host cfg01 (10.167.4.100) already present
2017-09-27 08:46:49,102 [salt.state       ][INFO    ][13542] Completed state [cfg01] at time 08:46:49.101741 duration_in_ms=0.524
2017-09-27 08:46:49,102 [salt.state       ][INFO    ][13542] Running state [prx01] at time 08:46:49.101890
2017-09-27 08:46:49,102 [salt.state       ][INFO    ][13542] Executing state host.present for prx01
2017-09-27 08:46:49,102 [salt.state       ][INFO    ][13542] Host prx01 (10.167.4.81) already present
2017-09-27 08:46:49,102 [salt.state       ][INFO    ][13542] Completed state [prx01] at time 08:46:49.102422 duration_in_ms=0.531
2017-09-27 08:46:49,103 [salt.state       ][INFO    ][13542] Running state [prx01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.102579
2017-09-27 08:46:49,103 [salt.state       ][INFO    ][13542] Executing state host.present for prx01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,103 [salt.state       ][INFO    ][13542] Host prx01.baremetal-mcp-ocata-odl-ha.local (10.167.4.81) already present
2017-09-27 08:46:49,103 [salt.state       ][INFO    ][13542] Completed state [prx01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.103148 duration_in_ms=0.567
2017-09-27 08:46:49,103 [salt.state       ][INFO    ][13542] Running state [kvm01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.103294
2017-09-27 08:46:49,103 [salt.state       ][INFO    ][13542] Executing state host.present for kvm01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,104 [salt.state       ][INFO    ][13542] Host kvm01.baremetal-mcp-ocata-odl-ha.local (10.167.4.141) already present
2017-09-27 08:46:49,104 [salt.state       ][INFO    ][13542] Completed state [kvm01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.103829 duration_in_ms=0.535
2017-09-27 08:46:49,104 [salt.state       ][INFO    ][13542] Running state [kvm01] at time 08:46:49.103963
2017-09-27 08:46:49,104 [salt.state       ][INFO    ][13542] Executing state host.present for kvm01
2017-09-27 08:46:49,104 [salt.state       ][INFO    ][13542] Host kvm01 (10.167.4.141) already present
2017-09-27 08:46:49,104 [salt.state       ][INFO    ][13542] Completed state [kvm01] at time 08:46:49.104472 duration_in_ms=0.509
2017-09-27 08:46:49,105 [salt.state       ][INFO    ][13542] Running state [kvm03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.104616
2017-09-27 08:46:49,105 [salt.state       ][INFO    ][13542] Executing state host.present for kvm03.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,105 [salt.state       ][INFO    ][13542] Host kvm03.baremetal-mcp-ocata-odl-ha.local (10.167.4.143) already present
2017-09-27 08:46:49,105 [salt.state       ][INFO    ][13542] Completed state [kvm03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.105148 duration_in_ms=0.531
2017-09-27 08:46:49,105 [salt.state       ][INFO    ][13542] Running state [kvm03] at time 08:46:49.105282
2017-09-27 08:46:49,105 [salt.state       ][INFO    ][13542] Executing state host.present for kvm03
2017-09-27 08:46:49,106 [salt.state       ][INFO    ][13542] Host kvm03 (10.167.4.143) already present
2017-09-27 08:46:49,106 [salt.state       ][INFO    ][13542] Completed state [kvm03] at time 08:46:49.105827 duration_in_ms=0.544
2017-09-27 08:46:49,106 [salt.state       ][INFO    ][13542] Running state [kvm02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.105967
2017-09-27 08:46:49,106 [salt.state       ][INFO    ][13542] Executing state host.present for kvm02.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,107 [salt.state       ][INFO    ][13542] Host kvm02.baremetal-mcp-ocata-odl-ha.local (10.167.4.142) already present
2017-09-27 08:46:49,107 [salt.state       ][INFO    ][13542] Completed state [kvm02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.106834 duration_in_ms=0.867
2017-09-27 08:46:49,107 [salt.state       ][INFO    ][13542] Running state [kvm02] at time 08:46:49.106997
2017-09-27 08:46:49,107 [salt.state       ][INFO    ][13542] Executing state host.present for kvm02
2017-09-27 08:46:49,108 [salt.state       ][INFO    ][13542] Host kvm02 (10.167.4.142) already present
2017-09-27 08:46:49,108 [salt.state       ][INFO    ][13542] Completed state [kvm02] at time 08:46:49.107610 duration_in_ms=0.613
2017-09-27 08:46:49,108 [salt.state       ][INFO    ][13542] Running state [dbs] at time 08:46:49.107749
2017-09-27 08:46:49,108 [salt.state       ][INFO    ][13542] Executing state host.present for dbs
2017-09-27 08:46:49,108 [salt.state       ][INFO    ][13542] Host dbs (10.167.4.50) already present
2017-09-27 08:46:49,108 [salt.state       ][INFO    ][13542] Completed state [dbs] at time 08:46:49.108263 duration_in_ms=0.513
2017-09-27 08:46:49,108 [salt.state       ][INFO    ][13542] Running state [dbs.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.108395
2017-09-27 08:46:49,109 [salt.state       ][INFO    ][13542] Executing state host.present for dbs.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,109 [salt.state       ][INFO    ][13542] Host dbs.baremetal-mcp-ocata-odl-ha.local (10.167.4.50) already present
2017-09-27 08:46:49,109 [salt.state       ][INFO    ][13542] Completed state [dbs.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.108962 duration_in_ms=0.567
2017-09-27 08:46:49,109 [salt.state       ][INFO    ][13542] Running state [prx.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.109103
2017-09-27 08:46:49,109 [salt.state       ][INFO    ][13542] Executing state host.present for prx.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,110 [salt.state       ][INFO    ][13542] Host prx.baremetal-mcp-ocata-odl-ha.local (10.167.4.80) already present
2017-09-27 08:46:49,110 [salt.state       ][INFO    ][13542] Completed state [prx.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.109647 duration_in_ms=0.544
2017-09-27 08:46:49,110 [salt.state       ][INFO    ][13542] Running state [prx] at time 08:46:49.109783
2017-09-27 08:46:49,110 [salt.state       ][INFO    ][13542] Executing state host.present for prx
2017-09-27 08:46:49,110 [salt.state       ][INFO    ][13542] Host prx (10.167.4.80) already present
2017-09-27 08:46:49,110 [salt.state       ][INFO    ][13542] Completed state [prx] at time 08:46:49.110290 duration_in_ms=0.507
2017-09-27 08:46:49,110 [salt.state       ][INFO    ][13542] Running state [prx02] at time 08:46:49.110429
2017-09-27 08:46:49,111 [salt.state       ][INFO    ][13542] Executing state host.present for prx02
2017-09-27 08:46:49,111 [salt.state       ][INFO    ][13542] Host prx02 (10.167.4.82) already present
2017-09-27 08:46:49,111 [salt.state       ][INFO    ][13542] Completed state [prx02] at time 08:46:49.110940 duration_in_ms=0.51
2017-09-27 08:46:49,111 [salt.state       ][INFO    ][13542] Running state [prx02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.111072
2017-09-27 08:46:49,111 [salt.state       ][INFO    ][13542] Executing state host.present for prx02.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,111 [salt.state       ][INFO    ][13542] Host prx02.baremetal-mcp-ocata-odl-ha.local (10.167.4.82) already present
2017-09-27 08:46:49,112 [salt.state       ][INFO    ][13542] Completed state [prx02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.111581 duration_in_ms=0.509
2017-09-27 08:46:49,112 [salt.state       ][INFO    ][13542] Running state [msg02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.111714
2017-09-27 08:46:49,112 [salt.state       ][INFO    ][13542] Executing state host.present for msg02.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,112 [salt.state       ][INFO    ][13542] Host msg02.baremetal-mcp-ocata-odl-ha.local (10.167.4.42) already present
2017-09-27 08:46:49,112 [salt.state       ][INFO    ][13542] Completed state [msg02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.112225 duration_in_ms=0.511
2017-09-27 08:46:49,112 [salt.state       ][INFO    ][13542] Running state [msg02] at time 08:46:49.112359
2017-09-27 08:46:49,112 [salt.state       ][INFO    ][13542] Executing state host.present for msg02
2017-09-27 08:46:49,113 [salt.state       ][INFO    ][13542] Host msg02 (10.167.4.42) already present
2017-09-27 08:46:49,113 [salt.state       ][INFO    ][13542] Completed state [msg02] at time 08:46:49.112880 duration_in_ms=0.521
2017-09-27 08:46:49,113 [salt.state       ][INFO    ][13542] Running state [msg03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.113037
2017-09-27 08:46:49,113 [salt.state       ][INFO    ][13542] Executing state host.present for msg03.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,113 [salt.state       ][INFO    ][13542] Host msg03.baremetal-mcp-ocata-odl-ha.local (10.167.4.43) already present
2017-09-27 08:46:49,114 [salt.state       ][INFO    ][13542] Completed state [msg03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.113606 duration_in_ms=0.569
2017-09-27 08:46:49,114 [salt.state       ][INFO    ][13542] Running state [msg03] at time 08:46:49.113742
2017-09-27 08:46:49,114 [salt.state       ][INFO    ][13542] Executing state host.present for msg03
2017-09-27 08:46:49,114 [salt.state       ][INFO    ][13542] Host msg03 (10.167.4.43) already present
2017-09-27 08:46:49,114 [salt.state       ][INFO    ][13542] Completed state [msg03] at time 08:46:49.114282 duration_in_ms=0.54
2017-09-27 08:46:49,114 [salt.state       ][INFO    ][13542] Running state [msg01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.114422
2017-09-27 08:46:49,115 [salt.state       ][INFO    ][13542] Executing state host.present for msg01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,115 [salt.state       ][INFO    ][13542] Host msg01.baremetal-mcp-ocata-odl-ha.local (10.167.4.41) already present
2017-09-27 08:46:49,115 [salt.state       ][INFO    ][13542] Completed state [msg01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.114935 duration_in_ms=0.512
2017-09-27 08:46:49,115 [salt.state       ][INFO    ][13542] Running state [msg01] at time 08:46:49.115070
2017-09-27 08:46:49,115 [salt.state       ][INFO    ][13542] Executing state host.present for msg01
2017-09-27 08:46:49,116 [salt.state       ][INFO    ][13542] Host msg01 (10.167.4.41) already present
2017-09-27 08:46:49,116 [salt.state       ][INFO    ][13542] Completed state [msg01] at time 08:46:49.115599 duration_in_ms=0.529
2017-09-27 08:46:49,116 [salt.state       ][INFO    ][13542] Running state [msg] at time 08:46:49.115733
2017-09-27 08:46:49,116 [salt.state       ][INFO    ][13542] Executing state host.present for msg
2017-09-27 08:46:49,116 [salt.state       ][INFO    ][13542] Host msg (10.167.4.40) already present
2017-09-27 08:46:49,116 [salt.state       ][INFO    ][13542] Completed state [msg] at time 08:46:49.116255 duration_in_ms=0.523
2017-09-27 08:46:49,116 [salt.state       ][INFO    ][13542] Running state [msg.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.116390
2017-09-27 08:46:49,117 [salt.state       ][INFO    ][13542] Executing state host.present for msg.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,117 [salt.state       ][INFO    ][13542] Host msg.baremetal-mcp-ocata-odl-ha.local (10.167.4.40) already present
2017-09-27 08:46:49,117 [salt.state       ][INFO    ][13542] Completed state [msg.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.116915 duration_in_ms=0.525
2017-09-27 08:46:49,117 [salt.state       ][INFO    ][13542] Running state [cfg01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.117079
2017-09-27 08:46:49,117 [salt.state       ][INFO    ][13542] Executing state host.present for cfg01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,118 [salt.state       ][INFO    ][13542] Host cfg01.baremetal-mcp-ocata-odl-ha.local (10.167.4.100) already present
2017-09-27 08:46:49,118 [salt.state       ][INFO    ][13542] Completed state [cfg01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.117719 duration_in_ms=0.64
2017-09-27 08:46:49,118 [salt.state       ][INFO    ][13542] Running state [cfg01] at time 08:46:49.117867
2017-09-27 08:46:49,118 [salt.state       ][INFO    ][13542] Executing state host.present for cfg01
2017-09-27 08:46:49,118 [salt.state       ][INFO    ][13542] Host cfg01 (10.167.4.100) already present
2017-09-27 08:46:49,119 [salt.state       ][INFO    ][13542] Completed state [cfg01] at time 08:46:49.118478 duration_in_ms=0.611
2017-09-27 08:46:49,119 [salt.state       ][INFO    ][13542] Running state [cmp002] at time 08:46:49.118654
2017-09-27 08:46:49,119 [salt.state       ][INFO    ][13542] Executing state host.present for cmp002
2017-09-27 08:46:49,119 [salt.state       ][INFO    ][13542] Host cmp002 (10.167.4.102) already present
2017-09-27 08:46:49,119 [salt.state       ][INFO    ][13542] Completed state [cmp002] at time 08:46:49.119192 duration_in_ms=0.537
2017-09-27 08:46:49,119 [salt.state       ][INFO    ][13542] Running state [cmp002.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.119334
2017-09-27 08:46:49,119 [salt.state       ][INFO    ][13542] Executing state host.present for cmp002.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,120 [salt.state       ][INFO    ][13542] Host cmp002.baremetal-mcp-ocata-odl-ha.local (10.167.4.102) already present
2017-09-27 08:46:49,120 [salt.state       ][INFO    ][13542] Completed state [cmp002.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.120106 duration_in_ms=0.772
2017-09-27 08:46:49,120 [salt.state       ][INFO    ][13542] Running state [cmp001] at time 08:46:49.120307
2017-09-27 08:46:49,121 [salt.state       ][INFO    ][13542] Executing state host.present for cmp001
2017-09-27 08:46:49,121 [salt.state       ][INFO    ][13542] Host cmp001 (10.167.4.101) already present
2017-09-27 08:46:49,121 [salt.state       ][INFO    ][13542] Completed state [cmp001] at time 08:46:49.121131 duration_in_ms=0.824
2017-09-27 08:46:49,121 [salt.state       ][INFO    ][13542] Running state [cmp001.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.121344
2017-09-27 08:46:49,122 [salt.state       ][INFO    ][13542] Executing state host.present for cmp001.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,122 [salt.state       ][INFO    ][13542] Host cmp001.baremetal-mcp-ocata-odl-ha.local (10.167.4.101) already present
2017-09-27 08:46:49,122 [salt.state       ][INFO    ][13542] Completed state [cmp001.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.122199 duration_in_ms=0.854
2017-09-27 08:46:49,122 [salt.state       ][INFO    ][13542] Running state [dbs01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.122425
2017-09-27 08:46:49,123 [salt.state       ][INFO    ][13542] Executing state host.present for dbs01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,123 [salt.state       ][INFO    ][13542] Host dbs01.baremetal-mcp-ocata-odl-ha.local (10.167.4.51) already present
2017-09-27 08:46:49,124 [salt.state       ][INFO    ][13542] Completed state [dbs01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.123514 duration_in_ms=1.089
2017-09-27 08:46:49,124 [salt.state       ][INFO    ][13542] Running state [dbs01] at time 08:46:49.123744
2017-09-27 08:46:49,124 [salt.state       ][INFO    ][13542] Executing state host.present for dbs01
2017-09-27 08:46:49,124 [salt.state       ][INFO    ][13542] Host dbs01 (10.167.4.51) already present
2017-09-27 08:46:49,125 [salt.state       ][INFO    ][13542] Completed state [dbs01] at time 08:46:49.124559 duration_in_ms=0.805
2017-09-27 08:46:49,125 [salt.state       ][INFO    ][13542] Running state [dbs02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.124738
2017-09-27 08:46:49,125 [salt.state       ][INFO    ][13542] Executing state host.present for dbs02.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,125 [salt.state       ][INFO    ][13542] Host dbs02.baremetal-mcp-ocata-odl-ha.local (10.167.4.52) already present
2017-09-27 08:46:49,125 [salt.state       ][INFO    ][13542] Completed state [dbs02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.125447 duration_in_ms=0.709
2017-09-27 08:46:49,126 [salt.state       ][INFO    ][13542] Running state [dbs02] at time 08:46:49.125646
2017-09-27 08:46:49,126 [salt.state       ][INFO    ][13542] Executing state host.present for dbs02
2017-09-27 08:46:49,126 [salt.state       ][INFO    ][13542] Host dbs02 (10.167.4.52) already present
2017-09-27 08:46:49,126 [salt.state       ][INFO    ][13542] Completed state [dbs02] at time 08:46:49.126354 duration_in_ms=0.707
2017-09-27 08:46:49,127 [salt.state       ][INFO    ][13542] Running state [dbs03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.126537
2017-09-27 08:46:49,127 [salt.state       ][INFO    ][13542] Executing state host.present for dbs03.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,127 [salt.state       ][INFO    ][13542] Host dbs03.baremetal-mcp-ocata-odl-ha.local (10.167.4.53) already present
2017-09-27 08:46:49,128 [salt.state       ][INFO    ][13542] Completed state [dbs03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.127606 duration_in_ms=1.068
2017-09-27 08:46:49,128 [salt.state       ][INFO    ][13542] Running state [dbs03] at time 08:46:49.127943
2017-09-27 08:46:49,128 [salt.state       ][INFO    ][13542] Executing state host.present for dbs03
2017-09-27 08:46:49,129 [salt.state       ][INFO    ][13542] Host dbs03 (10.167.4.53) already present
2017-09-27 08:46:49,129 [salt.state       ][INFO    ][13542] Completed state [dbs03] at time 08:46:49.128978 duration_in_ms=1.036
2017-09-27 08:46:49,129 [salt.state       ][INFO    ][13542] Running state [odl01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.129147
2017-09-27 08:46:49,129 [salt.state       ][INFO    ][13542] Executing state host.present for odl01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,130 [salt.state       ][INFO    ][13542] Host odl01.baremetal-mcp-ocata-odl-ha.local (10.167.4.111) already present
2017-09-27 08:46:49,130 [salt.state       ][INFO    ][13542] Completed state [odl01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.129714 duration_in_ms=0.566
2017-09-27 08:46:49,130 [salt.state       ][INFO    ][13542] Running state [odl01] at time 08:46:49.129849
2017-09-27 08:46:49,130 [salt.state       ][INFO    ][13542] Executing state host.present for odl01
2017-09-27 08:46:49,130 [salt.state       ][INFO    ][13542] Host odl01 (10.167.4.111) already present
2017-09-27 08:46:49,130 [salt.state       ][INFO    ][13542] Completed state [odl01] at time 08:46:49.130364 duration_in_ms=0.514
2017-09-27 08:46:49,131 [salt.state       ][INFO    ][13542] Running state [mas01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.130499
2017-09-27 08:46:49,131 [salt.state       ][INFO    ][13542] Executing state host.present for mas01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,131 [salt.state       ][INFO    ][13542] Host mas01.baremetal-mcp-ocata-odl-ha.local (10.167.4.3) already present
2017-09-27 08:46:49,131 [salt.state       ][INFO    ][13542] Completed state [mas01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.131008 duration_in_ms=0.509
2017-09-27 08:46:49,131 [salt.state       ][INFO    ][13542] Running state [mas01] at time 08:46:49.131140
2017-09-27 08:46:49,131 [salt.state       ][INFO    ][13542] Executing state host.present for mas01
2017-09-27 08:46:49,132 [salt.state       ][INFO    ][13542] Host mas01 (10.167.4.3) already present
2017-09-27 08:46:49,132 [salt.state       ][INFO    ][13542] Completed state [mas01] at time 08:46:49.131651 duration_in_ms=0.51
2017-09-27 08:46:49,132 [salt.state       ][INFO    ][13542] Running state [ctl02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.131784
2017-09-27 08:46:49,132 [salt.state       ][INFO    ][13542] Executing state host.present for ctl02.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,132 [salt.state       ][INFO    ][13542] Host ctl02.baremetal-mcp-ocata-odl-ha.local (10.167.4.12) already present
2017-09-27 08:46:49,132 [salt.state       ][INFO    ][13542] Completed state [ctl02.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.132368 duration_in_ms=0.583
2017-09-27 08:46:49,133 [salt.state       ][INFO    ][13542] Running state [ctl02] at time 08:46:49.132505
2017-09-27 08:46:49,133 [salt.state       ][INFO    ][13542] Executing state host.present for ctl02
2017-09-27 08:46:49,133 [salt.state       ][INFO    ][13542] Host ctl02 (10.167.4.12) already present
2017-09-27 08:46:49,133 [salt.state       ][INFO    ][13542] Completed state [ctl02] at time 08:46:49.133045 duration_in_ms=0.54
2017-09-27 08:46:49,133 [salt.state       ][INFO    ][13542] Running state [ctl03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.133205
2017-09-27 08:46:49,133 [salt.state       ][INFO    ][13542] Executing state host.present for ctl03.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,134 [salt.state       ][INFO    ][13542] Host ctl03.baremetal-mcp-ocata-odl-ha.local (10.167.4.13) already present
2017-09-27 08:46:49,134 [salt.state       ][INFO    ][13542] Completed state [ctl03.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.133789 duration_in_ms=0.584
2017-09-27 08:46:49,134 [salt.state       ][INFO    ][13542] Running state [ctl03] at time 08:46:49.133925
2017-09-27 08:46:49,134 [salt.state       ][INFO    ][13542] Executing state host.present for ctl03
2017-09-27 08:46:49,134 [salt.state       ][INFO    ][13542] Host ctl03 (10.167.4.13) already present
2017-09-27 08:46:49,134 [salt.state       ][INFO    ][13542] Completed state [ctl03] at time 08:46:49.134453 duration_in_ms=0.528
2017-09-27 08:46:49,135 [salt.state       ][INFO    ][13542] Running state [file.replace] at time 08:46:49.135229
2017-09-27 08:46:49,135 [salt.state       ][INFO    ][13542] Executing state module.run for file.replace
2017-09-27 08:46:49,536 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command 'grep -q "ctl03 ctl03.baremetal-mcp-ocata-odl-ha.local" /etc/hosts' in directory '/root'
2017-09-27 08:46:49,552 [salt.state       ][INFO    ][13542] onlyif execution failed
2017-09-27 08:46:49,594 [salt.state       ][INFO    ][13542] Completed state [file.replace] at time 08:46:49.593638 duration_in_ms=458.406
2017-09-27 08:46:49,594 [salt.state       ][INFO    ][13542] Running state [ctl01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.594116
2017-09-27 08:46:49,594 [salt.state       ][INFO    ][13542] Executing state host.present for ctl01.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,595 [salt.state       ][INFO    ][13542] Host ctl01.baremetal-mcp-ocata-odl-ha.local (10.167.4.11) already present
2017-09-27 08:46:49,595 [salt.state       ][INFO    ][13542] Completed state [ctl01.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.595129 duration_in_ms=1.013
2017-09-27 08:46:49,595 [salt.state       ][INFO    ][13542] Running state [ctl01] at time 08:46:49.595328
2017-09-27 08:46:49,596 [salt.state       ][INFO    ][13542] Executing state host.present for ctl01
2017-09-27 08:46:49,596 [salt.state       ][INFO    ][13542] Host ctl01 (10.167.4.11) already present
2017-09-27 08:46:49,596 [salt.state       ][INFO    ][13542] Completed state [ctl01] at time 08:46:49.596045 duration_in_ms=0.717
2017-09-27 08:46:49,596 [salt.state       ][INFO    ][13542] Running state [ctl] at time 08:46:49.596236
2017-09-27 08:46:49,596 [salt.state       ][INFO    ][13542] Executing state host.present for ctl
2017-09-27 08:46:49,597 [salt.state       ][INFO    ][13542] Host ctl (10.167.4.10) already present
2017-09-27 08:46:49,597 [salt.state       ][INFO    ][13542] Completed state [ctl] at time 08:46:49.596975 duration_in_ms=0.739
2017-09-27 08:46:49,597 [salt.state       ][INFO    ][13542] Running state [ctl.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.597173
2017-09-27 08:46:49,597 [salt.state       ][INFO    ][13542] Executing state host.present for ctl.baremetal-mcp-ocata-odl-ha.local
2017-09-27 08:46:49,598 [salt.state       ][INFO    ][13542] Host ctl.baremetal-mcp-ocata-odl-ha.local (10.167.4.10) already present
2017-09-27 08:46:49,598 [salt.state       ][INFO    ][13542] Completed state [ctl.baremetal-mcp-ocata-odl-ha.local] at time 08:46:49.597891 duration_in_ms=0.718
2017-09-27 08:46:49,598 [salt.state       ][INFO    ][13542] Running state [ens2] at time 08:46:49.598073
2017-09-27 08:46:49,598 [salt.state       ][INFO    ][13542] Executing state network.managed for ens2
2017-09-27 08:46:50,002 [salt.state       ][INFO    ][13542] Interface ens2 is up to date.
2017-09-27 08:46:50,002 [salt.state       ][INFO    ][13542] Completed state [ens2] at time 08:46:50.002265 duration_in_ms=404.19
2017-09-27 08:46:50,003 [salt.state       ][INFO    ][13542] Running state [ens3] at time 08:46:50.002552
2017-09-27 08:46:50,003 [salt.state       ][INFO    ][13542] Executing state network.managed for ens3
2017-09-27 08:46:50,411 [salt.state       ][INFO    ][13542] Interface ens3 is up to date.
2017-09-27 08:46:50,411 [salt.state       ][INFO    ][13542] Completed state [ens3] at time 08:46:50.411410 duration_in_ms=408.858
2017-09-27 08:46:50,412 [salt.state       ][INFO    ][13542] Running state [/etc/profile.d/proxy.sh] at time 08:46:50.411583
2017-09-27 08:46:50,412 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/profile.d/proxy.sh
2017-09-27 08:46:50,412 [salt.state       ][INFO    ][13542] File /etc/profile.d/proxy.sh is not present
2017-09-27 08:46:50,412 [salt.state       ][INFO    ][13542] Completed state [/etc/profile.d/proxy.sh] at time 08:46:50.412084 duration_in_ms=0.501
2017-09-27 08:46:50,412 [salt.state       ][INFO    ][13542] Running state [/etc/apt/apt.conf.d/95proxies] at time 08:46:50.412216
2017-09-27 08:46:50,412 [salt.state       ][INFO    ][13542] Executing state file.absent for /etc/apt/apt.conf.d/95proxies
2017-09-27 08:46:50,413 [salt.state       ][INFO    ][13542] File /etc/apt/apt.conf.d/95proxies is not present
2017-09-27 08:46:50,413 [salt.state       ][INFO    ][13542] Completed state [/etc/apt/apt.conf.d/95proxies] at time 08:46:50.412663 duration_in_ms=0.446
2017-09-27 08:46:50,413 [salt.state       ][INFO    ][13542] Running state [ntp] at time 08:46:50.412800
2017-09-27 08:46:50,413 [salt.state       ][INFO    ][13542] Executing state pkg.installed for ntp
2017-09-27 08:46:50,416 [salt.state       ][INFO    ][13542] Package ntp is already installed
2017-09-27 08:46:50,416 [salt.state       ][INFO    ][13542] Completed state [ntp] at time 08:46:50.415842 duration_in_ms=3.042
2017-09-27 08:46:50,417 [salt.state       ][INFO    ][13542] Running state [/etc/ntp.conf] at time 08:46:50.416561
2017-09-27 08:46:50,417 [salt.state       ][INFO    ][13542] Executing state file.managed for /etc/ntp.conf
2017-09-27 08:46:50,433 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'ntp/files/ntp.conf'
2017-09-27 08:46:50,465 [salt.fileclient  ][INFO    ][13542] Fetching file from saltenv 'base', ** done ** 'ntp/map.jinja'
2017-09-27 08:46:50,473 [salt.state       ][INFO    ][13542] File /etc/ntp.conf is in the correct state
2017-09-27 08:46:50,473 [salt.state       ][INFO    ][13542] Completed state [/etc/ntp.conf] at time 08:46:50.473218 duration_in_ms=56.655
2017-09-27 08:46:50,475 [salt.state       ][INFO    ][13542] Running state [ntp] at time 08:46:50.474979
2017-09-27 08:46:50,475 [salt.state       ][INFO    ][13542] Executing state service.running for ntp
2017-09-27 08:46:50,476 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['systemctl', 'status', 'ntp.service', '-n', '0'] in directory '/root'
2017-09-27 08:46:50,488 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['systemctl', 'is-active', 'ntp.service'] in directory '/root'
2017-09-27 08:46:50,499 [salt.loaded.int.module.cmdmod][INFO    ][13542] Executing command ['systemctl', 'is-enabled', 'ntp.service'] in directory '/root'
2017-09-27 08:46:50,512 [salt.state       ][INFO    ][13542] The service ntp is already running
2017-09-27 08:46:50,513 [salt.state       ][INFO    ][13542] Completed state [ntp] at time 08:46:50.512496 duration_in_ms=37.516
2017-09-27 08:46:50,515 [salt.minion      ][INFO    ][13542] Returning information for job: 20170927084613915687
2017-09-27 08:48:23,850 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command ssh.set_auth_key with jid 20170927084823834597
2017-09-27 08:48:23,879 [salt.minion      ][INFO    ][20067] Starting a new job with PID 20067
2017-09-27 08:48:23,896 [salt.minion      ][INFO    ][20067] Returning information for job: 20170927084823834597
2017-09-27 08:48:24,910 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command pkg.upgrade with jid 20170927084824893459
2017-09-27 08:48:24,948 [salt.minion      ][INFO    ][20072] Starting a new job with PID 20072
2017-09-27 08:48:24,977 [salt.loaded.int.module.cmdmod][INFO    ][20072] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 08:48:25,591 [salt.loaded.int.module.cmdmod][INFO    ][20072] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'upgrade'] in directory '/root'
2017-09-27 08:48:34,995 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084834977152
2017-09-27 08:48:35,028 [salt.minion      ][INFO    ][20087] Starting a new job with PID 20087
2017-09-27 08:48:41,959 [salt.minion      ][INFO    ][20087] Returning information for job: 20170927084834977152
2017-09-27 08:48:45,164 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084845150782
2017-09-27 08:48:45,200 [salt.minion      ][INFO    ][20107] Starting a new job with PID 20107
2017-09-27 08:48:45,218 [salt.minion      ][INFO    ][20107] Returning information for job: 20170927084845150782
2017-09-27 08:48:55,272 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084855258116
2017-09-27 08:48:55,301 [salt.minion      ][INFO    ][20143] Starting a new job with PID 20143
2017-09-27 08:48:55,321 [salt.minion      ][INFO    ][20143] Returning information for job: 20170927084855258116
2017-09-27 08:49:05,370 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084905356523
2017-09-27 08:49:05,393 [salt.minion      ][INFO    ][20253] Starting a new job with PID 20253
2017-09-27 08:49:05,411 [salt.minion      ][INFO    ][20253] Returning information for job: 20170927084905356523
2017-09-27 08:49:15,586 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084915568906
2017-09-27 08:49:15,617 [salt.minion      ][INFO    ][20258] Starting a new job with PID 20258
2017-09-27 08:49:16,713 [salt.minion      ][INFO    ][20258] Returning information for job: 20170927084915568906
2017-09-27 08:49:25,617 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084925597998
2017-09-27 08:49:25,642 [salt.minion      ][INFO    ][20353] Starting a new job with PID 20353
2017-09-27 08:49:25,661 [salt.minion      ][INFO    ][20353] Returning information for job: 20170927084925597998
2017-09-27 08:49:35,812 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084935795330
2017-09-27 08:49:35,837 [salt.minion      ][INFO    ][20376] Starting a new job with PID 20376
2017-09-27 08:49:35,985 [salt.minion      ][INFO    ][20376] Returning information for job: 20170927084935795330
2017-09-27 08:49:45,950 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084945933169
2017-09-27 08:49:45,977 [salt.minion      ][INFO    ][20409] Starting a new job with PID 20409
2017-09-27 08:49:46,017 [salt.minion      ][INFO    ][20409] Returning information for job: 20170927084945933169
2017-09-27 08:49:56,091 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927084956073378
2017-09-27 08:49:56,127 [salt.minion      ][INFO    ][20421] Starting a new job with PID 20421
2017-09-27 08:49:56,145 [salt.minion      ][INFO    ][20421] Returning information for job: 20170927084956073378
2017-09-27 08:50:06,247 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085006224773
2017-09-27 08:50:06,280 [salt.minion      ][INFO    ][20444] Starting a new job with PID 20444
2017-09-27 08:50:06,299 [salt.minion      ][INFO    ][20444] Returning information for job: 20170927085006224773
2017-09-27 08:50:16,387 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085016369727
2017-09-27 08:50:16,421 [salt.minion      ][INFO    ][20477] Starting a new job with PID 20477
2017-09-27 08:50:16,789 [salt.minion      ][INFO    ][20477] Returning information for job: 20170927085016369727
2017-09-27 08:50:26,490 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085026473225
2017-09-27 08:50:26,523 [salt.minion      ][INFO    ][20504] Starting a new job with PID 20504
2017-09-27 08:50:26,551 [salt.minion      ][INFO    ][20504] Returning information for job: 20170927085026473225
2017-09-27 08:50:36,692 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085036674370
2017-09-27 08:50:37,013 [salt.minion      ][INFO    ][20532] Starting a new job with PID 20532
2017-09-27 08:50:37,032 [salt.minion      ][INFO    ][20532] Returning information for job: 20170927085036674370
2017-09-27 08:50:46,786 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085046770557
2017-09-27 08:50:47,009 [salt.minion      ][INFO    ][20537] Starting a new job with PID 20537
2017-09-27 08:50:48,789 [salt.minion      ][INFO    ][20537] Returning information for job: 20170927085046770557
2017-09-27 08:50:56,896 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085056879607
2017-09-27 08:50:56,928 [salt.minion      ][INFO    ][20542] Starting a new job with PID 20542
2017-09-27 08:50:56,943 [salt.minion      ][INFO    ][20542] Returning information for job: 20170927085056879607
2017-09-27 08:51:07,032 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085107016198
2017-09-27 08:51:07,062 [salt.minion      ][INFO    ][20552] Starting a new job with PID 20552
2017-09-27 08:51:07,079 [salt.minion      ][INFO    ][20552] Returning information for job: 20170927085107016198
2017-09-27 08:51:17,213 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085117197468
2017-09-27 08:51:17,243 [salt.minion      ][INFO    ][20557] Starting a new job with PID 20557
2017-09-27 08:51:17,261 [salt.minion      ][INFO    ][20557] Returning information for job: 20170927085117197468
2017-09-27 08:51:27,291 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085127275514
2017-09-27 08:51:27,322 [salt.minion      ][INFO    ][20562] Starting a new job with PID 20562
2017-09-27 08:51:27,340 [salt.minion      ][INFO    ][20562] Returning information for job: 20170927085127275514
2017-09-27 08:51:37,506 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085137488682
2017-09-27 08:51:37,542 [salt.minion      ][INFO    ][20572] Starting a new job with PID 20572
2017-09-27 08:51:37,561 [salt.minion      ][INFO    ][20572] Returning information for job: 20170927085137488682
2017-09-27 08:51:47,602 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085147587739
2017-09-27 08:51:47,630 [salt.minion      ][INFO    ][20577] Starting a new job with PID 20577
2017-09-27 08:51:47,648 [salt.minion      ][INFO    ][20577] Returning information for job: 20170927085147587739
2017-09-27 08:51:57,703 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085157688879
2017-09-27 08:51:58,148 [salt.minion      ][INFO    ][20598] Starting a new job with PID 20598
2017-09-27 08:51:58,193 [salt.minion      ][INFO    ][20598] Returning information for job: 20170927085157688879
2017-09-27 08:52:07,817 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085207801836
2017-09-27 08:52:07,843 [salt.minion      ][INFO    ][20626] Starting a new job with PID 20626
2017-09-27 08:52:07,862 [salt.minion      ][INFO    ][20626] Returning information for job: 20170927085207801836
2017-09-27 08:52:17,957 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085217942962
2017-09-27 08:52:17,993 [salt.minion      ][INFO    ][20653] Starting a new job with PID 20653
2017-09-27 08:52:18,010 [salt.minion      ][INFO    ][20653] Returning information for job: 20170927085217942962
2017-09-27 08:52:28,178 [salt.minion      ][INFO    ][13281] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927085228163598
2017-09-27 08:52:28,204 [salt.minion      ][INFO    ][20691] Starting a new job with PID 20691
2017-09-27 08:52:28,221 [salt.minion      ][INFO    ][20691] Returning information for job: 20170927085228163598
2017-09-27 08:52:30,249 [salt.utils.parsers][WARNING ][13281] Minion received a SIGTERM. Exiting.
2017-09-27 08:52:30,249 [salt.cli.daemons ][INFO    ][13281] The salt minion is shutting down..
2017-09-27 08:52:30,252 [salt.cli.daemons ][INFO    ][13281] The Salt Minion is shut down
2017-09-27 08:54:32,134 [salt.cli.daemons ][INFO    ][25853] Setting up the Salt Minion "ctl03.baremetal-mcp-ocata-odl-ha.local"
2017-09-27 08:54:32,700 [salt.minion      ][INFO    ][25853] Creating minion process manager
2017-09-27 08:54:33,285 [salt.cli.daemons ][WARNING ][25853] IMPORTANT: Do not use md5 hashing algorithm! Please set "hash_type" to sha256 in Salt Minion config!
2017-09-27 08:54:33,286 [salt.cli.daemons ][INFO    ][25853] The Salt Minion is starting up
2017-09-27 08:54:33,286 [salt.minion      ][INFO    ][25853] Minion is starting as user 'root'
2017-09-27 08:54:33,288 [salt.utils.event ][INFO    ][25853] Starting pull socket on /var/run/salt/minion/minion_event_c5406b34a5_pull.ipc
2017-09-27 08:54:34,470 [salt.loaded.int.module.cmdmod][INFO    ][25853] Executing command ['date', '+%z'] in directory '/root'
2017-09-27 08:54:34,485 [salt.utils.schedule][INFO    ][25853] Updating job settings for scheduled job: __mine_interval
2017-09-27 08:54:34,488 [salt.minion      ][INFO    ][25853] Added mine.update to scheduler
2017-09-27 08:54:35,716 [salt.minion      ][INFO    ][25853] Minion is ready to receive requests!
2017-09-27 08:54:36,718 [salt.utils.schedule][INFO    ][25853] Running scheduled job: __mine_interval
2017-09-27 08:54:42,006 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command pkg.upgrade with jid 20170927085441990154
2017-09-27 08:54:42,027 [salt.minion      ][INFO    ][25954] Starting a new job with PID 25954
2017-09-27 08:54:42,050 [salt.loaded.int.module.cmdmod][INFO    ][25954] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 08:54:42,550 [salt.loaded.int.module.cmdmod][INFO    ][25954] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'upgrade'] in directory '/root'
2017-09-27 08:54:43,086 [salt.loaded.int.module.cmdmod][INFO    ][25954] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 08:54:43,123 [salt.minion      ][INFO    ][25954] Returning information for job: 20170927085441990154
2017-09-27 08:55:13,100 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command pkg.upgrade with jid 20170927085513086606
2017-09-27 08:55:13,122 [salt.minion      ][INFO    ][25973] Starting a new job with PID 25973
2017-09-27 08:55:13,145 [salt.loaded.int.module.cmdmod][INFO    ][25973] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 08:55:13,647 [salt.loaded.int.module.cmdmod][INFO    ][25973] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'upgrade'] in directory '/root'
2017-09-27 08:55:14,176 [salt.loaded.int.module.cmdmod][INFO    ][25973] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 08:55:14,213 [salt.minion      ][INFO    ][25973] Returning information for job: 20170927085513086606
2017-09-27 08:55:44,184 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command pkg.upgrade with jid 20170927085544169178
2017-09-27 08:55:44,207 [salt.minion      ][INFO    ][25993] Starting a new job with PID 25993
2017-09-27 08:55:44,233 [salt.loaded.int.module.cmdmod][INFO    ][25993] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 08:55:44,805 [salt.loaded.int.module.cmdmod][INFO    ][25993] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'upgrade'] in directory '/root'
2017-09-27 08:55:45,387 [salt.loaded.int.module.cmdmod][INFO    ][25993] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 08:55:45,411 [salt.minion      ][INFO    ][25993] Returning information for job: 20170927085544169178
2017-09-27 09:13:05,285 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927091305270778
2017-09-27 09:13:05,309 [salt.minion      ][INFO    ][26816] Starting a new job with PID 26816
2017-09-27 09:13:08,029 [salt.state       ][INFO    ][26816] Loading fresh modules for state activity
2017-09-27 09:13:08,065 [salt.fileclient  ][INFO    ][26816] Fetching file from saltenv 'base', ** done ** 'opendaylight/client.sls'
2017-09-27 09:13:08,100 [salt.fileclient  ][INFO    ][26816] Fetching file from saltenv 'base', ** done ** 'opendaylight/map.jinja'
2017-09-27 09:13:08,415 [salt.state       ][INFO    ][26816] Running state [opendaylight_client_packages] at time 09:13:08.414612
2017-09-27 09:13:08,415 [salt.state       ][INFO    ][26816] Executing state pkg.installed for opendaylight_client_packages
2017-09-27 09:13:08,415 [salt.loaded.int.module.cmdmod][INFO    ][26816] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:13:09,036 [salt.loaded.int.module.cmdmod][INFO    ][26816] Executing command ['apt-cache', '-q', 'policy', 'python-networking-odl'] in directory '/root'
2017-09-27 09:13:09,139 [salt.loaded.int.module.cmdmod][INFO    ][26816] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 09:13:11,954 [salt.loaded.int.module.cmdmod][INFO    ][26816] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-networking-odl'] in directory '/root'
2017-09-27 09:13:15,380 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091315365270
2017-09-27 09:13:15,403 [salt.minion      ][INFO    ][27369] Starting a new job with PID 27369
2017-09-27 09:13:15,419 [salt.minion      ][INFO    ][27369] Returning information for job: 20170927091315365270
2017-09-27 09:13:25,598 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091325569721
2017-09-27 09:13:25,624 [salt.minion      ][INFO    ][27485] Starting a new job with PID 27485
2017-09-27 09:13:25,643 [salt.minion      ][INFO    ][27485] Returning information for job: 20170927091325569721
2017-09-27 09:13:35,630 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091335614204
2017-09-27 09:13:35,655 [salt.minion      ][INFO    ][27613] Starting a new job with PID 27613
2017-09-27 09:13:35,671 [salt.minion      ][INFO    ][27613] Returning information for job: 20170927091335614204
2017-09-27 09:13:45,844 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091345826902
2017-09-27 09:13:45,869 [salt.minion      ][INFO    ][27714] Starting a new job with PID 27714
2017-09-27 09:13:45,927 [salt.minion      ][INFO    ][27714] Returning information for job: 20170927091345826902
2017-09-27 09:13:55,883 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091355868835
2017-09-27 09:13:55,906 [salt.minion      ][INFO    ][27854] Starting a new job with PID 27854
2017-09-27 09:13:55,924 [salt.minion      ][INFO    ][27854] Returning information for job: 20170927091355868835
2017-09-27 09:14:06,106 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091406083325
2017-09-27 09:14:06,129 [salt.minion      ][INFO    ][28003] Starting a new job with PID 28003
2017-09-27 09:14:06,146 [salt.minion      ][INFO    ][28003] Returning information for job: 20170927091406083325
2017-09-27 09:14:16,318 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091416302344
2017-09-27 09:14:16,341 [salt.minion      ][INFO    ][28145] Starting a new job with PID 28145
2017-09-27 09:14:16,363 [salt.minion      ][INFO    ][28145] Returning information for job: 20170927091416302344
2017-09-27 09:14:26,353 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091426337873
2017-09-27 09:14:26,375 [salt.minion      ][INFO    ][28279] Starting a new job with PID 28279
2017-09-27 09:14:26,388 [salt.minion      ][INFO    ][28279] Returning information for job: 20170927091426337873
2017-09-27 09:14:36,379 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091436364647
2017-09-27 09:14:36,401 [salt.minion      ][INFO    ][28425] Starting a new job with PID 28425
2017-09-27 09:14:36,418 [salt.minion      ][INFO    ][28425] Returning information for job: 20170927091436364647
2017-09-27 09:14:46,602 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091446586693
2017-09-27 09:14:46,629 [salt.minion      ][INFO    ][28665] Starting a new job with PID 28665
2017-09-27 09:14:46,654 [salt.minion      ][INFO    ][28665] Returning information for job: 20170927091446586693
2017-09-27 09:14:56,627 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091456610064
2017-09-27 09:14:56,650 [salt.minion      ][INFO    ][28788] Starting a new job with PID 28788
2017-09-27 09:14:56,666 [salt.minion      ][INFO    ][28788] Returning information for job: 20170927091456610064
2017-09-27 09:15:06,644 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091506630442
2017-09-27 09:15:06,669 [salt.minion      ][INFO    ][28798] Starting a new job with PID 28798
2017-09-27 09:15:06,688 [salt.minion      ][INFO    ][28798] Returning information for job: 20170927091506630442
2017-09-27 09:15:16,674 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091516658551
2017-09-27 09:15:16,697 [salt.minion      ][INFO    ][29006] Starting a new job with PID 29006
2017-09-27 09:15:16,720 [salt.minion      ][INFO    ][29006] Returning information for job: 20170927091516658551
2017-09-27 09:15:26,695 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091526681273
2017-09-27 09:15:26,717 [salt.minion      ][INFO    ][29346] Starting a new job with PID 29346
2017-09-27 09:15:26,736 [salt.minion      ][INFO    ][29346] Returning information for job: 20170927091526681273
2017-09-27 09:15:36,716 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091536701469
2017-09-27 09:15:36,736 [salt.minion      ][INFO    ][29639] Starting a new job with PID 29639
2017-09-27 09:15:36,761 [salt.minion      ][INFO    ][29639] Returning information for job: 20170927091536701469
2017-09-27 09:15:45,211 [salt.loaded.int.module.cmdmod][INFO    ][26816] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:15:45,256 [salt.state       ][INFO    ][26816] Made the following changes:
'python-routes' changed from 'absent' to '2.2-1ubuntu2'
'python-kombu' changed from 'absent' to '3.0.33-1ubuntu2'
'python-oslo.concurrency' changed from 'absent' to '3.18.0-1~u16.04+mcp4'
'python-sqlparse' changed from 'absent' to '0.1.18-1'
'python-pycadf' changed from 'absent' to '2.2.0-2~u16.04+mcp1'
'python-sqlalchemy' changed from 'absent' to '1.0.13+ds1-1~u16.04+mcp1'
'libtiff5' changed from 'absent' to '4.0.6-1ubuntu0.2'
'python-secretstorage' changed from 'absent' to '2.1.3-1'
'python-formencode' changed from 'absent' to '1.3.0-0ubuntu5'
'python-functools32' changed from 'absent' to '3.2.3.2-2'
'python-pastedeploy' changed from 'absent' to '1.5.2-1'
'python-migrate' changed from 'absent' to '0.10.0-3ubuntu2'
'python-cachetools' changed from 'absent' to '1.1.6-1~u16.04+mcp1'
'python-ply-yacc-3.5' changed from 'absent' to '1'
'python-blinker' changed from 'absent' to '1.3.dfsg2-1build1'
'ieee-data' changed from 'absent' to '20150531.1'
'python-roman' changed from 'absent' to '2.0.0-2'
'python-pastescript' changed from 'absent' to '1.7.5-3build1'
'python-bs4' changed from 'absent' to '4.4.1-1'
'python-tenacity' changed from 'absent' to '3.3.0-1~u16.04+mcp1'
'python-oslo.versionedobjects' changed from 'absent' to '1.21.0-1~u16.04+mcp2'
'python-setuptools' changed from 'absent' to '33.1.1-1~cloud0'
'docutils-doc' changed from 'absent' to '0.12+dfsg-1'
'python-dbus' changed from 'absent' to '1.2.0-3'
'python-fixtures' changed from 'absent' to '3.0.0-1~u16.04+mcp1'
'python-monotonic' changed from 'absent' to '0.6-2'
'python-httplib2' changed from 'absent' to '0.9.1+dfsg-1'
'python-testtools' changed from 'absent' to '1.8.1-0ubuntu1'
'python-docutils' changed from 'absent' to '0.12+dfsg-1'
'python-anyjson' changed from 'absent' to '0.3.3-1build1'
'python-prettytable' changed from 'absent' to '0.7.2-3'
'python-netaddr' changed from 'absent' to '0.7.18-1'
'python-dnspython' changed from 'absent' to '1.14.0-3~u16.04+mcp1'
'python-babel' changed from 'absent' to '2.3.4+dfsg.1-2~u16.04+mcp2'
'python2.7-paramiko' changed from 'absent' to '1'
'python-html5lib' changed from 'absent' to '0.999-4'
'python-pil' changed from 'absent' to '3.1.2-0ubuntu1.1'
'python-oslo.privsep' changed from 'absent' to '1.16.0-1~u16.04+mcp2'
'python2.7-ryu' changed from 'absent' to '1'
'python-oslo.db' changed from 'absent' to '4.17.0-1~u16.04+mcp6'
'python2.7-sqlalchemy-ext' changed from 'absent' to '1'
'python-pika' changed from 'absent' to '0.10.0-1'
'python-oslo.rootwrap' changed from 'absent' to '5.4.0-1~u16.04+mcp5'
'python-tz' changed from 'absent' to '2014.10~dfsg1-0ubuntu2'
'python-oslo.reports' changed from 'absent' to '1.17.0-1~u16.04+mcp5'
'python-extras' changed from 'absent' to '0.0.3-3'
'python-funcsigs' changed from 'absent' to '1.0.2-3~cloud0'
'python-ply-lex-3.5' changed from 'absent' to '1'
'python-scgi' changed from 'absent' to '1.13-1.1build1'
'python2.7-pil' changed from 'absent' to '1'
'python-pecan' changed from 'absent' to '1.1.2-1~u16.04+mcp1'
'python-repoze.lru' changed from 'absent' to '0.6-6'
'python-oslo.service' changed from 'absent' to '1.19.0-1~u16.04+mcp3'
'formencode-i18n' changed from 'absent' to '1.3.0-0ubuntu5'
'python2.7-testtools' changed from 'absent' to '1'
'docutils' changed from 'absent' to '1'
'python-designateclient' changed from 'absent' to '2.6.0-1~u16.04+mcp2'
'python2.7-neutron' changed from 'absent' to '1'
'python2.7-dbus' changed from 'absent' to '1'
'python-oslo.middleware' changed from 'absent' to '3.23.1-0~u16.04+mcp4'
'python-pygments' changed from 'absent' to '2.1+dfsg-1'
'python-pillow' changed from 'absent' to '1'
'libpaperg' changed from 'absent' to '1'
'python2.7-netifaces' changed from 'absent' to '1'
'liblcms2-2' changed from 'absent' to '2.6-3ubuntu2'
'docutils-common' changed from 'absent' to '0.12+dfsg-1'
'python-oslo.context' changed from 'absent' to '2.12.1-0~u16.04+mcp3'
'python-neutronclient' changed from 'absent' to '1:6.1.0-1~u16.04+mcp9'
'python-neutron-lib' changed from 'absent' to '1.1.0-1~u16.04+mcp4'
'python-oslo.cache' changed from 'absent' to '1.17.0-1~u16.04+mcp2'
'python-neutron' changed from 'absent' to '2:10.0.3-1~u16.04+mcp28'
'python2.7-pyinotify' changed from 'absent' to '1'
'python-fasteners' changed from 'absent' to '0.12.0-2ubuntu1'
'python-pyparsing' changed from 'absent' to '2.1.10+dfsg1-1~u16.04+mcp1'
'python-babel-localedata' changed from 'absent' to '2.3.4+dfsg.1-2~u16.04+mcp2'
'python-positional' changed from 'absent' to '1.1.1-3~u16.04+mcp1'
'python-singledispatch' changed from 'absent' to '3.4.0.3-2'
'python-cmd2' changed from 'absent' to '0.6.8-1'
'python-distribute' changed from 'absent' to '1'
'python-tinyrpc' changed from 'absent' to '0.5-1~u16.04+mcp1'
'python-oslo-log' changed from 'absent' to '1'
'python-iso8601' changed from 'absent' to '0.1.11-1'
'alembic' changed from 'absent' to '0.8.10-1~u16.04+mcp1'
'libwebpmux1' changed from 'absent' to '0.4.4-1'
'python-oslo.policy' changed from 'absent' to '1.18.0-1~u16.04+mcp2'
'python-paste' changed from 'absent' to '1.7.5.1-6ubuntu3'
'python-lxml' changed from 'absent' to '3.5.0-1build1'
'python-oslo.config' changed from 'absent' to '1:3.22.0-1~u16.04+mcp2'
'python-paramiko' changed from 'absent' to '2.0.0-1~u16.04+mcp1'
'python-futurist' changed from 'absent' to '0.13.0-2'
'libpaper1' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python-webob' changed from 'absent' to '1.6.2-1~u16.04+mcp1'
'python2.7-gi' changed from 'absent' to '1'
'python-linecache2' changed from 'absent' to '1.0.0-2'
'python-cffi' changed from 'absent' to '1.5.2-1ubuntu1'
'python-pastedeploy-tpl' changed from 'absent' to '1.5.2-1'
'python-oauthlib' changed from 'absent' to '1.0.3-1'
'python-oslo-db' changed from 'absent' to '1'
'python-mimeparse' changed from 'absent' to '0.1.4-1build1'
'python-gi' changed from 'absent' to '3.20.0-0ubuntu1'
'python2.7-waitress' changed from 'absent' to '1'
'python-contextlib2' changed from 'absent' to '0.5.1-1'
'libjpeg8' changed from 'absent' to '8c-2ubuntu8'
'python-oslo.serialization' changed from 'absent' to '2.16.0-1~u16.04+mcp2'
'python-oslo.utils' changed from 'absent' to '3.22.0-1~u16.04+mcp5'
'python-pika-pool' changed from 'absent' to '0.1.3-1ubuntu1'
'python-keyrings.alt' changed from 'absent' to '1.1.1-1'
'python2.7-iso8601' changed from 'absent' to '1'
'python-simplejson' changed from 'absent' to '3.8.1-1ubuntu2'
'python-wrapt' changed from 'absent' to '1.8.0-5build2'
'python-simplegeneric' changed from 'absent' to '0.8.1-1'
'python-weakrefmethod' changed from 'absent' to '1.0-1'
'python-openid' changed from 'absent' to '2.2.5-6'
'python-ryu' changed from 'absent' to '4.9-1~u16.04+mcp1'
'python2.7-cmd2' changed from 'absent' to '1'
'python-networking-odl' changed from 'absent' to '1:4.0.0-0ubuntu1~cloud0'
'python-osc-lib' changed from 'absent' to '1.3.0-1~u16.04+mcp5'
'libpaper-utils' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python-cliff' changed from 'absent' to '2.4.0-1~u16.04+mcp1'
'python-oslo.i18n' changed from 'absent' to '3.12.0-1~u16.04+mcp2'
'python-webtest' changed from 'absent' to '2.0.18-1ubuntu1'
'python-appdirs' changed from 'absent' to '1.4.0-2'
'python-alembic' changed from 'absent' to '0.8.10-1~u16.04+mcp1'
'python-statsd' changed from 'absent' to '3.2.1-2~cloud0'
'libxslt1.1' changed from 'absent' to '1.1.28-2.1ubuntu0.1'
'python-waitress' changed from 'absent' to '0.8.10-1'
'python-oslo-utils' changed from 'absent' to '1'
'python2.7-simplejson' changed from 'absent' to '1'
'python-novaclient' changed from 'absent' to '2:7.1.0-1~u16.04+mcp8'
'python-unicodecsv' changed from 'absent' to '0.14.1-1'
'python-memcache' changed from 'absent' to '1.57-1'
'python-mock' changed from 'absent' to '2.0.0-1~u16.04+mcp1'
'python-rfc3986' changed from 'absent' to '0.3.1-2~u16.04+mcp1'
'python-eventlet' changed from 'absent' to '0.18.4-1ubuntu1'
'python-unittest2' changed from 'absent' to '1.1.0-6.1'
'python2.7-pyparsing' changed from 'absent' to '1'
'python-oslo.log' changed from 'absent' to '3.20.1-0~u16.04+mcp1'
'python-pyinotify' changed from 'absent' to '0.9.6-1~u16.04+mcp1'
'libjpeg-turbo8' changed from 'absent' to '1.4.2-0ubuntu3'
'python-amqp' changed from 'absent' to '1.4.9-1'
'python-pbr' changed from 'absent' to '1.10.0-1~u16.04+mcp1'
'libwebp5' changed from 'absent' to '0.4.4-1'
'python-logutils' changed from 'absent' to '0.3.3-5'
'python-netifaces' changed from 'absent' to '0.10.4-0.1build2'
'python-decorator' changed from 'absent' to '4.0.6-1'
'python-osprofiler' changed from 'absent' to '1.4.0-1~u16.04+mcp1'
'python-os-client-config' changed from 'absent' to '1.26.0-2~u16.04+mcp2'
'python-oslo.messaging' changed from 'absent' to '5.17.1-0~u16.04+mcp10'
'python-pycparser' changed from 'absent' to '2.14+dfsg-2build1'
'python-debtcollector' changed from 'absent' to '1.3.0-2'
'python-dogpile.cache' changed from 'absent' to '0.6.2-1~u16.04+mcp1'
'python-jsonschema' changed from 'absent' to '2.5.1-4'
'python2.7-lxml' changed from 'absent' to '1'
'python-jwt' changed from 'absent' to '1.3.0-1ubuntu0.1'
'python-keystoneclient' changed from 'absent' to '1:3.10.0-1~u16.04+mcp2'
'python-greenlet' changed from 'absent' to '0.4.9-2fakesync1'
'python-sqlalchemy-ext' changed from 'absent' to '1.0.13+ds1-1~u16.04+mcp1'
'python-keystoneauth1' changed from 'absent' to '2.18.0-1~u16.04+mcp4'
'python-keyring' changed from 'absent' to '8.5.1-1~u16.04+mcp1'
'python-traceback2' changed from 'absent' to '1.4.0-3'
'python-stevedore' changed from 'absent' to '1.17.1-2~u16.04+mcp1'
'python-tempita' changed from 'absent' to '0.5.2-1build1'
'python-ply' changed from 'absent' to '3.7-1'
'python-pyroute2' changed from 'absent' to '0.4.12-2~u16.04+mcp1'
'python-posix-ipc' changed from 'absent' to '0.9.8-2build2'
'python-keystonemiddleware' changed from 'absent' to '4.14.0-1~u16.04+mcp3'
'pycadf-common' changed from 'absent' to '2.2.0-2~u16.04+mcp1'
'python-openvswitch' changed from 'absent' to '2.6.1-0~u16.04+mcp2'
'python2.7-pyroute2' changed from 'absent' to '1'
'python-oslo-rootwrap' changed from 'absent' to '1'
'python-requestsexceptions' changed from 'absent' to '1.1.2-0ubuntu1'
'python-oslo-context' changed from 'absent' to '1'
'python2.7-ply' changed from 'absent' to '1'
'libjbig0' changed from 'absent' to '2.1-3.1'

2017-09-27 09:15:45,282 [salt.state       ][INFO    ][26816] Loading fresh modules for state activity
2017-09-27 09:15:45,322 [salt.state       ][INFO    ][26816] Completed state [opendaylight_client_packages] at time 09:15:45.322072 duration_in_ms=156907.457
2017-09-27 09:15:45,331 [salt.minion      ][INFO    ][26816] Returning information for job: 20170927091305270778
2017-09-27 09:15:47,702 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command test.ping with jid 20170927091547684965
2017-09-27 09:15:47,725 [salt.minion      ][INFO    ][29858] Starting a new job with PID 29858
2017-09-27 09:15:47,790 [salt.minion      ][INFO    ][29858] Returning information for job: 20170927091547684965
2017-09-27 09:16:06,560 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927091606545971
2017-09-27 09:16:06,582 [salt.minion      ][INFO    ][29863] Starting a new job with PID 29863
2017-09-27 09:16:07,276 [salt.state       ][INFO    ][29863] Loading fresh modules for state activity
2017-09-27 09:16:07,357 [salt.fileclient  ][INFO    ][29863] Fetching file from saltenv 'base', ** done ** 'keepalived/init.sls'
2017-09-27 09:16:07,387 [salt.fileclient  ][INFO    ][29863] Fetching file from saltenv 'base', ** done ** 'keepalived/cluster.sls'
2017-09-27 09:16:07,430 [salt.fileclient  ][INFO    ][29863] Fetching file from saltenv 'base', ** done ** 'keepalived/map.jinja'
2017-09-27 09:16:08,877 [salt.state       ][INFO    ][29863] Running state [lsof] at time 09:16:08.876804
2017-09-27 09:16:08,877 [salt.state       ][INFO    ][29863] Executing state pkg.installed for lsof
2017-09-27 09:16:08,878 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:16:09,418 [salt.state       ][INFO    ][29863] Package lsof is already installed
2017-09-27 09:16:09,419 [salt.state       ][INFO    ][29863] Completed state [lsof] at time 09:16:09.418979 duration_in_ms=542.175
2017-09-27 09:16:09,419 [salt.state       ][INFO    ][29863] Running state [keepalived] at time 09:16:09.419389
2017-09-27 09:16:09,420 [salt.state       ][INFO    ][29863] Executing state pkg.installed for keepalived
2017-09-27 09:16:09,442 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 09:16:12,111 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'keepalived'] in directory '/root'
2017-09-27 09:16:16,605 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927091616589680
2017-09-27 09:16:16,632 [salt.minion      ][INFO    ][30491] Starting a new job with PID 30491
2017-09-27 09:16:16,652 [salt.minion      ][INFO    ][30491] Returning information for job: 20170927091616589680
2017-09-27 09:16:20,283 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:16:20,320 [salt.state       ][INFO    ][29863] Made the following changes:
'libsnmp30' changed from 'absent' to '5.7.3+dfsg-1ubuntu4'
'libsnmp-base' changed from 'absent' to '5.7.3+dfsg-1ubuntu4'
'keepalived' changed from 'absent' to '1:1.2.19-1ubuntu0.2'
'ipvsadm' changed from 'absent' to '1:1.28-3'
'libsensors4' changed from 'absent' to '1:3.4.0-2'

2017-09-27 09:16:20,333 [salt.state       ][INFO    ][29863] Loading fresh modules for state activity
2017-09-27 09:16:20,359 [salt.state       ][INFO    ][29863] Completed state [keepalived] at time 09:16:20.358579 duration_in_ms=10939.189
2017-09-27 09:16:20,361 [salt.state       ][INFO    ][29863] Running state [/etc/keepalived/keepalived.conf] at time 09:16:20.360625
2017-09-27 09:16:20,361 [salt.state       ][INFO    ][29863] Executing state file.managed for /etc/keepalived/keepalived.conf
2017-09-27 09:16:20,399 [salt.fileclient  ][INFO    ][29863] Fetching file from saltenv 'base', ** done ** 'keepalived/files/keepalived.conf'
2017-09-27 09:16:20,438 [salt.fileclient  ][INFO    ][29863] Fetching file from saltenv 'base', ** done ** 'keepalived/map.jinja'
2017-09-27 09:16:20,447 [salt.state       ][INFO    ][29863] File changed:
New file
2017-09-27 09:16:20,447 [salt.state       ][INFO    ][29863] Completed state [/etc/keepalived/keepalived.conf] at time 09:16:20.447326 duration_in_ms=86.7
2017-09-27 09:16:20,526 [salt.state       ][INFO    ][29863] Running state [keepalived] at time 09:16:20.525903
2017-09-27 09:16:20,527 [salt.state       ][INFO    ][29863] Executing state service.running for keepalived
2017-09-27 09:16:20,531 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemctl', 'status', 'keepalived.service', '-n', '0'] in directory '/root'
2017-09-27 09:16:20,550 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemctl', 'is-active', 'keepalived.service'] in directory '/root'
2017-09-27 09:16:20,568 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemctl', 'is-enabled', 'keepalived.service'] in directory '/root'
2017-09-27 09:16:20,582 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemctl', 'is-enabled', 'keepalived.service'] in directory '/root'
2017-09-27 09:16:20,597 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'keepalived.service'] in directory '/root'
2017-09-27 09:16:20,676 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemctl', 'is-active', 'keepalived.service'] in directory '/root'
2017-09-27 09:16:20,693 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemctl', 'is-enabled', 'keepalived.service'] in directory '/root'
2017-09-27 09:16:20,707 [salt.loaded.int.module.cmdmod][INFO    ][29863] Executing command ['systemctl', 'is-enabled', 'keepalived.service'] in directory '/root'
2017-09-27 09:16:20,719 [salt.state       ][INFO    ][29863] {'keepalived': True}
2017-09-27 09:16:20,720 [salt.state       ][INFO    ][29863] Completed state [keepalived] at time 09:16:20.719723 duration_in_ms=193.817
2017-09-27 09:16:20,721 [salt.minion      ][INFO    ][29863] Returning information for job: 20170927091606545971
2017-09-27 09:20:35,513 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command pillar.get with jid 20170927092035507772
2017-09-27 09:20:35,547 [salt.minion      ][INFO    ][30944] Starting a new job with PID 30944
2017-09-27 09:20:35,554 [salt.minion      ][INFO    ][30944] Returning information for job: 20170927092035507772
2017-09-27 09:54:18,463 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927095418456040
2017-09-27 09:54:18,485 [salt.minion      ][INFO    ][32531] Starting a new job with PID 32531
2017-09-27 09:54:21,119 [salt.state       ][INFO    ][32531] Loading fresh modules for state activity
2017-09-27 09:54:21,165 [salt.fileclient  ][INFO    ][32531] Fetching file from saltenv 'base', ** done ** 'memcached/init.sls'
2017-09-27 09:54:21,192 [salt.fileclient  ][INFO    ][32531] Fetching file from saltenv 'base', ** done ** 'memcached/server.sls'
2017-09-27 09:54:21,222 [salt.fileclient  ][INFO    ][32531] Fetching file from saltenv 'base', ** done ** 'memcached/map.jinja'
2017-09-27 09:54:22,437 [salt.state       ][INFO    ][32531] Running state [python-memcache] at time 09:54:22.436983
2017-09-27 09:54:22,437 [salt.state       ][INFO    ][32531] Executing state pkg.installed for python-memcache
2017-09-27 09:54:22,438 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:54:22,876 [salt.state       ][INFO    ][32531] Package python-memcache is already installed
2017-09-27 09:54:22,877 [salt.state       ][INFO    ][32531] Completed state [python-memcache] at time 09:54:22.876836 duration_in_ms=439.852
2017-09-27 09:54:22,877 [salt.state       ][INFO    ][32531] Running state [memcached] at time 09:54:22.877160
2017-09-27 09:54:22,877 [salt.state       ][INFO    ][32531] Executing state pkg.installed for memcached
2017-09-27 09:54:22,895 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 09:54:25,602 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'memcached'] in directory '/root'
2017-09-27 09:54:28,554 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095428539611
2017-09-27 09:54:28,576 [salt.minion      ][INFO    ][680] Starting a new job with PID 680
2017-09-27 09:54:28,597 [salt.minion      ][INFO    ][680] Returning information for job: 20170927095428539611
2017-09-27 09:54:29,979 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:54:30,028 [salt.state       ][INFO    ][32531] Made the following changes:
'memcached' changed from 'absent' to '1.4.25-2ubuntu1.2'

2017-09-27 09:54:30,052 [salt.state       ][INFO    ][32531] Loading fresh modules for state activity
2017-09-27 09:54:30,096 [salt.state       ][INFO    ][32531] Completed state [memcached] at time 09:54:30.096401 duration_in_ms=7219.238
2017-09-27 09:54:30,100 [salt.state       ][INFO    ][32531] Running state [/etc/memcached.conf] at time 09:54:30.099683
2017-09-27 09:54:30,100 [salt.state       ][INFO    ][32531] Executing state file.managed for /etc/memcached.conf
2017-09-27 09:54:30,132 [salt.fileclient  ][INFO    ][32531] Fetching file from saltenv 'base', ** done ** 'memcached/files/memcached.conf'
2017-09-27 09:54:30,156 [salt.fileclient  ][INFO    ][32531] Fetching file from saltenv 'base', ** done ** 'memcached/map.jinja'
2017-09-27 09:54:30,169 [salt.state       ][INFO    ][32531] File changed:
--- 
+++ 
@@ -1,11 +1,10 @@
+
 # memcached default config file
 # 2003 - Jay Bonci <jaybonci@debian.org>
-# This configuration file is read by the start-memcached script provided as
-# part of the Debian GNU/Linux distribution.
+# This configuration file is read by the start-memcached script provided as part of the Debian GNU/Linux distribution. 
 
 # Run memcached as a daemon. This command is implied, and is not needed for the
-# daemon to run. See the README.Debian that comes with this package for more
-# information.
+# daemon to run. See the README.Debian that comes with this package for more information.
 -d
 
 # Log memcached's output to /var/log/memcached
@@ -18,13 +17,13 @@
 # -vv
 
 # Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
-# Note that the daemon will grow to this size, but does not start out holding this much
-# memory
+# Note that the daemon will grow to this size, but does not start out holding this much memory
 -m 64
 
 # Default connection port is 11211
 -p 11211
 
+-U 11211
 # Run the daemon as root. The start-memcached will default to running as root if no
 # -u command is present in this config file
 -u memcache
@@ -32,10 +31,12 @@
 # Specify which IP address to listen on. The default is to listen on all IP addresses
 # This parameter is one of the only security measures that memcached has, so make sure
 # it's listening on a firewalled interface.
--l 127.0.0.1
+-l 0.0.0.0
 
 # Limit the number of simultaneous incoming connections. The daemon default is 1024
 # -c 1024
+# Mirantis
+-c 8192
 
 # Lock down all paged memory. Consult with the README and homepage before you do this
 # -k
@@ -45,3 +46,6 @@
 
 # Maximize core file limit
 # -r
+
+# Number of threads to use to process incoming requests.
+-t 1
2017-09-27 09:54:30,170 [salt.state       ][INFO    ][32531] Completed state [/etc/memcached.conf] at time 09:54:30.170224 duration_in_ms=70.539
2017-09-27 09:54:30,280 [salt.state       ][INFO    ][32531] Running state [memcached] at time 09:54:30.279587
2017-09-27 09:54:30,280 [salt.state       ][INFO    ][32531] Executing state service.running for memcached
2017-09-27 09:54:30,282 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['systemctl', 'status', 'memcached.service', '-n', '0'] in directory '/root'
2017-09-27 09:54:30,300 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['systemctl', 'is-active', 'memcached.service'] in directory '/root'
2017-09-27 09:54:30,315 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['systemctl', 'is-enabled', 'memcached.service'] in directory '/root'
2017-09-27 09:54:30,331 [salt.state       ][INFO    ][32531] The service memcached is already running
2017-09-27 09:54:30,331 [salt.state       ][INFO    ][32531] Completed state [memcached] at time 09:54:30.331414 duration_in_ms=51.826
2017-09-27 09:54:30,332 [salt.state       ][INFO    ][32531] Running state [memcached] at time 09:54:30.331976
2017-09-27 09:54:30,333 [salt.state       ][INFO    ][32531] Executing state service.mod_watch for memcached
2017-09-27 09:54:30,334 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['systemctl', 'is-active', 'memcached.service'] in directory '/root'
2017-09-27 09:54:30,349 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['systemctl', 'is-enabled', 'memcached.service'] in directory '/root'
2017-09-27 09:54:30,363 [salt.loaded.int.module.cmdmod][INFO    ][32531] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'memcached.service'] in directory '/root'
2017-09-27 09:54:30,406 [salt.state       ][INFO    ][32531] {'memcached': True}
2017-09-27 09:54:30,407 [salt.state       ][INFO    ][32531] Completed state [memcached] at time 09:54:30.406433 duration_in_ms=74.455
2017-09-27 09:54:30,409 [salt.minion      ][INFO    ][32531] Returning information for job: 20170927095418456040
2017-09-27 09:54:34,369 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927095434355809
2017-09-27 09:54:34,391 [salt.minion      ][INFO    ][844] Starting a new job with PID 844
2017-09-27 09:54:35,528 [salt.state       ][INFO    ][844] Loading fresh modules for state activity
2017-09-27 09:54:35,577 [salt.fileclient  ][INFO    ][844] Fetching file from saltenv 'base', ** done ** 'haproxy/init.sls'
2017-09-27 09:54:35,610 [salt.fileclient  ][INFO    ][844] Fetching file from saltenv 'base', ** done ** 'haproxy/proxy.sls'
2017-09-27 09:54:35,649 [salt.fileclient  ][INFO    ][844] Fetching file from saltenv 'base', ** done ** 'haproxy/map.jinja'
2017-09-27 09:54:36,718 [salt.utils.schedule][INFO    ][25853] Running scheduled job: __mine_interval
2017-09-27 09:54:36,858 [salt.state       ][INFO    ][844] Running state [haproxy] at time 09:54:36.858420
2017-09-27 09:54:36,859 [salt.state       ][INFO    ][844] Executing state pkg.installed for haproxy
2017-09-27 09:54:36,859 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:54:37,362 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 09:54:39,650 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'haproxy'] in directory '/root'
2017-09-27 09:54:44,462 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095444449102
2017-09-27 09:54:44,488 [salt.minion      ][INFO    ][1440] Starting a new job with PID 1440
2017-09-27 09:54:44,515 [salt.minion      ][INFO    ][1440] Returning information for job: 20170927095444449102
2017-09-27 09:54:48,853 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:54:48,885 [salt.state       ][INFO    ][844] Made the following changes:
'haproxy' changed from 'absent' to '1.6.3-1~u16.04+mcp1'
'liblua5.3-0' changed from 'absent' to '5.3.1-1ubuntu2'

2017-09-27 09:54:48,897 [salt.state       ][INFO    ][844] Loading fresh modules for state activity
2017-09-27 09:54:48,923 [salt.state       ][INFO    ][844] Completed state [haproxy] at time 09:54:48.922753 duration_in_ms=12064.334
2017-09-27 09:54:48,925 [salt.state       ][INFO    ][844] Running state [/etc/default/haproxy] at time 09:54:48.924785
2017-09-27 09:54:48,925 [salt.state       ][INFO    ][844] Executing state file.managed for /etc/default/haproxy
2017-09-27 09:54:48,950 [salt.fileclient  ][INFO    ][844] Fetching file from saltenv 'base', ** done ** 'haproxy/files/haproxy.default'
2017-09-27 09:54:48,953 [salt.state       ][INFO    ][844] File changed:
--- 
+++ 
@@ -1,10 +1,5 @@
-# Defaults file for HAProxy
-#
-# This is sourced by both, the initscript and the systemd unit file, so do not
-# treat it as a shell script fragment.
 
-# Change the config file location if needed
-#CONFIG="/etc/haproxy/haproxy.cfg"
-
-# Add extra flags here, see haproxy(1) for a few options
+# Set ENABLED to 1 if you want the init script to start haproxy.
+ENABLED=1
+# Add extra flags here.
 #EXTRAOPTS="-de -m 16"

2017-09-27 09:54:48,953 [salt.state       ][INFO    ][844] Completed state [/etc/default/haproxy] at time 09:54:48.953437 duration_in_ms=28.651
2017-09-27 09:54:48,954 [salt.state       ][INFO    ][844] Running state [/etc/haproxy/haproxy.cfg] at time 09:54:48.953785
2017-09-27 09:54:48,954 [salt.state       ][INFO    ][844] Executing state file.managed for /etc/haproxy/haproxy.cfg
2017-09-27 09:54:48,971 [salt.fileclient  ][INFO    ][844] Fetching file from saltenv 'base', ** done ** 'haproxy/files/haproxy.cfg'
2017-09-27 09:54:49,078 [salt.fileclient  ][INFO    ][844] Fetching file from saltenv 'base', ** done ** 'haproxy/map.jinja'
2017-09-27 09:54:49,098 [salt.state       ][INFO    ][844] File changed:
--- 
+++ 
@@ -1,35 +1,180 @@
 global
-	log /dev/log	local0
-	log /dev/log	local1 notice
-	chroot /var/lib/haproxy
-	stats socket /run/haproxy/admin.sock mode 660 level admin
-	stats timeout 30s
-	user haproxy
-	group haproxy
-	daemon
-
-	# Default SSL material locations
-	ca-base /etc/ssl/certs
-	crt-base /etc/ssl/private
-
-	# Default ciphers to use on SSL-enabled listening sockets.
-	# For more information, see ciphers(1SSL). This list is from:
-	#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
-	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
-	ssl-default-bind-options no-sslv3
+  log /dev/log  local0
+  log /dev/log  local1 notice
+  chroot /var/lib/haproxy
+  stats  socket /run/haproxy/admin.sock mode 660 level admin
+  stats timeout 30s
+  user  haproxy
+  group haproxy
+  daemon
+  pidfile  /var/run/haproxy.pid
+  spread-checks 4
+  tune.maxrewrite 1024
+  tune.bufsize 32768
+  maxconn  16000
+  # SSL options
+  ca-base /etc/haproxy/ssl
+  crt-base /etc/haproxy/ssl
+  tune.ssl.default-dh-param 2048
+  ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
+  ssl-default-bind-options no-sslv3 no-tls-tickets
+  ssl-default-server-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
+  ssl-default-server-options no-sslv3 no-tls-tickets
 
 defaults
-	log	global
-	mode	http
-	option	httplog
-	option	dontlognull
-        timeout connect 5000
-        timeout client  50000
-        timeout server  50000
-	errorfile 400 /etc/haproxy/errors/400.http
-	errorfile 403 /etc/haproxy/errors/403.http
-	errorfile 408 /etc/haproxy/errors/408.http
-	errorfile 500 /etc/haproxy/errors/500.http
-	errorfile 502 /etc/haproxy/errors/502.http
-	errorfile 503 /etc/haproxy/errors/503.http
-	errorfile 504 /etc/haproxy/errors/504.http
+  log  global
+  mode http
+
+  maxconn 8000
+  option  redispatch
+  retries  3
+  stats  enable
+
+  timeout http-request 10s
+  timeout queue 1m
+  timeout connect 10s
+  timeout client 1m
+  timeout server 1m
+  timeout check 10s
+
+listen keystone_public_api
+  bind 10.167.4.10:5000 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:5000 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:5000 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:5000 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen nova_api
+  bind 10.167.4.10:8774 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:8774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:8774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:8774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen glare
+  bind 10.167.4.10:9494 
+  mode http
+  balance roundrobin
+  option  httplog
+  server ctl01 10.167.4.11:9494 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:9494 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:9494 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen nova_novnc
+  bind 10.167.4.10:6080 
+  mode http
+  balance roundrobin
+  option  httplog
+  server ctl01 10.167.4.11:6080 check
+  server ctl02 10.167.4.12:6080 check
+  server ctl03 10.167.4.13:6080 check
+
+listen keystone_admin_api
+  bind 10.167.4.10:35357 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:35357 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:35357 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:35357 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen glance_registry_api
+  bind 10.167.4.10:9191 
+  mode http
+  balance roundrobin
+  option  httplog
+  server ctl01 10.167.4.11:9191 check
+  server ctl02 10.167.4.12:9191 check
+  server ctl03 10.167.4.13:9191 check
+
+listen nova_placement_api
+  bind 10.167.4.10:8778 
+  
+  mode http
+  balance roundrobin
+  option httpclose
+  option httplog
+  option httpchk
+  http-check expect status 401
+  server ctl01 10.167.4.11:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen heat_cloudwatch_api
+  bind 10.167.4.10:8003 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:8003 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:8003 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:8003 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen cinder_api
+  bind 10.167.4.10:8776 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:8776 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:8776 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:8776 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen designate_api
+  bind 10.167.4.10:9001 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:9001 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:9001 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen glance_api
+  bind 10.167.4.10:9292 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:9292 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:9292 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:9292 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen heat_api
+  bind 10.167.4.10:8004 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:8004 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:8004 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:8004 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen heat_cfn_api
+  bind 10.167.4.10:8000 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:8000 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:8000 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:8000 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen nova_metadata_api
+  bind 10.167.4.10:8775 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:8775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:8775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:8775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+
+listen neutron_api
+  bind 10.167.4.10:9696 
+  bind 10.167.4.10:9696 
+  option  httpchk
+  option  httplog
+  option  httpclose
+  server ctl01 10.167.4.11:9696 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:9696 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:9696 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl01 10.167.4.11:9696 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl02 10.167.4.12:9696 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
+  server ctl03 10.167.4.13:9696 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

2017-09-27 09:54:49,098 [salt.state       ][INFO    ][844] Completed state [/etc/haproxy/haproxy.cfg] at time 09:54:49.098294 duration_in_ms=144.508
2017-09-27 09:54:49,099 [salt.state       ][INFO    ][844] Running state [/etc/haproxy/ssl] at time 09:54:49.098658
2017-09-27 09:54:49,099 [salt.state       ][INFO    ][844] Executing state file.directory for /etc/haproxy/ssl
2017-09-27 09:54:49,100 [salt.state       ][INFO    ][844] {'/etc/haproxy/ssl': 'New Dir'}
2017-09-27 09:54:49,100 [salt.state       ][INFO    ][844] Completed state [/etc/haproxy/ssl] at time 09:54:49.100362 duration_in_ms=1.703
2017-09-27 09:54:49,101 [salt.state       ][INFO    ][844] Running state [haproxy_status_packages] at time 09:54:49.101373
2017-09-27 09:54:49,102 [salt.state       ][INFO    ][844] Executing state pkg.installed for haproxy_status_packages
2017-09-27 09:54:49,320 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'socat'] in directory '/root'
2017-09-27 09:54:52,201 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:54:52,251 [salt.state       ][INFO    ][844] Made the following changes:
'socat' changed from 'absent' to '1.7.3.1-1'

2017-09-27 09:54:52,270 [salt.state       ][INFO    ][844] Loading fresh modules for state activity
2017-09-27 09:54:52,292 [salt.state       ][INFO    ][844] Completed state [haproxy_status_packages] at time 09:54:52.291758 duration_in_ms=3190.383
2017-09-27 09:54:52,295 [salt.state       ][INFO    ][844] Running state [/usr/bin/haproxy-status.sh] at time 09:54:52.295115
2017-09-27 09:54:52,296 [salt.state       ][INFO    ][844] Executing state file.managed for /usr/bin/haproxy-status.sh
2017-09-27 09:54:52,322 [salt.fileclient  ][INFO    ][844] Fetching file from saltenv 'base', ** done ** 'haproxy/files/haproxy-status.sh'
2017-09-27 09:54:52,345 [salt.fileclient  ][INFO    ][844] Fetching file from saltenv 'base', ** done ** 'haproxy/map.jinja'
2017-09-27 09:54:52,354 [salt.state       ][INFO    ][844] File changed:
New file
2017-09-27 09:54:52,355 [salt.state       ][INFO    ][844] Completed state [/usr/bin/haproxy-status.sh] at time 09:54:52.354735 duration_in_ms=59.621
2017-09-27 09:54:52,356 [salt.state       ][INFO    ][844] Running state [net.ipv4.ip_nonlocal_bind] at time 09:54:52.356447
2017-09-27 09:54:52,357 [salt.state       ][INFO    ][844] Executing state sysctl.present for net.ipv4.ip_nonlocal_bind
2017-09-27 09:54:52,361 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command 'sysctl -a' in directory '/root'
2017-09-27 09:54:52,380 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command 'sysctl -w net.ipv4.ip_nonlocal_bind="1"' in directory '/root'
2017-09-27 09:54:52,391 [salt.state       ][INFO    ][844] {'net.ipv4.ip_nonlocal_bind': 1}
2017-09-27 09:54:52,392 [salt.state       ][INFO    ][844] Completed state [net.ipv4.ip_nonlocal_bind] at time 09:54:52.391844 duration_in_ms=35.395
2017-09-27 09:54:52,485 [salt.state       ][INFO    ][844] Running state [haproxy] at time 09:54:52.485263
2017-09-27 09:54:52,486 [salt.state       ][INFO    ][844] Executing state service.running for haproxy
2017-09-27 09:54:52,486 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['systemctl', 'status', 'haproxy.service', '-n', '0'] in directory '/root'
2017-09-27 09:54:52,501 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2017-09-27 09:54:52,515 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2017-09-27 09:54:52,527 [salt.state       ][INFO    ][844] The service haproxy is already running
2017-09-27 09:54:52,528 [salt.state       ][INFO    ][844] Completed state [haproxy] at time 09:54:52.527458 duration_in_ms=42.194
2017-09-27 09:54:52,528 [salt.state       ][INFO    ][844] Running state [haproxy] at time 09:54:52.527788
2017-09-27 09:54:52,528 [salt.state       ][INFO    ][844] Executing state service.mod_watch for haproxy
2017-09-27 09:54:52,529 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2017-09-27 09:54:52,542 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2017-09-27 09:54:52,555 [salt.loaded.int.module.cmdmod][INFO    ][844] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'haproxy.service'] in directory '/root'
2017-09-27 09:54:52,589 [salt.state       ][INFO    ][844] {'haproxy': True}
2017-09-27 09:54:52,590 [salt.state       ][INFO    ][844] Completed state [haproxy] at time 09:54:52.589696 duration_in_ms=61.905
2017-09-27 09:54:52,592 [salt.minion      ][INFO    ][844] Returning information for job: 20170927095434355809
2017-09-27 09:54:57,626 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command service.status with jid 20170927095457618237
2017-09-27 09:54:57,649 [salt.minion      ][INFO    ][1693] Starting a new job with PID 1693
2017-09-27 09:54:58,419 [salt.loaded.int.module.cmdmod][INFO    ][1693] Executing command ['systemctl', 'status', 'haproxy.service', '-n', '0'] in directory '/root'
2017-09-27 09:54:58,435 [salt.loaded.int.module.cmdmod][INFO    ][1693] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2017-09-27 09:54:58,448 [salt.minion      ][INFO    ][1693] Returning information for job: 20170927095457618237
2017-09-27 09:54:59,487 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command service.restart with jid 20170927095459478231
2017-09-27 09:54:59,508 [salt.minion      ][INFO    ][1703] Starting a new job with PID 1703
2017-09-27 09:55:00,332 [salt.loaded.int.module.cmdmod][INFO    ][1703] Executing command ['systemctl', 'status', 'rsyslog.service', '-n', '0'] in directory '/root'
2017-09-27 09:55:00,347 [salt.loaded.int.module.cmdmod][INFO    ][1703] Executing command ['systemctl', 'is-enabled', 'rsyslog.service'] in directory '/root'
2017-09-27 09:55:00,373 [salt.loaded.int.module.cmdmod][INFO    ][1703] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'rsyslog.service'] in directory '/root'
2017-09-27 09:55:00,427 [salt.minion      ][INFO    ][1703] Returning information for job: 20170927095459478231
2017-09-27 09:55:01,548 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command test.ping with jid 20170927095501535447
2017-09-27 09:55:01,569 [salt.minion      ][INFO    ][1721] Starting a new job with PID 1721
2017-09-27 09:55:01,629 [salt.minion      ][INFO    ][1721] Returning information for job: 20170927095501535447
2017-09-27 09:56:42,941 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927095642933327
2017-09-27 09:56:42,964 [salt.minion      ][INFO    ][1747] Starting a new job with PID 1747
2017-09-27 09:56:44,700 [salt.state       ][INFO    ][1747] Loading fresh modules for state activity
2017-09-27 09:56:44,739 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/server.sls'
2017-09-27 09:56:44,858 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 09:56:44,923 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/init.sls'
2017-09-27 09:56:44,943 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/server/init.sls'
2017-09-27 09:56:44,961 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/server/service/init.sls'
2017-09-27 09:56:44,986 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:56:45,055 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/server/service/modules.sls'
2017-09-27 09:56:45,087 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:56:45,143 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/server/service/mpm.sls'
2017-09-27 09:56:45,175 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:56:45,243 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/server/site.sls'
2017-09-27 09:56:45,342 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:56:45,398 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/server/users.sls'
2017-09-27 09:56:45,431 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:56:45,474 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/server/robots.sls'
2017-09-27 09:56:45,497 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:56:46,654 [salt.state       ][INFO    ][1747] Running state [apache2] at time 09:56:46.654187
2017-09-27 09:56:46,654 [salt.state       ][INFO    ][1747] Executing state pkg.installed for apache2
2017-09-27 09:56:46,655 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:56:47,079 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 09:56:49,849 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'apache2'] in directory '/root'
2017-09-27 09:56:52,981 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095652969049
2017-09-27 09:56:53,004 [salt.minion      ][INFO    ][2368] Starting a new job with PID 2368
2017-09-27 09:56:53,025 [salt.minion      ][INFO    ][2368] Returning information for job: 20170927095652969049
2017-09-27 09:57:03,143 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095703130373
2017-09-27 09:57:03,164 [salt.minion      ][INFO    ][2454] Starting a new job with PID 2454
2017-09-27 09:57:03,204 [salt.minion      ][INFO    ][2454] Returning information for job: 20170927095703130373
2017-09-27 09:57:07,473 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:57:07,530 [salt.state       ][INFO    ][1747] Made the following changes:
'apache2-data' changed from 'absent' to '2.4.18-2ubuntu3.5'
'libapr1' changed from 'absent' to '1.5.2-3'
'apache2-utils' changed from 'absent' to '2.4.18-2ubuntu3.5'
'libaprutil1-ldap' changed from 'absent' to '1.5.4-1build1'
'apache2-api-20120211' changed from 'absent' to '1'
'httpd' changed from 'absent' to '1'
'apache2' changed from 'absent' to '2.4.18-2ubuntu3.5'
'liblua5.1-0' changed from 'absent' to '5.1.5-8ubuntu1'
'libaprutil1-dbd-sqlite3' changed from 'absent' to '1.5.4-1build1'
'httpd-cgi' changed from 'absent' to '1'
'libaprutil1' changed from 'absent' to '1.5.4-1build1'
'ssl-cert' changed from 'absent' to '1.0.37'
'apache2-bin' changed from 'absent' to '2.4.18-2ubuntu3.5'

2017-09-27 09:57:07,559 [salt.state       ][INFO    ][1747] Loading fresh modules for state activity
2017-09-27 09:57:07,608 [salt.state       ][INFO    ][1747] Completed state [apache2] at time 09:57:07.607746 duration_in_ms=20953.559
2017-09-27 09:57:07,610 [salt.state       ][INFO    ][1747] Running state [a2enmod ssl] at time 09:57:07.610228
2017-09-27 09:57:07,611 [salt.state       ][INFO    ][1747] Executing state cmd.run for a2enmod ssl
2017-09-27 09:57:07,613 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command 'a2enmod ssl' in directory '/root'
2017-09-27 09:57:07,695 [salt.state       ][INFO    ][1747] {'pid': 2856, 'retcode': 0, 'stderr': '', 'stdout': 'Considering dependency setenvif for ssl:\nModule setenvif already enabled\nConsidering dependency mime for ssl:\nModule mime already enabled\nConsidering dependency socache_shmcb for ssl:\nEnabling module socache_shmcb.\nEnabling module ssl.\nSee /usr/share/doc/apache2/README.Debian.gz on how to configure SSL and create self-signed certificates.\nTo activate the new configuration, you need to run:\n  service apache2 restart'}
2017-09-27 09:57:07,696 [salt.state       ][INFO    ][1747] Completed state [a2enmod ssl] at time 09:57:07.696233 duration_in_ms=86.003
2017-09-27 09:57:07,698 [salt.state       ][INFO    ][1747] Running state [a2enmod rewrite] at time 09:57:07.697454
2017-09-27 09:57:07,698 [salt.state       ][INFO    ][1747] Executing state cmd.run for a2enmod rewrite
2017-09-27 09:57:07,699 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command 'a2enmod rewrite' in directory '/root'
2017-09-27 09:57:07,762 [salt.state       ][INFO    ][1747] {'pid': 2882, 'retcode': 0, 'stderr': '', 'stdout': 'Enabling module rewrite.\nTo activate the new configuration, you need to run:\n  service apache2 restart'}
2017-09-27 09:57:07,763 [salt.state       ][INFO    ][1747] Completed state [a2enmod rewrite] at time 09:57:07.762614 duration_in_ms=65.157
2017-09-27 09:57:07,775 [salt.state       ][INFO    ][1747] Running state [libapache2-mod-wsgi] at time 09:57:07.775130
2017-09-27 09:57:07,776 [salt.state       ][INFO    ][1747] Executing state pkg.installed for libapache2-mod-wsgi
2017-09-27 09:57:08,034 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'libapache2-mod-wsgi'] in directory '/root'
2017-09-27 09:57:13,320 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095713308278
2017-09-27 09:57:13,344 [salt.minion      ][INFO    ][3110] Starting a new job with PID 3110
2017-09-27 09:57:13,361 [salt.minion      ][INFO    ][3110] Returning information for job: 20170927095713308278
2017-09-27 09:57:14,426 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:57:14,456 [salt.state       ][INFO    ][1747] Made the following changes:
'libapache2-mod-wsgi' changed from 'absent' to '4.4.15-0.1~u16.04+mcp1'
'httpd-wsgi' changed from 'absent' to '1'
'libpython2.7' changed from 'absent' to '2.7.12-1ubuntu0~16.04.1'

2017-09-27 09:57:14,466 [salt.state       ][INFO    ][1747] Loading fresh modules for state activity
2017-09-27 09:57:14,478 [salt.state       ][INFO    ][1747] Completed state [libapache2-mod-wsgi] at time 09:57:14.478310 duration_in_ms=6703.18
2017-09-27 09:57:14,480 [salt.state       ][INFO    ][1747] Running state [a2enmod wsgi] at time 09:57:14.479774
2017-09-27 09:57:14,480 [salt.state       ][INFO    ][1747] Executing state cmd.run for a2enmod wsgi
2017-09-27 09:57:14,481 [salt.state       ][INFO    ][1747] /etc/apache2/mods-enabled/wsgi.load exists
2017-09-27 09:57:14,481 [salt.state       ][INFO    ][1747] Completed state [a2enmod wsgi] at time 09:57:14.480785 duration_in_ms=1.011
2017-09-27 09:57:14,482 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/mods-enabled/mpm_prefork.load] at time 09:57:14.482191
2017-09-27 09:57:14,483 [salt.state       ][INFO    ][1747] Executing state file.absent for /etc/apache2/mods-enabled/mpm_prefork.load
2017-09-27 09:57:14,483 [salt.state       ][INFO    ][1747] File /etc/apache2/mods-enabled/mpm_prefork.load is not present
2017-09-27 09:57:14,483 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/mods-enabled/mpm_prefork.load] at time 09:57:14.483159 duration_in_ms=0.968
2017-09-27 09:57:14,483 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/mods-enabled/mpm_prefork.conf] at time 09:57:14.483473
2017-09-27 09:57:14,484 [salt.state       ][INFO    ][1747] Executing state file.absent for /etc/apache2/mods-enabled/mpm_prefork.conf
2017-09-27 09:57:14,484 [salt.state       ][INFO    ][1747] File /etc/apache2/mods-enabled/mpm_prefork.conf is not present
2017-09-27 09:57:14,484 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/mods-enabled/mpm_prefork.conf] at time 09:57:14.484417 duration_in_ms=0.943
2017-09-27 09:57:14,485 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/mods-enabled/mpm_worker.load] at time 09:57:14.484712
2017-09-27 09:57:14,485 [salt.state       ][INFO    ][1747] Executing state file.absent for /etc/apache2/mods-enabled/mpm_worker.load
2017-09-27 09:57:14,485 [salt.state       ][INFO    ][1747] File /etc/apache2/mods-enabled/mpm_worker.load is not present
2017-09-27 09:57:14,486 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/mods-enabled/mpm_worker.load] at time 09:57:14.485639 duration_in_ms=0.927
2017-09-27 09:57:14,486 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/mods-enabled/mpm_worker.conf] at time 09:57:14.485938
2017-09-27 09:57:14,486 [salt.state       ][INFO    ][1747] Executing state file.absent for /etc/apache2/mods-enabled/mpm_worker.conf
2017-09-27 09:57:14,487 [salt.state       ][INFO    ][1747] File /etc/apache2/mods-enabled/mpm_worker.conf is not present
2017-09-27 09:57:14,487 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/mods-enabled/mpm_worker.conf] at time 09:57:14.486845 duration_in_ms=0.907
2017-09-27 09:57:14,488 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/mods-available/mpm_event.conf] at time 09:57:14.488469
2017-09-27 09:57:14,489 [salt.state       ][INFO    ][1747] Executing state file.managed for /etc/apache2/mods-available/mpm_event.conf
2017-09-27 09:57:14,518 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/files/mpm/mpm_event.conf'
2017-09-27 09:57:14,547 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:57:14,589 [salt.state       ][INFO    ][1747] File changed:
--- 
+++ 
@@ -5,14 +5,15 @@
 # ThreadsPerChild: constant number of worker threads in each server process
 # MaxRequestWorkers: maximum number of worker threads
 # MaxConnectionsPerChild: maximum number of requests a server process serves
+
 <IfModule mpm_event_module>
-	StartServers			 2
-	MinSpareThreads		 25
-	MaxSpareThreads		 75
-	ThreadLimit			 64
-	ThreadsPerChild		 25
-	MaxRequestWorkers	  150
-	MaxConnectionsPerChild   0
+    StartServers            5
+    MinSpareThreads         25
+    MaxSpareThreads         75
+    ThreadLimit             64
+    ThreadsPerChild         25
+    MaxRequestWorkers       150
+    MaxConnectionsPerChild  0
 </IfModule>
 
-# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
+# vim: syntax=apache ts=4 sw=4 sts=4 sr et

2017-09-27 09:57:14,590 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/mods-available/mpm_event.conf] at time 09:57:14.589593 duration_in_ms=101.122
2017-09-27 09:57:14,591 [salt.state       ][INFO    ][1747] Running state [a2enmod mpm_event] at time 09:57:14.590812
2017-09-27 09:57:14,591 [salt.state       ][INFO    ][1747] Executing state cmd.run for a2enmod mpm_event
2017-09-27 09:57:14,592 [salt.state       ][INFO    ][1747] /etc/apache2/mods-enabled/mpm_event.load exists
2017-09-27 09:57:14,592 [salt.state       ][INFO    ][1747] Completed state [a2enmod mpm_event] at time 09:57:14.591826 duration_in_ms=1.014
2017-09-27 09:57:14,592 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/ports.conf] at time 09:57:14.592393
2017-09-27 09:57:14,593 [salt.state       ][INFO    ][1747] Executing state file.managed for /etc/apache2/ports.conf
2017-09-27 09:57:14,612 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/files/ports.conf'
2017-09-27 09:57:14,638 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:57:14,676 [salt.state       ][INFO    ][1747] File /etc/apache2/ports.conf is in the correct state
2017-09-27 09:57:14,677 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/ports.conf] at time 09:57:14.676494 duration_in_ms=84.099
2017-09-27 09:57:14,677 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/conf-available/security.conf] at time 09:57:14.677222
2017-09-27 09:57:14,678 [salt.state       ][INFO    ][1747] Executing state file.managed for /etc/apache2/conf-available/security.conf
2017-09-27 09:57:14,699 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/files/security.conf'
2017-09-27 09:57:14,731 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:57:14,759 [salt.state       ][INFO    ][1747] File changed:
--- 
+++ 
@@ -1,73 +1,14 @@
-#
-# Disable access to the entire file system except for the directories that
-# are explicitly allowed later.
-#
-# This currently breaks the configurations that come with some web application
-# Debian packages.
-#
-#<Directory />
-#   AllowOverride None
-#   Require all denied
-#</Directory>
+ServerSignature Off
+TraceEnable Off
+ServerTokens Prod
+<DirectoryMatch "/\.svn">
+    Require all denied
+</DirectoryMatch>
 
+<DirectoryMatch "/\.git">
+    Require all denied
+</DirectoryMatch>
 
-# Changing the following options will not really affect the security of the
-# server, but might make attacks slightly more difficult in some cases.
-
-#
-# ServerTokens
-# This directive configures what you return as the Server HTTP response
-# Header. The default is 'Full' which sends information about the OS-Type
-# and compiled in modules.
-# Set to one of:  Full | OS | Minimal | Minor | Major | Prod
-# where Full conveys the most information, and Prod the least.
-#ServerTokens Minimal
-ServerTokens OS
-#ServerTokens Full
-
-#
-# Optionally add a line containing the server version and virtual host
-# name to server-generated pages (internal error documents, FTP directory
-# listings, mod_status and mod_info output etc., but not CGI generated
-# documents or custom error documents).
-# Set to "EMail" to also include a mailto: link to the ServerAdmin.
-# Set to one of:  On | Off | EMail
-#ServerSignature Off
-ServerSignature On
-
-#
-# Allow TRACE method
-#
-# Set to "extended" to also reflect the request body (only for testing and
-# diagnostic purposes).
-#
-# Set to one of:  On | Off | extended
-TraceEnable Off
-#TraceEnable On
-
-#
-# Forbid access to version control directories
-#
-# If you use version control systems in your document root, you should
-# probably deny access to their directories. For example, for subversion:
-#
-#<DirectoryMatch "/\.svn">
-#   Require all denied
-#</DirectoryMatch>
-
-#
-# Setting this header will prevent MSIE from interpreting files as something
-# else than declared by the content type in the HTTP headers.
-# Requires mod_headers to be enabled.
-#
-#Header set X-Content-Type-Options: "nosniff"
-
-#
-# Setting this header will prevent other sites from embedding pages from this
-# site as frames. This defends against clickjacking attacks.
-# Requires mod_headers to be enabled.
-#
-#Header set X-Frame-Options: "sameorigin"
-
-
-# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
+<DirectoryMatch "/\.hg">
+    Require all denied
+</DirectoryMatch>

2017-09-27 09:57:14,759 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/conf-available/security.conf] at time 09:57:14.758905 duration_in_ms=81.682
2017-09-27 09:57:14,929 [salt.state       ][INFO    ][1747] Running state [keystone] at time 09:57:14.929002
2017-09-27 09:57:14,929 [salt.state       ][INFO    ][1747] Executing state group.present for keystone
2017-09-27 09:57:14,931 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command 'groupadd -g 301 -r keystone' in directory '/root'
2017-09-27 09:57:15,090 [salt.state       ][INFO    ][1747] {'passwd': 'x', 'gid': 301, 'name': 'keystone', 'members': []}
2017-09-27 09:57:15,090 [salt.state       ][INFO    ][1747] Completed state [keystone] at time 09:57:15.090094 duration_in_ms=161.09
2017-09-27 09:57:15,091 [salt.state       ][INFO    ][1747] Running state [keystone] at time 09:57:15.090827
2017-09-27 09:57:15,091 [salt.state       ][INFO    ][1747] Executing state user.present for keystone
2017-09-27 09:57:15,098 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['useradd', '-s', '/bin/false', '-u', '301', '-g', '301', '-m', '-d', '/var/lib/keystone', '-r', 'keystone'] in directory '/root'
2017-09-27 09:57:15,229 [salt.state       ][INFO    ][1747] {'shell': '/bin/false', 'workphone': '', 'uid': 301, 'passwd': 'x', 'roomnumber': '', 'groups': ['keystone'], 'home': '/var/lib/keystone', 'name': 'keystone', 'gid': 301, 'fullname': '', 'homephone': ''}
2017-09-27 09:57:15,229 [salt.state       ][INFO    ][1747] Completed state [keystone] at time 09:57:15.229157 duration_in_ms=138.326
2017-09-27 09:57:15,231 [salt.state       ][INFO    ][1747] Running state [mysql-client] at time 09:57:15.230570
2017-09-27 09:57:15,231 [salt.state       ][INFO    ][1747] Executing state pkg.installed for mysql-client
2017-09-27 09:57:15,436 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'mysql-client'] in directory '/root'
2017-09-27 09:57:23,478 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095723467949
2017-09-27 09:57:23,503 [salt.minion      ][INFO    ][3292] Starting a new job with PID 3292
2017-09-27 09:57:23,520 [salt.minion      ][INFO    ][3292] Returning information for job: 20170927095723467949
2017-09-27 09:57:24,138 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:57:24,190 [salt.state       ][INFO    ][1747] Made the following changes:
'mysql-wsrep-common-5.6' changed from 'absent' to '5.6.35-0~u16.04+mcp3'
'mysql-client' changed from 'absent' to '5.6.35-0~u16.04+mcp3'
'libmysqlclient18' changed from 'absent' to '1'
'virtual-mysql-client' changed from 'absent' to '1'
'mysql-common-5.6' changed from 'absent' to '1'
'mysql-wsrep-client-5.6' changed from 'absent' to '5.6.35-0~u16.04+mcp3'
'libmysqlclient20' changed from 'absent' to '5.7.19-0ubuntu0.16.04.1'
'mysql-wsrep-libmysqlclient18' changed from 'absent' to '5.6.35-0~u16.04+mcp3'
'libdbi-perl' changed from 'absent' to '1.634-1build1'
'mysql-common' changed from 'absent' to '5.7.19-0ubuntu0.16.04.1'
'libterm-readkey-perl' changed from 'absent' to '2.33-1build1'
'perl-dbdabi-94' changed from 'absent' to '1'
'libdbd-mysql-perl' changed from 'absent' to '4.033-1ubuntu0.1'

2017-09-27 09:57:24,208 [salt.state       ][INFO    ][1747] Loading fresh modules for state activity
2017-09-27 09:57:24,229 [salt.state       ][INFO    ][1747] Completed state [mysql-client] at time 09:57:24.228725 duration_in_ms=8998.154
2017-09-27 09:57:24,238 [salt.state       ][INFO    ][1747] Running state [python-keystoneclient] at time 09:57:24.237726
2017-09-27 09:57:24,238 [salt.state       ][INFO    ][1747] Executing state pkg.installed for python-keystoneclient
2017-09-27 09:57:24,467 [salt.state       ][INFO    ][1747] Package python-keystoneclient is already installed
2017-09-27 09:57:24,468 [salt.state       ][INFO    ][1747] Completed state [python-keystoneclient] at time 09:57:24.467593 duration_in_ms=229.867
2017-09-27 09:57:24,468 [salt.state       ][INFO    ][1747] Running state [python-mysqldb] at time 09:57:24.468203
2017-09-27 09:57:24,469 [salt.state       ][INFO    ][1747] Executing state pkg.installed for python-mysqldb
2017-09-27 09:57:24,476 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-mysqldb'] in directory '/root'
2017-09-27 09:57:26,564 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:57:26,600 [salt.state       ][INFO    ][1747] Made the following changes:
'python2.7-mysqldb' changed from 'absent' to '1'
'python-mysqldb' changed from 'absent' to '1.3.7-1build2'

2017-09-27 09:57:26,611 [salt.state       ][INFO    ][1747] Loading fresh modules for state activity
2017-09-27 09:57:26,624 [salt.state       ][INFO    ][1747] Completed state [python-mysqldb] at time 09:57:26.623796 duration_in_ms=2155.593
2017-09-27 09:57:26,630 [salt.state       ][INFO    ][1747] Running state [python-openstackclient] at time 09:57:26.629532
2017-09-27 09:57:26,630 [salt.state       ][INFO    ][1747] Executing state pkg.installed for python-openstackclient
2017-09-27 09:57:26,843 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-openstackclient'] in directory '/root'
2017-09-27 09:57:33,645 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095733627369
2017-09-27 09:57:33,667 [salt.minion      ][INFO    ][3517] Starting a new job with PID 3517
2017-09-27 09:57:33,682 [salt.minion      ][INFO    ][3517] Returning information for job: 20170927095733627369
2017-09-27 09:57:35,582 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:57:35,633 [salt.state       ][INFO    ][1747] Made the following changes:
'python-warlock' changed from 'absent' to '1.1.0-1'
'python-openstackclient' changed from 'absent' to '3.8.1-1~u16.04+mcp4'
'python-glanceclient' changed from 'absent' to '1:2.6.0-1~u16.04+mcp3'
'python-openstacksdk' changed from 'absent' to '0.9.13-1~u16.04+mcp0'
'python-cinderclient' changed from 'absent' to '1:1.11.0-1~u16.04+mcp4'
'python-json-pointer' changed from 'absent' to '1.9-3'
'python-jsonpatch' changed from 'absent' to '1.19-3'
'python2.7-cinderclient' changed from 'absent' to '1'

2017-09-27 09:57:35,658 [salt.state       ][INFO    ][1747] Loading fresh modules for state activity
2017-09-27 09:57:35,698 [salt.state       ][INFO    ][1747] Completed state [python-openstackclient] at time 09:57:35.697950 duration_in_ms=9068.416
2017-09-27 09:57:35,713 [salt.state       ][INFO    ][1747] Running state [python-memcache] at time 09:57:35.713018
2017-09-27 09:57:35,714 [salt.state       ][INFO    ][1747] Executing state pkg.installed for python-memcache
2017-09-27 09:57:36,176 [salt.state       ][INFO    ][1747] Package python-memcache is already installed
2017-09-27 09:57:36,177 [salt.state       ][INFO    ][1747] Completed state [python-memcache] at time 09:57:36.176875 duration_in_ms=463.857
2017-09-27 09:57:36,178 [salt.state       ][INFO    ][1747] Running state [python-pycadf] at time 09:57:36.177875
2017-09-27 09:57:36,178 [salt.state       ][INFO    ][1747] Executing state pkg.installed for python-pycadf
2017-09-27 09:57:36,182 [salt.state       ][INFO    ][1747] Package python-pycadf is already installed
2017-09-27 09:57:36,183 [salt.state       ][INFO    ][1747] Completed state [python-pycadf] at time 09:57:36.182844 duration_in_ms=4.97
2017-09-27 09:57:36,184 [salt.state       ][INFO    ][1747] Running state [python-six] at time 09:57:36.183531
2017-09-27 09:57:36,184 [salt.state       ][INFO    ][1747] Executing state pkg.installed for python-six
2017-09-27 09:57:36,187 [salt.state       ][INFO    ][1747] Package python-six is already installed
2017-09-27 09:57:36,188 [salt.state       ][INFO    ][1747] Completed state [python-six] at time 09:57:36.187607 duration_in_ms=4.076
2017-09-27 09:57:36,188 [salt.state       ][INFO    ][1747] Running state [keystone] at time 09:57:36.188266
2017-09-27 09:57:36,189 [salt.state       ][INFO    ][1747] Executing state pkg.installed for keystone
2017-09-27 09:57:36,198 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'keystone'] in directory '/root'
2017-09-27 09:57:43,803 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095743791448
2017-09-27 09:57:43,828 [salt.minion      ][INFO    ][3678] Starting a new job with PID 3678
2017-09-27 09:57:43,850 [salt.minion      ][INFO    ][3678] Returning information for job: 20170927095743791448
2017-09-27 09:57:48,590 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:57:48,641 [salt.state       ][INFO    ][1747] Made the following changes:
'python-zope.interface' changed from 'absent' to '4.1.3-1build1'
'python-passlib' changed from 'absent' to '1.7.1-2~u16.04+mcp1'
'python2.7-keystone' changed from 'absent' to '1'
'python2.7-zope.interface' changed from 'absent' to '1'
'python-defusedxml' changed from 'absent' to '0.4.1-2ubuntu0.16.04.1'
'libxmlsec1' changed from 'absent' to '1.2.20-2ubuntu4'
'python-pysaml2' changed from 'absent' to '3.0.0-3ubuntu1.16.04.1'
'python-zope' changed from 'absent' to '1'
'libxmlsec1-openssl' changed from 'absent' to '1.2.20-2ubuntu4'
'xmlsec1' changed from 'absent' to '1.2.20-2ubuntu4'
'python-zopeinterface' changed from 'absent' to '1'
'keystone' changed from 'absent' to '2:11.0.2-1~u16.04+mcp3'
'python-keystone' changed from 'absent' to '2:11.0.2-1~u16.04+mcp3'

2017-09-27 09:57:48,659 [salt.state       ][INFO    ][1747] Loading fresh modules for state activity
2017-09-27 09:57:48,682 [salt.state       ][INFO    ][1747] Completed state [keystone] at time 09:57:48.682218 duration_in_ms=12493.95
2017-09-27 09:57:48,695 [salt.state       ][INFO    ][1747] Running state [gettext-base] at time 09:57:48.694567
2017-09-27 09:57:48,695 [salt.state       ][INFO    ][1747] Executing state pkg.installed for gettext-base
2017-09-27 09:57:48,966 [salt.state       ][INFO    ][1747] Package gettext-base is already installed
2017-09-27 09:57:48,967 [salt.state       ][INFO    ][1747] Completed state [gettext-base] at time 09:57:48.966565 duration_in_ms=271.997
2017-09-27 09:57:48,967 [salt.state       ][INFO    ][1747] Running state [python-psycopg2] at time 09:57:48.967231
2017-09-27 09:57:48,968 [salt.state       ][INFO    ][1747] Executing state pkg.installed for python-psycopg2
2017-09-27 09:57:48,975 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-psycopg2'] in directory '/root'
2017-09-27 09:57:53,743 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 09:57:53,791 [salt.state       ][INFO    ][1747] Made the following changes:
'python-psycopg2' changed from 'absent' to '2.6.1-1build2'
'python-egenix-mxtools' changed from 'absent' to '3.2.9-1'
'python-egenix-mxdatetime' changed from 'absent' to '3.2.9-1'
'libpq5' changed from 'absent' to '9.5.8-0ubuntu0.16.04.1'

2017-09-27 09:57:53,803 [salt.state       ][INFO    ][1747] Loading fresh modules for state activity
2017-09-27 09:57:53,817 [salt.state       ][INFO    ][1747] Completed state [python-psycopg2] at time 09:57:53.816585 duration_in_ms=4849.353
2017-09-27 09:57:53,822 [salt.state       ][INFO    ][1747] Running state [python-keystone] at time 09:57:53.822470
2017-09-27 09:57:53,823 [salt.state       ][INFO    ][1747] Executing state pkg.installed for python-keystone
2017-09-27 09:57:54,029 [salt.state       ][INFO    ][1747] Package python-keystone is already installed
2017-09-27 09:57:54,029 [salt.state       ][INFO    ][1747] Completed state [python-keystone] at time 09:57:54.029337 duration_in_ms=206.866
2017-09-27 09:57:54,031 [salt.state       ][INFO    ][1747] Running state [/etc/keystone/policy.json] at time 09:57:54.030686
2017-09-27 09:57:54,031 [salt.state       ][INFO    ][1747] Executing state file.managed for /etc/keystone/policy.json
2017-09-27 09:57:54,031 [salt.loaded.int.states.file][WARNING ][1747] State for file: /etc/keystone/policy.json - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2017-09-27 09:57:54,032 [salt.state       ][INFO    ][1747] {'group': 'keystone', 'user': 'keystone'}
2017-09-27 09:57:54,032 [salt.state       ][INFO    ][1747] Completed state [/etc/keystone/policy.json] at time 09:57:54.032224 duration_in_ms=1.538
2017-09-27 09:57:54,033 [salt.state       ][INFO    ][1747] Running state [/etc/keystone/keystone-paste.ini] at time 09:57:54.032721
2017-09-27 09:57:54,033 [salt.state       ][INFO    ][1747] Executing state file.managed for /etc/keystone/keystone-paste.ini
2017-09-27 09:57:54,056 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/files/ocata/keystone-paste.ini.Debian'
2017-09-27 09:57:54,059 [salt.state       ][INFO    ][1747] {'group': 'keystone', 'user': 'keystone'}
2017-09-27 09:57:54,059 [salt.state       ][INFO    ][1747] Completed state [/etc/keystone/keystone-paste.ini] at time 09:57:54.059330 duration_in_ms=26.608
2017-09-27 09:57:54,060 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/sites-enabled/000-default.conf] at time 09:57:54.059680
2017-09-27 09:57:54,060 [salt.state       ][INFO    ][1747] Executing state file.absent for /etc/apache2/sites-enabled/000-default.conf
2017-09-27 09:57:54,060 [salt.state       ][INFO    ][1747] {'removed': '/etc/apache2/sites-enabled/000-default.conf'}
2017-09-27 09:57:54,061 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/sites-enabled/000-default.conf] at time 09:57:54.060764 duration_in_ms=1.084
2017-09-27 09:57:54,061 [salt.state       ][INFO    ][1747] Running state [/etc/keystone/keystone.conf] at time 09:57:54.061298
2017-09-27 09:57:54,062 [salt.state       ][INFO    ][1747] Executing state file.managed for /etc/keystone/keystone.conf
2017-09-27 09:57:54,094 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/files/ocata/keystone.conf.Debian'
2017-09-27 09:57:54,182 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 09:57:54,210 [salt.state       ][INFO    ][1747] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -15,6 +16,7 @@
 # `AdminTokenAuthMiddleware` (the `admin_token_auth` filter) from your paste
 # application pipelines (for example, in `keystone-paste.ini`). (string value)
 #admin_token = <None>
+admin_token=opnfv_secret
 
 # The base public endpoint URL for Keystone that is advertised to clients
 # (NOTE: this does NOT affect how Keystone listens for connections). Defaults
@@ -98,6 +100,7 @@
 # in the P release. Use oslo.middleware.http_proxy_to_wsgi configuration
 # instead.
 #secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
+secure_proxy_ssl_header = "HTTP_X_FORWARDED_PROTO"
 
 # If set to true, then the server will return information in HTTP responses
 # that may allow an unauthenticated or authenticated user to get more
@@ -117,6 +120,7 @@
 # recommended for auditing use cases. (string value)
 # Allowed values: basic, cadf
 #notification_format = cadf
+notification_format = basic
 
 # You can reduce the number of notifications keystone emits by explicitly
 # opting out. Keystone will not emit notifications that match the patterns
@@ -139,12 +143,14 @@
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
+debug = false
 
 # DEPRECATED: If set to false, the logging level will be set to WARNING instead
 # of the default INFO level. (boolean value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #verbose = true
+verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -171,6 +177,7 @@
 # is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
+log_dir = /var/log/keystone
 
 # Uses logging handler designed to watch file system. When log file is moved or
 # removed this handler will open a new log file with specified path
@@ -414,10 +421,10 @@
 
 # Seconds to wait for a response from a call. (integer value)
 #rpc_response_timeout = 60
-
 # A URL representing the messaging driver to use and its full configuration.
 # (string value)
-#transport_url = <None>
+#transport_url = rabbit://nova:3qVSI7a1m8AdaDQ7BpB0PJu4@192.168.0.4:5673/
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.41:5672,openstack:opnfv_secret@10.167.4.42:5672,openstack:opnfv_secret@10.167.4.43:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -425,7 +432,6 @@
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
 #rpc_backend = rabbit
-
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
 #control_exchange = keystone
@@ -446,7 +452,7 @@
 # A list of role names which are prohibited from being an implied role. (list
 # value)
 #prohibited_implied_role = admin
-
+driver = sql
 
 [auth]
 
@@ -460,6 +466,8 @@
 # are being invoked to validate attributes in the request environment, it can
 # cause conflicts. (list value)
 #methods = external,password,token,oauth1,mapped
+
+methods = password,token
 
 # Entry point for the password auth plugin module in the
 # `keystone.auth.password` namespace. You do not need to set this unless you
@@ -518,6 +526,7 @@
 # recommended. Test environments with a single instance of the server can use
 # the dogpile.cache.memory backend. (string value)
 #backend = dogpile.cache.null
+backend = oslo_cache.memcache_pool
 
 # Arguments supplied to the backend module. Specify this option once per
 # argument to be passed to the dogpile.cache backend. Example format:
@@ -531,6 +540,7 @@
 
 # Global toggle for caching. (boolean value)
 #enabled = true
+enabled = true
 
 # Extra debugging from the cache backend (cache keys, get/set/delete/etc
 # calls). This is only really useful if you need to see the specific cache-
@@ -541,23 +551,28 @@
 # Memcache servers in the format of "host:port". (dogpile.cache.memcache and
 # oslo_cache.memcache_pool backends only). (list value)
 #memcache_servers = localhost:11211
+memcache_servers =10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211
 
 # Number of seconds memcached server is considered dead before it is tried
 # again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
 # (integer value)
 #memcache_dead_retry = 300
+memcache_dead_retry = 300
 
 # Timeout in seconds for every call to a server. (dogpile.cache.memcache and
 # oslo_cache.memcache_pool backends only). (integer value)
 #memcache_socket_timeout = 3
+memcache_socket_timeout = 1
 
 # Max total number of open connections to every memcached server.
 # (oslo_cache.memcache_pool backend only). (integer value)
 #memcache_pool_maxsize = 10
+memcache_pool_maxsize = 100
 
 # Number of seconds a connection to memcached is held unused in the pool before
 # it is closed. (oslo_cache.memcache_pool backend only). (integer value)
 #memcache_pool_unused_timeout = 60
+memcache_pool_unused_timeout = 60
 
 # Number of seconds that an operation will wait to get a memcache client
 # connection. (integer value)
@@ -573,6 +588,7 @@
 # Absolute path to the file used for the templated catalog backend. This option
 # is only used if the `[catalog] driver` is set to `templated`. (string value)
 #template_file = default_catalog.templates
+template_file = default_catalog.templates
 
 # Entry point for the catalog driver in the `keystone.catalog` namespace.
 # Keystone provides a `sql` option (which supports basic CRUD operations
@@ -580,6 +596,7 @@
 # catalog file on disk), and a `endpoint_filter.sql` option (which supports
 # arbitrary service catalogs per project). (string value)
 #driver = sql
+driver = sql
 
 # Toggle for catalog caching. This has no effect unless global caching is
 # enabled. In a typical deployment, there is no reason to disable this.
@@ -610,22 +627,29 @@
 # slash. Example: https://horizon.example.com (list value)
 #allowed_origin = <None>
 
+
 # Indicate that the actual request can include user credentials (boolean value)
 #allow_credentials = true
+
 
 # Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
 # Headers. (list value)
 #expose_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token
 
+
 # Maximum cache age of CORS preflight requests. (integer value)
 #max_age = 3600
 
+
+
 # Indicate which methods can be used during the actual request. (list value)
 #allow_methods = GET,PUT,POST,DELETE,PATCH
+
 
 # Indicate which header field names may be used during the actual request.
 # (list value)
 #allow_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-Domain-Id,X-Domain-Name
+
 
 
 [cors.subdomain]
@@ -680,7 +704,7 @@
 # of keys should be managed separately and require different rotation policies.
 # Do not share this repository with the repository used to manage keys for
 # Fernet tokens. (string value)
-#key_repository = /etc/keystone/credential-keys/
+key_repository = /var/lib/keystone/credential-keys
 
 
 [database]
@@ -711,6 +735,7 @@
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
 #connection = <None>
+connection=mysql+pymysql://keystone:opnfv_secret@10.167.4.50/keystone
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
@@ -727,6 +752,7 @@
 # Deprecated group/name - [DATABASE]/sql_idle_timeout
 # Deprecated group/name - [sql]/idle_timeout
 #idle_timeout = 3600
+idle_timeout = 3600
 
 # Minimum number of SQL connections to keep open in a pool. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
@@ -738,6 +764,9 @@
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
 #max_pool_size = 5
+max_pool_size=10
+max_overflow=30
+max_retries=-1
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
@@ -860,6 +889,7 @@
 # Newton release. These options remain for backwards compatibility because they
 # are used for URL substitutions.
 #public_bind_host = 0.0.0.0
+public_bind_host=10.167.4.13
 
 # DEPRECATED: The port number for the public service to listen on. (port value)
 # Minimum value: 0
@@ -871,6 +901,7 @@
 # Newton release. These options remain for backwards compatibility because they
 # are used for URL substitutions.
 #public_port = 5000
+public_port = 5000
 
 # DEPRECATED: The IP address of the network interface for the admin service to
 # listen on. (string value)
@@ -882,6 +913,7 @@
 # Newton release. These options remain for backwards compatibility because they
 # are used for URL substitutions.
 #admin_bind_host = 0.0.0.0
+admin_bind_host=10.167.4.13
 
 # DEPRECATED: The port number for the admin service to listen on. (port value)
 # Minimum value: 0
@@ -893,7 +925,17 @@
 # Newton release. These options remain for backwards compatibility because they
 # are used for URL substitutions.
 #admin_port = 35357
-
+admin_port = 35357
+
+
+[extra_headers]
+
+#
+# From keystone
+#
+
+# Specifies the distribution of the keystone server. (string value)
+#Distribution = Ubuntu
 
 [federation]
 
@@ -922,13 +964,6 @@
 # domain with this name or update an existing domain to this name. You are not
 # advised to change this value unless you really have to. (string value)
 #federated_domain_name = Federated
-
-# When turned on, Keystone will save in DB group membership for federated
-# users. Enabling this option is the only way so far to make trusts delegated
-# by federated users work. The downside is that a trust could be used even
-# after the delegating user's permissions were changed in IdP, unless that
-# change was propagated to Keystone by some automation script. (boolean value)
-#cache_group_membership_in_db = false
 
 # A list of trusted dashboard hosts. Before accepting a Single Sign-On request
 # to return a token, the origin host must be a member of this list. This
@@ -977,6 +1012,7 @@
 # other nodes will result in tokens that can not be validated by all nodes.
 # (string value)
 #key_repository = /etc/keystone/fernet-keys/
+key_repository = /var/lib/keystone/fernet-keys
 
 # This controls how many keys are held in rotation by `keystone-manage
 # fernet_rotate` before they are discarded. The default value of 3 means that
@@ -986,7 +1022,7 @@
 # the rotation. (integer value)
 # Minimum value: 1
 #max_active_keys = 3
-
+max_active_keys=3
 
 [healthcheck]
 
@@ -1063,6 +1099,7 @@
 # deployment primarily relies on `ldap` AND is not using domain-specific
 # configuration, you should typically leave this set to `sql`. (string value)
 #driver = sql
+driver = sql
 
 # Toggle for identity caching. This has no effect unless global caching is
 # enabled. There is typically no reason to disable this. (boolean value)
@@ -1081,7 +1118,6 @@
 # Maximum number of entities that will be returned in an identity collection.
 # (integer value)
 #list_limit = <None>
-
 
 [identity_mapping]
 
@@ -1171,7 +1207,6 @@
 # in the P release. Use SQL backends instead.
 #default_lock_timeout = 5
 
-
 [ldap]
 
 #
@@ -1516,6 +1551,7 @@
 # Reason: This option has been deprecated in the O release and will be removed
 # in the P release. Use oslo.cache instead.
 #servers = localhost:11211
+servers =10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211
 
 # Number of seconds memcached server is considered dead before it is tried
 # again. This is used by the key value store system. (integer value)
@@ -1811,6 +1847,7 @@
 # messaging, messagingv2, routing, log, test, noop (multi valued)
 # Deprecated group/name - [DEFAULT]/notification_driver
 #driver =
+driver=messagingv2
 
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
@@ -1967,16 +2004,18 @@
 
 # Specifies the number of messages to prefetch. Setting to zero allows
 # unlimited messages. (integer value)
-#rabbit_qos_prefetch_count = 64
+#rabbit_qos_prefetch_count = 0
 
 # Number of seconds after which the Rabbit broker is considered down if
 # heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
 # value)
 #heartbeat_timeout_threshold = 60
+heartbeat_timeout_threshold = 0
 
 # How often times during the heartbeat_timeout_threshold we check the
 # heartbeat. (integer value)
 #heartbeat_rate = 2
+heartbeat_rate = 2
 
 # Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
 # Deprecated group/name - [DEFAULT]/fake_rabbit
@@ -2260,7 +2299,7 @@
 # The maximum body size for each  request, in bytes. (integer value)
 # Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
 # Deprecated group/name - [DEFAULT]/max_request_body_size
-#max_request_body_size = 114688
+max_request_body_size= 114688
 
 # DEPRECATED: The HTTP Header that will be used to determine what the original
 # request protocol scheme was, even if it was hidden by a SSL termination
@@ -2321,6 +2360,7 @@
 # the v3 policy API) and `sql`. Typically, there is no reason to set this
 # option unless you are providing a custom entry point. (string value)
 #driver = sql
+driver = keystone.policy.backends.sql.Policy
 
 # Maximum number of entities that will be returned in a policy collection.
 # (integer value)
@@ -2386,8 +2426,43 @@
 # Examples of possible values:
 #
 # * messaging://: use oslo_messaging driver for sending notifications.
+# * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications.
+# * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending
+# notifications.
 #  (string value)
 #connection_string = messaging://
+
+#
+# Document type for notification indexing in elasticsearch.
+#  (string value)
+#es_doc_type = notification
+
+#
+# This parameter is a time value parameter (for example: es_scroll_time=2m),
+# indicating for how long the nodes that participate in the search will
+# maintain
+# relevant resources in order to continue and support it.
+#  (string value)
+#es_scroll_time = 2m
+
+#
+# Elasticsearch splits large requests in batches. This parameter defines
+# maximum size of each batch (for example: es_scroll_size=10000).
+#  (integer value)
+#es_scroll_size = 10000
+
+#
+# Redissentinel provides a timeout option on the connections.
+# This parameter defines that timeout (for example: socket_timeout=0.1).
+#  (floating point value)
+#socket_timeout = 0.1
+
+#
+# Redissentinel uses a service name to identify a master redis service.
+# This parameter defines the name (for example:
+# sentinal_service_name=mymaster).
+#  (string value)
+#sentinel_service_name = mymaster
 
 
 [resource]
@@ -2792,6 +2867,7 @@
 # Minimum value: 0
 # Maximum value: 9223372036854775807
 #expiration = 3600
+expiration = 3600
 
 # Entry point for the token provider in the `keystone.token.provider`
 # namespace. The token provider controls the token construction, validation,
@@ -2803,6 +2879,9 @@
 # fernet_rotate` command). (string value)
 #provider = fernet
 
+provider = keystone.token.providers.fernet.Provider
+
+
 # Entry point for the token persistence backend driver in the
 # `keystone.token.persistence` namespace. Keystone provides `kvs` and `sql`
 # drivers. The `kvs` backend depends on the configuration in the `[kvs]`
@@ -2810,10 +2889,12 @@
 # `[database]` section. If you're using the `fernet` `[token] provider`, this
 # backend will not be utilized to persist tokens at all. (string value)
 #driver = sql
+driver = keystone.token.persistence.backends.memcache_pool.Token
 
 # Toggle for caching token creation and validation data. This has no effect
 # unless global caching is enabled. (boolean value)
 #caching = true
+caching = false
 
 # The number of seconds to cache token creation and validation data. This has
 # no effect unless both global and `[token] caching` are enabled. (integer
@@ -2828,6 +2909,7 @@
 # of tokens to consider revoked. Do not disable this option if you're using the
 # `kvs` `[revoke] driver`. (boolean value)
 #revoke_by_id = true
+revoke_by_id = False
 
 # This toggles whether scoped tokens may be be re-scoped to a new project or
 # domain, thereby preventing users from exchanging a scoped token (including
@@ -2854,6 +2936,7 @@
 # Defaults to two days. (integer value)
 #allow_expired_window = 172800
 
+hash_algorithm = sha256
 
 [tokenless_auth]
 
@@ -2910,3 +2993,5 @@
 # Keystone only provides a `sql` driver, so there is no reason to change this
 # unless you are providing a custom entry point. (string value)
 #driver = sql
+[extra_headers]
+Distribution = Ubuntu

2017-09-27 09:57:54,214 [salt.state       ][INFO    ][1747] Completed state [/etc/keystone/keystone.conf] at time 09:57:54.213967 duration_in_ms=152.666
2017-09-27 09:57:54,214 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/sites-enabled/keystone.conf] at time 09:57:54.214398
2017-09-27 09:57:54,215 [salt.state       ][INFO    ][1747] Executing state file.absent for /etc/apache2/sites-enabled/keystone.conf
2017-09-27 09:57:54,215 [salt.state       ][INFO    ][1747] {'removed': '/etc/apache2/sites-enabled/keystone.conf'}
2017-09-27 09:57:54,216 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/sites-enabled/keystone.conf] at time 09:57:54.215731 duration_in_ms=1.333
2017-09-27 09:57:54,216 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/sites-enabled/wsgi-keystone.conf] at time 09:57:54.216097
2017-09-27 09:57:54,216 [salt.state       ][INFO    ][1747] Executing state file.absent for /etc/apache2/sites-enabled/wsgi-keystone.conf
2017-09-27 09:57:54,217 [salt.state       ][INFO    ][1747] File /etc/apache2/sites-enabled/wsgi-keystone.conf is not present
2017-09-27 09:57:54,217 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/sites-enabled/wsgi-keystone.conf] at time 09:57:54.217229 duration_in_ms=1.132
2017-09-27 09:57:54,218 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/sites-available/keystone_wsgi.conf] at time 09:57:54.218412
2017-09-27 09:57:54,219 [salt.state       ][INFO    ][1747] Executing state file.managed for /etc/apache2/sites-available/keystone_wsgi.conf
2017-09-27 09:57:54,240 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/files/apache.conf'
2017-09-27 09:57:54,244 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927095753961077
2017-09-27 09:57:54,261 [salt.minion      ][INFO    ][4150] Starting a new job with PID 4150
2017-09-27 09:57:54,266 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 09:57:54,280 [salt.minion      ][INFO    ][4150] Returning information for job: 20170927095753961077
2017-09-27 09:57:54,306 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/files/ocata/wsgi-keystone.conf'
2017-09-27 09:57:54,385 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/files/_name.conf'
2017-09-27 09:57:54,408 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 09:57:54,448 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/files/_ssl.conf'
2017-09-27 09:57:54,471 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/files/_locations.conf'
2017-09-27 09:57:54,495 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'apache/files/_log.conf'
2017-09-27 09:57:54,506 [salt.state       ][INFO    ][1747] File changed:
New file
2017-09-27 09:57:54,507 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/sites-available/keystone_wsgi.conf] at time 09:57:54.506455 duration_in_ms=288.042
2017-09-27 09:57:54,507 [salt.state       ][INFO    ][1747] Running state [/etc/keystone/logging.conf] at time 09:57:54.507272
2017-09-27 09:57:54,508 [salt.state       ][INFO    ][1747] Executing state file.managed for /etc/keystone/logging.conf
2017-09-27 09:57:54,508 [salt.loaded.int.states.file][WARNING ][1747] State for file: /etc/keystone/logging.conf - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2017-09-27 09:57:54,509 [salt.state       ][INFO    ][1747] {'group': 'keystone', 'user': 'keystone'}
2017-09-27 09:57:54,510 [salt.state       ][INFO    ][1747] Completed state [/etc/keystone/logging.conf] at time 09:57:54.509656 duration_in_ms=2.384
2017-09-27 09:57:54,510 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/sites-enabled/keystone_wsgi.conf] at time 09:57:54.510344
2017-09-27 09:57:54,511 [salt.state       ][INFO    ][1747] Executing state file.symlink for /etc/apache2/sites-enabled/keystone_wsgi.conf
2017-09-27 09:57:54,512 [salt.state       ][INFO    ][1747] {'new': '/etc/apache2/sites-enabled/keystone_wsgi.conf'}
2017-09-27 09:57:54,513 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/sites-enabled/keystone_wsgi.conf] at time 09:57:54.512861 duration_in_ms=2.516
2017-09-27 09:57:54,517 [salt.state       ][INFO    ][1747] Running state [apache2] at time 09:57:54.516859
2017-09-27 09:57:54,517 [salt.state       ][INFO    ][1747] Executing state service.running for apache2
2017-09-27 09:57:54,518 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemctl', 'status', 'apache2.service', '-n', '0'] in directory '/root'
2017-09-27 09:57:54,537 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemctl', 'is-active', 'apache2.service'] in directory '/root'
2017-09-27 09:57:54,551 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 09:57:54,566 [salt.state       ][INFO    ][1747] The service apache2 is already running
2017-09-27 09:57:54,567 [salt.state       ][INFO    ][1747] Completed state [apache2] at time 09:57:54.566782 duration_in_ms=49.921
2017-09-27 09:57:54,567 [salt.state       ][INFO    ][1747] Running state [apache2] at time 09:57:54.567186
2017-09-27 09:57:54,568 [salt.state       ][INFO    ][1747] Executing state service.mod_watch for apache2
2017-09-27 09:57:54,568 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemctl', 'is-active', 'apache2.service'] in directory '/root'
2017-09-27 09:57:54,581 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 09:57:54,596 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemd-run', '--scope', 'systemctl', 'reload', 'apache2.service'] in directory '/root'
2017-09-27 09:57:54,725 [salt.state       ][INFO    ][1747] {'apache2': True}
2017-09-27 09:57:54,725 [salt.state       ][INFO    ][1747] Completed state [apache2] at time 09:57:54.725395 duration_in_ms=158.207
2017-09-27 09:57:54,726 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/conf-enabled/security.conf] at time 09:57:54.726313
2017-09-27 09:57:54,727 [salt.state       ][INFO    ][1747] Executing state file.symlink for /etc/apache2/conf-enabled/security.conf
2017-09-27 09:57:54,728 [salt.state       ][INFO    ][1747] {'new': '/etc/apache2/conf-enabled/security.conf'}
2017-09-27 09:57:54,728 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/conf-enabled/security.conf] at time 09:57:54.728252 duration_in_ms=1.939
2017-09-27 09:57:54,729 [salt.state       ][INFO    ][1747] Running state [/etc/apache2/sites-enabled/keystone_wsgi] at time 09:57:54.728515
2017-09-27 09:57:54,729 [salt.state       ][INFO    ][1747] Executing state file.absent for /etc/apache2/sites-enabled/keystone_wsgi
2017-09-27 09:57:54,729 [salt.state       ][INFO    ][1747] File /etc/apache2/sites-enabled/keystone_wsgi is not present
2017-09-27 09:57:54,729 [salt.state       ][INFO    ][1747] Completed state [/etc/apache2/sites-enabled/keystone_wsgi] at time 09:57:54.729247 duration_in_ms=0.732
2017-09-27 09:57:54,731 [salt.state       ][INFO    ][1747] Running state [keystone] at time 09:57:54.730871
2017-09-27 09:57:54,733 [salt.state       ][INFO    ][1747] Executing state service.dead for keystone
2017-09-27 09:57:54,734 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemctl', 'status', 'keystone.service', '-n', '0'] in directory '/root'
2017-09-27 09:57:54,753 [salt.state       ][INFO    ][1747] The named service keystone is not available
2017-09-27 09:57:54,758 [salt.state       ][INFO    ][1747] Completed state [keystone] at time 09:57:54.757719 duration_in_ms=26.847
2017-09-27 09:57:54,758 [salt.state       ][INFO    ][1747] Running state [keystone] at time 09:57:54.758172
2017-09-27 09:57:54,759 [salt.state       ][INFO    ][1747] Executing state service.mod_watch for keystone
2017-09-27 09:57:54,760 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command ['systemctl', 'is-active', 'keystone.service'] in directory '/root'
2017-09-27 09:57:54,779 [salt.state       ][INFO    ][1747] Service is already stopped
2017-09-27 09:57:54,779 [salt.state       ][INFO    ][1747] Completed state [keystone] at time 09:57:54.778871 duration_in_ms=20.697
2017-09-27 09:57:54,780 [salt.state       ][INFO    ][1747] Running state [/root/keystonerc] at time 09:57:54.779513
2017-09-27 09:57:54,780 [salt.state       ][INFO    ][1747] Executing state file.managed for /root/keystonerc
2017-09-27 09:57:54,799 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/files/keystonerc'
2017-09-27 09:57:54,822 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 09:57:54,959 [salt.state       ][INFO    ][1747] File changed:
New file
2017-09-27 09:57:54,960 [salt.state       ][INFO    ][1747] Completed state [/root/keystonerc] at time 09:57:54.959471 duration_in_ms=179.958
2017-09-27 09:57:54,960 [salt.state       ][INFO    ][1747] Running state [/root/keystonercv3] at time 09:57:54.960022
2017-09-27 09:57:54,960 [salt.state       ][INFO    ][1747] Executing state file.managed for /root/keystonercv3
2017-09-27 09:57:54,980 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/files/keystonercv3'
2017-09-27 09:57:55,007 [salt.fileclient  ][INFO    ][1747] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 09:57:55,036 [salt.state       ][INFO    ][1747] File changed:
New file
2017-09-27 09:57:55,036 [salt.state       ][INFO    ][1747] Completed state [/root/keystonercv3] at time 09:57:55.036176 duration_in_ms=76.153
2017-09-27 09:57:55,038 [salt.state       ][INFO    ][1747] Running state [keystone-manage db_sync && sleep 1] at time 09:57:55.037566
2017-09-27 09:57:55,038 [salt.state       ][INFO    ][1747] Executing state cmd.run for keystone-manage db_sync && sleep 1
2017-09-27 09:57:55,039 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command 'keystone-manage db_sync && sleep 1' in directory '/root'
2017-09-27 09:57:57,072 [salt.state       ][INFO    ][1747] {'pid': 4206, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 09:57:57,072 [salt.state       ][INFO    ][1747] Completed state [keystone-manage db_sync && sleep 1] at time 09:57:57.072293 duration_in_ms=2034.726
2017-09-27 09:57:57,074 [salt.state       ][INFO    ][1747] Running state [/var/lib/keystone/fernet-keys] at time 09:57:57.073610
2017-09-27 09:57:57,074 [salt.state       ][INFO    ][1747] Executing state file.directory for /var/lib/keystone/fernet-keys
2017-09-27 09:57:57,077 [salt.state       ][INFO    ][1747] {'/var/lib/keystone/fernet-keys': 'New Dir'}
2017-09-27 09:57:57,077 [salt.state       ][INFO    ][1747] Completed state [/var/lib/keystone/fernet-keys] at time 09:57:57.077007 duration_in_ms=3.396
2017-09-27 09:57:57,079 [salt.state       ][INFO    ][1747] Running state [keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone] at time 09:57:57.078803
2017-09-27 09:57:57,079 [salt.state       ][INFO    ][1747] Executing state cmd.run for keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
2017-09-27 09:57:57,080 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command 'keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone' in directory '/root'
2017-09-27 09:57:57,831 [salt.state       ][INFO    ][1747] {'pid': 4330, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 09:57:57,831 [salt.state       ][INFO    ][1747] Completed state [keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone] at time 09:57:57.831227 duration_in_ms=752.422
2017-09-27 09:57:57,833 [salt.state       ][INFO    ][1747] Running state [/var/lib/keystone/credential-keys] at time 09:57:57.832571
2017-09-27 09:57:57,833 [salt.state       ][INFO    ][1747] Executing state file.directory for /var/lib/keystone/credential-keys
2017-09-27 09:57:57,835 [salt.state       ][INFO    ][1747] {'/var/lib/keystone/credential-keys': 'New Dir'}
2017-09-27 09:57:57,836 [salt.state       ][INFO    ][1747] Completed state [/var/lib/keystone/credential-keys] at time 09:57:57.835941 duration_in_ms=3.37
2017-09-27 09:57:57,838 [salt.state       ][INFO    ][1747] Running state [keystone-manage credential_setup --keystone-user keystone --keystone-group keystone] at time 09:57:57.837564
2017-09-27 09:57:57,838 [salt.state       ][INFO    ][1747] Executing state cmd.run for keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
2017-09-27 09:57:57,839 [salt.loaded.int.module.cmdmod][INFO    ][1747] Executing command 'keystone-manage credential_setup --keystone-user keystone --keystone-group keystone' in directory '/root'
2017-09-27 09:57:58,607 [salt.state       ][INFO    ][1747] {'pid': 4400, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 09:57:58,608 [salt.state       ][INFO    ][1747] Completed state [keystone-manage credential_setup --keystone-user keystone --keystone-group keystone] at time 09:57:58.608071 duration_in_ms=770.506
2017-09-27 09:57:58,610 [salt.minion      ][INFO    ][1747] Returning information for job: 20170927095642933327
2017-09-27 09:59:22,859 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command service.restart with jid 20170927095922849743
2017-09-27 09:59:22,887 [salt.minion      ][INFO    ][4618] Starting a new job with PID 4618
2017-09-27 09:59:23,723 [salt.loaded.int.module.cmdmod][INFO    ][4618] Executing command ['systemctl', 'status', 'apache2.service', '-n', '0'] in directory '/root'
2017-09-27 09:59:23,737 [salt.loaded.int.module.cmdmod][INFO    ][4618] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 09:59:23,765 [salt.loaded.int.module.cmdmod][INFO    ][4618] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'apache2.service'] in directory '/root'
2017-09-27 09:59:24,127 [salt.minion      ][INFO    ][4618] Returning information for job: 20170927095922849743
2017-09-27 09:59:43,305 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command cmd.run with jid 20170927095943294707
2017-09-27 09:59:43,331 [salt.minion      ][INFO    ][4894] Starting a new job with PID 4894
2017-09-27 09:59:43,338 [salt.loaded.int.module.cmdmod][INFO    ][4894] Executing command '. /root/keystonercv3; openstack service list' in directory '/root'
2017-09-27 09:59:45,469 [salt.minion      ][INFO    ][4894] Returning information for job: 20170927095943294707
2017-09-27 09:59:46,653 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command test.ping with jid 20170927095946643859
2017-09-27 09:59:46,678 [salt.minion      ][INFO    ][4918] Starting a new job with PID 4918
2017-09-27 09:59:46,730 [salt.minion      ][INFO    ][4918] Returning information for job: 20170927095946643859
2017-09-27 10:00:40,312 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927100040306048
2017-09-27 10:00:40,340 [salt.minion      ][INFO    ][4933] Starting a new job with PID 4933
2017-09-27 10:00:42,036 [salt.state       ][INFO    ][4933] Loading fresh modules for state activity
2017-09-27 10:00:42,081 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/init.sls'
2017-09-27 10:00:42,109 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/server.sls'
2017-09-27 10:00:42,184 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/map.jinja'
2017-09-27 10:00:43,648 [salt.state       ][INFO    ][4933] Running state [glance] at time 10:00:43.648204
2017-09-27 10:00:43,649 [salt.state       ][INFO    ][4933] Executing state group.present for glance
2017-09-27 10:00:43,649 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command 'groupadd -g 302 -r glance' in directory '/root'
2017-09-27 10:00:43,762 [salt.state       ][INFO    ][4933] {'passwd': 'x', 'gid': 302, 'name': 'glance', 'members': []}
2017-09-27 10:00:43,763 [salt.state       ][INFO    ][4933] Completed state [glance] at time 10:00:43.762769 duration_in_ms=114.562
2017-09-27 10:00:43,764 [salt.state       ][INFO    ][4933] Running state [glance] at time 10:00:43.763950
2017-09-27 10:00:43,765 [salt.state       ][INFO    ][4933] Executing state user.present for glance
2017-09-27 10:00:43,767 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['useradd', '-s', '/bin/false', '-u', '302', '-g', '302', '-m', '-d', '/var/lib/glance', '-r', 'glance'] in directory '/root'
2017-09-27 10:00:43,904 [salt.state       ][INFO    ][4933] {'shell': '/bin/false', 'workphone': '', 'uid': 302, 'passwd': 'x', 'roomnumber': '', 'groups': ['glance'], 'home': '/var/lib/glance', 'name': 'glance', 'gid': 302, 'fullname': '', 'homephone': ''}
2017-09-27 10:00:43,905 [salt.state       ][INFO    ][4933] Completed state [glance] at time 10:00:43.904756 duration_in_ms=140.803
2017-09-27 10:00:43,906 [salt.state       ][INFO    ][4933] Running state [glance-api] at time 10:00:43.906226
2017-09-27 10:00:43,907 [salt.state       ][INFO    ][4933] Executing state pkg.installed for glance-api
2017-09-27 10:00:43,909 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:00:44,518 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 10:00:47,688 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'glance-api'] in directory '/root'
2017-09-27 10:00:50,402 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100050392945
2017-09-27 10:00:50,429 [salt.minion      ][INFO    ][5493] Starting a new job with PID 5493
2017-09-27 10:00:50,448 [salt.minion      ][INFO    ][5493] Returning information for job: 20170927100050392945
2017-09-27 10:01:00,568 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100100558676
2017-09-27 10:01:00,593 [salt.minion      ][INFO    ][5667] Starting a new job with PID 5667
2017-09-27 10:01:00,608 [salt.minion      ][INFO    ][5667] Returning information for job: 20170927100100558676
2017-09-27 10:01:10,727 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100110717667
2017-09-27 10:01:10,754 [salt.minion      ][INFO    ][5855] Starting a new job with PID 5855
2017-09-27 10:01:10,773 [salt.minion      ][INFO    ][5855] Returning information for job: 20170927100110717667
2017-09-27 10:01:12,981 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:01:13,023 [salt.state       ][INFO    ][4933] Made the following changes:
'python-retrying' changed from 'absent' to '1.3.3-1'
'python-taskflow' changed from 'absent' to '2.9.0-1~u16.04+mcp1'
'libblas.so.3' changed from 'absent' to '1'
'python-numpy-api10' changed from 'absent' to '1'
'liblapack.so.3' changed from 'absent' to '1'
'python-automaton' changed from 'absent' to '1.2.0-1'
'python-numpy' changed from 'absent' to '1:1.11.0-1ubuntu1'
'python-numpy-dev' changed from 'absent' to '1'
'python-cursive' changed from 'absent' to '0.1.1-1~u16.04+mcp0'
'python-ipaddr' changed from 'absent' to '2.1.11-2'
'libblas3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-glance' changed from 'absent' to '2:14.0.0-1~u16.04+mcp7'
'python-wsme' changed from 'absent' to '0.8.0-2ubuntu2'
'python2.7-numpy' changed from 'absent' to '1'
'glance-store-common' changed from 'absent' to '0.20.0-1~u16.04+mcp3'
'python-castellan' changed from 'absent' to '0.4.0-1'
'liblapack3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-f2py' changed from 'absent' to '1'
'python2.7-glance' changed from 'absent' to '1'
'python-numpy-abi9' changed from 'absent' to '1'
'python-kazoo' changed from 'absent' to '2.2.1-1ubuntu1'
'python-networkx' changed from 'absent' to '1.11-1ubuntu1'
'python-semantic-version' changed from 'absent' to '2.3.1-1'
'libblas-common' changed from 'absent' to '3.6.0-2ubuntu2'
'libquadmath0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.4'
'glance-common' changed from 'absent' to '2:14.0.0-1~u16.04+mcp7'
'python-glance-store' changed from 'absent' to '0.20.0-1~u16.04+mcp3'
'libgfortran3' changed from 'absent' to '5.4.0-6ubuntu1~16.04.4'
'glance-api' changed from 'absent' to '2:14.0.0-1~u16.04+mcp7'
'python-swiftclient' changed from 'absent' to '1:3.3.0-1~u16.04+mcp3'

2017-09-27 10:01:13,035 [salt.state       ][INFO    ][4933] Loading fresh modules for state activity
2017-09-27 10:01:13,062 [salt.state       ][INFO    ][4933] Completed state [glance-api] at time 10:01:13.062316 duration_in_ms=29156.091
2017-09-27 10:01:13,068 [salt.state       ][INFO    ][4933] Running state [python-memcache] at time 10:01:13.067619
2017-09-27 10:01:13,068 [salt.state       ][INFO    ][4933] Executing state pkg.installed for python-memcache
2017-09-27 10:01:13,275 [salt.state       ][INFO    ][4933] Package python-memcache is already installed
2017-09-27 10:01:13,275 [salt.state       ][INFO    ][4933] Completed state [python-memcache] at time 10:01:13.274970 duration_in_ms=207.351
2017-09-27 10:01:13,275 [salt.state       ][INFO    ][4933] Running state [python-pycadf] at time 10:01:13.275426
2017-09-27 10:01:13,276 [salt.state       ][INFO    ][4933] Executing state pkg.installed for python-pycadf
2017-09-27 10:01:13,278 [salt.state       ][INFO    ][4933] Package python-pycadf is already installed
2017-09-27 10:01:13,279 [salt.state       ][INFO    ][4933] Completed state [python-pycadf] at time 10:01:13.278667 duration_in_ms=3.241
2017-09-27 10:01:13,279 [salt.state       ][INFO    ][4933] Running state [glance-common] at time 10:01:13.279074
2017-09-27 10:01:13,279 [salt.state       ][INFO    ][4933] Executing state pkg.installed for glance-common
2017-09-27 10:01:13,282 [salt.state       ][INFO    ][4933] Package glance-common is already installed
2017-09-27 10:01:13,282 [salt.state       ][INFO    ][4933] Completed state [glance-common] at time 10:01:13.282142 duration_in_ms=3.067
2017-09-27 10:01:13,283 [salt.state       ][INFO    ][4933] Running state [python-glance-store] at time 10:01:13.282598
2017-09-27 10:01:13,283 [salt.state       ][INFO    ][4933] Executing state pkg.installed for python-glance-store
2017-09-27 10:01:13,285 [salt.state       ][INFO    ][4933] Package python-glance-store is already installed
2017-09-27 10:01:13,286 [salt.state       ][INFO    ][4933] Completed state [python-glance-store] at time 10:01:13.285616 duration_in_ms=3.018
2017-09-27 10:01:13,286 [salt.state       ][INFO    ][4933] Running state [python-glance] at time 10:01:13.286058
2017-09-27 10:01:13,286 [salt.state       ][INFO    ][4933] Executing state pkg.installed for python-glance
2017-09-27 10:01:13,289 [salt.state       ][INFO    ][4933] Package python-glance is already installed
2017-09-27 10:01:13,289 [salt.state       ][INFO    ][4933] Completed state [python-glance] at time 10:01:13.289038 duration_in_ms=2.981
2017-09-27 10:01:13,289 [salt.state       ][INFO    ][4933] Running state [gettext-base] at time 10:01:13.289444
2017-09-27 10:01:13,290 [salt.state       ][INFO    ][4933] Executing state pkg.installed for gettext-base
2017-09-27 10:01:13,292 [salt.state       ][INFO    ][4933] Package gettext-base is already installed
2017-09-27 10:01:13,293 [salt.state       ][INFO    ][4933] Completed state [gettext-base] at time 10:01:13.292504 duration_in_ms=3.061
2017-09-27 10:01:13,293 [salt.state       ][INFO    ][4933] Running state [glance] at time 10:01:13.292909
2017-09-27 10:01:13,293 [salt.state       ][INFO    ][4933] Executing state pkg.installed for glance
2017-09-27 10:01:13,300 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'glance'] in directory '/root'
2017-09-27 10:01:16,923 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:01:16,953 [salt.state       ][INFO    ][4933] Made the following changes:
'glance' changed from 'absent' to '2:14.0.0-1~u16.04+mcp7'
'glance-registry' changed from 'absent' to '2:14.0.0-1~u16.04+mcp7'

2017-09-27 10:01:16,967 [salt.state       ][INFO    ][4933] Loading fresh modules for state activity
2017-09-27 10:01:16,985 [salt.state       ][INFO    ][4933] Completed state [glance] at time 10:01:16.984827 duration_in_ms=3691.918
2017-09-27 10:01:16,992 [salt.state       ][INFO    ][4933] Running state [glance-registry] at time 10:01:16.991701
2017-09-27 10:01:16,992 [salt.state       ][INFO    ][4933] Executing state pkg.installed for glance-registry
2017-09-27 10:01:17,192 [salt.state       ][INFO    ][4933] Package glance-registry is already installed
2017-09-27 10:01:17,193 [salt.state       ][INFO    ][4933] Completed state [glance-registry] at time 10:01:17.192902 duration_in_ms=201.2
2017-09-27 10:01:17,193 [salt.state       ][INFO    ][4933] Running state [python-glanceclient] at time 10:01:17.193407
2017-09-27 10:01:17,194 [salt.state       ][INFO    ][4933] Executing state pkg.installed for python-glanceclient
2017-09-27 10:01:17,196 [salt.state       ][INFO    ][4933] Package python-glanceclient is already installed
2017-09-27 10:01:17,197 [salt.state       ][INFO    ][4933] Completed state [python-glanceclient] at time 10:01:17.196749 duration_in_ms=3.342
2017-09-27 10:01:17,198 [salt.state       ][INFO    ][4933] Running state [/etc/glance/glance-cache.conf] at time 10:01:17.198264
2017-09-27 10:01:17,199 [salt.state       ][INFO    ][4933] Executing state file.managed for /etc/glance/glance-cache.conf
2017-09-27 10:01:17,232 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/files/ocata/glance-cache.conf.Debian'
2017-09-27 10:01:17,271 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/map.jinja'
2017-09-27 10:01:17,293 [salt.state       ][INFO    ][4933] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -226,8 +227,8 @@
 #  (boolean value)
 # This option is deprecated for removal since Newton.
 # Its value may be silently ignored in the future.
-# Reason: This option will be removed in the Pike release or later because the
-# same functionality can be achieved with greater granularity by using policies.
+# Reason: This option will be removed in the Ocata release because the same
+# functionality can be achieved with greater granularity by using policies.
 # Please see the Newton release notes for more information.
 #show_multiple_locations = false
 
@@ -553,6 +554,10 @@
 #  (integer value)
 # Minimum value: 0
 #image_cache_max_size = 10737418240
+image_cache_max_size = 189668314316
+
+
+os_region_name = RegionOne
 
 #
 # The amount of time, in seconds, an incomplete image remains in the cache.
@@ -578,6 +583,7 @@
 #  (integer value)
 # Minimum value: 0
 #image_cache_stall_time = 86400
+image_cache_stall_time = 86400
 
 #
 # Base directory for image cache.
@@ -611,6 +617,7 @@
 #
 #  (string value)
 #image_cache_dir = <None>
+image_cache_dir = /var/lib/glance/image-cache/
 
 #
 # Address the registry server is hosted on.
@@ -623,6 +630,7 @@
 #
 #  (string value)
 #registry_host = 0.0.0.0
+registry_host = 10.167.4.10
 
 #
 # Port the registry server is listening on.
@@ -637,6 +645,7 @@
 # Minimum value: 0
 # Maximum value: 65535
 #registry_port = 9191
+registry_port = 9191
 
 #
 # Protocol to use for communication with the registry server.
@@ -885,6 +894,7 @@
 # log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
+log_file = /var/log/glance/image-cache.log
 
 # (Optional) The base directory used for relative log_file  paths. This option
 # is ignored if log_config_append is set. (string value)
@@ -909,7 +919,7 @@
 
 # Log output to standard error. This option is ignored if log_config_append is
 # set. (boolean value)
-#use_stderr = false
+#use_stderr = true
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
@@ -942,18 +952,6 @@
 # The format for an instance UUID that is passed with the log message. (string
 # value)
 #instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
-# empty string. Logs with level greater or equal to rate_limit_except_level are
-# not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
@@ -1040,7 +1038,7 @@
 # /store-capabilities.html
 #
 # For more information on setting up a particular store in your
-# deployment and help with the usage of this feature, please contact
+# deplyment and help with the usage of this feature, please contact
 # the storage driver maintainers listed here:
 # http://docs.openstack.org/developer/glance_store/drivers/index.html
 #
@@ -1069,8 +1067,8 @@
 #
 # Possible values:
 #     * A string of of the following form:
-#       ``<service_type>:<service_name>:<interface>``
-#       At least ``service_type`` and ``interface`` should be specified.
+#       ``<service_type>:<service_name>:<endpoint_type>``
+#       At least ``service_type`` and ``endpoint_type`` should be specified.
 #       ``service_name`` can be omitted.
 #
 # Related options:
@@ -1293,25 +1291,6 @@
 #
 #  (string value)
 #rootwrap_config = /etc/glance/rootwrap.conf
-
-#
-# Volume type that will be used for volume creation in cinder.
-#
-# Some cinder backends can have several volume types to optimize storage usage.
-# Adding this option allows an operator to choose a specific volume type
-# in cinder that can be optimized for images.
-#
-# If this is not set, then the default volume type specified in the cinder
-# configuration will be used for volume creation.
-#
-# Possible values:
-#     * A valid volume type from cinder
-#
-# Related options:
-#     * None
-#
-#  (string value)
-#cinder_volume_type = <None>
 
 #
 # Directory to which the filesystem backend store writes images.
@@ -1392,7 +1371,6 @@
 #     * None
 #
 #  (string value)
-#filesystem_store_metadata_file = <None>
 
 #
 # File access permissions for the image files.
@@ -1894,16 +1872,12 @@
 # images in its own account. More details multi-tenant store can be found at
 # https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
 #
-# NOTE: If using multi-tenant swift store, please make sure
-# that you do not set a swift configuration file with the
-# 'swift_store_config_file' option.
-#
 # Possible values:
 #     * True
 #     * False
 #
 # Related options:
-#     * swift_store_config_file
+#     * None
 #
 #  (boolean value)
 #swift_store_multi_tenant = false
@@ -2119,15 +2093,12 @@
 # option is highly recommended while using Swift storage backend for
 # image storage as it avoids storage of credentials in the database.
 #
-# NOTE: Please do not configure this option if you have set
-# ``swift_store_multi_tenant`` to ``True``.
-#
 # Possible values:
 #     * String value representing an absolute path on the glance-api
 #       node
 #
 # Related options:
-#     * swift_store_multi_tenant
+#     * None
 #
 #  (string value)
 #swift_store_config_file = <None>
@@ -2310,6 +2281,9 @@
 #  (multi valued)
 #vmware_datastores =
 
+os_region_name = RegionOne
+
+
 
 [oslo_policy]
 
@@ -2317,7 +2291,7 @@
 # From oslo.policy
 #
 
-# The file that defines policies. (string value)
+# The JSON file that defines policies. (string value)
 # Deprecated group/name - [DEFAULT]/policy_file
 #policy_file = policy.json
 

2017-09-27 10:01:17,302 [salt.state       ][INFO    ][4933] Completed state [/etc/glance/glance-cache.conf] at time 10:01:17.302401 duration_in_ms=104.136
2017-09-27 10:01:17,303 [salt.state       ][INFO    ][4933] Running state [/etc/glance/glance-registry.conf] at time 10:01:17.302721
2017-09-27 10:01:17,303 [salt.state       ][INFO    ][4933] Executing state file.managed for /etc/glance/glance-registry.conf
2017-09-27 10:01:17,333 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/files/ocata/glance-registry.conf.Debian'
2017-09-27 10:01:17,386 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/map.jinja'
2017-09-27 10:01:17,408 [salt.state       ][INFO    ][4933] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -212,6 +213,7 @@
 #  (integer value)
 # Minimum value: 1
 #limit_param_default = 25
+limit_param_default = 25
 
 #
 # Maximum number of results that could be returned by a request.
@@ -237,6 +239,7 @@
 #  (integer value)
 # Minimum value: 1
 #api_limit_max = 1000
+api_limit_max = 1000
 
 #
 # Show direct image location when returning an image.
@@ -300,8 +303,8 @@
 #  (boolean value)
 # This option is deprecated for removal since Newton.
 # Its value may be silently ignored in the future.
-# Reason: This option will be removed in the Pike release or later because the
-# same functionality can be achieved with greater granularity by using policies.
+# Reason: This option will be removed in the Ocata release because the same
+# functionality can be achieved with greater granularity by using policies.
 # Please see the Newton release notes for more information.
 #show_multiple_locations = false
 
@@ -567,6 +570,7 @@
 #
 #  (string value)
 #bind_host = 0.0.0.0
+bind_host = 10.167.4.13
 
 #
 # Port number on which the server will listen.
@@ -586,6 +590,7 @@
 # Minimum value: 0
 # Maximum value: 65535
 #bind_port = <None>
+bind_port = 9191
 
 #
 # Set the number of incoming connection requests.
@@ -717,6 +722,7 @@
 #  (integer value)
 # Minimum value: 0
 #workers = <None>
+workers = 8
 
 #
 # Maximum line size of message headers.
@@ -795,12 +801,14 @@
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
+debug = false
 
 # DEPRECATED: If set to false, the logging level will be set to WARNING instead
 # of the default INFO level. (boolean value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #verbose = true
+verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -822,6 +830,7 @@
 # log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
+log_file = /var/log/glance/registry.log
 
 # (Optional) The base directory used for relative log_file  paths. This option
 # is ignored if log_config_append is set. (string value)
@@ -846,7 +855,7 @@
 
 # Log output to standard error. This option is ignored if log_config_append is
 # set. (boolean value)
-#use_stderr = false
+#use_stderr = true
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
@@ -879,18 +888,6 @@
 # The format for an instance UUID that is passed with the log message. (string
 # value)
 #instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
-# empty string. Logs with level greater or equal to rate_limit_except_level are
-# not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
@@ -915,7 +912,7 @@
 #rpc_zmq_bind_address = *
 
 # MatchMaker driver. (string value)
-# Allowed values: redis, sentinel, dummy
+# Allowed values: redis, dummy
 # Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
 #rpc_zmq_matchmaker = redis
 
@@ -937,13 +934,12 @@
 # Deprecated group/name - [DEFAULT]/rpc_zmq_host
 #rpc_zmq_host = localhost
 
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
+# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
+# an infinite linger period. The value of 0 specifies no linger period. Pending
+# messages shall be discarded immediately when the socket is closed. Only
+# supported by impl_zmq. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
+#rpc_cast_timeout = -1
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
@@ -963,20 +959,11 @@
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
 # Deprecated group/name - [DEFAULT]/use_pub_sub
-#use_pub_sub = false
+#use_pub_sub = true
 
 # Use ROUTER remote proxy. (boolean value)
 # Deprecated group/name - [DEFAULT]/use_router_proxy
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
+#use_router_proxy = true
 
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
@@ -1005,62 +992,7 @@
 # a queue when server side disconnects. False means to keep queue and messages
 # even if server is disconnected, when the server appears we send all
 # accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
+#zmq_immediate = false
 
 # Size of executor thread pool. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
@@ -1072,6 +1004,7 @@
 # A URL representing the messaging driver to use and its full configuration.
 # (string value)
 #transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.41:5672,openstack:opnfv_secret@10.167.4.42:5672,openstack:opnfv_secret@10.167.4.43:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -1083,6 +1016,7 @@
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
 #control_exchange = openstack
+control_exchange = openstack
 
 
 [database]
@@ -1106,6 +1040,7 @@
 # The back end to use for the database. (string value)
 # Deprecated group/name - [DEFAULT]/db_backend
 #backend = sqlalchemy
+backend = sqlalchemy
 
 # The SQLAlchemy connection string to use to connect to the database. (string
 # value)
@@ -1113,6 +1048,7 @@
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
 #connection = <None>
+connection = mysql+pymysql://glance:opnfv_secret@10.167.4.50/glance
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
@@ -1129,6 +1065,7 @@
 # Deprecated group/name - [DATABASE]/sql_idle_timeout
 # Deprecated group/name - [sql]/idle_timeout
 #idle_timeout = 3600
+idle_timeout = 3600
 
 # Minimum number of SQL connections to keep open in a pool. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
@@ -1140,12 +1077,14 @@
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
 #max_pool_size = 5
+max_pool_size = 10
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
 #max_retries = 10
+max_retries = -1
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
@@ -1156,6 +1095,7 @@
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
 #max_overflow = 50
+max_overflow = 30
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
@@ -1200,13 +1140,40 @@
 # Deprecated group/name - [DEFAULT]/dbapi_use_tpool
 #use_tpool = false
 
+[glance_store]
+filesystem_store_datadir = /var/lib/glance/images/
+
+swift_store_endpoint_type = internalURL
+
+cinder_catalog_info = volumev2::internalURL
+
+# Override service catalog lookup with template for cinder endpoint
+# e.g. http://localhost:8776/v2/%(tenant)s (string value)
+#cinder_endpoint_template = <None>
+
+# Region name of this node. If specified, it will be used to locate
+# OpenStack services for stores. (string value)
+# Deprecated group/name - [DEFAULT]/os_region_name
+#cinder_os_region_name = <None>
+
+cinder_os_region_name = RegionOne
+
 
 [keystone_authtoken]
 
 #
 # From keystonemiddleware.auth_token
 #
-
+revocation_cache_time = 10
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = glance
+password = opnfv_secret
+auth_uri=http://10.167.4.10:5000
+auth_url=http://10.167.4.10:35357
+memcached_servers=10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211
 # Complete "public" Identity API endpoint. This endpoint should not be an
 # "admin" endpoint, as it should be accessible by all end users. Unauthenticated
 # clients are redirected to this endpoint to authenticate. Although this
@@ -1253,12 +1220,7 @@
 # The region in which the identity server can be found. (string value)
 #region_name = <None>
 
-# DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P release.
-# (string value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# Directory used to cache files related to PKI tokens. (string value)
 #signing_dir = <None>
 
 # Optionally specify a list of memcached server(s) to use for caching. If left
@@ -1271,14 +1233,10 @@
 # -1 to disable caching completely. (integer value)
 #token_cache_time = 300
 
-# DEPRECATED: Determines the frequency at which the list of revoked tokens is
-# retrieved from the Identity service (in seconds). A high number of revocation
-# events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in the
-# Ocata release and will be removed in the P release. (integer value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# Determines the frequency at which the list of revoked tokens is retrieved from
+# the Identity service (in seconds). A high number of revocation events combined
+# with a low cache duration may significantly reduce performance. Only valid for
+# PKI tokens. (integer value)
 #revocation_cache_time = 10
 
 # (Optional) If defined, indicate whether token data should be authenticated or
@@ -1331,40 +1289,19 @@
 # value)
 #enforce_token_bind = permissive
 
-# DEPRECATED: If true, the revocation list will be checked for cached tokens.
-# This requires that PKI tokens are configured on the identity server. (boolean
-# value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# If true, the revocation list will be checked for cached tokens. This requires
+# that PKI tokens are configured on the identity server. (boolean value)
 #check_revocations_for_cached = false
 
-# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
-# single algorithm or multiple. The algorithms are those supported by Python
-# standard hashlib.new(). The hashes will be tried in the order given, so put
-# the preferred one first for performance. The result of the first hash will be
+# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
+# or multiple. The algorithms are those supported by Python standard
+# hashlib.new(). The hashes will be tried in the order given, so put the
+# preferred one first for performance. The result of the first hash will be
 # stored in the cache. This will typically be set to multiple values only while
 # migrating from a less secure algorithm to a more secure one. Once all the old
 # tokens are expired this option should be set to a single value for better
 # performance. (list value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
 #hash_algorithms = md5
-
-# A choice of roles that must be present in a service token. Service tokens are
-# allowed to request that an expired token can be used and so this check should
-# tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present. For
-# backwards compatibility reasons this currently only affects the allow_expired
-# check. (list value)
-#service_token_roles = service
-
-# For backwards compatibility reasons we must let valid service tokens pass that
-# don't pass the service_token_roles check as valid. Setting this true will
-# become the default in a future release and should be enabled if possible.
-# (boolean value)
-#service_token_roles_required = false
 
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
@@ -1400,7 +1337,7 @@
 # Reason: Replaced by [DEFAULT]/transport_url
 #password =
 
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
+# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
 # [host:port, host1:port ... ] (list value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
@@ -1416,7 +1353,7 @@
 # Time in ms to wait before the transaction is killed. (integer value)
 #check_timeout = 20000
 
-# Timeout in ms on blocking socket operations. (integer value)
+# Timeout in ms on blocking socket operations (integer value)
 #socket_timeout = 10000
 
 
@@ -1439,16 +1376,15 @@
 # Deprecated group/name - [amqp1]/trace
 #trace = false
 
-# CA certificate PEM file used to verify the server's certificate (string value)
+# CA certificate PEM file to verify server certificate (string value)
 # Deprecated group/name - [amqp1]/ssl_ca_file
 #ssl_ca_file =
 
-# Self-identifying certificate PEM file for client authentication (string value)
+# Identifying certificate PEM file to present to clients (string value)
 # Deprecated group/name - [amqp1]/ssl_cert_file
 #ssl_cert_file =
 
-# Private key PEM file used to sign ssl_cert_file certificate (optional) (string
-# value)
+# Private key PEM file used to sign cert_file certificate (string value)
 # Deprecated group/name - [amqp1]/ssl_key_file
 #ssl_key_file =
 
@@ -1456,11 +1392,8 @@
 # Deprecated group/name - [amqp1]/ssl_key_password
 #ssl_key_password = <None>
 
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
+# Accept clients using either SSL or plain TCP (boolean value)
 # Deprecated group/name - [amqp1]/allow_insecure_clients
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
 #allow_insecure_clients = false
 
 # Space separated list of acceptable SASL mechanisms (string value)
@@ -1502,12 +1435,8 @@
 # Minimum value: 1
 #link_retry_delay = 10
 
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
+# The deadline for an rpc reply message delivery. Only used when caller does not
+# provide a timeout expiry. (integer value)
 # Minimum value: 5
 #default_reply_timeout = 30
 
@@ -1520,11 +1449,6 @@
 # does not provide a timeout expiry. (integer value)
 # Minimum value: 5
 #default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
 
 # Indicates the addressing mode used by the driver.
 # Permitted values:
@@ -1573,6 +1497,7 @@
 # else control_exchange if set
 # else 'notify' (string value)
 #default_notification_exchange = <None>
+default_notification_exchange = glance
 
 # Exchange name used in RPC addresses.
 # Exchange name resolution precedence:
@@ -1594,66 +1519,6 @@
 # Minimum value: 1
 #notify_server_credit = 100
 
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (integer value)
-#kafka_consumer_timeout = 1.0
-
-# Pool Size for Kafka Consumers (integer value)
-#pool_size = 10
-
-# The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
-
-# The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating point
-# value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
 
 [oslo_messaging_notifications]
 
@@ -1665,7 +1530,7 @@
 # messagingv2, routing, log, test, noop (multi valued)
 # Deprecated group/name - [DEFAULT]/notification_driver
 #driver =
-
+driver = messagingv2
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
 # Deprecated group/name - [DEFAULT]/notification_transport_url
@@ -1773,7 +1638,6 @@
 #rabbit_password = guest
 
 # The RabbitMQ login method. (string value)
-# Allowed values: PLAIN, AMQPLAIN, RABBIT-CR-DEMO
 # Deprecated group/name - [DEFAULT]/rabbit_login_method
 #rabbit_login_method = AMQPLAIN
 
@@ -1806,7 +1670,7 @@
 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
 # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
 # is no longer controlled by the x-ha-policy argument when declaring a queue. If
-# you just want to make sure that all queues (except those with auto-generated
+# you just want to make sure that all queues (except  those with auto-generated
 # names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
 # '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
 # Deprecated group/name - [DEFAULT]/rabbit_ha_queues
@@ -1883,11 +1747,6 @@
 # (integer value)
 #pool_stale = 60
 
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Allowed values: json, msgpack
-#default_serializer_type = json
-
 # Persist notification messages. (boolean value)
 #notification_persistence = false
 
@@ -1933,7 +1792,7 @@
 
 # Reconnecting retry count in case of connectivity problem during sending RPC
 # message, -1 means infinite retry. If actual retry attempts in not 0 the rpc
-# request could be processed more than one time (integer value)
+# request could be processed more then one time (integer value)
 #default_rpc_retry_attempts = -1
 
 # Reconnecting retry delay in case of connectivity problem during sending RPC
@@ -1953,7 +1812,7 @@
 #rpc_zmq_bind_address = *
 
 # MatchMaker driver. (string value)
-# Allowed values: redis, sentinel, dummy
+# Allowed values: redis, dummy
 # Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
 #rpc_zmq_matchmaker = redis
 
@@ -1975,13 +1834,12 @@
 # Deprecated group/name - [DEFAULT]/rpc_zmq_host
 #rpc_zmq_host = localhost
 
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
+# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
+# an infinite linger period. The value of 0 specifies no linger period. Pending
+# messages shall be discarded immediately when the socket is closed. Only
+# supported by impl_zmq. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
+#rpc_cast_timeout = -1
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
@@ -2001,20 +1859,11 @@
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
 # Deprecated group/name - [DEFAULT]/use_pub_sub
-#use_pub_sub = false
+#use_pub_sub = true
 
 # Use ROUTER remote proxy. (boolean value)
 # Deprecated group/name - [DEFAULT]/use_router_proxy
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
+#use_router_proxy = true
 
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
@@ -2043,62 +1892,7 @@
 # a queue when server side disconnects. False means to keep queue and messages
 # even if server is disconnected, when the server appears we send all
 # accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
+#zmq_immediate = false
 
 
 [oslo_policy]
@@ -2107,9 +1901,10 @@
 # From oslo.policy
 #
 
-# The file that defines policies. (string value)
+# The JSON file that defines policies. (string value)
 # Deprecated group/name - [DEFAULT]/policy_file
 #policy_file = policy.json
+policy_file = /etc/glance/policy.json
 
 # Default rule. Enforced when a requested rule is not found. (string value)
 # Deprecated group/name - [DEFAULT]/policy_default_rule
@@ -2150,6 +1945,7 @@
 #
 #  (string value)
 #flavor = keystone
+flavor = keystone
 
 #
 # Name of the paste configuration file.
@@ -2241,39 +2037,5 @@
 # Examples of possible values:
 #
 # * messaging://: use oslo_messaging driver for sending notifications.
-# * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications.
-# * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending
-# notifications.
 #  (string value)
 #connection_string = messaging://
-
-#
-# Document type for notification indexing in elasticsearch.
-#  (string value)
-#es_doc_type = notification
-
-#
-# This parameter is a time value parameter (for example: es_scroll_time=2m),
-# indicating for how long the nodes that participate in the search will maintain
-# relevant resources in order to continue and support it.
-#  (string value)
-#es_scroll_time = 2m
-
-#
-# Elasticsearch splits large requests in batches. This parameter defines
-# maximum size of each batch (for example: es_scroll_size=10000).
-#  (integer value)
-#es_scroll_size = 10000
-
-#
-# Redissentinel provides a timeout option on the connections.
-# This parameter defines that timeout (for example: socket_timeout=0.1).
-#  (floating point value)
-#socket_timeout = 0.1
-
-#
-# Redissentinel uses a service name to identify a master redis service.
-# This parameter defines the name (for example:
-# sentinal_service_name=mymaster).
-#  (string value)
-#sentinel_service_name = mymaster

2017-09-27 10:01:17,411 [salt.state       ][INFO    ][4933] Completed state [/etc/glance/glance-registry.conf] at time 10:01:17.411117 duration_in_ms=108.392
2017-09-27 10:01:17,412 [salt.state       ][INFO    ][4933] Running state [/etc/glance/glance-scrubber.conf] at time 10:01:17.412008
2017-09-27 10:01:17,412 [salt.state       ][INFO    ][4933] Executing state file.managed for /etc/glance/glance-scrubber.conf
2017-09-27 10:01:17,443 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/files/ocata/glance-scrubber.conf.Debian'
2017-09-27 10:01:17,508 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/map.jinja'
2017-09-27 10:01:17,541 [salt.state       ][INFO    ][4933] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -226,8 +227,8 @@
 #  (boolean value)
 # This option is deprecated for removal since Newton.
 # Its value may be silently ignored in the future.
-# Reason: This option will be removed in the Pike release or later because the
-# same functionality can be achieved with greater granularity by using policies.
+# Reason: This option will be removed in the Ocata release because the same
+# functionality can be achieved with greater granularity by using policies.
 # Please see the Newton release notes for more information.
 #show_multiple_locations = false
 
@@ -620,6 +621,7 @@
 #  (integer value)
 # Minimum value: 0
 #wakeup_time = 300
+wakeup_time = 300
 
 #
 # Run scrubber as a daemon.
@@ -643,6 +645,7 @@
 #
 #  (boolean value)
 #daemon = false
+daemon = false
 
 #
 # Protocol to use for communication with the registry server.
@@ -867,6 +870,7 @@
 #
 #  (string value)
 #registry_host = 0.0.0.0
+registry_host = 10.167.4.10
 
 #
 # Port the registry server is listening on.
@@ -881,6 +885,7 @@
 # Minimum value: 0
 # Maximum value: 65535
 #registry_port = 9191
+registry_port = 9191
 
 #
 # From oslo.log
@@ -917,6 +922,7 @@
 # log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
+log_file = /var/log/glance/scrubber.log
 
 # (Optional) The base directory used for relative log_file  paths. This option
 # is ignored if log_config_append is set. (string value)
@@ -941,7 +947,7 @@
 
 # Log output to standard error. This option is ignored if log_config_append is
 # set. (boolean value)
-#use_stderr = false
+#use_stderr = true
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
@@ -974,18 +980,6 @@
 # The format for an instance UUID that is passed with the log message. (string
 # value)
 #instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
-# empty string. Logs with level greater or equal to rate_limit_except_level are
-# not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
@@ -1188,7 +1182,7 @@
 # /store-capabilities.html
 #
 # For more information on setting up a particular store in your
-# deployment and help with the usage of this feature, please contact
+# deplyment and help with the usage of this feature, please contact
 # the storage driver maintainers listed here:
 # http://docs.openstack.org/developer/glance_store/drivers/index.html
 #
@@ -1217,8 +1211,8 @@
 #
 # Possible values:
 #     * A string of of the following form:
-#       ``<service_type>:<service_name>:<interface>``
-#       At least ``service_type`` and ``interface`` should be specified.
+#       ``<service_type>:<service_name>:<endpoint_type>``
+#       At least ``service_type`` and ``endpoint_type`` should be specified.
 #       ``service_name`` can be omitted.
 #
 # Related options:
@@ -1441,25 +1435,6 @@
 #
 #  (string value)
 #rootwrap_config = /etc/glance/rootwrap.conf
-
-#
-# Volume type that will be used for volume creation in cinder.
-#
-# Some cinder backends can have several volume types to optimize storage usage.
-# Adding this option allows an operator to choose a specific volume type
-# in cinder that can be optimized for images.
-#
-# If this is not set, then the default volume type specified in the cinder
-# configuration will be used for volume creation.
-#
-# Possible values:
-#     * A valid volume type from cinder
-#
-# Related options:
-#     * None
-#
-#  (string value)
-#cinder_volume_type = <None>
 
 #
 # Directory to which the filesystem backend store writes images.
@@ -1540,7 +1515,6 @@
 #     * None
 #
 #  (string value)
-#filesystem_store_metadata_file = <None>
 
 #
 # File access permissions for the image files.
@@ -2042,16 +2016,12 @@
 # images in its own account. More details multi-tenant store can be found at
 # https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
 #
-# NOTE: If using multi-tenant swift store, please make sure
-# that you do not set a swift configuration file with the
-# 'swift_store_config_file' option.
-#
 # Possible values:
 #     * True
 #     * False
 #
 # Related options:
-#     * swift_store_config_file
+#     * None
 #
 #  (boolean value)
 #swift_store_multi_tenant = false
@@ -2267,15 +2237,12 @@
 # option is highly recommended while using Swift storage backend for
 # image storage as it avoids storage of credentials in the database.
 #
-# NOTE: Please do not configure this option if you have set
-# ``swift_store_multi_tenant`` to ``True``.
-#
 # Possible values:
 #     * String value representing an absolute path on the glance-api
 #       node
 #
 # Related options:
-#     * swift_store_multi_tenant
+#     * None
 #
 #  (string value)
 #swift_store_config_file = <None>
@@ -2483,7 +2450,7 @@
 # From oslo.policy
 #
 
-# The file that defines policies. (string value)
+# The JSON file that defines policies. (string value)
 # Deprecated group/name - [DEFAULT]/policy_file
 #policy_file = policy.json
 

2017-09-27 10:01:17,541 [salt.state       ][INFO    ][4933] Completed state [/etc/glance/glance-scrubber.conf] at time 10:01:17.541019 duration_in_ms=129.011
2017-09-27 10:01:17,542 [salt.state       ][INFO    ][4933] Running state [/etc/glance/glance-api.conf] at time 10:01:17.541548
2017-09-27 10:01:17,542 [salt.state       ][INFO    ][4933] Executing state file.managed for /etc/glance/glance-api.conf
2017-09-27 10:01:17,573 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/files/ocata/glance-api.conf.Debian'
2017-09-27 10:01:17,727 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/map.jinja'
2017-09-27 10:01:17,796 [salt.state       ][INFO    ][4933] File changed:
--- 
+++ 
@@ -1,3 +1,5 @@
+
+
 [DEFAULT]
 
 #
@@ -233,6 +235,7 @@
 #  (integer value)
 # Minimum value: 1
 #limit_param_default = 25
+limit_param_default = 25
 
 #
 # Maximum number of results that could be returned by a request.
@@ -258,6 +261,7 @@
 #  (integer value)
 # Minimum value: 1
 #api_limit_max = 1000
+api_limit_max = 1000
 
 #
 # Show direct image location when returning an image.
@@ -290,6 +294,7 @@
 #
 #  (boolean value)
 #show_image_direct_url = false
+show_image_direct_url = true
 
 # DEPRECATED:
 # Show all image locations when returning an image.
@@ -321,10 +326,11 @@
 #  (boolean value)
 # This option is deprecated for removal since Newton.
 # Its value may be silently ignored in the future.
-# Reason: This option will be removed in the Pike release or later because the
-# same functionality can be achieved with greater granularity by using policies.
+# Reason: This option will be removed in the Ocata release because the same
+# functionality can be achieved with greater granularity by using policies.
 # Please see the Newton release notes for more information.
 #show_multiple_locations = false
+show_multiple_locations = true
 
 #
 # Maximum size of image a user can upload in bytes.
@@ -409,6 +415,7 @@
 #
 #  (boolean value)
 #enable_v1_api = true
+enable_v1_api=False
 
 #
 # Deploy the v2 OpenStack Images API.
@@ -439,6 +446,7 @@
 #
 #  (boolean value)
 #enable_v2_api = true
+enable_v2_api=True
 
 #
 # Deploy the v1 API Registry service.
@@ -595,6 +603,9 @@
 #  (string value)
 # Allowed values: location_order, store_type
 #location_strategy = location_order
+
+location_strategy = location_order
+
 
 #
 # The location of the property protection file.
@@ -689,6 +700,7 @@
 #
 #  (string value)
 #bind_host = 0.0.0.0
+bind_host = 10.167.4.13
 
 #
 # Port number on which the server will listen.
@@ -708,6 +720,7 @@
 # Minimum value: 0
 # Maximum value: 65535
 #bind_port = <None>
+bind_port = 9292
 
 #
 # Number of Glance worker processes to start.
@@ -732,6 +745,7 @@
 #  (integer value)
 # Minimum value: 0
 #workers = <None>
+workers = 8
 
 #
 # Maximum line size of message headers.
@@ -988,6 +1002,7 @@
 #  (integer value)
 # Minimum value: 0
 #image_cache_max_size = 10737418240
+
 
 #
 # The amount of time, in seconds, an incomplete image remains in the cache.
@@ -1105,6 +1120,7 @@
 #
 #  (string value)
 #registry_host = 0.0.0.0
+registry_host = 10.167.4.10
 
 #
 # Port the registry server is listening on.
@@ -1119,6 +1135,7 @@
 # Minimum value: 0
 # Maximum value: 65535
 #registry_port = 9191
+registry_port = 9191
 
 # DEPRECATED: Whether to pass through the user token when making requests to the
 # registry. To prevent failures with token expiration during big files upload,
@@ -1194,6 +1211,9 @@
 # Keystone trusts support.
 #auth_region = <None>
 
+auth_region = RegionOne
+
+
 #
 # Protocol to use for communication with the registry server.
 #
@@ -1220,6 +1240,7 @@
 #  (string value)
 # Allowed values: http, https
 #registry_client_protocol = http
+registry_client_protocol = http
 
 #
 # Absolute path to the private key file.
@@ -1447,12 +1468,14 @@
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
+debug = false
 
 # DEPRECATED: If set to false, the logging level will be set to WARNING instead
 # of the default INFO level. (boolean value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #verbose = true
+verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -1474,6 +1497,7 @@
 # log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
+log_file = /var/log/glance/api.log
 
 # (Optional) The base directory used for relative log_file  paths. This option
 # is ignored if log_config_append is set. (string value)
@@ -1567,7 +1591,7 @@
 #rpc_zmq_bind_address = *
 
 # MatchMaker driver. (string value)
-# Allowed values: redis, sentinel, dummy
+# Allowed values: redis, dummy
 # Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
 #rpc_zmq_matchmaker = redis
 
@@ -1621,15 +1645,6 @@
 # Deprecated group/name - [DEFAULT]/use_router_proxy
 #use_router_proxy = false
 
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
 # Maximum value: 65535
@@ -1657,8 +1672,7 @@
 # a queue when server side disconnects. False means to keep queue and messages
 # even if server is disconnected, when the server appears we send all
 # accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
+#zmq_immediate = false
 # Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
 # other negative value) means to skip any overrides and leave it to OS default;
 # 0 and 1 (or any other positive value) mean to disable and enable the option
@@ -1724,6 +1738,7 @@
 # A URL representing the messaging driver to use and its full configuration.
 # (string value)
 #transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.41:5672,openstack:opnfv_secret@10.167.4.42:5672,openstack:opnfv_secret@10.167.4.43:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -1735,6 +1750,8 @@
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
 #control_exchange = openstack
+control_exchange = openstack
+scrubber_datadir=/var/lib/glance/scrubber
 
 
 [cors]
@@ -1816,6 +1833,7 @@
 # The back end to use for the database. (string value)
 # Deprecated group/name - [DEFAULT]/db_backend
 #backend = sqlalchemy
+backend = sqlalchemy
 
 # The SQLAlchemy connection string to use to connect to the database. (string
 # value)
@@ -1823,6 +1841,7 @@
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
 #connection = <None>
+connection = mysql+pymysql://glance:opnfv_secret@10.167.4.50/glance
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
@@ -1839,6 +1858,7 @@
 # Deprecated group/name - [DATABASE]/sql_idle_timeout
 # Deprecated group/name - [sql]/idle_timeout
 #idle_timeout = 3600
+idle_timeout = 3600
 
 # Minimum number of SQL connections to keep open in a pool. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
@@ -1856,6 +1876,7 @@
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
 #max_retries = 10
+max_retries = -1
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
@@ -1866,6 +1887,7 @@
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
 #max_overflow = 50
+max_overflow = 30
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
@@ -1939,7 +1961,8 @@
 #
 #  (list value)
 #stores = file,http
-
+default_store = file
+stores = file,http
 #
 # The default scheme to use for storing images.
 #
@@ -1992,7 +2015,7 @@
 # /store-capabilities.html
 #
 # For more information on setting up a particular store in your
-# deployment and help with the usage of this feature, please contact
+# deplyment and help with the usage of this feature, please contact
 # the storage driver maintainers listed here:
 # http://docs.openstack.org/developer/glance_store/drivers/index.html
 #
@@ -2021,8 +2044,8 @@
 #
 # Possible values:
 #     * A string of of the following form:
-#       ``<service_type>:<service_name>:<interface>``
-#       At least ``service_type`` and ``interface`` should be specified.
+#       ``<service_type>:<service_name>:<endpoint_type>``
+#       At least ``service_type`` and ``endpoint_type`` should be specified.
 #       ``service_name`` can be omitted.
 #
 # Related options:
@@ -2035,6 +2058,7 @@
 #
 #  (string value)
 #cinder_catalog_info = volumev2::publicURL
+cinder_catalog_info = volumev2::internalURL
 
 #
 # Override service catalog lookup with template for cinder endpoint.
@@ -2079,6 +2103,9 @@
 #  (string value)
 # Deprecated group/name - [glance_store]/os_region_name
 #cinder_os_region_name = <None>
+
+cinder_os_region_name = RegionOne
+
 
 #
 # Location of a CA certificates file used for cinder client requests.
@@ -2247,25 +2274,6 @@
 #rootwrap_config = /etc/glance/rootwrap.conf
 
 #
-# Volume type that will be used for volume creation in cinder.
-#
-# Some cinder backends can have several volume types to optimize storage usage.
-# Adding this option allows an operator to choose a specific volume type
-# in cinder that can be optimized for images.
-#
-# If this is not set, then the default volume type specified in the cinder
-# configuration will be used for volume creation.
-#
-# Possible values:
-#     * A valid volume type from cinder
-#
-# Related options:
-#     * None
-#
-#  (string value)
-#cinder_volume_type = <None>
-
-#
 # Directory to which the filesystem backend store writes images.
 #
 # Upon start up, Glance creates the directory if it doesn't already
@@ -2290,6 +2298,7 @@
 #
 #  (string value)
 #filesystem_store_datadir = /var/lib/glance/images
+filesystem_store_datadir = /var/lib/glance/images/
 
 #
 # List of directories and their priorities to which the filesystem
@@ -2344,7 +2353,6 @@
 #     * None
 #
 #  (string value)
-#filesystem_store_metadata_file = <None>
 
 #
 # File access permissions for the image files.
@@ -2433,106 +2441,6 @@
 #http_proxy_information =
 
 #
-# Size, in megabytes, to chunk RADOS images into.
-#
-# Provide an integer value representing the size in megabytes to chunk
-# Glance images into. The default chunk size is 8 megabytes. For optimal
-# performance, the value should be a power of two.
-#
-# When Ceph's RBD object storage system is used as the storage backend
-# for storing Glance images, the images are chunked into objects of the
-# size set using this option. These chunked objects are then stored
-# across the distributed block data store to use for Glance.
-#
-# Possible Values:
-#     * Any positive integer value
-#
-# Related options:
-#     * None
-#
-#  (integer value)
-# Minimum value: 1
-#rbd_store_chunk_size = 8
-
-#
-# RADOS pool in which images are stored.
-#
-# When RBD is used as the storage backend for storing Glance images, the
-# images are stored by means of logical grouping of the objects (chunks
-# of images) into a ``pool``. Each pool is defined with the number of
-# placement groups it can contain. The default pool that is used is
-# 'images'.
-#
-# More information on the RBD storage backend can be found here:
-# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
-#
-# Possible Values:
-#     * A valid pool name
-#
-# Related options:
-#     * None
-#
-#  (string value)
-#rbd_store_pool = images
-
-#
-# RADOS user to authenticate as.
-#
-# This configuration option takes in the RADOS user to authenticate as.
-# This is only needed when RADOS authentication is enabled and is
-# applicable only if the user is using Cephx authentication. If the
-# value for this option is not set by the user or is set to None, a
-# default value will be chosen, which will be based on the client.
-# section in rbd_store_ceph_conf.
-#
-# Possible Values:
-#     * A valid RADOS user
-#
-# Related options:
-#     * rbd_store_ceph_conf
-#
-#  (string value)
-#rbd_store_user = <None>
-
-#
-# Ceph configuration file path.
-#
-# This configuration option takes in the path to the Ceph configuration
-# file to be used. If the value for this option is not set by the user
-# or is set to None, librados will locate the default configuration file
-# which is located at /etc/ceph/ceph.conf. If using Cephx
-# authentication, this file should include a reference to the right
-# keyring in a client.<USER> section
-#
-# Possible Values:
-#     * A valid path to a configuration file
-#
-# Related options:
-#     * rbd_store_user
-#
-#  (string value)
-#rbd_store_ceph_conf = /etc/ceph/ceph.conf
-
-#
-# Timeout value for connecting to Ceph cluster.
-#
-# This configuration option takes in the timeout value in seconds used
-# when connecting to the Ceph cluster i.e. it sets the time to wait for
-# glance-api before closing the connection. This prevents glance-api
-# hangups during the connection to RBD. If the value for this option
-# is set to less than or equal to 0, no timeout is set and the default
-# librados value is used.
-#
-# Possible Values:
-#     * Any integer value
-#
-# Related options:
-#     * None
-#
-#  (integer value)
-#rados_connect_timeout = 0
-
-#
 # Chunk size for images to be stored in Sheepdog data store.
 #
 # Provide an integer value representing the size in mebibyte
@@ -2620,469 +2528,6 @@
 #     * swift_store_cacert
 #
 #  (boolean value)
-#swift_store_auth_insecure = false
-
-#
-# Path to the CA bundle file.
-#
-# This configuration option enables the operator to specify the path to
-# a custom Certificate Authority file for SSL verification when
-# connecting to Swift.
-#
-# Possible values:
-#     * A valid path to a CA file
-#
-# Related options:
-#     * swift_store_auth_insecure
-#
-#  (string value)
-#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt
-
-#
-# The region of Swift endpoint to use by Glance.
-#
-# Provide a string value representing a Swift region where Glance
-# can connect to for image storage. By default, there is no region
-# set.
-#
-# When Glance uses Swift as the storage backend to store images
-# for a specific tenant that has multiple endpoints, setting of a
-# Swift region with ``swift_store_region`` allows Glance to connect
-# to Swift in the specified region as opposed to a single region
-# connectivity.
-#
-# This option can be configured for both single-tenant and
-# multi-tenant storage.
-#
-# NOTE: Setting the region with ``swift_store_region`` is
-# tenant-specific and is necessary ``only if`` the tenant has
-# multiple endpoints across different regions.
-#
-# Possible values:
-#     * A string value representing a valid Swift region.
-#
-# Related Options:
-#     * None
-#
-#  (string value)
-#swift_store_region = RegionTwo
-
-#
-# The URL endpoint to use for Swift backend storage.
-#
-# Provide a string value representing the URL endpoint to use for
-# storing Glance images in Swift store. By default, an endpoint
-# is not set and the storage URL returned by ``auth`` is used.
-# Setting an endpoint with ``swift_store_endpoint`` overrides the
-# storage URL and is used for Glance image storage.
-#
-# NOTE: The URL should include the path up to, but excluding the
-# container. The location of an object is obtained by appending
-# the container and object to the configured URL.
-#
-# Possible values:
-#     * String value representing a valid URL path up to a Swift container
-#
-# Related Options:
-#     * None
-#
-#  (string value)
-#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name
-
-#
-# Endpoint Type of Swift service.
-#
-# This string value indicates the endpoint type to use to fetch the
-# Swift endpoint. The endpoint type determines the actions the user will
-# be allowed to perform, for instance, reading and writing to the Store.
-# This setting is only used if swift_store_auth_version is greater than
-# 1.
-#
-# Possible values:
-#     * publicURL
-#     * adminURL
-#     * internalURL
-#
-# Related options:
-#     * swift_store_endpoint
-#
-#  (string value)
-# Allowed values: publicURL, adminURL, internalURL
-#swift_store_endpoint_type = publicURL
-
-#
-# Type of Swift service to use.
-#
-# Provide a string value representing the service type to use for
-# storing images while using Swift backend storage. The default
-# service type is set to ``object-store``.
-#
-# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
-# this configuration option needs to be ``object-store``. If using
-# a higher version of Keystone or a different auth scheme, this
-# option may be modified.
-#
-# Possible values:
-#     * A string representing a valid service type for Swift storage.
-#
-# Related Options:
-#     * None
-#
-#  (string value)
-#swift_store_service_type = object-store
-
-#
-# Name of single container to store images/name prefix for multiple containers
-#
-# When a single container is being used to store images, this configuration
-# option indicates the container within the Glance account to be used for
-# storing all images. When multiple containers are used to store images, this
-# will be the name prefix for all containers. Usage of single/multiple
-# containers can be controlled using the configuration option
-# ``swift_store_multiple_containers_seed``.
-#
-# When using multiple containers, the containers will be named after the value
-# set for this configuration option with the first N chars of the image UUID
-# as the suffix delimited by an underscore (where N is specified by
-# ``swift_store_multiple_containers_seed``).
-#
-# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
-# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
-# the container ``glance_fda``. All dashes in the UUID are included when
-# creating the container name but do not count toward the character limit, so
-# when N=10 the container name would be ``glance_fdae39a1-ba.``
-#
-# Possible values:
-#     * If using single container, this configuration option can be any string
-#       that is a valid swift container name in Glance's Swift account
-#     * If using multiple containers, this configuration option can be any
-#       string as long as it satisfies the container naming rules enforced by
-#       Swift. The value of ``swift_store_multiple_containers_seed`` should be
-#       taken into account as well.
-#
-# Related options:
-#     * ``swift_store_multiple_containers_seed``
-#     * ``swift_store_multi_tenant``
-#     * ``swift_store_create_container_on_put``
-#
-#  (string value)
-#swift_store_container = glance
-
-#
-# The size threshold, in MB, after which Glance will start segmenting image
-# data.
-#
-# Swift has an upper limit on the size of a single uploaded object. By default,
-# this is 5GB. To upload objects bigger than this limit, objects are segmented
-# into multiple smaller objects that are tied together with a manifest file.
-# For more detail, refer to
-# http://docs.openstack.org/developer/swift/overview_large_objects.html
-#
-# This configuration option specifies the size threshold over which the Swift
-# driver will start segmenting image data into multiple smaller files.
-# Currently, the Swift driver only supports creating Dynamic Large Objects.
-#
-# NOTE: This should be set by taking into account the large object limit
-# enforced by the Swift cluster in consideration.
-#
-# Possible values:
-#     * A positive integer that is less than or equal to the large object limit
-#       enforced by the Swift cluster in consideration.
-#
-# Related options:
-#     * ``swift_store_large_object_chunk_size``
-#
-#  (integer value)
-# Minimum value: 1
-#swift_store_large_object_size = 5120
-
-#
-# The maximum size, in MB, of the segments when image data is segmented.
-#
-# When image data is segmented to upload images that are larger than the limit
-# enforced by the Swift cluster, image data is broken into segments that are no
-# bigger than the size specified by this configuration option.
-# Refer to ``swift_store_large_object_size`` for more detail.
-#
-# For example: if ``swift_store_large_object_size`` is 5GB and
-# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
-# segmented into 7 segments where the first six segments will be 1GB in size and
-# the seventh segment will be 0.2GB.
-#
-# Possible values:
-#     * A positive integer that is less than or equal to the large object limit
-#       enforced by Swift cluster in consideration.
-#
-# Related options:
-#     * ``swift_store_large_object_size``
-#
-#  (integer value)
-# Minimum value: 1
-#swift_store_large_object_chunk_size = 200
-
-#
-# Create container, if it doesn't already exist, when uploading image.
-#
-# At the time of uploading an image, if the corresponding container doesn't
-# exist, it will be created provided this configuration option is set to True.
-# By default, it won't be created. This behavior is applicable for both single
-# and multiple containers mode.
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related options:
-#     * None
-#
-#  (boolean value)
-#swift_store_create_container_on_put = false
-
-#
-# Store images in tenant's Swift account.
-#
-# This enables multi-tenant storage mode which causes Glance images to be stored
-# in tenant specific Swift accounts. If this is disabled, Glance stores all
-# images in its own account. More details multi-tenant store can be found at
-# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
-#
-# NOTE: If using multi-tenant swift store, please make sure
-# that you do not set a swift configuration file with the
-# 'swift_store_config_file' option.
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related options:
-#     * swift_store_config_file
-#
-#  (boolean value)
-#swift_store_multi_tenant = false
-
-#
-# Seed indicating the number of containers to use for storing images.
-#
-# When using a single-tenant store, images can be stored in one or more than one
-# containers. When set to 0, all images will be stored in one single container.
-# When set to an integer value between 1 and 32, multiple containers will be
-# used to store images. This configuration option will determine how many
-# containers are created. The total number of containers that will be used is
-# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
-# will be used to store images.
-#
-# Please refer to ``swift_store_container`` for more detail on the naming
-# convention. More detail about using multiple containers can be found at
-# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-
-# multiple-containers.html
-#
-# NOTE: This is used only when swift_store_multi_tenant is disabled.
-#
-# Possible values:
-#     * A non-negative integer less than or equal to 32
-#
-# Related options:
-#     * ``swift_store_container``
-#     * ``swift_store_multi_tenant``
-#     * ``swift_store_create_container_on_put``
-#
-#  (integer value)
-# Minimum value: 0
-# Maximum value: 32
-#swift_store_multiple_containers_seed = 0
-
-#
-# List of tenants that will be granted admin access.
-#
-# This is a list of tenants that will be granted read/write access on
-# all Swift containers created by Glance in multi-tenant mode. The
-# default value is an empty list.
-#
-# Possible values:
-#     * A comma separated list of strings representing UUIDs of Keystone
-#       projects/tenants
-#
-# Related options:
-#     * None
-#
-#  (list value)
-#swift_store_admin_tenants =
-
-#
-# SSL layer compression for HTTPS Swift requests.
-#
-# Provide a boolean value to determine whether or not to compress
-# HTTPS Swift requests for images at the SSL layer. By default,
-# compression is enabled.
-#
-# When using Swift as the backend store for Glance image storage,
-# SSL layer compression of HTTPS Swift requests can be set using
-# this option. If set to False, SSL layer compression of HTTPS
-# Swift requests is disabled. Disabling this option may improve
-# performance for images which are already in a compressed format,
-# for example, qcow2.
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related Options:
-#     * None
-#
-#  (boolean value)
-#swift_store_ssl_compression = true
-
-#
-# The number of times a Swift download will be retried before the
-# request fails.
-#
-# Provide an integer value representing the number of times an image
-# download must be retried before erroring out. The default value is
-# zero (no retry on a failed image download). When set to a positive
-# integer value, ``swift_store_retry_get_count`` ensures that the
-# download is attempted this many more times upon a download failure
-# before sending an error message.
-#
-# Possible values:
-#     * Zero
-#     * Positive integer value
-#
-# Related Options:
-#     * None
-#
-#  (integer value)
-# Minimum value: 0
-#swift_store_retry_get_count = 0
-
-#
-# Time in seconds defining the size of the window in which a new
-# token may be requested before the current token is due to expire.
-#
-# Typically, the Swift storage driver fetches a new token upon the
-# expiration of the current token to ensure continued access to
-# Swift. However, some Swift transactions (like uploading image
-# segments) may not recover well if the token expires on the fly.
-#
-# Hence, by fetching a new token before the current token expiration,
-# we make sure that the token does not expire or is close to expiry
-# before a transaction is attempted. By default, the Swift storage
-# driver requests for a new token 60 seconds or less before the
-# current token expiration.
-#
-# Possible values:
-#     * Zero
-#     * Positive integer value
-#
-# Related Options:
-#     * None
-#
-#  (integer value)
-# Minimum value: 0
-#swift_store_expire_soon_interval = 60
-
-#
-# Use trusts for multi-tenant Swift store.
-#
-# This option instructs the Swift store to create a trust for each
-# add/get request when the multi-tenant store is in use. Using trusts
-# allows the Swift store to avoid problems that can be caused by an
-# authentication token expiring during the upload or download of data.
-#
-# By default, ``swift_store_use_trusts`` is set to ``True``(use of
-# trusts is enabled). If set to ``False``, a user token is used for
-# the Swift connection instead, eliminating the overhead of trust
-# creation.
-#
-# NOTE: This option is considered only when
-# ``swift_store_multi_tenant`` is set to ``True``
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related options:
-#     * swift_store_multi_tenant
-#
-#  (boolean value)
-#swift_store_use_trusts = true
-
-#
-# Reference to default Swift account/backing store parameters.
-#
-# Provide a string value representing a reference to the default set
-# of parameters required for using swift account/backing store for
-# image storage. The default reference value for this configuration
-# option is 'ref1'. This configuration option dereferences the
-# parameters and facilitates image storage in Swift storage backend
-# every time a new image is added.
-#
-# Possible values:
-#     * A valid string value
-#
-# Related options:
-#     * None
-#
-#  (string value)
-#default_swift_reference = ref1
-
-# DEPRECATED: Version of the authentication service to use. Valid versions are 2
-# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# The option 'auth_version' in the Swift back-end configuration file is
-# used instead.
-#swift_store_auth_version = 2
-
-# DEPRECATED: The address where the Swift authentication service is listening.
-# (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# The option 'auth_address' in the Swift back-end configuration file is
-# used instead.
-#swift_store_auth_address = <None>
-
-# DEPRECATED: The user to authenticate against the Swift authentication service.
-# (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# The option 'user' in the Swift back-end configuration file is set instead.
-#swift_store_user = <None>
-
-# DEPRECATED: Auth key for the user authenticating against the Swift
-# authentication service. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# The option 'key' in the Swift back-end configuration file is used
-# to set the authentication key instead.
-#swift_store_key = <None>
-
-#
-# Absolute path to the file containing the swift account(s)
-# configurations.
-#
-# Include a string value representing the path to a configuration
-# file that has references for each of the configured Swift
-# account(s)/backing stores. By default, no file path is specified
-# and customized Swift referencing is disabled. Configuring this
-# option is highly recommended while using Swift storage backend for
-# image storage as it avoids storage of credentials in the database.
-#
-# NOTE: Please do not configure this option if you have set
-# ``swift_store_multi_tenant`` to ``True``.
-#
-# Possible values:
-#     * String value representing an absolute path on the glance-api
-#       node
-#
-# Related options:
-#     * swift_store_multi_tenant
-#
-#  (string value)
-#swift_store_config_file = <None>
 
 #
 # Address of the ESX/ESXi or vCenter Server target system.
@@ -3275,11 +2720,21 @@
 
 # Supported values for the 'disk_format' image attribute (list value)
 # Deprecated group/name - [DEFAULT]/disk_formats
-#disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,ploop
+#disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso
 
 
 [keystone_authtoken]
-
+revocation_cache_time = 10
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = glance
+password = opnfv_secret
+auth_uri=http://10.167.4.10:5000
+auth_url=http://10.167.4.10:35357
+token_cache_time = -1
+memcached_servers=10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211
 #
 # From keystonemiddleware.auth_token
 #
@@ -3330,12 +2785,7 @@
 # The region in which the identity server can be found. (string value)
 #region_name = <None>
 
-# DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P release.
-# (string value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# Directory used to cache files related to PKI tokens. (string value)
 #signing_dir = <None>
 
 # Optionally specify a list of memcached server(s) to use for caching. If left
@@ -3348,14 +2798,10 @@
 # -1 to disable caching completely. (integer value)
 #token_cache_time = 300
 
-# DEPRECATED: Determines the frequency at which the list of revoked tokens is
-# retrieved from the Identity service (in seconds). A high number of revocation
-# events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in the
-# Ocata release and will be removed in the P release. (integer value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# Determines the frequency at which the list of revoked tokens is retrieved from
+# the Identity service (in seconds). A high number of revocation events combined
+# with a low cache duration may significantly reduce performance. Only valid for
+# PKI tokens. (integer value)
 #revocation_cache_time = 10
 
 # (Optional) If defined, indicate whether token data should be authenticated or
@@ -3408,40 +2854,19 @@
 # value)
 #enforce_token_bind = permissive
 
-# DEPRECATED: If true, the revocation list will be checked for cached tokens.
-# This requires that PKI tokens are configured on the identity server. (boolean
-# value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# If true, the revocation list will be checked for cached tokens. This requires
+# that PKI tokens are configured on the identity server. (boolean value)
 #check_revocations_for_cached = false
 
-# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
-# single algorithm or multiple. The algorithms are those supported by Python
-# standard hashlib.new(). The hashes will be tried in the order given, so put
-# the preferred one first for performance. The result of the first hash will be
+# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
+# or multiple. The algorithms are those supported by Python standard
+# hashlib.new(). The hashes will be tried in the order given, so put the
+# preferred one first for performance. The result of the first hash will be
 # stored in the cache. This will typically be set to multiple values only while
 # migrating from a less secure algorithm to a more secure one. Once all the old
 # tokens are expired this option should be set to a single value for better
 # performance. (list value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
 #hash_algorithms = md5
-
-# A choice of roles that must be present in a service token. Service tokens are
-# allowed to request that an expired token can be used and so this check should
-# tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present. For
-# backwards compatibility reasons this currently only affects the allow_expired
-# check. (list value)
-#service_token_roles = service
-
-# For backwards compatibility reasons we must let valid service tokens pass that
-# don't pass the service_token_roles check as valid. Setting this true will
-# become the default in a future release and should be enabled if possible.
-# (boolean value)
-#service_token_roles_required = false
 
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
@@ -3477,7 +2902,7 @@
 # Reason: Replaced by [DEFAULT]/transport_url
 #password =
 
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
+# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
 # [host:port, host1:port ... ] (list value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
@@ -3493,7 +2918,7 @@
 # Time in ms to wait before the transaction is killed. (integer value)
 #check_timeout = 20000
 
-# Timeout in ms on blocking socket operations. (integer value)
+# Timeout in ms on blocking socket operations (integer value)
 #socket_timeout = 10000
 
 
@@ -3534,16 +2959,15 @@
 # Deprecated group/name - [amqp1]/trace
 #trace = false
 
-# CA certificate PEM file used to verify the server's certificate (string value)
+# CA certificate PEM file to verify server certificate (string value)
 # Deprecated group/name - [amqp1]/ssl_ca_file
 #ssl_ca_file =
 
-# Self-identifying certificate PEM file for client authentication (string value)
+# Identifying certificate PEM file to present to clients (string value)
 # Deprecated group/name - [amqp1]/ssl_cert_file
 #ssl_cert_file =
 
-# Private key PEM file used to sign ssl_cert_file certificate (optional) (string
-# value)
+# Private key PEM file used to sign cert_file certificate (string value)
 # Deprecated group/name - [amqp1]/ssl_key_file
 #ssl_key_file =
 
@@ -3551,11 +2975,8 @@
 # Deprecated group/name - [amqp1]/ssl_key_password
 #ssl_key_password = <None>
 
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
+# Accept clients using either SSL or plain TCP (boolean value)
 # Deprecated group/name - [amqp1]/allow_insecure_clients
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
 #allow_insecure_clients = false
 
 # Space separated list of acceptable SASL mechanisms (string value)
@@ -3597,12 +3018,8 @@
 # Minimum value: 1
 #link_retry_delay = 10
 
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
+# The deadline for an rpc reply message delivery. Only used when caller does not
+# provide a timeout expiry. (integer value)
 # Minimum value: 5
 #default_reply_timeout = 30
 
@@ -3615,11 +3032,6 @@
 # does not provide a timeout expiry. (integer value)
 # Minimum value: 5
 #default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
 
 # Indicates the addressing mode used by the driver.
 # Permitted values:
@@ -3668,6 +3080,7 @@
 # else control_exchange if set
 # else 'notify' (string value)
 #default_notification_exchange = <None>
+default_notification_exchange = glance
 
 # Exchange name used in RPC addresses.
 # Exchange name resolution precedence:
@@ -3689,66 +3102,6 @@
 # Minimum value: 1
 #notify_server_credit = 100
 
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (integer value)
-#kafka_consumer_timeout = 1.0
-
-# Pool Size for Kafka Consumers (integer value)
-#pool_size = 10
-
-# The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
-
-# The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating point
-# value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
 
 [oslo_messaging_notifications]
 
@@ -3760,6 +3113,7 @@
 # messagingv2, routing, log, test, noop (multi valued)
 # Deprecated group/name - [DEFAULT]/notification_driver
 #driver =
+driver = messagingv2
 
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
@@ -3868,7 +3222,6 @@
 #rabbit_password = guest
 
 # The RabbitMQ login method. (string value)
-# Allowed values: PLAIN, AMQPLAIN, RABBIT-CR-DEMO
 # Deprecated group/name - [DEFAULT]/rabbit_login_method
 #rabbit_login_method = AMQPLAIN
 
@@ -3901,7 +3254,7 @@
 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
 # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
 # is no longer controlled by the x-ha-policy argument when declaring a queue. If
-# you just want to make sure that all queues (except those with auto-generated
+# you just want to make sure that all queues (except  those with auto-generated
 # names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
 # '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
 # Deprecated group/name - [DEFAULT]/rabbit_ha_queues
@@ -3978,11 +3331,6 @@
 # (integer value)
 #pool_stale = 60
 
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Allowed values: json, msgpack
-#default_serializer_type = json
-
 # Persist notification messages. (boolean value)
 #notification_persistence = false
 
@@ -4028,7 +3376,7 @@
 
 # Reconnecting retry count in case of connectivity problem during sending RPC
 # message, -1 means infinite retry. If actual retry attempts in not 0 the rpc
-# request could be processed more than one time (integer value)
+# request could be processed more then one time (integer value)
 #default_rpc_retry_attempts = -1
 
 # Reconnecting retry delay in case of connectivity problem during sending RPC
@@ -4048,7 +3396,7 @@
 #rpc_zmq_bind_address = *
 
 # MatchMaker driver. (string value)
-# Allowed values: redis, sentinel, dummy
+# Allowed values: redis, dummy
 # Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
 #rpc_zmq_matchmaker = redis
 
@@ -4070,13 +3418,12 @@
 # Deprecated group/name - [DEFAULT]/rpc_zmq_host
 #rpc_zmq_host = localhost
 
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
+# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
+# an infinite linger period. The value of 0 specifies no linger period. Pending
+# messages shall be discarded immediately when the socket is closed. Only
+# supported by impl_zmq. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
+#rpc_cast_timeout = -1
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
@@ -4096,20 +3443,11 @@
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
 # Deprecated group/name - [DEFAULT]/use_pub_sub
-#use_pub_sub = false
+#use_pub_sub = true
 
 # Use ROUTER remote proxy. (boolean value)
 # Deprecated group/name - [DEFAULT]/use_router_proxy
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
+#use_router_proxy = true
 
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
@@ -4138,62 +3476,7 @@
 # a queue when server side disconnects. False means to keep queue and messages
 # even if server is disconnected, when the server appears we send all
 # accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
+#zmq_immediate = false
 
 
 [oslo_middleware]
@@ -4213,9 +3496,10 @@
 # From oslo.policy
 #
 
-# The file that defines policies. (string value)
+# The JSON file that defines policies. (string value)
 # Deprecated group/name - [DEFAULT]/policy_file
 #policy_file = policy.json
+policy_file = /etc/glance/policy.json
 
 # Default rule. Enforced when a requested rule is not found. (string value)
 # Deprecated group/name - [DEFAULT]/policy_default_rule
@@ -4256,6 +3540,9 @@
 #
 #  (string value)
 #flavor = keystone
+
+flavor=keystone
+
 
 #
 # Name of the paste configuration file.
@@ -4347,42 +3634,8 @@
 # Examples of possible values:
 #
 # * messaging://: use oslo_messaging driver for sending notifications.
-# * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications.
-# * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending
-# notifications.
 #  (string value)
 #connection_string = messaging://
-
-#
-# Document type for notification indexing in elasticsearch.
-#  (string value)
-#es_doc_type = notification
-
-#
-# This parameter is a time value parameter (for example: es_scroll_time=2m),
-# indicating for how long the nodes that participate in the search will maintain
-# relevant resources in order to continue and support it.
-#  (string value)
-#es_scroll_time = 2m
-
-#
-# Elasticsearch splits large requests in batches. This parameter defines
-# maximum size of each batch (for example: es_scroll_size=10000).
-#  (integer value)
-#es_scroll_size = 10000
-
-#
-# Redissentinel provides a timeout option on the connections.
-# This parameter defines that timeout (for example: socket_timeout=0.1).
-#  (floating point value)
-#socket_timeout = 0.1
-
-#
-# Redissentinel uses a service name to identify a master redis service.
-# This parameter defines the name (for example:
-# sentinal_service_name=mymaster).
-#  (string value)
-#sentinel_service_name = mymaster
 
 
 [store_type_location_strategy]
@@ -4423,6 +3676,7 @@
 #store_type_preference =
 
 
+
 [task]
 
 #

2017-09-27 10:01:17,797 [salt.state       ][INFO    ][4933] Completed state [/etc/glance/glance-api.conf] at time 10:01:17.797182 duration_in_ms=255.632
2017-09-27 10:01:17,798 [salt.state       ][INFO    ][4933] Running state [/etc/glance/glance-api-paste.ini] at time 10:01:17.798086
2017-09-27 10:01:17,798 [salt.state       ][INFO    ][4933] Executing state file.managed for /etc/glance/glance-api-paste.ini
2017-09-27 10:01:17,822 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/files/ocata/glance-api-paste.ini'
2017-09-27 10:01:17,853 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/map.jinja'
2017-09-27 10:01:17,869 [salt.state       ][INFO    ][4933] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 # Use this pipeline for no auth or image caching - DEFAULT
 [pipeline:glance-api]
 pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context rootapp
@@ -12,7 +13,7 @@
 
 # Use this pipeline for keystone auth
 [pipeline:glance-api-keystone]
-pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context  rootapp
+pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context rootapp
 
 # Use this pipeline for keystone auth with image caching
 [pipeline:glance-api-keystone+caching]

2017-09-27 10:01:17,869 [salt.state       ][INFO    ][4933] Completed state [/etc/glance/glance-api-paste.ini] at time 10:01:17.868820 duration_in_ms=70.735
2017-09-27 10:01:17,869 [salt.state       ][INFO    ][4933] Running state [glance-glare] at time 10:01:17.869077
2017-09-27 10:01:17,869 [salt.state       ][INFO    ][4933] Executing state pkg.installed for glance-glare
2017-09-27 10:01:17,882 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'glance-glare'] in directory '/root'
2017-09-27 10:01:20,892 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100120882698
2017-09-27 10:01:20,912 [salt.minion      ][INFO    ][6277] Starting a new job with PID 6277
2017-09-27 10:01:20,932 [salt.minion      ][INFO    ][6277] Returning information for job: 20170927100120882698
2017-09-27 10:01:21,979 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:01:22,087 [salt.state       ][INFO    ][4933] Made the following changes:
'glance-glare' changed from 'absent' to '2:14.0.0-1~u16.04+mcp7'

2017-09-27 10:01:22,097 [salt.state       ][INFO    ][4933] Loading fresh modules for state activity
2017-09-27 10:01:22,114 [salt.state       ][INFO    ][4933] Completed state [glance-glare] at time 10:01:22.114021 duration_in_ms=4244.943
2017-09-27 10:01:22,118 [salt.state       ][INFO    ][4933] Running state [/etc/glance/glance-glare-paste.ini] at time 10:01:22.117990
2017-09-27 10:01:22,118 [salt.state       ][INFO    ][4933] Executing state file.managed for /etc/glance/glance-glare-paste.ini
2017-09-27 10:01:22,152 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/files/ocata/glance-glare-paste.ini'
2017-09-27 10:01:22,157 [salt.state       ][INFO    ][4933] File /etc/glance/glance-glare-paste.ini is in the correct state
2017-09-27 10:01:22,157 [salt.state       ][INFO    ][4933] Completed state [/etc/glance/glance-glare-paste.ini] at time 10:01:22.157012 duration_in_ms=39.021
2017-09-27 10:01:22,158 [salt.state       ][INFO    ][4933] Running state [/etc/glance/glance-glare.conf] at time 10:01:22.158390
2017-09-27 10:01:22,159 [salt.state       ][INFO    ][4933] Executing state file.managed for /etc/glance/glance-glare.conf
2017-09-27 10:01:22,182 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/files/ocata/glance-glare.conf.Debian'
2017-09-27 10:01:22,276 [salt.fileclient  ][INFO    ][4933] Fetching file from saltenv 'base', ** done ** 'glance/map.jinja'
2017-09-27 10:01:22,297 [salt.state       ][INFO    ][4933] File changed:
--- 
+++ 
@@ -1,3 +1,5 @@
+
+
 [DEFAULT]
 
 #
@@ -118,6 +120,7 @@
 #
 #  (string value)
 #bind_host = 0.0.0.0
+bind_host = 10.167.4.13
 
 #
 # Port number on which the server will listen.
@@ -137,6 +140,7 @@
 # Minimum value: 0
 # Maximum value: 65535
 #bind_port = <None>
+bind_port = 9494
 
 #
 # Number of Glance worker processes to start.
@@ -161,6 +165,7 @@
 #  (integer value)
 # Minimum value: 0
 #workers = <None>
+workers = 8
 
 #
 # Maximum line size of message headers.
@@ -412,11 +417,13 @@
 # log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
+log_file = /var/log/glance/glare.log
 
 # (Optional) The base directory used for relative log_file  paths. This option
 # is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
+log_dir = /var/log/glance
 
 # Uses logging handler designed to watch file system. When log file is moved or
 # removed this handler will open a new log file with specified path
@@ -436,7 +443,7 @@
 
 # Log output to standard error. This option is ignored if log_config_append is
 # set. (boolean value)
-#use_stderr = false
+#use_stderr = true
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
@@ -469,18 +476,6 @@
 # The format for an instance UUID that is passed with the log message. (string
 # value)
 #instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
-# empty string. Logs with level greater or equal to rate_limit_except_level are
-# not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
@@ -572,6 +567,7 @@
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
 #connection = <None>
+connection = mysql+pymysql://glance:opnfv_secret@10.167.4.50/glance?charset=utf8&read_timeout=60
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
@@ -588,6 +584,7 @@
 # Deprecated group/name - [DATABASE]/sql_idle_timeout
 # Deprecated group/name - [sql]/idle_timeout
 #idle_timeout = 3600
+idle_timeout = 3600
 
 # Minimum number of SQL connections to keep open in a pool. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
@@ -599,12 +596,14 @@
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
 #max_pool_size = 5
+max_pool_size = 30
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
 #max_retries = 10
+max_retries = -1
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
@@ -615,6 +614,7 @@
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
 #max_overflow = 50
+max_overflow = 60
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
@@ -688,13 +688,18 @@
 #
 #  (list value)
 #stores = file,http
-
+default_store = file
+stores = file,http
 #
 # The default scheme to use for storing images.
 #
 # Provide a string value representing the default scheme to use for
 # storing images. If not set, Glance uses ``file`` as the default
 # scheme to store images with the ``file`` store.
+
+os_region_name=RegionOne
+
+
 #
 # NOTE: The value given for this configuration option must be a valid
 # scheme for a store registered with the ``stores`` configuration
@@ -741,7 +746,7 @@
 # /store-capabilities.html
 #
 # For more information on setting up a particular store in your
-# deployment and help with the usage of this feature, please contact
+# deplyment and help with the usage of this feature, please contact
 # the storage driver maintainers listed here:
 # http://docs.openstack.org/developer/glance_store/drivers/index.html
 #
@@ -770,8 +775,8 @@
 #
 # Possible values:
 #     * A string of of the following form:
-#       ``<service_type>:<service_name>:<interface>``
-#       At least ``service_type`` and ``interface`` should be specified.
+#       ``<service_type>:<service_name>:<endpoint_type>``
+#       At least ``service_type`` and ``endpoint_type`` should be specified.
 #       ``service_name`` can be omitted.
 #
 # Related options:
@@ -784,6 +789,7 @@
 #
 #  (string value)
 #cinder_catalog_info = volumev2::publicURL
+cinder_catalog_info = volumev2::internalURL
 
 #
 # Override service catalog lookup with template for cinder endpoint.
@@ -828,6 +834,9 @@
 #  (string value)
 # Deprecated group/name - [glance_store]/os_region_name
 #cinder_os_region_name = <None>
+
+cinder_os_region_name = RegionOne
+
 
 #
 # Location of a CA certificates file used for cinder client requests.
@@ -996,25 +1005,6 @@
 #rootwrap_config = /etc/glance/rootwrap.conf
 
 #
-# Volume type that will be used for volume creation in cinder.
-#
-# Some cinder backends can have several volume types to optimize storage usage.
-# Adding this option allows an operator to choose a specific volume type
-# in cinder that can be optimized for images.
-#
-# If this is not set, then the default volume type specified in the cinder
-# configuration will be used for volume creation.
-#
-# Possible values:
-#     * A valid volume type from cinder
-#
-# Related options:
-#     * None
-#
-#  (string value)
-#cinder_volume_type = <None>
-
-#
 # Directory to which the filesystem backend store writes images.
 #
 # Upon start up, Glance creates the directory if it doesn't already
@@ -1039,6 +1029,8 @@
 #
 #  (string value)
 #filesystem_store_datadir = /var/lib/glance/images
+filesystem_store_datadir = /var/lib/glance/images/
+
 
 #
 # List of directories and their priorities to which the filesystem
@@ -1093,7 +1085,6 @@
 #     * None
 #
 #  (string value)
-#filesystem_store_metadata_file = <None>
 
 #
 # File access permissions for the image files.
@@ -1182,106 +1173,6 @@
 #http_proxy_information =
 
 #
-# Size, in megabytes, to chunk RADOS images into.
-#
-# Provide an integer value representing the size in megabytes to chunk
-# Glance images into. The default chunk size is 8 megabytes. For optimal
-# performance, the value should be a power of two.
-#
-# When Ceph's RBD object storage system is used as the storage backend
-# for storing Glance images, the images are chunked into objects of the
-# size set using this option. These chunked objects are then stored
-# across the distributed block data store to use for Glance.
-#
-# Possible Values:
-#     * Any positive integer value
-#
-# Related options:
-#     * None
-#
-#  (integer value)
-# Minimum value: 1
-#rbd_store_chunk_size = 8
-
-#
-# RADOS pool in which images are stored.
-#
-# When RBD is used as the storage backend for storing Glance images, the
-# images are stored by means of logical grouping of the objects (chunks
-# of images) into a ``pool``. Each pool is defined with the number of
-# placement groups it can contain. The default pool that is used is
-# 'images'.
-#
-# More information on the RBD storage backend can be found here:
-# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
-#
-# Possible Values:
-#     * A valid pool name
-#
-# Related options:
-#     * None
-#
-#  (string value)
-#rbd_store_pool = images
-
-#
-# RADOS user to authenticate as.
-#
-# This configuration option takes in the RADOS user to authenticate as.
-# This is only needed when RADOS authentication is enabled and is
-# applicable only if the user is using Cephx authentication. If the
-# value for this option is not set by the user or is set to None, a
-# default value will be chosen, which will be based on the client.
-# section in rbd_store_ceph_conf.
-#
-# Possible Values:
-#     * A valid RADOS user
-#
-# Related options:
-#     * rbd_store_ceph_conf
-#
-#  (string value)
-#rbd_store_user = <None>
-
-#
-# Ceph configuration file path.
-#
-# This configuration option takes in the path to the Ceph configuration
-# file to be used. If the value for this option is not set by the user
-# or is set to None, librados will locate the default configuration file
-# which is located at /etc/ceph/ceph.conf. If using Cephx
-# authentication, this file should include a reference to the right
-# keyring in a client.<USER> section
-#
-# Possible Values:
-#     * A valid path to a configuration file
-#
-# Related options:
-#     * rbd_store_user
-#
-#  (string value)
-#rbd_store_ceph_conf = /etc/ceph/ceph.conf
-
-#
-# Timeout value for connecting to Ceph cluster.
-#
-# This configuration option takes in the timeout value in seconds used
-# when connecting to the Ceph cluster i.e. it sets the time to wait for
-# glance-api before closing the connection. This prevents glance-api
-# hangups during the connection to RBD. If the value for this option
-# is set to less than or equal to 0, no timeout is set and the default
-# librados value is used.
-#
-# Possible Values:
-#     * Any integer value
-#
-# Related options:
-#     * None
-#
-#  (integer value)
-#rados_connect_timeout = 0
-
-#
 # Chunk size for images to be stored in Sheepdog data store.
 #
 # Provide an integer value representing the size in mebibyte
@@ -1352,486 +1243,6 @@
 #
 #  (string value)
 #sheepdog_store_address = 127.0.0.1
-
-#
-# Set verification of the server certificate.
-#
-# This boolean determines whether or not to verify the server
-# certificate. If this option is set to True, swiftclient won't check
-# for a valid SSL certificate when authenticating. If the option is set
-# to False, then the default CA truststore is used for verification.
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related options:
-#     * swift_store_cacert
-#
-#  (boolean value)
-#swift_store_auth_insecure = false
-
-#
-# Path to the CA bundle file.
-#
-# This configuration option enables the operator to specify the path to
-# a custom Certificate Authority file for SSL verification when
-# connecting to Swift.
-#
-# Possible values:
-#     * A valid path to a CA file
-#
-# Related options:
-#     * swift_store_auth_insecure
-#
-#  (string value)
-#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt
-
-#
-# The region of Swift endpoint to use by Glance.
-#
-# Provide a string value representing a Swift region where Glance
-# can connect to for image storage. By default, there is no region
-# set.
-#
-# When Glance uses Swift as the storage backend to store images
-# for a specific tenant that has multiple endpoints, setting of a
-# Swift region with ``swift_store_region`` allows Glance to connect
-# to Swift in the specified region as opposed to a single region
-# connectivity.
-#
-# This option can be configured for both single-tenant and
-# multi-tenant storage.
-#
-# NOTE: Setting the region with ``swift_store_region`` is
-# tenant-specific and is necessary ``only if`` the tenant has
-# multiple endpoints across different regions.
-#
-# Possible values:
-#     * A string value representing a valid Swift region.
-#
-# Related Options:
-#     * None
-#
-#  (string value)
-#swift_store_region = RegionTwo
-
-#
-# The URL endpoint to use for Swift backend storage.
-#
-# Provide a string value representing the URL endpoint to use for
-# storing Glance images in Swift store. By default, an endpoint
-# is not set and the storage URL returned by ``auth`` is used.
-# Setting an endpoint with ``swift_store_endpoint`` overrides the
-# storage URL and is used for Glance image storage.
-#
-# NOTE: The URL should include the path up to, but excluding the
-# container. The location of an object is obtained by appending
-# the container and object to the configured URL.
-#
-# Possible values:
-#     * String value representing a valid URL path up to a Swift container
-#
-# Related Options:
-#     * None
-#
-#  (string value)
-#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name
-
-#
-# Endpoint Type of Swift service.
-#
-# This string value indicates the endpoint type to use to fetch the
-# Swift endpoint. The endpoint type determines the actions the user will
-# be allowed to perform, for instance, reading and writing to the Store.
-# This setting is only used if swift_store_auth_version is greater than
-# 1.
-#
-# Possible values:
-#     * publicURL
-#     * adminURL
-#     * internalURL
-#
-# Related options:
-#     * swift_store_endpoint
-#
-#  (string value)
-# Allowed values: publicURL, adminURL, internalURL
-#swift_store_endpoint_type = publicURL
-
-#
-# Type of Swift service to use.
-#
-# Provide a string value representing the service type to use for
-# storing images while using Swift backend storage. The default
-# service type is set to ``object-store``.
-#
-# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
-# this configuration option needs to be ``object-store``. If using
-# a higher version of Keystone or a different auth scheme, this
-# option may be modified.
-#
-# Possible values:
-#     * A string representing a valid service type for Swift storage.
-#
-# Related Options:
-#     * None
-#
-#  (string value)
-#swift_store_service_type = object-store
-
-#
-# Name of single container to store images/name prefix for multiple containers
-#
-# When a single container is being used to store images, this configuration
-# option indicates the container within the Glance account to be used for
-# storing all images. When multiple containers are used to store images, this
-# will be the name prefix for all containers. Usage of single/multiple
-# containers can be controlled using the configuration option
-# ``swift_store_multiple_containers_seed``.
-#
-# When using multiple containers, the containers will be named after the value
-# set for this configuration option with the first N chars of the image UUID
-# as the suffix delimited by an underscore (where N is specified by
-# ``swift_store_multiple_containers_seed``).
-#
-# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
-# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
-# the container ``glance_fda``. All dashes in the UUID are included when
-# creating the container name but do not count toward the character limit, so
-# when N=10 the container name would be ``glance_fdae39a1-ba.``
-#
-# Possible values:
-#     * If using single container, this configuration option can be any string
-#       that is a valid swift container name in Glance's Swift account
-#     * If using multiple containers, this configuration option can be any
-#       string as long as it satisfies the container naming rules enforced by
-#       Swift. The value of ``swift_store_multiple_containers_seed`` should be
-#       taken into account as well.
-#
-# Related options:
-#     * ``swift_store_multiple_containers_seed``
-#     * ``swift_store_multi_tenant``
-#     * ``swift_store_create_container_on_put``
-#
-#  (string value)
-#swift_store_container = glance
-
-#
-# The size threshold, in MB, after which Glance will start segmenting image
-# data.
-#
-# Swift has an upper limit on the size of a single uploaded object. By default,
-# this is 5GB. To upload objects bigger than this limit, objects are segmented
-# into multiple smaller objects that are tied together with a manifest file.
-# For more detail, refer to
-# http://docs.openstack.org/developer/swift/overview_large_objects.html
-#
-# This configuration option specifies the size threshold over which the Swift
-# driver will start segmenting image data into multiple smaller files.
-# Currently, the Swift driver only supports creating Dynamic Large Objects.
-#
-# NOTE: This should be set by taking into account the large object limit
-# enforced by the Swift cluster in consideration.
-#
-# Possible values:
-#     * A positive integer that is less than or equal to the large object limit
-#       enforced by the Swift cluster in consideration.
-#
-# Related options:
-#     * ``swift_store_large_object_chunk_size``
-#
-#  (integer value)
-# Minimum value: 1
-#swift_store_large_object_size = 5120
-
-#
-# The maximum size, in MB, of the segments when image data is segmented.
-#
-# When image data is segmented to upload images that are larger than the limit
-# enforced by the Swift cluster, image data is broken into segments that are no
-# bigger than the size specified by this configuration option.
-# Refer to ``swift_store_large_object_size`` for more detail.
-#
-# For example: if ``swift_store_large_object_size`` is 5GB and
-# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
-# segmented into 7 segments where the first six segments will be 1GB in size and
-# the seventh segment will be 0.2GB.
-#
-# Possible values:
-#     * A positive integer that is less than or equal to the large object limit
-#       enforced by Swift cluster in consideration.
-#
-# Related options:
-#     * ``swift_store_large_object_size``
-#
-#  (integer value)
-# Minimum value: 1
-#swift_store_large_object_chunk_size = 200
-
-#
-# Create container, if it doesn't already exist, when uploading image.
-#
-# At the time of uploading an image, if the corresponding container doesn't
-# exist, it will be created provided this configuration option is set to True.
-# By default, it won't be created. This behavior is applicable for both single
-# and multiple containers mode.
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related options:
-#     * None
-#
-#  (boolean value)
-#swift_store_create_container_on_put = false
-
-#
-# Store images in tenant's Swift account.
-#
-# This enables multi-tenant storage mode which causes Glance images to be stored
-# in tenant specific Swift accounts. If this is disabled, Glance stores all
-# images in its own account. More details multi-tenant store can be found at
-# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
-#
-# NOTE: If using multi-tenant swift store, please make sure
-# that you do not set a swift configuration file with the
-# 'swift_store_config_file' option.
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related options:
-#     * swift_store_config_file
-#
-#  (boolean value)
-#swift_store_multi_tenant = false
-
-#
-# Seed indicating the number of containers to use for storing images.
-#
-# When using a single-tenant store, images can be stored in one or more than one
-# containers. When set to 0, all images will be stored in one single container.
-# When set to an integer value between 1 and 32, multiple containers will be
-# used to store images. This configuration option will determine how many
-# containers are created. The total number of containers that will be used is
-# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
-# will be used to store images.
-#
-# Please refer to ``swift_store_container`` for more detail on the naming
-# convention. More detail about using multiple containers can be found at
-# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-
-# multiple-containers.html
-#
-# NOTE: This is used only when swift_store_multi_tenant is disabled.
-#
-# Possible values:
-#     * A non-negative integer less than or equal to 32
-#
-# Related options:
-#     * ``swift_store_container``
-#     * ``swift_store_multi_tenant``
-#     * ``swift_store_create_container_on_put``
-#
-#  (integer value)
-# Minimum value: 0
-# Maximum value: 32
-#swift_store_multiple_containers_seed = 0
-
-#
-# List of tenants that will be granted admin access.
-#
-# This is a list of tenants that will be granted read/write access on
-# all Swift containers created by Glance in multi-tenant mode. The
-# default value is an empty list.
-#
-# Possible values:
-#     * A comma separated list of strings representing UUIDs of Keystone
-#       projects/tenants
-#
-# Related options:
-#     * None
-#
-#  (list value)
-#swift_store_admin_tenants =
-
-#
-# SSL layer compression for HTTPS Swift requests.
-#
-# Provide a boolean value to determine whether or not to compress
-# HTTPS Swift requests for images at the SSL layer. By default,
-# compression is enabled.
-#
-# When using Swift as the backend store for Glance image storage,
-# SSL layer compression of HTTPS Swift requests can be set using
-# this option. If set to False, SSL layer compression of HTTPS
-# Swift requests is disabled. Disabling this option may improve
-# performance for images which are already in a compressed format,
-# for example, qcow2.
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related Options:
-#     * None
-#
-#  (boolean value)
-#swift_store_ssl_compression = true
-
-#
-# The number of times a Swift download will be retried before the
-# request fails.
-#
-# Provide an integer value representing the number of times an image
-# download must be retried before erroring out. The default value is
-# zero (no retry on a failed image download). When set to a positive
-# integer value, ``swift_store_retry_get_count`` ensures that the
-# download is attempted this many more times upon a download failure
-# before sending an error message.
-#
-# Possible values:
-#     * Zero
-#     * Positive integer value
-#
-# Related Options:
-#     * None
-#
-#  (integer value)
-# Minimum value: 0
-#swift_store_retry_get_count = 0
-
-#
-# Time in seconds defining the size of the window in which a new
-# token may be requested before the current token is due to expire.
-#
-# Typically, the Swift storage driver fetches a new token upon the
-# expiration of the current token to ensure continued access to
-# Swift. However, some Swift transactions (like uploading image
-# segments) may not recover well if the token expires on the fly.
-#
-# Hence, by fetching a new token before the current token expiration,
-# we make sure that the token does not expire or is close to expiry
-# before a transaction is attempted. By default, the Swift storage
-# driver requests for a new token 60 seconds or less before the
-# current token expiration.
-#
-# Possible values:
-#     * Zero
-#     * Positive integer value
-#
-# Related Options:
-#     * None
-#
-#  (integer value)
-# Minimum value: 0
-#swift_store_expire_soon_interval = 60
-
-#
-# Use trusts for multi-tenant Swift store.
-#
-# This option instructs the Swift store to create a trust for each
-# add/get request when the multi-tenant store is in use. Using trusts
-# allows the Swift store to avoid problems that can be caused by an
-# authentication token expiring during the upload or download of data.
-#
-# By default, ``swift_store_use_trusts`` is set to ``True``(use of
-# trusts is enabled). If set to ``False``, a user token is used for
-# the Swift connection instead, eliminating the overhead of trust
-# creation.
-#
-# NOTE: This option is considered only when
-# ``swift_store_multi_tenant`` is set to ``True``
-#
-# Possible values:
-#     * True
-#     * False
-#
-# Related options:
-#     * swift_store_multi_tenant
-#
-#  (boolean value)
-#swift_store_use_trusts = true
-
-#
-# Reference to default Swift account/backing store parameters.
-#
-# Provide a string value representing a reference to the default set
-# of parameters required for using swift account/backing store for
-# image storage. The default reference value for this configuration
-# option is 'ref1'. This configuration option dereferences the
-# parameters and facilitates image storage in Swift storage backend
-# every time a new image is added.
-#
-# Possible values:
-#     * A valid string value
-#
-# Related options:
-#     * None
-#
-#  (string value)
-#default_swift_reference = ref1
-
-# DEPRECATED: Version of the authentication service to use. Valid versions are 2
-# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# The option 'auth_version' in the Swift back-end configuration file is
-# used instead.
-#swift_store_auth_version = 2
-
-# DEPRECATED: The address where the Swift authentication service is listening.
-# (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# The option 'auth_address' in the Swift back-end configuration file is
-# used instead.
-#swift_store_auth_address = <None>
-
-# DEPRECATED: The user to authenticate against the Swift authentication service.
-# (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# The option 'user' in the Swift back-end configuration file is set instead.
-#swift_store_user = <None>
-
-# DEPRECATED: Auth key for the user authenticating against the Swift
-# authentication service. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# The option 'key' in the Swift back-end configuration file is used
-# to set the authentication key instead.
-#swift_store_key = <None>
-
-#
-# Absolute path to the file containing the swift account(s)
-# configurations.
-#
-# Include a string value representing the path to a configuration
-# file that has references for each of the configured Swift
-# account(s)/backing stores. By default, no file path is specified
-# and customized Swift referencing is disabled. Configuring this
-# option is highly recommended while using Swift storage backend for
-# image storage as it avoids storage of credentials in the database.
-#
-# NOTE: Please do not configure this option if you have set
-# ``swift_store_multi_tenant`` to ``True``.
-#
-# Possible values:
-#     * String value representing an absolute path on the glance-api
-#       node
-#
-# Related options:
-#     * swift_store_multi_tenant
-#
-#  (string value)
-#swift_store_config_file = <None>
 
 #
 # Address of the ESX/ESXi or vCenter Server target system.
@@ -2013,7 +1424,17 @@
 
 
 [keystone_authtoken]
-
+revocation_cache_time = 10
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = glance
+password = opnfv_secret
+auth_uri=http://10.167.4.10:5000
+auth_url=http://10.167.4.10:35357
+token_cache_time = -1
+memcached_servers=10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211
 #
 # From keystonemiddleware.auth_token
 #
@@ -2064,12 +1485,7 @@
 # The region in which the identity server can be found. (string value)
 #region_name = <None>
 
-# DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P release.
-# (string value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# Directory used to cache files related to PKI tokens. (string value)
 #signing_dir = <None>
 
 # Optionally specify a list of memcached server(s) to use for caching. If left
@@ -2082,14 +1498,10 @@
 # -1 to disable caching completely. (integer value)
 #token_cache_time = 300
 
-# DEPRECATED: Determines the frequency at which the list of revoked tokens is
-# retrieved from the Identity service (in seconds). A high number of revocation
-# events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in the
-# Ocata release and will be removed in the P release. (integer value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# Determines the frequency at which the list of revoked tokens is retrieved from
+# the Identity service (in seconds). A high number of revocation events combined
+# with a low cache duration may significantly reduce performance. Only valid for
+# PKI tokens. (integer value)
 #revocation_cache_time = 10
 
 # (Optional) If defined, indicate whether token data should be authenticated or
@@ -2142,40 +1554,19 @@
 # value)
 #enforce_token_bind = permissive
 
-# DEPRECATED: If true, the revocation list will be checked for cached tokens.
-# This requires that PKI tokens are configured on the identity server. (boolean
-# value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
+# If true, the revocation list will be checked for cached tokens. This requires
+# that PKI tokens are configured on the identity server. (boolean value)
 #check_revocations_for_cached = false
 
-# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
-# single algorithm or multiple. The algorithms are those supported by Python
-# standard hashlib.new(). The hashes will be tried in the order given, so put
-# the preferred one first for performance. The result of the first hash will be
+# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
+# or multiple. The algorithms are those supported by Python standard
+# hashlib.new(). The hashes will be tried in the order given, so put the
+# preferred one first for performance. The result of the first hash will be
 # stored in the cache. This will typically be set to multiple values only while
 # migrating from a less secure algorithm to a more secure one. Once all the old
 # tokens are expired this option should be set to a single value for better
 # performance. (list value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
 #hash_algorithms = md5
-
-# A choice of roles that must be present in a service token. Service tokens are
-# allowed to request that an expired token can be used and so this check should
-# tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present. For
-# backwards compatibility reasons this currently only affects the allow_expired
-# check. (list value)
-#service_token_roles = service
-
-# For backwards compatibility reasons we must let valid service tokens pass that
-# don't pass the service_token_roles check as valid. Setting this true will
-# become the default in a future release and should be enabled if possible.
-# (boolean value)
-#service_token_roles_required = false
 
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
@@ -2211,6 +1602,7 @@
 #
 #  (string value)
 #flavor = keystone
+flavor = keystone
 
 #
 # Name of the paste configuration file.
@@ -2302,39 +1694,5 @@
 # Examples of possible values:
 #
 # * messaging://: use oslo_messaging driver for sending notifications.
-# * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications.
-# * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending
-# notifications.
 #  (string value)
 #connection_string = messaging://
-
-#
-# Document type for notification indexing in elasticsearch.
-#  (string value)
-#es_doc_type = notification
-
-#
-# This parameter is a time value parameter (for example: es_scroll_time=2m),
-# indicating for how long the nodes that participate in the search will maintain
-# relevant resources in order to continue and support it.
-#  (string value)
-#es_scroll_time = 2m
-
-#
-# Elasticsearch splits large requests in batches. This parameter defines
-# maximum size of each batch (for example: es_scroll_size=10000).
-#  (integer value)
-#es_scroll_size = 10000
-
-#
-# Redissentinel provides a timeout option on the connections.
-# This parameter defines that timeout (for example: socket_timeout=0.1).
-#  (floating point value)
-#socket_timeout = 0.1
-
-#
-# Redissentinel uses a service name to identify a master redis service.
-# This parameter defines the name (for example:
-# sentinal_service_name=mymaster).
-#  (string value)
-#sentinel_service_name = mymaster

2017-09-27 10:01:22,302 [salt.state       ][INFO    ][4933] Completed state [/etc/glance/glance-glare.conf] at time 10:01:22.301969 duration_in_ms=143.578
2017-09-27 10:01:22,367 [salt.state       ][INFO    ][4933] Running state [glance-glare] at time 10:01:22.366967
2017-09-27 10:01:22,367 [salt.state       ][INFO    ][4933] Executing state service.running for glance-glare
2017-09-27 10:01:22,369 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'status', 'glance-glare.service', '-n', '0'] in directory '/root'
2017-09-27 10:01:22,382 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-active', 'glance-glare.service'] in directory '/root'
2017-09-27 10:01:22,395 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-enabled', 'glance-glare.service'] in directory '/root'
2017-09-27 10:01:22,409 [salt.state       ][INFO    ][4933] The service glance-glare is already running
2017-09-27 10:01:22,409 [salt.state       ][INFO    ][4933] Completed state [glance-glare] at time 10:01:22.408999 duration_in_ms=42.031
2017-09-27 10:01:22,409 [salt.state       ][INFO    ][4933] Running state [glance-glare] at time 10:01:22.409188
2017-09-27 10:01:22,409 [salt.state       ][INFO    ][4933] Executing state service.mod_watch for glance-glare
2017-09-27 10:01:22,410 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-active', 'glance-glare.service'] in directory '/root'
2017-09-27 10:01:22,423 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-enabled', 'glance-glare.service'] in directory '/root'
2017-09-27 10:01:22,439 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'glance-glare.service'] in directory '/root'
2017-09-27 10:01:22,513 [salt.state       ][INFO    ][4933] {'glance-glare': True}
2017-09-27 10:01:22,514 [salt.state       ][INFO    ][4933] Completed state [glance-glare] at time 10:01:22.513530 duration_in_ms=104.313
2017-09-27 10:01:22,515 [salt.state       ][INFO    ][4933] Running state [glance-api] at time 10:01:22.515221
2017-09-27 10:01:22,516 [salt.state       ][INFO    ][4933] Executing state service.running for glance-api
2017-09-27 10:01:22,516 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'status', 'glance-api.service', '-n', '0'] in directory '/root'
2017-09-27 10:01:22,535 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-active', 'glance-api.service'] in directory '/root'
2017-09-27 10:01:22,551 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-enabled', 'glance-api.service'] in directory '/root'
2017-09-27 10:01:22,565 [salt.state       ][INFO    ][4933] The service glance-api is already running
2017-09-27 10:01:22,566 [salt.state       ][INFO    ][4933] Completed state [glance-api] at time 10:01:22.565455 duration_in_ms=50.231
2017-09-27 10:01:22,566 [salt.state       ][INFO    ][4933] Running state [glance-api] at time 10:01:22.565738
2017-09-27 10:01:22,566 [salt.state       ][INFO    ][4933] Executing state service.mod_watch for glance-api
2017-09-27 10:01:22,567 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-active', 'glance-api.service'] in directory '/root'
2017-09-27 10:01:22,582 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-enabled', 'glance-api.service'] in directory '/root'
2017-09-27 10:01:22,595 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'glance-api.service'] in directory '/root'
2017-09-27 10:01:22,642 [salt.state       ][INFO    ][4933] {'glance-api': True}
2017-09-27 10:01:22,643 [salt.state       ][INFO    ][4933] Completed state [glance-api] at time 10:01:22.642747 duration_in_ms=77.008
2017-09-27 10:01:22,643 [salt.state       ][INFO    ][4933] Running state [glance-registry] at time 10:01:22.643368
2017-09-27 10:01:22,644 [salt.state       ][INFO    ][4933] Executing state service.running for glance-registry
2017-09-27 10:01:22,644 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'status', 'glance-registry.service', '-n', '0'] in directory '/root'
2017-09-27 10:01:22,661 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-active', 'glance-registry.service'] in directory '/root'
2017-09-27 10:01:22,675 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-enabled', 'glance-registry.service'] in directory '/root'
2017-09-27 10:01:22,688 [salt.state       ][INFO    ][4933] The service glance-registry is already running
2017-09-27 10:01:22,688 [salt.state       ][INFO    ][4933] Completed state [glance-registry] at time 10:01:22.688105 duration_in_ms=44.736
2017-09-27 10:01:22,688 [salt.state       ][INFO    ][4933] Running state [glance-registry] at time 10:01:22.688323
2017-09-27 10:01:22,689 [salt.state       ][INFO    ][4933] Executing state service.mod_watch for glance-registry
2017-09-27 10:01:22,689 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-active', 'glance-registry.service'] in directory '/root'
2017-09-27 10:01:22,703 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemctl', 'is-enabled', 'glance-registry.service'] in directory '/root'
2017-09-27 10:01:22,716 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'glance-registry.service'] in directory '/root'
2017-09-27 10:01:22,768 [salt.state       ][INFO    ][4933] {'glance-registry': True}
2017-09-27 10:01:22,768 [salt.state       ][INFO    ][4933] Completed state [glance-registry] at time 10:01:22.768423 duration_in_ms=80.098
2017-09-27 10:01:22,771 [salt.state       ][INFO    ][4933] Running state [glance-manage db_sync] at time 10:01:22.771143
2017-09-27 10:01:22,772 [salt.state       ][INFO    ][4933] Executing state cmd.run for glance-manage db_sync
2017-09-27 10:01:22,772 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command 'glance-manage db_sync' in directory '/root'
2017-09-27 10:01:24,439 [salt.state       ][INFO    ][4933] {'pid': 6462, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.\n/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:1241: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade\n  expire_on_commit=expire_on_commit, _conf=conf)\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.', 'stdout': 'Upgraded database to: ocata01, current revision(s): ocata01'}
2017-09-27 10:01:24,440 [salt.state       ][INFO    ][4933] Completed state [glance-manage db_sync] at time 10:01:24.439516 duration_in_ms=1668.369
2017-09-27 10:01:24,441 [salt.state       ][INFO    ][4933] Running state [glance-manage db_load_metadefs] at time 10:01:24.441190
2017-09-27 10:01:24,442 [salt.state       ][INFO    ][4933] Executing state cmd.run for glance-manage db_load_metadefs
2017-09-27 10:01:24,443 [salt.loaded.int.module.cmdmod][INFO    ][4933] Executing command 'glance-manage db_load_metadefs' in directory '/root'
2017-09-27 10:01:25,991 [salt.state       ][INFO    ][4933] {'pid': 6515, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.\n/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:1241: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade\n  expire_on_commit=expire_on_commit, _conf=conf)\n2017-09-27 10:01:25.755 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::AggregateDiskFilter. It already exists in the database.\n2017-09-27 10:01:25.759 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Software::Runtimes. It already exists in the database.\n2017-09-27 10:01:25.763 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::Libvirt. It already exists in the database.\n2017-09-27 10:01:25.768 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Glance::Signatures. It already exists in the database.\n2017-09-27 10:01:25.772 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::AggregateIoOpsFilter. It already exists in the database.\n2017-09-27 10:01:25.776 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::GuestShutdownBehavior. It already exists in the database.\n2017-09-27 10:01:25.780 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::GuestMemoryBacking. It already exists in the database.\n2017-09-27 10:01:25.784 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::LibvirtImage. It already exists in the database.\n2017-09-27 10:01:25.789 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::CPUPinning. It already exists in the database.\n2017-09-27 10:01:25.793 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::RandomNumberGenerator. It already exists in the database.\n2017-09-27 10:01:25.798 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::Trust. It already exists in the database.\n2017-09-27 10:01:25.803 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Software::DBMS. It already exists in the database.\n2017-09-27 10:01:25.807 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::XenAPI. It already exists in the database.\n2017-09-27 10:01:25.812 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace CIM::ProcessorAllocationSettingData. It already exists in the database.\n2017-09-27 10:01:25.816 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::HostCapabilities. It already exists in the database.\n2017-09-27 10:01:25.821 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::VMwareQuotaFlavor. It already exists in the database.\n2017-09-27 10:01:25.826 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace CIM::ResourceAllocationSettingData. It already exists in the database.\n2017-09-27 10:01:25.831 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::VirtCPUTopology. It already exists in the database.\n2017-09-27 10:01:25.836 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::Watchdog. It already exists in the database.\n2017-09-27 10:01:25.841 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::AggregateNumInstancesFilter. It already exists in the database.\n2017-09-27 10:01:25.846 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::Quota. It already exists in the database.\n2017-09-27 10:01:25.852 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace CIM::StorageAllocationSettingData. It already exists in the database.\n2017-09-27 10:01:25.857 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::VMwareFlavor. It already exists in the database.\n2017-09-27 10:01:25.862 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Cinder::Volumetype. It already exists in the database.\n2017-09-27 10:01:25.867 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::OperatingSystem. It already exists in the database.\n2017-09-27 10:01:25.872 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::VMware. It already exists in the database.\n2017-09-27 10:01:25.878 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Software::WebServers. It already exists in the database.\n2017-09-27 10:01:25.883 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::Hypervisor. It already exists in the database.\n2017-09-27 10:01:25.888 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Compute::InstanceData. It already exists in the database.\n2017-09-27 10:01:25.893 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace OS::Glance::CommonImageProperties. It already exists in the database.\n2017-09-27 10:01:25.898 6516 INFO glance.db.sqlalchemy.metadata [-] Skipping namespace CIM::VirtualSystemSettingData. It already exists in the database.\n2017-09-27 10:01:25.899 6516 INFO glance.db.sqlalchemy.metadata [-] Metadata loading finished', 'stdout': ''}
2017-09-27 10:01:25,992 [salt.state       ][INFO    ][4933] Completed state [glance-manage db_load_metadefs] at time 10:01:25.992006 duration_in_ms=1550.813
2017-09-27 10:01:25,993 [salt.state       ][INFO    ][4933] Running state [/var/lib/glance/images] at time 10:01:25.992885
2017-09-27 10:01:25,993 [salt.state       ][INFO    ][4933] Executing state file.directory for /var/lib/glance/images
2017-09-27 10:01:25,995 [salt.state       ][INFO    ][4933] Directory /var/lib/glance/images is in the correct state
2017-09-27 10:01:25,995 [salt.state       ][INFO    ][4933] Completed state [/var/lib/glance/images] at time 10:01:25.995152 duration_in_ms=2.267
2017-09-27 10:01:25,997 [salt.minion      ][INFO    ][4933] Returning information for job: 20170927100040306048
2017-09-27 10:02:10,085 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927100210077905
2017-09-27 10:02:10,109 [salt.minion      ][INFO    ][6613] Starting a new job with PID 6613
2017-09-27 10:02:12,736 [salt.state       ][INFO    ][6613] Loading fresh modules for state activity
2017-09-27 10:02:12,771 [salt.fileclient  ][INFO    ][6613] Fetching file from saltenv 'base', ** done ** 'glusterfs/client.sls'
2017-09-27 10:02:12,822 [salt.fileclient  ][INFO    ][6613] Fetching file from saltenv 'base', ** done ** 'glusterfs/map.jinja'
2017-09-27 10:02:12,839 [py.warnings      ][WARNING ][6613] /usr/lib/python2.7/dist-packages/salt/utils/templates.py:73: DeprecationWarning: Starting in 2015.5, cmd.run uses python_shell=False by default, which doesn't support shellisms (pipes, env variables, etc). cmd.run is currently aliased to cmd.shell to prevent breakage. Please switch to cmd.shell or set python_shell=True to avoid breakage in the future, when this aliasing is removed.

2017-09-27 10:02:12,841 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command 'systemd-escape -p --suffix=mount /var/lib/glance/images' in directory '/root'
2017-09-27 10:02:12,856 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command 'systemd-escape -p --suffix=mount /var/lib/keystone/fernet-keys' in directory '/root'
2017-09-27 10:02:12,871 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command 'systemd-escape -p --suffix=mount /var/lib/keystone/credential-keys' in directory '/root'
2017-09-27 10:02:14,363 [salt.state       ][INFO    ][6613] Running state [glusterfs-client] at time 10:02:14.362952
2017-09-27 10:02:14,363 [salt.state       ][INFO    ][6613] Executing state pkg.installed for glusterfs-client
2017-09-27 10:02:14,364 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:02:14,887 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 10:02:17,731 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'glusterfs-client'] in directory '/root'
2017-09-27 10:02:20,177 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100220167250
2017-09-27 10:02:20,199 [salt.minion      ][INFO    ][7258] Starting a new job with PID 7258
2017-09-27 10:02:20,219 [salt.minion      ][INFO    ][7258] Returning information for job: 20170927100220167250
2017-09-27 10:02:30,394 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100230379491
2017-09-27 10:02:30,415 [salt.minion      ][INFO    ][7407] Starting a new job with PID 7407
2017-09-27 10:02:30,439 [salt.minion      ][INFO    ][7407] Returning information for job: 20170927100230379491
2017-09-27 10:02:40,448 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100240431622
2017-09-27 10:02:40,471 [salt.minion      ][INFO    ][12244] Starting a new job with PID 12244
2017-09-27 10:02:40,490 [salt.minion      ][INFO    ][12244] Returning information for job: 20170927100240431622
2017-09-27 10:02:42,552 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:02:42,604 [salt.state       ][INFO    ][6613] Made the following changes:
'libaio1' changed from 'absent' to '0.3.110-2'
'liblvm2app2.2' changed from 'absent' to '2.02.133-1ubuntu10'
'glusterfs-common' changed from 'absent' to '3.7.6-1ubuntu1'
'libibverbs1' changed from 'absent' to '1.1.8-1.1ubuntu2'
'libdevmapper-event1.02.1' changed from 'absent' to '2:1.02.110-1ubuntu10'
'libattr1-dev' changed from 'absent' to '1:2.4.47-2'
'libc6-dev' changed from 'absent' to '2.23-0ubuntu9'
'fuse' changed from 'absent' to '2.9.4-1ubuntu3.1'
'manpages' changed from 'absent' to '4.04-2'
'linux-libc-dev' changed from 'absent' to '4.4.0-96.119'
'acl-dev' changed from 'absent' to '1'
'libacl1-dev' changed from 'absent' to '2.2.52-3'
'glusterfs-client' changed from 'absent' to '3.7.6-1ubuntu1'
'manpages-dev' changed from 'absent' to '4.04-2'
'psmisc' changed from 'absent' to '22.21-2.1build1'
'liburcu4' changed from 'absent' to '0.9.1-3'
'linux-kernel-headers' changed from 'absent' to '1'
'librdmacm1' changed from 'absent' to '1.0.21-1'
'attr-dev' changed from 'absent' to '1'
'libc-dev' changed from 'absent' to '1'
'libc-dev-bin' changed from 'absent' to '2.23-0ubuntu9'

2017-09-27 10:02:42,633 [salt.state       ][INFO    ][6613] Loading fresh modules for state activity
2017-09-27 10:02:42,681 [salt.state       ][INFO    ][6613] Completed state [glusterfs-client] at time 10:02:42.681188 duration_in_ms=28318.235
2017-09-27 10:02:42,690 [salt.state       ][INFO    ][6613] Running state [attr] at time 10:02:42.689475
2017-09-27 10:02:42,690 [salt.state       ][INFO    ][6613] Executing state pkg.installed for attr
2017-09-27 10:02:42,924 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'attr'] in directory '/root'
2017-09-27 10:02:45,182 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:02:45,215 [salt.state       ][INFO    ][6613] Made the following changes:
'attr' changed from 'absent' to '1:2.4.47-2'

2017-09-27 10:02:45,225 [salt.state       ][INFO    ][6613] Loading fresh modules for state activity
2017-09-27 10:02:45,246 [salt.state       ][INFO    ][6613] Completed state [attr] at time 10:02:45.246001 duration_in_ms=2556.524
2017-09-27 10:02:45,250 [salt.state       ][INFO    ][6613] Running state [/etc/systemd/system/var-lib-glance-images.mount] at time 10:02:45.249957
2017-09-27 10:02:45,251 [salt.state       ][INFO    ][6613] Executing state file.managed for /etc/systemd/system/var-lib-glance-images.mount
2017-09-27 10:02:45,302 [salt.fileclient  ][INFO    ][6613] Fetching file from saltenv 'base', ** done ** 'glusterfs/files/glusterfs-client.mount'
2017-09-27 10:02:45,310 [salt.state       ][INFO    ][6613] File changed:
New file
2017-09-27 10:02:45,311 [salt.state       ][INFO    ][6613] Completed state [/etc/systemd/system/var-lib-glance-images.mount] at time 10:02:45.310605 duration_in_ms=60.648
2017-09-27 10:02:45,468 [salt.state       ][INFO    ][6613] Running state [var-lib-glance-images.mount] at time 10:02:45.468064
2017-09-27 10:02:45,469 [salt.state       ][INFO    ][6613] Executing state service.running for var-lib-glance-images.mount
2017-09-27 10:02:45,471 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'status', 'var-lib-glance-images.mount', '-n', '0'] in directory '/root'
2017-09-27 10:02:45,496 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-active', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,510 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,527 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,540 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,635 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-active', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,651 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,669 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,686 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,702 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,801 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-glance-images.mount'] in directory '/root'
2017-09-27 10:02:45,819 [salt.state       ][INFO    ][6613] {'var-lib-glance-images.mount': True}
2017-09-27 10:02:45,820 [salt.state       ][INFO    ][6613] Completed state [var-lib-glance-images.mount] at time 10:02:45.819822 duration_in_ms=351.757
2017-09-27 10:02:45,821 [salt.state       ][INFO    ][6613] Running state [/var/lib/glance/images] at time 10:02:45.821020
2017-09-27 10:02:45,822 [salt.state       ][INFO    ][6613] Executing state file.directory for /var/lib/glance/images
2017-09-27 10:02:45,827 [salt.state       ][INFO    ][6613] {'group': 'glance', 'user': 'glance'}
2017-09-27 10:02:45,828 [salt.state       ][INFO    ][6613] Completed state [/var/lib/glance/images] at time 10:02:45.827440 duration_in_ms=6.419
2017-09-27 10:02:45,828 [salt.state       ][INFO    ][6613] Running state [/etc/systemd/system/var-lib-keystone-fernet\x2dkeys.mount] at time 10:02:45.827873
2017-09-27 10:02:45,828 [salt.state       ][INFO    ][6613] Executing state file.managed for /etc/systemd/system/var-lib-keystone-fernet\x2dkeys.mount
2017-09-27 10:02:45,894 [salt.fileclient  ][INFO    ][6613] Fetching file from saltenv 'base', ** done ** 'glusterfs/files/glusterfs-client.mount'
2017-09-27 10:02:45,905 [salt.state       ][INFO    ][6613] File changed:
New file
2017-09-27 10:02:45,906 [salt.state       ][INFO    ][6613] Completed state [/etc/systemd/system/var-lib-keystone-fernet\x2dkeys.mount] at time 10:02:45.905584 duration_in_ms=77.711
2017-09-27 10:02:45,907 [salt.state       ][INFO    ][6613] Running state [var-lib-keystone-fernet\x2dkeys.mount] at time 10:02:45.906621
2017-09-27 10:02:45,907 [salt.state       ][INFO    ][6613] Executing state service.running for var-lib-keystone-fernet\x2dkeys.mount
2017-09-27 10:02:45,908 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'status', 'var-lib-keystone-fernet\\x2dkeys.mount', '-n', '0'] in directory '/root'
2017-09-27 10:02:45,931 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-active', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:45,950 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:45,973 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:45,993 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,106 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-active', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,125 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,145 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,165 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,184 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,291 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-fernet\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,311 [salt.state       ][INFO    ][6613] {'var-lib-keystone-fernet\\x2dkeys.mount': True}
2017-09-27 10:02:46,312 [salt.state       ][INFO    ][6613] Completed state [var-lib-keystone-fernet\x2dkeys.mount] at time 10:02:46.311744 duration_in_ms=405.122
2017-09-27 10:02:46,313 [salt.state       ][INFO    ][6613] Running state [/var/lib/keystone/fernet-keys] at time 10:02:46.313148
2017-09-27 10:02:46,314 [salt.state       ][INFO    ][6613] Executing state file.directory for /var/lib/keystone/fernet-keys
2017-09-27 10:02:46,318 [salt.state       ][INFO    ][6613] {'group': 'keystone', 'user': 'keystone'}
2017-09-27 10:02:46,318 [salt.state       ][INFO    ][6613] Completed state [/var/lib/keystone/fernet-keys] at time 10:02:46.318044 duration_in_ms=4.896
2017-09-27 10:02:46,319 [salt.state       ][INFO    ][6613] Running state [/etc/systemd/system/var-lib-keystone-credential\x2dkeys.mount] at time 10:02:46.318602
2017-09-27 10:02:46,319 [salt.state       ][INFO    ][6613] Executing state file.managed for /etc/systemd/system/var-lib-keystone-credential\x2dkeys.mount
2017-09-27 10:02:46,343 [salt.fileclient  ][INFO    ][6613] Fetching file from saltenv 'base', ** done ** 'glusterfs/files/glusterfs-client.mount'
2017-09-27 10:02:46,353 [salt.state       ][INFO    ][6613] File changed:
New file
2017-09-27 10:02:46,354 [salt.state       ][INFO    ][6613] Completed state [/etc/systemd/system/var-lib-keystone-credential\x2dkeys.mount] at time 10:02:46.353578 duration_in_ms=34.977
2017-09-27 10:02:46,355 [salt.state       ][INFO    ][6613] Running state [var-lib-keystone-credential\x2dkeys.mount] at time 10:02:46.354747
2017-09-27 10:02:46,355 [salt.state       ][INFO    ][6613] Executing state service.running for var-lib-keystone-credential\x2dkeys.mount
2017-09-27 10:02:46,356 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'status', 'var-lib-keystone-credential\\x2dkeys.mount', '-n', '0'] in directory '/root'
2017-09-27 10:02:46,379 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-active', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,398 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,422 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,440 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,551 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-active', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,568 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,585 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,602 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,618 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,715 [salt.loaded.int.module.cmdmod][INFO    ][6613] Executing command ['systemctl', 'is-enabled', 'var-lib-keystone-credential\\x2dkeys.mount'] in directory '/root'
2017-09-27 10:02:46,731 [salt.state       ][INFO    ][6613] {'var-lib-keystone-credential\\x2dkeys.mount': True}
2017-09-27 10:02:46,731 [salt.state       ][INFO    ][6613] Completed state [var-lib-keystone-credential\x2dkeys.mount] at time 10:02:46.731042 duration_in_ms=376.293
2017-09-27 10:02:46,732 [salt.state       ][INFO    ][6613] Running state [/var/lib/keystone/credential-keys] at time 10:02:46.732216
2017-09-27 10:02:46,733 [salt.state       ][INFO    ][6613] Executing state file.directory for /var/lib/keystone/credential-keys
2017-09-27 10:02:46,736 [salt.state       ][INFO    ][6613] {'group': 'keystone', 'user': 'keystone'}
2017-09-27 10:02:46,736 [salt.state       ][INFO    ][6613] Completed state [/var/lib/keystone/credential-keys] at time 10:02:46.736169 duration_in_ms=3.952
2017-09-27 10:02:46,738 [salt.minion      ][INFO    ][6613] Returning information for job: 20170927100210077905
2017-09-27 10:02:53,591 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927100253580276
2017-09-27 10:02:53,615 [salt.minion      ][INFO    ][12681] Starting a new job with PID 12681
2017-09-27 10:02:56,324 [salt.state       ][INFO    ][12681] Loading fresh modules for state activity
2017-09-27 10:02:56,375 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/server.sls'
2017-09-27 10:02:56,490 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 10:02:56,547 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/init.sls'
2017-09-27 10:02:56,566 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/server/init.sls'
2017-09-27 10:02:56,584 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/server/service/init.sls'
2017-09-27 10:02:56,608 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:56,647 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/server/service/modules.sls'
2017-09-27 10:02:56,670 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:56,704 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/server/service/mpm.sls'
2017-09-27 10:02:56,727 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:56,765 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/server/site.sls'
2017-09-27 10:02:56,820 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:56,884 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/server/users.sls'
2017-09-27 10:02:56,926 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:56,976 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/server/robots.sls'
2017-09-27 10:02:57,001 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:58,586 [salt.state       ][INFO    ][12681] Running state [apache2] at time 10:02:58.585910
2017-09-27 10:02:58,586 [salt.state       ][INFO    ][12681] Executing state pkg.installed for apache2
2017-09-27 10:02:58,587 [salt.loaded.int.module.cmdmod][INFO    ][12681] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:02:59,121 [salt.state       ][INFO    ][12681] Package apache2 is already installed
2017-09-27 10:02:59,122 [salt.state       ][INFO    ][12681] Completed state [apache2] at time 10:02:59.121777 duration_in_ms=535.866
2017-09-27 10:02:59,122 [salt.state       ][INFO    ][12681] Running state [a2enmod ssl] at time 10:02:59.122269
2017-09-27 10:02:59,123 [salt.state       ][INFO    ][12681] Executing state cmd.run for a2enmod ssl
2017-09-27 10:02:59,123 [salt.state       ][INFO    ][12681] /etc/apache2/mods-enabled/ssl.load exists
2017-09-27 10:02:59,123 [salt.state       ][INFO    ][12681] Completed state [a2enmod ssl] at time 10:02:59.123306 duration_in_ms=1.036
2017-09-27 10:02:59,124 [salt.state       ][INFO    ][12681] Running state [a2enmod rewrite] at time 10:02:59.123745
2017-09-27 10:02:59,124 [salt.state       ][INFO    ][12681] Executing state cmd.run for a2enmod rewrite
2017-09-27 10:02:59,124 [salt.state       ][INFO    ][12681] /etc/apache2/mods-enabled/rewrite.load exists
2017-09-27 10:02:59,125 [salt.state       ][INFO    ][12681] Completed state [a2enmod rewrite] at time 10:02:59.124654 duration_in_ms=0.91
2017-09-27 10:02:59,125 [salt.state       ][INFO    ][12681] Running state [libapache2-mod-wsgi] at time 10:02:59.125086
2017-09-27 10:02:59,125 [salt.state       ][INFO    ][12681] Executing state pkg.installed for libapache2-mod-wsgi
2017-09-27 10:02:59,128 [salt.state       ][INFO    ][12681] Package libapache2-mod-wsgi is already installed
2017-09-27 10:02:59,128 [salt.state       ][INFO    ][12681] Completed state [libapache2-mod-wsgi] at time 10:02:59.128187 duration_in_ms=3.1
2017-09-27 10:02:59,129 [salt.state       ][INFO    ][12681] Running state [a2enmod wsgi] at time 10:02:59.128629
2017-09-27 10:02:59,129 [salt.state       ][INFO    ][12681] Executing state cmd.run for a2enmod wsgi
2017-09-27 10:02:59,129 [salt.state       ][INFO    ][12681] /etc/apache2/mods-enabled/wsgi.load exists
2017-09-27 10:02:59,130 [salt.state       ][INFO    ][12681] Completed state [a2enmod wsgi] at time 10:02:59.129540 duration_in_ms=0.911
2017-09-27 10:02:59,131 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/mods-enabled/mpm_prefork.load] at time 10:02:59.130689
2017-09-27 10:02:59,131 [salt.state       ][INFO    ][12681] Executing state file.absent for /etc/apache2/mods-enabled/mpm_prefork.load
2017-09-27 10:02:59,131 [salt.state       ][INFO    ][12681] File /etc/apache2/mods-enabled/mpm_prefork.load is not present
2017-09-27 10:02:59,132 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/mods-enabled/mpm_prefork.load] at time 10:02:59.131597 duration_in_ms=0.908
2017-09-27 10:02:59,132 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/mods-enabled/mpm_prefork.conf] at time 10:02:59.131899
2017-09-27 10:02:59,132 [salt.state       ][INFO    ][12681] Executing state file.absent for /etc/apache2/mods-enabled/mpm_prefork.conf
2017-09-27 10:02:59,133 [salt.state       ][INFO    ][12681] File /etc/apache2/mods-enabled/mpm_prefork.conf is not present
2017-09-27 10:02:59,133 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/mods-enabled/mpm_prefork.conf] at time 10:02:59.132770 duration_in_ms=0.871
2017-09-27 10:02:59,133 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/mods-enabled/mpm_worker.load] at time 10:02:59.133044
2017-09-27 10:02:59,133 [salt.state       ][INFO    ][12681] Executing state file.absent for /etc/apache2/mods-enabled/mpm_worker.load
2017-09-27 10:02:59,134 [salt.state       ][INFO    ][12681] File /etc/apache2/mods-enabled/mpm_worker.load is not present
2017-09-27 10:02:59,134 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/mods-enabled/mpm_worker.load] at time 10:02:59.133907 duration_in_ms=0.863
2017-09-27 10:02:59,134 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/mods-enabled/mpm_worker.conf] at time 10:02:59.134182
2017-09-27 10:02:59,134 [salt.state       ][INFO    ][12681] Executing state file.absent for /etc/apache2/mods-enabled/mpm_worker.conf
2017-09-27 10:02:59,135 [salt.state       ][INFO    ][12681] File /etc/apache2/mods-enabled/mpm_worker.conf is not present
2017-09-27 10:02:59,135 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/mods-enabled/mpm_worker.conf] at time 10:02:59.135020 duration_in_ms=0.838
2017-09-27 10:02:59,136 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/mods-available/mpm_event.conf] at time 10:02:59.136441
2017-09-27 10:02:59,137 [salt.state       ][INFO    ][12681] Executing state file.managed for /etc/apache2/mods-available/mpm_event.conf
2017-09-27 10:02:59,170 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/files/mpm/mpm_event.conf'
2017-09-27 10:02:59,190 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:59,207 [salt.state       ][INFO    ][12681] File /etc/apache2/mods-available/mpm_event.conf is in the correct state
2017-09-27 10:02:59,207 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/mods-available/mpm_event.conf] at time 10:02:59.207346 duration_in_ms=70.905
2017-09-27 10:02:59,208 [salt.state       ][INFO    ][12681] Running state [a2enmod mpm_event] at time 10:02:59.208046
2017-09-27 10:02:59,208 [salt.state       ][INFO    ][12681] Executing state cmd.run for a2enmod mpm_event
2017-09-27 10:02:59,209 [salt.state       ][INFO    ][12681] /etc/apache2/mods-enabled/mpm_event.load exists
2017-09-27 10:02:59,209 [salt.state       ][INFO    ][12681] Completed state [a2enmod mpm_event] at time 10:02:59.208992 duration_in_ms=0.946
2017-09-27 10:02:59,209 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/ports.conf] at time 10:02:59.209424
2017-09-27 10:02:59,210 [salt.state       ][INFO    ][12681] Executing state file.managed for /etc/apache2/ports.conf
2017-09-27 10:02:59,228 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/files/ports.conf'
2017-09-27 10:02:59,247 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:59,266 [salt.state       ][INFO    ][12681] File /etc/apache2/ports.conf is in the correct state
2017-09-27 10:02:59,266 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/ports.conf] at time 10:02:59.265889 duration_in_ms=56.465
2017-09-27 10:02:59,266 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/conf-available/security.conf] at time 10:02:59.266388
2017-09-27 10:02:59,267 [salt.state       ][INFO    ][12681] Executing state file.managed for /etc/apache2/conf-available/security.conf
2017-09-27 10:02:59,284 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/files/security.conf'
2017-09-27 10:02:59,306 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:59,330 [salt.state       ][INFO    ][12681] File /etc/apache2/conf-available/security.conf is in the correct state
2017-09-27 10:02:59,330 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/conf-available/security.conf] at time 10:02:59.330223 duration_in_ms=63.835
2017-09-27 10:02:59,340 [salt.state       ][INFO    ][12681] Running state [mysql-client] at time 10:02:59.339505
2017-09-27 10:02:59,340 [salt.state       ][INFO    ][12681] Executing state pkg.installed for mysql-client
2017-09-27 10:02:59,343 [salt.state       ][INFO    ][12681] Package mysql-client is already installed
2017-09-27 10:02:59,344 [salt.state       ][INFO    ][12681] Completed state [mysql-client] at time 10:02:59.343593 duration_in_ms=4.088
2017-09-27 10:02:59,344 [salt.state       ][INFO    ][12681] Running state [python-keystoneclient] at time 10:02:59.343961
2017-09-27 10:02:59,344 [salt.state       ][INFO    ][12681] Executing state pkg.installed for python-keystoneclient
2017-09-27 10:02:59,348 [salt.state       ][INFO    ][12681] Package python-keystoneclient is already installed
2017-09-27 10:02:59,348 [salt.state       ][INFO    ][12681] Completed state [python-keystoneclient] at time 10:02:59.347833 duration_in_ms=3.872
2017-09-27 10:02:59,348 [salt.state       ][INFO    ][12681] Running state [python-mysqldb] at time 10:02:59.348200
2017-09-27 10:02:59,349 [salt.state       ][INFO    ][12681] Executing state pkg.installed for python-mysqldb
2017-09-27 10:02:59,352 [salt.state       ][INFO    ][12681] Package python-mysqldb is already installed
2017-09-27 10:02:59,352 [salt.state       ][INFO    ][12681] Completed state [python-mysqldb] at time 10:02:59.352109 duration_in_ms=3.909
2017-09-27 10:02:59,353 [salt.state       ][INFO    ][12681] Running state [python-openstackclient] at time 10:02:59.352472
2017-09-27 10:02:59,353 [salt.state       ][INFO    ][12681] Executing state pkg.installed for python-openstackclient
2017-09-27 10:02:59,356 [salt.state       ][INFO    ][12681] Package python-openstackclient is already installed
2017-09-27 10:02:59,356 [salt.state       ][INFO    ][12681] Completed state [python-openstackclient] at time 10:02:59.356236 duration_in_ms=3.764
2017-09-27 10:02:59,357 [salt.state       ][INFO    ][12681] Running state [python-memcache] at time 10:02:59.356601
2017-09-27 10:02:59,357 [salt.state       ][INFO    ][12681] Executing state pkg.installed for python-memcache
2017-09-27 10:02:59,360 [salt.state       ][INFO    ][12681] Package python-memcache is already installed
2017-09-27 10:02:59,360 [salt.state       ][INFO    ][12681] Completed state [python-memcache] at time 10:02:59.360327 duration_in_ms=3.726
2017-09-27 10:02:59,361 [salt.state       ][INFO    ][12681] Running state [python-pycadf] at time 10:02:59.360688
2017-09-27 10:02:59,361 [salt.state       ][INFO    ][12681] Executing state pkg.installed for python-pycadf
2017-09-27 10:02:59,364 [salt.state       ][INFO    ][12681] Package python-pycadf is already installed
2017-09-27 10:02:59,364 [salt.state       ][INFO    ][12681] Completed state [python-pycadf] at time 10:02:59.364424 duration_in_ms=3.736
2017-09-27 10:02:59,365 [salt.state       ][INFO    ][12681] Running state [python-six] at time 10:02:59.364797
2017-09-27 10:02:59,365 [salt.state       ][INFO    ][12681] Executing state pkg.installed for python-six
2017-09-27 10:02:59,368 [salt.state       ][INFO    ][12681] Package python-six is already installed
2017-09-27 10:02:59,368 [salt.state       ][INFO    ][12681] Completed state [python-six] at time 10:02:59.368411 duration_in_ms=3.614
2017-09-27 10:02:59,369 [salt.state       ][INFO    ][12681] Running state [keystone] at time 10:02:59.368799
2017-09-27 10:02:59,369 [salt.state       ][INFO    ][12681] Executing state pkg.installed for keystone
2017-09-27 10:02:59,372 [salt.state       ][INFO    ][12681] Package keystone is already installed
2017-09-27 10:02:59,372 [salt.state       ][INFO    ][12681] Completed state [keystone] at time 10:02:59.372441 duration_in_ms=3.641
2017-09-27 10:02:59,373 [salt.state       ][INFO    ][12681] Running state [gettext-base] at time 10:02:59.372799
2017-09-27 10:02:59,373 [salt.state       ][INFO    ][12681] Executing state pkg.installed for gettext-base
2017-09-27 10:02:59,376 [salt.state       ][INFO    ][12681] Package gettext-base is already installed
2017-09-27 10:02:59,376 [salt.state       ][INFO    ][12681] Completed state [gettext-base] at time 10:02:59.376435 duration_in_ms=3.636
2017-09-27 10:02:59,377 [salt.state       ][INFO    ][12681] Running state [python-psycopg2] at time 10:02:59.376799
2017-09-27 10:02:59,377 [salt.state       ][INFO    ][12681] Executing state pkg.installed for python-psycopg2
2017-09-27 10:02:59,380 [salt.state       ][INFO    ][12681] Package python-psycopg2 is already installed
2017-09-27 10:02:59,380 [salt.state       ][INFO    ][12681] Completed state [python-psycopg2] at time 10:02:59.380293 duration_in_ms=3.494
2017-09-27 10:02:59,381 [salt.state       ][INFO    ][12681] Running state [python-keystone] at time 10:02:59.380638
2017-09-27 10:02:59,381 [salt.state       ][INFO    ][12681] Executing state pkg.installed for python-keystone
2017-09-27 10:02:59,384 [salt.state       ][INFO    ][12681] Package python-keystone is already installed
2017-09-27 10:02:59,384 [salt.state       ][INFO    ][12681] Completed state [python-keystone] at time 10:02:59.384148 duration_in_ms=3.51
2017-09-27 10:02:59,385 [salt.state       ][INFO    ][12681] Running state [/etc/keystone/policy.json] at time 10:02:59.384667
2017-09-27 10:02:59,385 [salt.state       ][INFO    ][12681] Executing state file.managed for /etc/keystone/policy.json
2017-09-27 10:02:59,385 [salt.loaded.int.states.file][WARNING ][12681] State for file: /etc/keystone/policy.json - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2017-09-27 10:02:59,386 [salt.state       ][INFO    ][12681] File /etc/keystone/policy.json exists with proper permissions. No changes made.
2017-09-27 10:02:59,386 [salt.state       ][INFO    ][12681] Completed state [/etc/keystone/policy.json] at time 10:02:59.386322 duration_in_ms=1.655
2017-09-27 10:02:59,387 [salt.state       ][INFO    ][12681] Running state [/etc/keystone/keystone-paste.ini] at time 10:02:59.386834
2017-09-27 10:02:59,387 [salt.state       ][INFO    ][12681] Executing state file.managed for /etc/keystone/keystone-paste.ini
2017-09-27 10:02:59,405 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/files/ocata/keystone-paste.ini.Debian'
2017-09-27 10:02:59,407 [salt.state       ][INFO    ][12681] File /etc/keystone/keystone-paste.ini is in the correct state
2017-09-27 10:02:59,407 [salt.state       ][INFO    ][12681] Completed state [/etc/keystone/keystone-paste.ini] at time 10:02:59.407391 duration_in_ms=20.557
2017-09-27 10:02:59,408 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/sites-enabled/000-default.conf] at time 10:02:59.407734
2017-09-27 10:02:59,408 [salt.state       ][INFO    ][12681] Executing state file.absent for /etc/apache2/sites-enabled/000-default.conf
2017-09-27 10:02:59,408 [salt.state       ][INFO    ][12681] File /etc/apache2/sites-enabled/000-default.conf is not present
2017-09-27 10:02:59,409 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/sites-enabled/000-default.conf] at time 10:02:59.408803 duration_in_ms=1.068
2017-09-27 10:02:59,409 [salt.state       ][INFO    ][12681] Running state [/etc/keystone/keystone.conf] at time 10:02:59.409321
2017-09-27 10:02:59,410 [salt.state       ][INFO    ][12681] Executing state file.managed for /etc/keystone/keystone.conf
2017-09-27 10:02:59,436 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/files/ocata/keystone.conf.Debian'
2017-09-27 10:02:59,526 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 10:02:59,540 [salt.state       ][INFO    ][12681] File /etc/keystone/keystone.conf is in the correct state
2017-09-27 10:02:59,540 [salt.state       ][INFO    ][12681] Completed state [/etc/keystone/keystone.conf] at time 10:02:59.540058 duration_in_ms=130.737
2017-09-27 10:02:59,540 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/sites-enabled/keystone.conf] at time 10:02:59.540385
2017-09-27 10:02:59,541 [salt.state       ][INFO    ][12681] Executing state file.absent for /etc/apache2/sites-enabled/keystone.conf
2017-09-27 10:02:59,541 [salt.state       ][INFO    ][12681] File /etc/apache2/sites-enabled/keystone.conf is not present
2017-09-27 10:02:59,541 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/sites-enabled/keystone.conf] at time 10:02:59.541391 duration_in_ms=1.006
2017-09-27 10:02:59,542 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/sites-enabled/wsgi-keystone.conf] at time 10:02:59.541724
2017-09-27 10:02:59,542 [salt.state       ][INFO    ][12681] Executing state file.absent for /etc/apache2/sites-enabled/wsgi-keystone.conf
2017-09-27 10:02:59,542 [salt.state       ][INFO    ][12681] File /etc/apache2/sites-enabled/wsgi-keystone.conf is not present
2017-09-27 10:02:59,543 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/sites-enabled/wsgi-keystone.conf] at time 10:02:59.542676 duration_in_ms=0.951
2017-09-27 10:02:59,543 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/sites-available/keystone_wsgi.conf] at time 10:02:59.543159
2017-09-27 10:02:59,543 [salt.state       ][INFO    ][12681] Executing state file.managed for /etc/apache2/sites-available/keystone_wsgi.conf
2017-09-27 10:02:59,566 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/files/apache.conf'
2017-09-27 10:02:59,591 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 10:02:59,634 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/files/ocata/wsgi-keystone.conf'
2017-09-27 10:02:59,717 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/files/_name.conf'
2017-09-27 10:02:59,740 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/map.jinja'
2017-09-27 10:02:59,780 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/files/_ssl.conf'
2017-09-27 10:02:59,803 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/files/_locations.conf'
2017-09-27 10:02:59,826 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'apache/files/_log.conf'
2017-09-27 10:02:59,838 [salt.state       ][INFO    ][12681] File /etc/apache2/sites-available/keystone_wsgi.conf is in the correct state
2017-09-27 10:02:59,838 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/sites-available/keystone_wsgi.conf] at time 10:02:59.838143 duration_in_ms=294.983
2017-09-27 10:02:59,839 [salt.state       ][INFO    ][12681] Running state [/etc/keystone/logging.conf] at time 10:02:59.838964
2017-09-27 10:02:59,840 [salt.state       ][INFO    ][12681] Executing state file.managed for /etc/keystone/logging.conf
2017-09-27 10:02:59,840 [salt.loaded.int.states.file][WARNING ][12681] State for file: /etc/keystone/logging.conf - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2017-09-27 10:02:59,841 [salt.state       ][INFO    ][12681] File /etc/keystone/logging.conf exists with proper permissions. No changes made.
2017-09-27 10:02:59,841 [salt.state       ][INFO    ][12681] Completed state [/etc/keystone/logging.conf] at time 10:02:59.841331 duration_in_ms=2.366
2017-09-27 10:02:59,842 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/sites-enabled/keystone_wsgi.conf] at time 10:02:59.842060
2017-09-27 10:02:59,843 [salt.state       ][INFO    ][12681] Executing state file.symlink for /etc/apache2/sites-enabled/keystone_wsgi.conf
2017-09-27 10:02:59,844 [salt.state       ][INFO    ][12681] Symlink /etc/apache2/sites-enabled/keystone_wsgi.conf is present and owned by root:root
2017-09-27 10:02:59,845 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/sites-enabled/keystone_wsgi.conf] at time 10:02:59.844593 duration_in_ms=2.532
2017-09-27 10:02:59,848 [salt.state       ][INFO    ][12681] Running state [apache2] at time 10:02:59.848294
2017-09-27 10:02:59,849 [salt.state       ][INFO    ][12681] Executing state service.running for apache2
2017-09-27 10:02:59,850 [salt.loaded.int.module.cmdmod][INFO    ][12681] Executing command ['systemctl', 'status', 'apache2.service', '-n', '0'] in directory '/root'
2017-09-27 10:02:59,889 [salt.loaded.int.module.cmdmod][INFO    ][12681] Executing command ['systemctl', 'is-active', 'apache2.service'] in directory '/root'
2017-09-27 10:02:59,910 [salt.loaded.int.module.cmdmod][INFO    ][12681] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 10:02:59,932 [salt.state       ][INFO    ][12681] The service apache2 is already running
2017-09-27 10:02:59,933 [salt.state       ][INFO    ][12681] Completed state [apache2] at time 10:02:59.932666 duration_in_ms=84.368
2017-09-27 10:02:59,935 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/conf-enabled/security.conf] at time 10:02:59.934437
2017-09-27 10:02:59,935 [salt.state       ][INFO    ][12681] Executing state file.symlink for /etc/apache2/conf-enabled/security.conf
2017-09-27 10:02:59,938 [salt.state       ][INFO    ][12681] Symlink /etc/apache2/conf-enabled/security.conf is present and owned by root:root
2017-09-27 10:02:59,939 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/conf-enabled/security.conf] at time 10:02:59.938537 duration_in_ms=4.1
2017-09-27 10:02:59,939 [salt.state       ][INFO    ][12681] Running state [/etc/apache2/sites-enabled/keystone_wsgi] at time 10:02:59.939226
2017-09-27 10:02:59,940 [salt.state       ][INFO    ][12681] Executing state file.absent for /etc/apache2/sites-enabled/keystone_wsgi
2017-09-27 10:02:59,941 [salt.state       ][INFO    ][12681] File /etc/apache2/sites-enabled/keystone_wsgi is not present
2017-09-27 10:02:59,941 [salt.state       ][INFO    ][12681] Completed state [/etc/apache2/sites-enabled/keystone_wsgi] at time 10:02:59.941378 duration_in_ms=2.151
2017-09-27 10:02:59,943 [salt.state       ][INFO    ][12681] Running state [keystone] at time 10:02:59.942524
2017-09-27 10:02:59,943 [salt.state       ][INFO    ][12681] Executing state service.dead for keystone
2017-09-27 10:02:59,944 [salt.loaded.int.module.cmdmod][INFO    ][12681] Executing command ['systemctl', 'status', 'keystone.service', '-n', '0'] in directory '/root'
2017-09-27 10:02:59,966 [salt.state       ][INFO    ][12681] The named service keystone is not available
2017-09-27 10:02:59,967 [salt.state       ][INFO    ][12681] Completed state [keystone] at time 10:02:59.966715 duration_in_ms=24.189
2017-09-27 10:02:59,968 [salt.state       ][INFO    ][12681] Running state [/root/keystonerc] at time 10:02:59.967929
2017-09-27 10:02:59,969 [salt.state       ][INFO    ][12681] Executing state file.managed for /root/keystonerc
2017-09-27 10:02:59,992 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/files/keystonerc'
2017-09-27 10:03:00,020 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 10:03:00,047 [salt.state       ][INFO    ][12681] File /root/keystonerc is in the correct state
2017-09-27 10:03:00,047 [salt.state       ][INFO    ][12681] Completed state [/root/keystonerc] at time 10:03:00.047259 duration_in_ms=79.33
2017-09-27 10:03:00,048 [salt.state       ][INFO    ][12681] Running state [/root/keystonercv3] at time 10:03:00.048160
2017-09-27 10:03:00,049 [salt.state       ][INFO    ][12681] Executing state file.managed for /root/keystonercv3
2017-09-27 10:03:00,071 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/files/keystonercv3'
2017-09-27 10:03:00,095 [salt.fileclient  ][INFO    ][12681] Fetching file from saltenv 'base', ** done ** 'keystone/map.jinja'
2017-09-27 10:03:00,113 [salt.state       ][INFO    ][12681] File /root/keystonercv3 is in the correct state
2017-09-27 10:03:00,113 [salt.state       ][INFO    ][12681] Completed state [/root/keystonercv3] at time 10:03:00.113360 duration_in_ms=65.199
2017-09-27 10:03:00,114 [salt.state       ][INFO    ][12681] Running state [keystone-manage db_sync && sleep 1] at time 10:03:00.114069
2017-09-27 10:03:00,115 [salt.state       ][INFO    ][12681] Executing state cmd.run for keystone-manage db_sync && sleep 1
2017-09-27 10:03:00,115 [salt.loaded.int.module.cmdmod][INFO    ][12681] Executing command 'keystone-manage db_sync && sleep 1' in directory '/root'
2017-09-27 10:03:02,052 [salt.state       ][INFO    ][12681] {'pid': 12731, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:03:02,053 [salt.state       ][INFO    ][12681] Completed state [keystone-manage db_sync && sleep 1] at time 10:03:02.052965 duration_in_ms=1938.891
2017-09-27 10:03:02,055 [salt.state       ][INFO    ][12681] Running state [/var/lib/keystone/fernet-keys] at time 10:03:02.054652
2017-09-27 10:03:02,056 [salt.state       ][INFO    ][12681] Executing state file.directory for /var/lib/keystone/fernet-keys
2017-09-27 10:03:02,059 [salt.state       ][INFO    ][12681] Directory /var/lib/keystone/fernet-keys is in the correct state
2017-09-27 10:03:02,060 [salt.state       ][INFO    ][12681] Completed state [/var/lib/keystone/fernet-keys] at time 10:03:02.059793 duration_in_ms=5.14
2017-09-27 10:03:02,062 [salt.state       ][INFO    ][12681] Running state [keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone] at time 10:03:02.061856
2017-09-27 10:03:02,063 [salt.state       ][INFO    ][12681] Executing state cmd.run for keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
2017-09-27 10:03:02,064 [salt.loaded.int.module.cmdmod][INFO    ][12681] Executing command 'keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone' in directory '/root'
2017-09-27 10:03:02,816 [salt.state       ][INFO    ][12681] {'pid': 12739, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:03:02,817 [salt.state       ][INFO    ][12681] Completed state [keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone] at time 10:03:02.817367 duration_in_ms=755.509
2017-09-27 10:03:02,819 [salt.state       ][INFO    ][12681] Running state [/var/lib/keystone/credential-keys] at time 10:03:02.818825
2017-09-27 10:03:02,820 [salt.state       ][INFO    ][12681] Executing state file.directory for /var/lib/keystone/credential-keys
2017-09-27 10:03:02,823 [salt.state       ][INFO    ][12681] Directory /var/lib/keystone/credential-keys is in the correct state
2017-09-27 10:03:02,823 [salt.state       ][INFO    ][12681] Completed state [/var/lib/keystone/credential-keys] at time 10:03:02.823185 duration_in_ms=4.358
2017-09-27 10:03:02,825 [salt.state       ][INFO    ][12681] Running state [keystone-manage credential_setup --keystone-user keystone --keystone-group keystone] at time 10:03:02.825122
2017-09-27 10:03:02,826 [salt.state       ][INFO    ][12681] Executing state cmd.run for keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
2017-09-27 10:03:02,827 [salt.loaded.int.module.cmdmod][INFO    ][12681] Executing command 'keystone-manage credential_setup --keystone-user keystone --keystone-group keystone' in directory '/root'
2017-09-27 10:03:03,457 [salt.state       ][INFO    ][12681] {'pid': 12745, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:03:03,458 [salt.state       ][INFO    ][12681] Completed state [keystone-manage credential_setup --keystone-user keystone --keystone-group keystone] at time 10:03:03.457832 duration_in_ms=632.707
2017-09-27 10:03:03,461 [salt.minion      ][INFO    ][12681] Returning information for job: 20170927100253580276
2017-09-27 10:03:04,667 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command test.ping with jid 20170927100304651940
2017-09-27 10:03:04,690 [salt.minion      ][INFO    ][12755] Starting a new job with PID 12755
2017-09-27 10:03:04,741 [salt.minion      ][INFO    ][12755] Returning information for job: 20170927100304651940
2017-09-27 10:06:56,972 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927100656963529
2017-09-27 10:06:56,998 [salt.minion      ][INFO    ][12805] Starting a new job with PID 12805
2017-09-27 10:06:58,668 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:06:58,711 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/init.sls'
2017-09-27 10:06:58,738 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/controller.sls'
2017-09-27 10:06:58,806 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/map.jinja'
2017-09-27 10:07:00,214 [salt.state       ][INFO    ][12805] Running state [debconf-utils] at time 10:07:00.213564
2017-09-27 10:07:00,214 [salt.state       ][INFO    ][12805] Executing state pkg.installed for debconf-utils
2017-09-27 10:07:00,214 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:07:00,824 [salt.state       ][INFO    ][12805] Package debconf-utils is already installed
2017-09-27 10:07:00,824 [salt.state       ][INFO    ][12805] Completed state [debconf-utils] at time 10:07:00.824191 duration_in_ms=610.625
2017-09-27 10:07:00,825 [salt.state       ][INFO    ][12805] Running state [nova-consoleproxy] at time 10:07:00.825022
2017-09-27 10:07:00,825 [salt.state       ][INFO    ][12805] Executing state debconf.set for nova-consoleproxy
2017-09-27 10:07:00,826 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'debconf-get-selections' in directory '/root'
2017-09-27 10:07:01,002 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'debconf-set-selections /tmp/salt-6WtCcR' in directory '/root'
2017-09-27 10:07:01,303 [salt.state       ][INFO    ][12805] {'nova-consoleproxy/daemon_type': 'novnc'}
2017-09-27 10:07:01,304 [salt.state       ][INFO    ][12805] Completed state [nova-consoleproxy] at time 10:07:01.303997 duration_in_ms=478.97
2017-09-27 10:07:01,310 [salt.state       ][INFO    ][12805] Running state [nova] at time 10:07:01.309888
2017-09-27 10:07:01,310 [salt.state       ][INFO    ][12805] Executing state group.present for nova
2017-09-27 10:07:01,312 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'groupadd -g 303 -r nova' in directory '/root'
2017-09-27 10:07:01,444 [salt.state       ][INFO    ][12805] {'passwd': 'x', 'gid': 303, 'name': 'nova', 'members': []}
2017-09-27 10:07:01,444 [salt.state       ][INFO    ][12805] Completed state [nova] at time 10:07:01.444187 duration_in_ms=134.296
2017-09-27 10:07:01,445 [salt.state       ][INFO    ][12805] Running state [nova] at time 10:07:01.445064
2017-09-27 10:07:01,446 [salt.state       ][INFO    ][12805] Executing state user.present for nova
2017-09-27 10:07:01,447 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['useradd', '-s', '/bin/false', '-u', '303', '-g', '303', '-m', '-d', '/var/lib/nova', '-r', 'nova'] in directory '/root'
2017-09-27 10:07:01,580 [salt.state       ][INFO    ][12805] {'shell': '/bin/false', 'workphone': '', 'uid': 303, 'passwd': 'x', 'roomnumber': '', 'groups': ['nova'], 'home': '/var/lib/nova', 'name': 'nova', 'gid': 303, 'fullname': '', 'homephone': ''}
2017-09-27 10:07:01,581 [salt.state       ][INFO    ][12805] Completed state [nova] at time 10:07:01.580483 duration_in_ms=135.416
2017-09-27 10:07:01,582 [salt.state       ][INFO    ][12805] Running state [python-novaclient] at time 10:07:01.581863
2017-09-27 10:07:01,582 [salt.state       ][INFO    ][12805] Executing state pkg.installed for python-novaclient
2017-09-27 10:07:01,591 [salt.state       ][INFO    ][12805] Package python-novaclient is already installed
2017-09-27 10:07:01,591 [salt.state       ][INFO    ][12805] Completed state [python-novaclient] at time 10:07:01.590876 duration_in_ms=9.013
2017-09-27 10:07:01,592 [salt.state       ][INFO    ][12805] Running state [nova-doc] at time 10:07:01.591886
2017-09-27 10:07:01,592 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-doc
2017-09-27 10:07:01,625 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 10:07:04,410 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-doc'] in directory '/root'
2017-09-27 10:07:07,015 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100707001826
2017-09-27 10:07:07,038 [salt.minion      ][INFO    ][13472] Starting a new job with PID 13472
2017-09-27 10:07:07,083 [salt.minion      ][INFO    ][13472] Returning information for job: 20170927100707001826
2017-09-27 10:07:07,331 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:07:07,382 [salt.state       ][INFO    ][12805] Made the following changes:
'nova-doc' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'

2017-09-27 10:07:07,404 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:07:07,442 [salt.state       ][INFO    ][12805] Completed state [nova-doc] at time 10:07:07.441579 duration_in_ms=5849.692
2017-09-27 10:07:07,449 [salt.state       ][INFO    ][12805] Running state [nova-cert] at time 10:07:07.448732
2017-09-27 10:07:07,449 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-cert
2017-09-27 10:07:07,728 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-cert'] in directory '/root'
2017-09-27 10:07:17,202 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100717187829
2017-09-27 10:07:17,225 [salt.minion      ][INFO    ][13655] Starting a new job with PID 13655
2017-09-27 10:07:17,243 [salt.minion      ][INFO    ][13655] Returning information for job: 20170927100717187829
2017-09-27 10:07:27,363 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100727350129
2017-09-27 10:07:27,386 [salt.minion      ][INFO    ][14012] Starting a new job with PID 14012
2017-09-27 10:07:27,409 [salt.minion      ][INFO    ][14012] Returning information for job: 20170927100727350129
2017-09-27 10:07:37,531 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100737520564
2017-09-27 10:07:37,559 [salt.minion      ][INFO    ][14206] Starting a new job with PID 14206
2017-09-27 10:07:37,578 [salt.minion      ][INFO    ][14206] Returning information for job: 20170927100737520564
2017-09-27 10:07:47,701 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100747689614
2017-09-27 10:07:47,723 [salt.minion      ][INFO    ][19054] Starting a new job with PID 19054
2017-09-27 10:07:47,746 [salt.minion      ][INFO    ][19054] Returning information for job: 20170927100747689614
2017-09-27 10:07:57,876 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100757864044
2017-09-27 10:07:57,898 [salt.minion      ][INFO    ][19131] Starting a new job with PID 19131
2017-09-27 10:07:57,915 [salt.minion      ][INFO    ][19131] Returning information for job: 20170927100757864044
2017-09-27 10:08:08,042 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100808030460
2017-09-27 10:08:08,068 [salt.minion      ][INFO    ][19506] Starting a new job with PID 19506
2017-09-27 10:08:08,084 [salt.minion      ][INFO    ][19506] Returning information for job: 20170927100808030460
2017-09-27 10:08:18,216 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100818204720
2017-09-27 10:08:18,243 [salt.minion      ][INFO    ][24368] Starting a new job with PID 24368
2017-09-27 10:08:18,264 [salt.minion      ][INFO    ][24368] Returning information for job: 20170927100818204720
2017-09-27 10:08:24,151 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:08:24,184 [salt.state       ][INFO    ][12805] Made the following changes:
'python-os-brick' changed from 'absent' to '1.11.0-1~u16.04+mcp7'
'python-os-win' changed from 'absent' to '1.4.1-1~u16.04+mcp10'
'libxinerama1' changed from 'absent' to '2:1.1.3-1'
'libavahi-common3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2'
'python-junitxml' changed from 'absent' to '0.6-1.1ubuntu1'
'libthai-data' changed from 'absent' to '0.1.24-2'
'python2.7-gobject-2' changed from 'absent' to '1'
'fonts-dejavu-core' changed from 'absent' to '2.35-1'
'python2.7-cairo' changed from 'absent' to '1'
'libcups2' changed from 'absent' to '2.1.3-4ubuntu0.3'
'libgtk2.0-bin' changed from 'absent' to '2.24.30-1ubuntu1.16.04.2'
'libxdamage1' changed from 'absent' to '1:1.1.4-2'
'libxrender1' changed from 'absent' to '1:0.9.9-0ubuntu1'
'python-nova' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'
'fontconfig' changed from 'absent' to '2.11.94-0ubuntu1.1'
'libpixman-1-0' changed from 'absent' to '0.33.6-1'
'libxi6' changed from 'absent' to '2:1.7.6-1'
'python3-mimeparse' changed from 'absent' to '0.1.4-1build1'
'libsubunit-diff-perl' changed from 'absent' to '1'
'python-gobject-2' changed from 'absent' to '2.28.6-12ubuntu1'
'bridge-utils' changed from 'absent' to '1.5-9ubuntu1'
'websockify' changed from 'absent' to '0.8.0+dfsg1-1~u16.04+mcp1'
'python3-unittest2' changed from 'absent' to '1.1.0-6.1'
'libcairo2' changed from 'absent' to '1.14.6-1'
'libxcomposite1' changed from 'absent' to '1:0.4.4-1'
'python3-extras' changed from 'absent' to '0.0.3-3'
'libpangocairo-1.0-0' changed from 'absent' to '1.38.1-1'
'python-cairo' changed from 'absent' to '1.8.8-2'
'libgtk2.0-0' changed from 'absent' to '2.24.30-1ubuntu1.16.04.2'
'libpango-1.0-0' changed from 'absent' to '1.38.1-1'
'python3-linecache2' changed from 'absent' to '1.0.0-2'
'libgraphite2-3' changed from 'absent' to '1.3.10-0ubuntu0.16.04.1'
'libsubunit-perl' changed from 'absent' to '1.1.0-3'
'libxcb-render0' changed from 'absent' to '1.11.1-1ubuntu1'
'libpangoft2-1.0-0' changed from 'absent' to '1.38.1-1'
'python-microversion-parse' changed from 'absent' to '0.1.3-2~u16.04+mcp1'
'python3-pbr' changed from 'absent' to '1.10.0-1~u16.04+mcp1'
'hicolor-icon-theme' changed from 'absent' to '0.15-0ubuntu1'
'libatk1.0-0' changed from 'absent' to '2.18.0-1'
'libgraphite2-2.0.0' changed from 'absent' to '1'
'python-os-vif' changed from 'absent' to '1.4.0-1~u16.04+mcp3'
'libavahi-client3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2'
'python3-subunit' changed from 'absent' to '1.1.0-3'
'libxcb-shm0' changed from 'absent' to '1.11.1-1ubuntu1'
'python-os-xenapi' changed from 'absent' to '0.1.1-1~u16.04+mcp1'
'libthai0' changed from 'absent' to '0.1.24-2'
'python3-testtools' changed from 'absent' to '1.8.1-0ubuntu1'
'libxfixes3' changed from 'absent' to '1:5.0.1-2'
'python-subunit' changed from 'absent' to '1.1.0-3'
'libxrandr2' changed from 'absent' to '2:1.5.0-1'
'libgdk-pixbuf2.0-0' changed from 'absent' to '2.32.2-1ubuntu1.3'
'sqlite3' changed from 'absent' to '3.11.0-1ubuntu1'
'libgtk2.0-common' changed from 'absent' to '2.24.30-1ubuntu1.16.04.2'
'subunit' changed from 'absent' to '1.1.0-3'
'libxcursor1' changed from 'absent' to '1:1.1.14-1'
'libatk1.0-data' changed from 'absent' to '2.18.0-1'
'nova-common' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'
'python2.7-gobject' changed from 'absent' to '1'
'libfontconfig' changed from 'absent' to '1'
'libharfbuzz0b' changed from 'absent' to '1.0.1-1ubuntu0.1'
'os-brick-common' changed from 'absent' to '1.11.0-1~u16.04+mcp7'
'python2.7-nova' changed from 'absent' to '1'
'python-gtk2' changed from 'absent' to '2.24.0-4ubuntu1'
'libdatrie1' changed from 'absent' to '0.2.10-2'
'nova-cert' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'
'python3-traceback2' changed from 'absent' to '1.4.0-3'
'fontconfig-config' changed from 'absent' to '2.11.94-0ubuntu1.1'
'libgdk-pixbuf2.0-common' changed from 'absent' to '2.32.2-1ubuntu1.3'
'libavahi-common-data' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2'
'open-iscsi' changed from 'absent' to '2.0.873+git0.3b4b4500-14ubuntu3.4'
'libfontconfig1' changed from 'absent' to '2.11.94-0ubuntu1.1'
'gtk2.0-binver-2.10.0' changed from 'absent' to '1'

2017-09-27 10:08:24,211 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:08:24,239 [salt.state       ][INFO    ][12805] Completed state [nova-cert] at time 10:08:24.238483 duration_in_ms=76789.749
2017-09-27 10:08:24,249 [salt.state       ][INFO    ][12805] Running state [nova-conductor] at time 10:08:24.248547
2017-09-27 10:08:24,249 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-conductor
2017-09-27 10:08:24,504 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-conductor'] in directory '/root'
2017-09-27 10:08:28,108 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:08:28,150 [salt.state       ][INFO    ][12805] Made the following changes:
'nova-conductor' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'

2017-09-27 10:08:28,162 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:08:28,177 [salt.state       ][INFO    ][12805] Completed state [nova-conductor] at time 10:08:28.177271 duration_in_ms=3928.724
2017-09-27 10:08:28,183 [salt.state       ][INFO    ][12805] Running state [novnc] at time 10:08:28.183296
2017-09-27 10:08:28,184 [salt.state       ][INFO    ][12805] Executing state pkg.installed for novnc
2017-09-27 10:08:28,427 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'novnc'] in directory '/root'
2017-09-27 10:08:28,480 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100828390067
2017-09-27 10:08:28,495 [salt.minion      ][INFO    ][24754] Starting a new job with PID 24754
2017-09-27 10:08:28,506 [salt.minion      ][INFO    ][24754] Returning information for job: 20170927100828390067
2017-09-27 10:08:32,327 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:08:32,387 [salt.state       ][INFO    ][12805] Made the following changes:
'novnc' changed from 'absent' to '1:0.6.1-1~u16.04+mcp2'
'libjs-swfobject' changed from 'absent' to '2.2+dfsg-1'
'python-novnc' changed from 'absent' to '1:0.6.1-1~u16.04+mcp2'

2017-09-27 10:08:32,409 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:08:32,433 [salt.state       ][INFO    ][12805] Completed state [novnc] at time 10:08:32.432869 duration_in_ms=4249.571
2017-09-27 10:08:32,443 [salt.state       ][INFO    ][12805] Running state [nova-common] at time 10:08:32.442561
2017-09-27 10:08:32,443 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-common
2017-09-27 10:08:32,687 [salt.state       ][INFO    ][12805] Package nova-common is already installed
2017-09-27 10:08:32,688 [salt.state       ][INFO    ][12805] Completed state [nova-common] at time 10:08:32.687627 duration_in_ms=245.066
2017-09-27 10:08:32,688 [salt.state       ][INFO    ][12805] Running state [nova-consoleauth] at time 10:08:32.688234
2017-09-27 10:08:32,689 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-consoleauth
2017-09-27 10:08:32,696 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-consoleauth'] in directory '/root'
2017-09-27 10:08:36,486 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:08:36,545 [salt.state       ][INFO    ][12805] Made the following changes:
'nova-consoleauth' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'

2017-09-27 10:08:36,562 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:08:36,580 [salt.state       ][INFO    ][12805] Completed state [nova-consoleauth] at time 10:08:36.580047 duration_in_ms=3891.812
2017-09-27 10:08:36,588 [salt.state       ][INFO    ][12805] Running state [python-pycadf] at time 10:08:36.587873
2017-09-27 10:08:36,588 [salt.state       ][INFO    ][12805] Executing state pkg.installed for python-pycadf
2017-09-27 10:08:36,794 [salt.state       ][INFO    ][12805] Package python-pycadf is already installed
2017-09-27 10:08:36,794 [salt.state       ][INFO    ][12805] Completed state [python-pycadf] at time 10:08:36.794170 duration_in_ms=206.296
2017-09-27 10:08:36,795 [salt.state       ][INFO    ][12805] Running state [python-memcache] at time 10:08:36.794806
2017-09-27 10:08:36,795 [salt.state       ][INFO    ][12805] Executing state pkg.installed for python-memcache
2017-09-27 10:08:36,798 [salt.state       ][INFO    ][12805] Package python-memcache is already installed
2017-09-27 10:08:36,798 [salt.state       ][INFO    ][12805] Completed state [python-memcache] at time 10:08:36.798208 duration_in_ms=3.402
2017-09-27 10:08:36,799 [salt.state       ][INFO    ][12805] Running state [nova-scheduler] at time 10:08:36.798719
2017-09-27 10:08:36,799 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-scheduler
2017-09-27 10:08:36,807 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-scheduler'] in directory '/root'
2017-09-27 10:08:38,433 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100838421540
2017-09-27 10:08:38,457 [salt.minion      ][INFO    ][25082] Starting a new job with PID 25082
2017-09-27 10:08:38,472 [salt.minion      ][INFO    ][25082] Returning information for job: 20170927100838421540
2017-09-27 10:08:40,368 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:08:40,428 [salt.state       ][INFO    ][12805] Made the following changes:
'nova-scheduler' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'

2017-09-27 10:08:40,449 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:08:40,474 [salt.state       ][INFO    ][12805] Completed state [nova-scheduler] at time 10:08:40.474016 duration_in_ms=3675.295
2017-09-27 10:08:40,483 [salt.state       ][INFO    ][12805] Running state [nova-api] at time 10:08:40.482882
2017-09-27 10:08:40,483 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-api
2017-09-27 10:08:40,742 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-api'] in directory '/root'
2017-09-27 10:08:46,932 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:08:46,992 [salt.state       ][INFO    ][12805] Made the following changes:
'iptables' changed from 'absent' to '1.6.0-2ubuntu3'
'nova-api' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'
'libnfnetlink0' changed from 'absent' to '1.0.1-3'

2017-09-27 10:08:47,014 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:08:47,036 [salt.state       ][INFO    ][12805] Completed state [nova-api] at time 10:08:47.036194 duration_in_ms=6553.31
2017-09-27 10:08:47,045 [salt.state       ][INFO    ][12805] Running state [gettext-base] at time 10:08:47.045261
2017-09-27 10:08:47,046 [salt.state       ][INFO    ][12805] Executing state pkg.installed for gettext-base
2017-09-27 10:08:47,381 [salt.state       ][INFO    ][12805] Package gettext-base is already installed
2017-09-27 10:08:47,381 [salt.state       ][INFO    ][12805] Completed state [gettext-base] at time 10:08:47.381438 duration_in_ms=336.178
2017-09-27 10:08:47,382 [salt.state       ][INFO    ][12805] Running state [nova-consoleproxy] at time 10:08:47.382066
2017-09-27 10:08:47,382 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-consoleproxy
2017-09-27 10:08:47,390 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-consoleproxy'] in directory '/root'
2017-09-27 10:08:48,611 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100848601648
2017-09-27 10:08:48,634 [salt.minion      ][INFO    ][25538] Starting a new job with PID 25538
2017-09-27 10:08:48,647 [salt.minion      ][INFO    ][25538] Returning information for job: 20170927100848601648
2017-09-27 10:08:52,208 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:08:52,241 [salt.state       ][INFO    ][12805] Made the following changes:
'nova-novncproxy' changed from 'absent' to '1'
'nova-xvpvncproxy' changed from 'absent' to '1'
'nova-ajax-console-proxy' changed from 'absent' to '1'
'nova-spiceproxy' changed from 'absent' to '1'
'nova-spicehtml5proxy' changed from 'absent' to '1'
'spice-html5' changed from 'absent' to '0.1.4-1'
'nova-consoleproxy' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'

2017-09-27 10:08:52,253 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:08:52,266 [salt.state       ][INFO    ][12805] Completed state [nova-consoleproxy] at time 10:08:52.265661 duration_in_ms=4883.594
2017-09-27 10:08:52,268 [salt.state       ][INFO    ][12805] Running state [/etc/nova/nova.conf] at time 10:08:52.268214
2017-09-27 10:08:52,268 [salt.state       ][INFO    ][12805] Executing state file.managed for /etc/nova/nova.conf
2017-09-27 10:08:52,326 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/files/ocata/nova-controller.conf.Debian'
2017-09-27 10:08:52,584 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/map.jinja'
2017-09-27 10:08:52,632 [salt.state       ][INFO    ][12805] File changed:
--- 
+++ 
@@ -1,4 +1,2753 @@
+
 [DEFAULT]
+
+#
+# From nova.conf
+#
+image_service=nova.image.glance.GlanceImageService
+
+# DEPRECATED:
+# When returning instance metadata, this is the class that is used
+# for getting vendor metadata when that class isn't specified in the individual
+# request. The value should be the full dot-separated path to the class to use.
+#
+# Possible values:
+#
+# * Any valid dot-separated class path that can be imported.
+#  (string value)
+# This option is deprecated for removal since 13.0.0.
+# Its value may be silently ignored in the future.
+#vendordata_driver=nova.api.metadata.vendordata_json.JsonFileVendorData
+
+# DEPRECATED:
+# This option is used to enable or disable quota checking for tenant networks.
+#
+# Related options:
+#
+# * quota_networks
+#  (boolean value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# CRUD operations on tenant networks are only available when using nova-network
+# and nova-network is itself deprecated.
+#enable_network_quota=false
+
+# DEPRECATED:
+# This option controls the number of private networks that can be created per
+# project (or per tenant).
+#
+# Related options:
+#
+# * enable_network_quota
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# CRUD operations on tenant networks are only available when using nova-network
+# and nova-network is itself deprecated.
+#quota_networks=3
+
+#
+# This option specifies the name of the availability zone for the
+# internal services. Services like nova-scheduler, nova-network,
+# nova-conductor are internal services. These services will appear in
+# their own internal availability_zone.
+#
+# Possible values:
+#
+# * Any string representing an availability zone name
+# * 'internal' is the default value
+#
+#  (string value)
+#internal_service_availability_zone=internal
+
+#
+# Default compute node availability_zone.
+#
+# This option determines the availability zone to be used when it is not
+# specified in the VM creation request. If this option is not set,
+# the default availability zone 'nova' is used.
+#
+# Possible values:
+#
+# * Any string representing an availability zone name
+# * 'nova' is the default value
+#
+#  (string value)
+#default_availability_zone=nova
+
+# Length of generated instance admin passwords. (integer value)
+# Minimum value: 0
+#password_length=12
+
+#
+# Time period to generate instance usages for. It is possible to define optional
+# offset to given period by appending @ character followed by a number defining
+# offset.
+#
+# Possible values:
+#
+# *  period, example: ``hour``, ``day``, ``month` or ``year``
+# *  period with offset, example: ``month@15`` will result in monthly audits
+#    starting on 15th day of month.
+#  (string value)
+#instance_usage_audit_period=month
+
+#
+# Start and use a daemon that can run the commands that need to be run with
+# root privileges. This option is usually enabled on nodes that run nova compute
+# processes.
+#  (boolean value)
+#use_rootwrap_daemon=false
+
+#
+# Path to the rootwrap configuration file.
+#
+# Goal of the root wrapper is to allow a service-specific unprivileged user to
+# run a number of actions as the root user in the safest manner possible.
+# The configuration file used here must match the one defined in the sudoers
+# entry.
+#  (string value)
+#rootwrap_config=/etc/nova/rootwrap.conf
+rootwrap_config=/etc/nova/rootwrap.conf
+
+# Explicitly specify the temporary working directory. (string value)
+#tempdir=<None>
+
+#
+# Determine if monkey patching should be applied.
+#
+# Related options:
+#
+# * ``monkey_patch_modules``: This must have values set for this option to
+#   have any effect
+#  (boolean value)
+#monkey_patch=false
+
+#
+# List of modules/decorators to monkey patch.
+#
+# This option allows you to patch a decorator for all functions in specified
+# modules.
+#
+# Possible values:
+#
+# * nova.compute.api:nova.notifications.notify_decorator
+# * nova.api.ec2.cloud:nova.notifications.notify_decorator
+# * [...]
+#
+# Related options:
+#
+# * ``monkey_patch``: This must be set to ``True`` for this option to
+#   have any effect
+#  (list value)
+#monkey_patch_modules=nova.compute.api:nova.notifications.notify_decorator
+
+#
+# Defines which driver to use for controlling virtualization.
+#
+# Possible values:
+#
+# * ``libvirt.LibvirtDriver``
+# * ``xenapi.XenAPIDriver``
+# * ``fake.FakeDriver``
+# * ``ironic.IronicDriver``
+# * ``vmwareapi.VMwareVCDriver``
+# * ``hyperv.HyperVDriver``
+#  (string value)
+#compute_driver=<None>
+compute_driver=libvirt.LibvirtDriver
+
+#
+# Allow destination machine to match source for resize. Useful when
+# testing in single-host environments. By default it is not allowed
+# to resize to the same host. Setting this option to true will add
+# the same host to the destination options.
+#  (boolean value)
+#allow_resize_to_same_host=false
+allow_resize_to_same_host=true
+
+#
+# Availability zone to use when user doesn't specify one.
+#
+# This option is used by the scheduler to determine which availability
+# zone to place a new VM instance into if the user did not specify one
+# at the time of VM boot request.
+#
+# Possible values:
+#
+# * Any string representing an availability zone name
+# * Default value is None.
+#  (string value)
+#default_schedule_zone=<None>
+
+#
+# Image properties that should not be inherited from the instance
+# when taking a snapshot.
+#
+# This option gives an opportunity to select which image-properties
+# should not be inherited by newly created snapshots.
+#
+# Possible values:
+#
+# * A list whose item is an image property. Usually only the image
+#   properties that are only needed by base images can be included
+#   here, since the snapshots that are created from the base images
+#   doesn't need them.
+# * Default list: ['cache_in_nova', 'bittorrent']
+#  (list value)
+#non_inheritable_image_properties=cache_in_nova,bittorrent
+
+# DEPRECATED:
+# This option is used to decide when an image should have no external
+# ramdisk or kernel. By default this is set to 'nokernel', so when an
+# image is booted with the property 'kernel_id' with the value
+# 'nokernel', Nova assumes the image doesn't require an external kernel
+# and ramdisk.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# When an image is booted with the property 'kernel_id' with the value
+# 'nokernel', Nova assumes the image doesn't require an external kernel and
+# ramdisk. This option allows user to change the API behaviour which should not
+# be allowed and this value "nokernel" should be hard coded.
+#null_kernel=nokernel
+
+# DEPRECATED:
+# When creating multiple instances with a single request using the
+# os-multiple-create API extension, this template will be used to build
+# the display name for each instance. The benefit is that the instances
+# end up with different hostnames. Example display names when creating
+# two VM's: name-1, name-2.
+#
+# Possible values:
+#
+# * Valid keys for the template are: name, uuid, count.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This config changes API behaviour. All changes in API behaviour should be
+# discoverable.
+#multi_instance_display_name_template=%(name)s-%(count)d
+
+#
+# Maximum number of devices that will result in a local image being
+# created on the hypervisor node.
+#
+# A negative number means unlimited. Setting max_local_block_devices
+# to 0 means that any request that attempts to create a local disk
+# will fail. This option is meant to limit the number of local discs
+# (so root local disc that is the result of --image being used, and
+# any other ephemeral and swap disks). 0 does not mean that images
+# will be automatically converted to volumes and boot instances from
+# volumes - it just means that all requests that attempt to create a
+# local disk will fail.
+#
+# Possible values:
+#
+# * 0: Creating a local disk is not allowed.
+# * Negative number: Allows unlimited number of local discs.
+# * Positive number: Allows only these many number of local discs.
+#                        (Default value is 3).
+#  (integer value)
+#max_local_block_devices=3
+
+#
+# A list of monitors that can be used for getting compute metrics.
+# You can use the alias/name from the setuptools entry points for
+# nova.compute.monitors.* namespaces. If no namespace is supplied,
+# the "cpu." namespace is assumed for backwards-compatibility.
+#
+# Possible values:
+#
+# * An empty list will disable the feature(Default).
+# * An example value that would enable both the CPU and NUMA memory
+#   bandwidth monitors that used the virt driver variant:
+#   ["cpu.virt_driver", "numa_mem_bw.virt_driver"]
+#  (list value)
+#compute_monitors =
+
+#
+# The default format an ephemeral_volume will be formatted with on creation.
+#
+# Possible values:
+#
+# * ``ext2``
+# * ``ext3``
+# * ``ext4``
+# * ``xfs``
+# * ``ntfs`` (only for Windows guests)
+#  (string value)
+#default_ephemeral_format=<None>
+
+#
+# Determine if instance should boot or fail on VIF plugging timeout.
+#
+# Nova sends a port update to Neutron after an instance has been scheduled,
+# providing Neutron with the necessary information to finish setup of the port.
+# Once completed, Neutron notifies Nova that it has finished setting up the
+# port, at which point Nova resumes the boot of the instance since network
+# connectivity is now supposed to be present. A timeout will occur if the reply
+# is not received after a given interval.
+#
+# This option determines what Nova does when the VIF plugging timeout event
+# happens. When enabled, the instance will error out. When disabled, the
+# instance will continue to boot on the assumption that the port is ready.
+#
+# Possible values:
+#
+# * True: Instances should fail after VIF plugging timeout
+# * False: Instances should continue booting after VIF plugging timeout
+#  (boolean value)
+#vif_plugging_is_fatal=true
+vif_plugging_is_fatal=false
+
+#
+# Timeout for Neutron VIF plugging event message arrival.
+#
+# Number of seconds to wait for Neutron vif plugging events to
+# arrive before continuing or failing (see 'vif_plugging_is_fatal').
+#
+# Related options:
+#
+# * vif_plugging_is_fatal - If ``vif_plugging_timeout`` is set to zero and
+#   ``vif_plugging_is_fatal`` is False, events should not be expected to
+#   arrive at all.
+#  (integer value)
+# Minimum value: 0
+#vif_plugging_timeout=300
+vif_plugging_timeout=0
+
+# Path to '/etc/network/interfaces' template.
+#
+# The path to a template file for the '/etc/network/interfaces'-style file,
+# which
+# will be populated by nova and subsequently used by cloudinit. This provides a
+# method to configure network connectivity in environments without a DHCP
+# server.
+#
+# The template will be rendered using Jinja2 template engine, and receive a
+# top-level key called ``interfaces``. This key will contain a list of
+# dictionaries, one for each interface.
+#
+# Refer to the cloudinit documentaion for more information:
+#
+#   https://cloudinit.readthedocs.io/en/latest/topics/datasources.html
+#
+# Possible values:
+#
+# * A path to a Jinja2-formatted template for a Debian '/etc/network/interfaces'
+#   file. This applies even if using a non Debian-derived guest.
+#
+# Related options:
+#
+# * ``flat_inject``: This must be set to ``True`` to ensure nova embeds network
+#   configuration information in the metadata provided through the config drive.
+#  (string value)
+#injected_network_template=$pybasedir/nova/virt/interfaces.template
+injected_network_template=$pybasedir/nova/virt/interfaces.template
+
+#
+# The image preallocation mode to use.
+#
+# Image preallocation allows storage for instance images to be allocated up
+# front
+# when the instance is initially provisioned. This ensures immediate feedback is
+# given if enough space isn't available. In addition, it should significantly
+# improve performance on writes to new blocks and may even improve I/O
+# performance to prewritten blocks due to reduced fragmentation.
+#
+# Possible values:
+#
+# * "none"  => no storage provisioning is done up front
+# * "space" => storage is fully allocated at instance start
+#  (string value)
+# Allowed values: none, space
+#preallocate_images=none
+
+#
+# Enable use of copy-on-write (cow) images.
+#
+# QEMU/KVM allow the use of qcow2 as backing files. By disabling this,
+# backing files will not be used.
+#  (boolean value)
+#use_cow_images=true
+
+#
+# Force conversion of backing images to raw format.
+#
+# Possible values:
+#
+# * True: Backing image files will be converted to raw image format
+# * False: Backing image files will not be converted
+#
+# Related options:
+#
+# * ``compute_driver``: Only the libvirt driver uses this option.
+#  (boolean value)
+#force_raw_images=true
+
+#
+# Name of the mkfs commands for ephemeral device.
+#
+# The format is <os_type>=<mkfs command>
+#  (multi valued)
+#virt_mkfs =
+
+#
+# Enable resizing of filesystems via a block device.
+#
+# If enabled, attempt to resize the filesystem by accessing the image over a
+# block device. This is done by the host and may not be necessary if the image
+# contains a recent version of cloud-init. Possible mechanisms require the nbd
+# driver (for qcow and raw), or loop (for raw).
+#  (boolean value)
+#resize_fs_using_block_device=false
+
+# Amount of time, in seconds, to wait for NBD device start up. (integer value)
+# Minimum value: 0
+#timeout_nbd=10
+
+#
+# Location of cached images.
+#
+# This is NOT the full path - just a folder name relative to '$instances_path'.
+# For per-compute-host cached images, set to '_base_$my_ip'
+#  (string value)
+#image_cache_subdirectory_name=_base
+
+# Should unused base images be removed? (boolean value)
+#remove_unused_base_images=true
+
+#
+# Unused unresized base images younger than this will not be removed.
+#  (integer value)
+#remove_unused_original_minimum_age_seconds=86400
+
+#
+# Generic property to specify the pointer type.
+#
+# Input devices allow interaction with a graphical framebuffer. For
+# example to provide a graphic tablet for absolute cursor movement.
+#
+# If set, the 'hw_pointer_model' image property takes precedence over
+# this configuration option.
+#
+# Possible values:
+#
+# * None: Uses default behavior provided by drivers (mouse on PS2 for
+#         libvirt x86)
+# * ps2mouse: Uses relative movement. Mouse connected by PS2
+# * usbtablet: Uses absolute movement. Tablet connect by USB
+#
+# Related options:
+#
+# * usbtablet must be configured with VNC enabled or SPICE enabled and SPICE
+#   agent disabled. When used with libvirt the instance mode should be
+#   configured as HVM.
+#   (string value)
+# Allowed values: <None>, ps2mouse, usbtablet
+#pointer_model=usbtablet
+
+#
+# Defines which physical CPUs (pCPUs) can be used by instance
+# virtual CPUs (vCPUs).
+#
+# Possible values:
+#
+# * A comma-separated list of physical CPU numbers that virtual CPUs can be
+#   allocated to by default. Each element should be either a single CPU number,
+#   a range of CPU numbers, or a caret followed by a CPU number to be
+#   excluded from a previous range. For example:
+#
+#     vcpu_pin_set = "4-12,^8,15"
+#  (string value)
+#vcpu_pin_set=<None>
+
+#
+# Number of huge/large memory pages to reserved per NUMA host cell.
+#
+# Possible values:
+#
+# * A list of valid key=value which reflect NUMA node ID, page size
+#   (Default unit is KiB) and number of pages to be reserved.
+#
+#     reserved_huge_pages = node:0,size:2048,count:64
+#     reserved_huge_pages = node:1,size:1GB,count:1
+#
+#   In this example we are reserving on NUMA node 0 64 pages of 2MiB
+#   and on NUMA node 1 1 page of 1GiB.
+#  (dict value)
+#reserved_huge_pages=<None>
+
+#
+# Amount of disk resources in MB to make them always available to host. The
+# disk usage gets reported back to the scheduler from nova-compute running
+# on the compute nodes. To prevent the disk resources from being considered
+# as available, this option can be used to reserve disk space for that host.
+#
+# Possible values:
+#
+# * Any positive integer representing amount of disk in MB to reserve
+#   for the host.
+#  (integer value)
+# Minimum value: 0
+#reserved_host_disk_mb=0
+
+#
+# Amount of memory in MB to reserve for the host so that it is always available
+# to host processes. The host resources usage is reported back to the scheduler
+# continuously from nova-compute running on the compute node. To prevent the
+# host
+# memory from being considered as available, this option is used to reserve
+# memory for the host.
+#
+# Possible values:
+#
+# * Any positive integer representing amount of memory in MB to reserve
+#   for the host.
+#  (integer value)
+# Minimum value: 0
+#reserved_host_memory_mb=512
+
+#
+# This option helps you specify virtual CPU to physical CPU allocation ratio.
+#
+# From Ocata (15.0.0) this is used to influence the hosts selected by
+# the Placement API. Note that when Placement is used, the CoreFilter
+# is redundant, because the Placement API will have already filtered
+# out hosts that would have failed the CoreFilter.
+#
+# This configuration specifies ratio for CoreFilter which can be set
+# per compute node. For AggregateCoreFilter, it will fall back to this
+# configuration value if no per-aggregate setting is found.
+#
+# NOTE: This can be set per-compute, or if set to 0.0, the value
+# set on the scheduler node(s) or compute node(s) will be used
+# and defaulted to 16.0'.
+#
+# Possible values:
+#
+# * Any valid positive integer or float value
+#  (floating point value)
+# Minimum value: 0
+#cpu_allocation_ratio=0.0
+cpu_allocation_ratio=16.0
+
+#
+# This option helps you specify virtual RAM to physical RAM
+# allocation ratio.
+#
+# From Ocata (15.0.0) this is used to influence the hosts selected by
+# the Placement API. Note that when Placement is used, the RamFilter
+# is redundant, because the Placement API will have already filtered
+# out hosts that would have failed the RamFilter.
+#
+# This configuration specifies ratio for RamFilter which can be set
+# per compute node. For AggregateRamFilter, it will fall back to this
+# configuration value if no per-aggregate setting found.
+#
+# NOTE: This can be set per-compute, or if set to 0.0, the value
+# set on the scheduler node(s) or compute node(s) will be used and
+# defaulted to 1.5.
+#
+# Possible values:
+#
+# * Any valid positive integer or float value
+#  (floating point value)
+# Minimum value: 0
+#ram_allocation_ratio=0.0
+ram_allocation_ratio = 1.5
+
+#
+# This option helps you specify virtual disk to physical disk
+# allocation ratio.
+#
+# From Ocata (15.0.0) this is used to influence the hosts selected by
+# the Placement API. Note that when Placement is used, the DiskFilter
+# is redundant, because the Placement API will have already filtered
+# out hosts that would have failed the DiskFilter.
+#
+# A ratio greater than 1.0 will result in over-subscription of the
+# available physical disk, which can be useful for more
+# efficiently packing instances created with images that do not
+# use the entire virtual disk, such as sparse or compressed
+# images. It can be set to a value between 0.0 and 1.0 in order
+# to preserve a percentage of the disk for uses other than
+# instances.
+#
+# NOTE: This can be set per-compute, or if set to 0.0, the value
+# set on the scheduler node(s) or compute node(s) will be used and
+# defaulted to 1.0'.
+#
+# Possible values:
+#
+# * Any valid positive integer or float value
+#  (floating point value)
+# Minimum value: 0
+#disk_allocation_ratio=0.0
+disk_allocation_ratio = 1.0
+
+#
+# Console proxy host to be used to connect to instances on this host. It is the
+# publicly visible name for the console host.
+#
+# Possible values:
+#
+# * Current hostname (default) or any string representing hostname.
+#  (string value)
+#console_host=socket.gethostname()
+
+#
+# Name of the network to be used to set access IPs for instances. If there are
+# multiple IPs to choose from, an arbitrary one will be chosen.
+#
+# Possible values:
+#
+# * None (default)
+# * Any string representing network name.
+#  (string value)
+#default_access_ip_network_name=<None>
+
+#
+# Whether to batch up the application of IPTables rules during a host restart
+# and apply all at the end of the init phase.
+#  (boolean value)
+#defer_iptables_apply=false
+
+#
+# Specifies where instances are stored on the hypervisor's disk.
+# It can point to locally attached storage or a directory on NFS.
+#
+# Possible values:
+#
+# * $state_path/instances where state_path is a config option that specifies
+#   the top-level directory for maintaining nova's state. (default) or
+#   Any string representing directory path.
+#  (string value)
+#instances_path=$state_path/instances
+
+#
+# This option enables periodic compute.instance.exists notifications. Each
+# compute node must be configured to generate system usage data. These
+# notifications are consumed by OpenStack Telemetry service.
+#  (boolean value)
+#instance_usage_audit=false
+
+#
+# Maximum number of 1 second retries in live_migration. It specifies number
+# of retries to iptables when it complains. It happens when an user continuously
+# sends live-migration request to same host leading to concurrent request
+# to iptables.
+#
+# Possible values:
+#
+# * Any positive integer representing retry count.
+#  (integer value)
+# Minimum value: 0
+#live_migration_retry_count=30
+
+#
+# Number of times to retry network allocation. It is required to attempt network
+# allocation retries if the virtual interface plug fails.
+#
+# Possible values:
+#
+# * Any positive integer representing retry count.
+#  (integer value)
+# Minimum value: 0
+#network_allocate_retries=0
+
+#
+# Limits the maximum number of instance builds to run concurrently by
+# nova-compute. Compute service can attempt to build an infinite number of
+# instances, if asked to do so. This limit is enforced to avoid building
+# unlimited instance concurrently on a compute node. This value can be set
+# per compute node.
+#
+# Possible Values:
+#
+# * 0 : treated as unlimited.
+# * Any positive integer representing maximum concurrent builds.
+#  (integer value)
+# Minimum value: 0
+#max_concurrent_builds=10
+
+#
+# Maximum number of live migrations to run concurrently. This limit is enforced
+# to avoid outbound live migrations overwhelming the host/network and causing
+# failures. It is not recommended that you change this unless you are very sure
+# that doing so is safe and stable in your environment.
+#
+# Possible values:
+#
+# * 0 : treated as unlimited.
+# * Negative value defaults to 0.
+# * Any positive integer representing maximum number of live migrations
+#   to run concurrently.
+#  (integer value)
+#max_concurrent_live_migrations=1
+
+#
+# Number of times to retry block device allocation on failures. Starting with
+# Liberty, Cinder can use image volume cache. This may help with block device
+# allocation performance. Look at the cinder image_volume_cache_enabled
+# configuration option.
+#
+# Possible values:
+#
+# * 60 (default)
+# * If value is 0, then one attempt is made.
+# * Any negative value is treated as 0.
+# * For any value > 0, total attempts are (value + 1)
+#  (integer value)
+#block_device_allocate_retries=60
+block_device_allocate_retries=600
+
+#
+# Number of greenthreads available for use to sync power states.
+#
+# This option can be used to reduce the number of concurrent requests
+# made to the hypervisor or system with real instance power states
+# for performance reasons, for example, with Ironic.
+#
+# Possible values:
+#
+# * Any positive integer representing greenthreads count.
+#  (integer value)
+#sync_power_state_pool_size=1000
+
+#
+# Number of seconds to wait between runs of the image cache manager.
+#
+# Possible values:
+# * 0: run at the default rate.
+# * -1: disable
+# * Any other value
+#  (integer value)
+# Minimum value: -1
+#image_cache_manager_interval=2400
+
+#
+# Interval to pull network bandwidth usage info.
+#
+# Not supported on all hypervisors. If a hypervisor doesn't support bandwidth
+# usage, it will not get the info in the usage events.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#  (integer value)
+#bandwidth_poll_interval=600
+
+#
+# Interval to sync power states between the database and the hypervisor.
+#
+# The interval that Nova checks the actual virtual machine power state
+# and the power state that Nova has in its database. If a user powers
+# down their VM, Nova updates the API to report the VM has been
+# powered down. Should something turn on the VM unexpectedly,
+# Nova will turn the VM back off to keep the system in the expected
+# state.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#
+# Related options:
+#
+# * If ``handle_virt_lifecycle_events`` in workarounds_group is
+#   false and this option is negative, then instances that get out
+#   of sync between the hypervisor and the Nova database will have
+#   to be synchronized manually.
+#  (integer value)
+#sync_power_state_interval=600
+
+#
+# Interval between instance network information cache updates.
+#
+# Number of seconds after which each compute node runs the task of
+# querying Neutron for all of its instances networking information,
+# then updates the Nova db with that information. Nova will never
+# update it's cache if this option is set to 0. If we don't update the
+# cache, the metadata service and nova-api endpoints will be proxying
+# incorrect network data about the instance. So, it is not recommended
+# to set this option to 0.
+#
+# Possible values:
+#
+# * Any positive integer in seconds.
+# * Any value <=0 will disable the sync. This is not recommended.
+#  (integer value)
+#heal_instance_info_cache_interval=60
+
+#
+# Interval for reclaiming deleted instances.
+#
+# A value greater than 0 will enable SOFT_DELETE of instances.
+# This option decides whether the server to be deleted will be put into
+# the SOFT_DELETED state. If this value is greater than 0, the deleted
+# server will not be deleted immediately, instead it will be put into
+# a queue until it's too old (deleted time greater than the value of
+# reclaim_instance_interval). The server can be recovered from the
+# delete queue by using the restore action. If the deleted server remains
+# longer than the value of reclaim_instance_interval, it will be
+# deleted by a periodic task in the compute service automatically.
+#
+# Note that this option is read from both the API and compute nodes, and
+# must be set globally otherwise servers could be put into a soft deleted
+# state in the API and never actually reclaimed (deleted) on the compute
+# node.
+#
+# Possible values:
+#
+# * Any positive integer(in seconds) greater than 0 will enable
+#   this option.
+# * Any value <=0 will disable the option.
+#  (integer value)
+#reclaim_instance_interval=0
+
+#
+# Interval for gathering volume usages.
+#
+# This option updates the volume usage cache for every
+# volume_usage_poll_interval number of seconds.
+#
+# Possible values:
+#
+# * Any positive integer(in seconds) greater than 0 will enable
+#   this option.
+# * Any value <=0 will disable the option.
+#  (integer value)
+#volume_usage_poll_interval=0
+
+#
+# Interval for polling shelved instances to offload.
+#
+# The periodic task runs for every shelved_poll_interval number
+# of seconds and checks if there are any shelved instances. If it
+# finds a shelved instance, based on the 'shelved_offload_time' config
+# value it offloads the shelved instances. Check 'shelved_offload_time'
+# config option description for details.
+#
+# Possible values:
+#
+# * Any value <= 0: Disables the option.
+# * Any positive integer in seconds.
+#
+# Related options:
+#
+# * ``shelved_offload_time``
+#  (integer value)
+#shelved_poll_interval=3600
+
+#
+# Time before a shelved instance is eligible for removal from a host.
+#
+# By default this option is set to 0 and the shelved instance will be
+# removed from the hypervisor immediately after shelve operation.
+# Otherwise, the instance will be kept for the value of
+# shelved_offload_time(in seconds) so that during the time period the
+# unshelve action will be faster, then the periodic task will remove
+# the instance from hypervisor after shelved_offload_time passes.
+#
+# Possible values:
+#
+# * 0: Instance will be immediately offloaded after being
+#      shelved.
+# * Any value < 0: An instance will never offload.
+# * Any positive integer in seconds: The instance will exist for
+#   the specified number of seconds before being offloaded.
+#  (integer value)
+#shelved_offload_time=0
+
+#
+# Interval for retrying failed instance file deletes.
+#
+# This option depends on 'maximum_instance_delete_attempts'.
+# This option specifies how often to retry deletes whereas
+# 'maximum_instance_delete_attempts' specifies the maximum number
+# of retry attempts that can be made.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#
+# Related options:
+#
+# * ``maximum_instance_delete_attempts`` from instance_cleaning_opts
+#   group.
+#  (integer value)
+#instance_delete_interval=300
+
+#
+# Interval (in seconds) between block device allocation retries on failures.
+#
+# This option allows the user to specify the time interval between
+# consecutive retries. 'block_device_allocate_retries' option specifies
+# the maximum number of retries.
+#
+# Possible values:
+#
+# * 0: Disables the option.
+# * Any positive integer in seconds enables the option.
+#
+# Related options:
+#
+# * ``block_device_allocate_retries`` in compute_manager_opts group.
+#  (integer value)
+# Minimum value: 0
+#block_device_allocate_retries_interval=3
+block_device_allocate_retries_interval=10
+
+#
+# Interval between sending the scheduler a list of current instance UUIDs to
+# verify that its view of instances is in sync with nova.
+#
+# If the CONF option 'scheduler_tracks_instance_changes' is
+# False, the sync calls will not be made. So, changing this option will
+# have no effect.
+#
+# If the out of sync situations are not very common, this interval
+# can be increased to lower the number of RPC messages being sent.
+# Likewise, if sync issues turn out to be a problem, the interval
+# can be lowered to check more frequently.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#
+# Related options:
+#
+# * This option has no impact if ``scheduler_tracks_instance_changes``
+#   is set to False.
+#  (integer value)
+#scheduler_instance_sync_interval=120
+
+#
+# Interval for updating compute resources.
+#
+# This option specifies how often the update_available_resources
+# periodic task should run. A number less than 0 means to disable the
+# task completely. Leaving this at the default of 0 will cause this to
+# run at the default periodic interval. Setting it to any positive
+# value will cause it to run at approximately that number of seconds.
+#
+# Possible values:
+#
+# * 0: Will run at the default periodic interval.
+# * Any value < 0: Disables the option.
+# * Any positive integer in seconds.
+#  (integer value)
+#update_resources_interval=0
+
+#
+# Time interval after which an instance is hard rebooted automatically.
+#
+# When doing a soft reboot, it is possible that a guest kernel is
+# completely hung in a way that causes the soft reboot task
+# to not ever finish. Setting this option to a time period in seconds
+# will automatically hard reboot an instance if it has been stuck
+# in a rebooting state longer than N seconds.
+#
+# Possible values:
+#
+# * 0: Disables the option (default).
+# * Any positive integer in seconds: Enables the option.
+#  (integer value)
+# Minimum value: 0
+#reboot_timeout=0
+
+#
+# Maximum time in seconds that an instance can take to build.
+#
+# If this timer expires, instance status will be changed to ERROR.
+# Enabling this option will make sure an instance will not be stuck
+# in BUILD state for a longer period.
+#
+# Possible values:
+#
+# * 0: Disables the option (default)
+# * Any positive integer in seconds: Enables the option.
+#  (integer value)
+# Minimum value: 0
+#instance_build_timeout=0
+
+#
+# Interval to wait before un-rescuing an instance stuck in RESCUE.
+#
+# Possible values:
+#
+# * 0: Disables the option (default)
+# * Any positive integer in seconds: Enables the option.
+#  (integer value)
+# Minimum value: 0
+#rescue_timeout=0
+
+#
+# Automatically confirm resizes after N seconds.
+#
+# Resize functionality will save the existing server before resizing.
+# After the resize completes, user is requested to confirm the resize.
+# The user has the opportunity to either confirm or revert all
+# changes. Confirm resize removes the original server and changes
+# server status from resized to active. Setting this option to a time
+# period (in seconds) will automatically confirm the resize if the
+# server is in resized state longer than that time.
+#
+# Possible values:
+#
+# * 0: Disables the option (default)
+# * Any positive integer in seconds: Enables the option.
+#  (integer value)
+# Minimum value: 0
+#resize_confirm_window=0
+
+#
+# Total time to wait in seconds for an instance toperform a clean
+# shutdown.
+#
+# It determines the overall period (in seconds) a VM is allowed to
+# perform a clean shutdown. While performing stop, rescue and shelve,
+# rebuild operations, configuring this option gives the VM a chance
+# to perform a controlled shutdown before the instance is powered off.
+# The default timeout is 60 seconds.
+#
+# The timeout value can be overridden on a per image basis by means
+# of os_shutdown_timeout that is an image metadata setting allowing
+# different types of operating systems to specify how much time they
+# need to shut down cleanly.
+#
+# Possible values:
+#
+# * Any positive integer in seconds (default value is 60).
+#  (integer value)
+# Minimum value: 1
+#shutdown_timeout=60
+
+#
+# The compute service periodically checks for instances that have been
+# deleted in the database but remain running on the compute node. The
+# above option enables action to be taken when such instances are
+# identified.
+#
+# Possible values:
+#
+# * reap: Powers down the instances and deletes them(default)
+# * log: Logs warning message about deletion of the resource
+# * shutdown: Powers down instances and marks them as non-
+#   bootable which can be later used for debugging/analysis
+# * noop: Takes no action
+#
+# Related options:
+#
+# * running_deleted_instance_poll
+# * running_deleted_instance_timeout
+#  (string value)
+# Allowed values: noop, log, shutdown, reap
+#running_deleted_instance_action=reap
+
+#
+# Time interval in seconds to wait between runs for the clean up action.
+# If set to 0, above check will be disabled. If "running_deleted_instance
+# _action" is set to "log" or "reap", a value greater than 0 must be set.
+#
+# Possible values:
+#
+# * Any positive integer in seconds enables the option.
+# * 0: Disables the option.
+# * 1800: Default value.
+#
+# Related options:
+#
+# * running_deleted_instance_action
+#  (integer value)
+#running_deleted_instance_poll_interval=1800
+
+#
+# Time interval in seconds to wait for the instances that have
+# been marked as deleted in database to be eligible for cleanup.
+#
+# Possible values:
+#
+# * Any positive integer in seconds(default is 0).
+#
+# Related options:
+#
+# * "running_deleted_instance_action"
+#  (integer value)
+#running_deleted_instance_timeout=0
+
+#
+# The number of times to attempt to reap an instance's files.
+#
+# This option specifies the maximum number of retry attempts
+# that can be made.
+#
+# Possible values:
+#
+# * Any positive integer defines how many attempts are made.
+# * Any value <=0 means no delete attempts occur, but you should use
+#   ``instance_delete_interval`` to disable the delete attempts.
+#
+# Related options:
+# * ``instance_delete_interval`` in interval_opts group can be used to disable
+#   this option.
+#  (integer value)
+#maximum_instance_delete_attempts=5
+
+# DEPRECATED:
+# This is the message queue topic that the compute service 'listens' on. It is
+# used when the compute service is started up to configure the queue, and
+# whenever an RPC call to the compute service is made.
+#
+# Possible values:
+#
+# * Any string, but there is almost never any reason to ever change this value
+#   from its default of 'compute'.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#compute_topic=compute
+
+#
+# Sets the scope of the check for unique instance names.
+#
+# The default doesn't check for unique names. If a scope for the name check is
+# set, a launch of a new instance or an update of an existing instance with a
+# duplicate name will result in an ''InstanceExists'' error. The uniqueness is
+# case-insensitive. Setting this option can increase the usability for end
+# users as they don't have to distinguish among instances with the same name
+# by their IDs.
+#
+# Possible values:
+#
+# * '': An empty value means that no uniqueness check is done and duplicate
+#   names are possible.
+# * "project": The instance name check is done only for instances within the
+#   same project.
+# * "global": The instance name check is done for all instances regardless of
+#   the project.
+#  (string value)
+# Allowed values: '', project, global
+#osapi_compute_unique_server_name_scope =
+
+#
+# Enable new services on this host automatically.
+#
+# When a new service (for example "nova-compute") starts up, it gets
+# registered in the database as an enabled service. Sometimes it can be useful
+# to register new services in disabled state and then enabled them at a later
+# point in time. This option can set this behavior for all services per host.
+#
+# Possible values:
+#
+# * ``True``: Each new service is enabled as soon as it registers itself.
+# * ``False``: Services must be enabled via a REST API call or with the CLI
+#   with ``nova service-enable <hostname> <binary>``, otherwise they are not
+#   ready to use.
+#  (boolean value)
+#enable_new_services=true
+
+#
+# Template string to be used to generate instance names.
+#
+# This template controls the creation of the database name of an instance. This
+# is *not* the display name you enter when creating an instance (via Horizon
+# or CLI). For a new deployment it is advisable to change the default value
+# (which uses the database autoincrement) to another value which makes use
+# of the attributes of an instance, like ``instance-%(uuid)s``. If you
+# already have instances in your deployment when you change this, your
+# deployment will break.
+#
+# Possible values:
+#
+# * A string which either uses the instance database ID (like the
+#   default)
+# * A string with a list of named database columns, for example ``%(id)d``
+#   or ``%(uuid)s`` or ``%(hostname)s``.
+#
+# Related options:
+#
+# * not to be confused with: ``multi_instance_display_name_template``
+#  (string value)
+#instance_name_template=instance-%08x
+
+#
+# Number of times to retry live-migration before failing.
+#
+# Possible values:
+#
+# * If == -1, try until out of hosts (default)
+# * If == 0, only try once, no retries
+# * Integer greater than 0
+#  (integer value)
+# Minimum value: -1
+#migrate_max_retries=-1
+
+#
+# Configuration drive format
+#
+# Configuration drive format that will contain metadata attached to the
+# instance when it boots.
+#
+# Possible values:
+#
+# * iso9660: A file system image standard that is widely supported across
+#   operating systems. NOTE: Mind the libvirt bug
+#   (https://bugs.launchpad.net/nova/+bug/1246201) - If your hypervisor
+#   driver is libvirt, and you want live migrate to work without shared storage,
+#   then use VFAT.
+# * vfat: For legacy reasons, you can configure the configuration drive to
+#   use VFAT format instead of ISO 9660.
+#
+# Related options:
+#
+# * This option is meaningful when one of the following alternatives occur:
+#   1. force_config_drive option set to 'true'
+#   2. the REST API call to create the instance contains an enable flag for
+#      config drive option
+#   3. the image used to create the instance requires a config drive,
+#      this is defined by img_config_drive property for that image.
+# * A compute node running Hyper-V hypervisor can be configured to attach
+#   configuration drive as a CD drive. To attach the configuration drive as a CD
+#   drive, set config_drive_cdrom option at hyperv section, to true.
+#  (string value)
+# Allowed values: iso9660, vfat
+#config_drive_format=iso9660
+
+#
+# Force injection to take place on a config drive
+#
+# When this option is set to true configuration drive functionality will be
+# forced enabled by default, otherwise user can still enable configuration
+# drives via the REST API or image metadata properties.
+#
+# Possible values:
+#
+# * True: Force to use of configuration drive regardless the user's input in the
+#         REST API call.
+# * False: Do not force use of configuration drive. Config drives can still be
+#          enabled via the REST API or image metadata properties.
+#
+# Related options:
+#
+# * Use the 'mkisofs_cmd' flag to set the path where you install the
+#   genisoimage program. If genisoimage is in same path as the
+#   nova-compute service, you do not need to set this flag.
+# * To use configuration drive with Hyper-V, you must set the
+#   'mkisofs_cmd' value to the full path to an mkisofs.exe installation.
+#   Additionally, you must set the qemu_img_cmd value in the hyperv
+#   configuration section to the full path to an qemu-img command
+#   installation.
+#  (boolean value)
+#force_config_drive=false
+
+#
+# Name or path of the tool used for ISO image creation
+#
+# Use the mkisofs_cmd flag to set the path where you install the genisoimage
+# program. If genisoimage is on the system path, you do not need to change
+# the default value.
+#
+# To use configuration drive with Hyper-V, you must set the mkisofs_cmd value
+# to the full path to an mkisofs.exe installation. Additionally, you must set
+# the qemu_img_cmd value in the hyperv configuration section to the full path
+# to an qemu-img command installation.
+#
+# Possible values:
+#
+# * Name of the ISO image creator program, in case it is in the same directory
+#   as the nova-compute service
+# * Path to ISO image creator program
+#
+# Related options:
+#
+# * This option is meaningful when config drives are enabled.
+# * To use configuration drive with Hyper-V, you must set the qemu_img_cmd
+#   value in the hyperv configuration section to the full path to an qemu-img
+#   command installation.
+#  (string value)
+#mkisofs_cmd=genisoimage
+
+# DEPRECATED:
+# nova-console-proxy is used to set up multi-tenant VM console access.
+# This option allows pluggable driver program for the console session
+# and represents driver to use for the console proxy.
+#
+# Possible values:
+#
+# * A string representing fully classified class name of console driver.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This option no longer does anything. Previously this option had only two
+# valid,
+# in-tree values: nova.console.xvp.XVPConsoleProxy and
+# nova.console.fake.FakeConsoleProxy. The latter of these was only used in tests
+# and has since been replaced.
+#console_driver=nova.console.xvp.XVPConsoleProxy
+
+# DEPRECATED:
+# Represents the message queue topic name used by nova-console
+# service when communicating via the AMQP server. The Nova API uses a message
+# queue to communicate with nova-console to retrieve a console URL for that
+# host.
+#
+# Possible values:
+#
+# * A string representing topic exchange name
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#console_topic=console
+
+# DEPRECATED:
+# This option allows you to change the message topic used by nova-consoleauth
+# service when communicating via the AMQP server. Nova Console Authentication
+# server authenticates nova consoles. Users can then access their instances
+# through VNC clients. The Nova API service uses a message queue to
+# communicate with nova-consoleauth to get a VNC console.
+#
+# Possible Values:
+#
+# * 'consoleauth' (default) or Any string representing topic exchange name.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#consoleauth_topic=consoleauth
+
+# DEPRECATED: The driver to use for database access (string value)
+# This option is deprecated for removal since 13.0.0.
+# Its value may be silently ignored in the future.
+#db_driver=nova.db
+
+# DEPRECATED:
+# Default flavor to use for the EC2 API only.
+# The Nova API does not support a default flavor.
+#  (string value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: The EC2 API is deprecated.
+#default_flavor=m1.small
+
+#
+# Default pool for floating IPs.
+#
+# This option specifies the default floating IP pool for allocating floating
+# IPs.
+#
+# While allocating a floating ip, users can optionally pass in the name of the
+# pool they want to allocate from, otherwise it will be pulled from the
+# default pool.
+#
+# If this option is not set, then 'nova' is used as default floating pool.
+#
+# Possible values:
+#
+# * Any string representing a floating IP pool name
+#  (string value)
+#default_floating_pool=nova
+
+# DEPRECATED:
+# Autoassigning floating IP to VM
+#
+# When set to True, floating IP is auto allocated and associated
+# to the VM upon creation.
+#
+# Related options:
+#
+# * use_neutron: this options only works with nova-network.
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#auto_assign_floating_ip=false
+use_neutron = True
+
+# DEPRECATED:
+# Full class name for the DNS Manager for floating IPs.
+#
+# This option specifies the class of the driver that provides functionality
+# to manage DNS entries associated with floating IPs.
+#
+# When a user adds a DNS entry for a specified domain to a floating IP,
+# nova will add a DNS entry using the specified floating DNS driver.
+# When a floating IP is deallocated, its DNS entry will automatically be
+# deleted.
+#
+# Possible values:
+#
+# * Full Python path to the class to be used
+#
+# Related options:
+#
+# * use_neutron: this options only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#floating_ip_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
+
+# DEPRECATED:
+# Full class name for the DNS Manager for instance IPs.
+#
+# This option specifies the class of the driver that provides functionality
+# to manage DNS entries for instances.
+#
+# On instance creation, nova will add DNS entries for the instance name and
+# id, using the specified instance DNS driver and domain. On instance deletion,
+# nova will remove the DNS entries.
+#
+# Possible values:
+#
+# * Full Python path to the class to be used
+#
+# Related options:
+#
+# * use_neutron: this options only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#instance_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
+
+# DEPRECATED:
+# If specified, Nova checks if the availability_zone of every instance matches
+# what the database says the availability_zone should be for the specified
+# dns_domain.
+#
+# Related options:
+#
+# * use_neutron: this options only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#instance_dns_domain =
+
+#
+# Abstracts out IPv6 address generation to pluggable backends.
+#
+# nova-network can be put into dual-stack mode, so that it uses
+# both IPv4 and IPv6 addresses. In dual-stack mode, by default, instances
+# acquire IPv6 global unicast addresses with the help of stateless address
+# auto-configuration mechanism.
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+# * use_ipv6: this option only works if ipv6 is enabled for nova-network.
+#  (string value)
+# Allowed values: rfc2462, account_identifier
+#ipv6_backend=rfc2462
+
+#
+# The IP address which the host is using to connect to the management network.
+#
+# Possible values:
+#
+# * String with valid IP address. Default is IPv4 address of this host.
+#
+# Related options:
+#
+# * metadata_host
+# * my_block_storage_ip
+# * routing_source_ip
+# * vpn_ip
+#  (string value)
+#my_ip=10.89.104.70
+my_ip=10.167.4.13
+
+#
+# The IP address which is used to connect to the block storage network.
+#
+# Possible values:
+#
+# * String with valid IP address. Default is IP address of this host.
+#
+# Related options:
+#
+# * my_ip - if my_block_storage_ip is not set, then my_ip value is used.
+#  (string value)
+#my_block_storage_ip=$my_ip
+
+#
+# Hostname, FQDN or IP address of this host. Must be valid within AMQP key.
+#
+# Possible values:
+#
+# * String with hostname, FQDN or IP address. Default is hostname of this host.
+#  (string value)
+#host=lcy01-22
+
+#
+# Assign IPv6 and IPv4 addresses when creating instances.
+#
+# Related options:
+#
+# * use_neutron: this only works with nova-network.
+#  (boolean value)
+#use_ipv6=false
+
+#
+# This option is a list of full paths to one or more configuration files for
+# dhcpbridge. In most cases the default path of '/etc/nova/nova-dhcpbridge.conf'
+# should be sufficient, but if you have special needs for configuring
+# dhcpbridge,
+# you can change or add to this list.
+#
+# Possible values
+#
+#     A list of strings, where each string is the full path to a dhcpbridge
+#     configuration file.
+#  (multi valued)
+dhcpbridge_flagfile=/etc/nova/nova.conf
+
+#
+# The location where the network configuration files will be kept. The default
+# is
+# the 'networks' directory off of the location where nova's Python module is
+# installed.
+#
+# Possible values
+#
+#     A string containing the full path to the desired configuration directory
+#  (string value)
+#networks_path=$state_path/networks
+
+#
+# This is the name of the network interface for public IP addresses. The default
+# is 'eth0'.
+#
+# Possible values:
+#
+#     Any string representing a network interface name
+#  (string value)
+#public_interface=eth0
+
+#
+# The location of the binary nova-dhcpbridge. By default it is the binary named
+# 'nova-dhcpbridge' that is installed with all the other nova binaries.
+#
+# Possible values:
+#
+#     Any string representing the full path to the binary for dhcpbridge
+#  (string value)
+dhcpbridge=/usr/bin/nova-dhcpbridge
+
+#
+# This is the public IP address of the network host. It is used when creating a
+# SNAT rule.
+#
+# Possible values:
+#
+#     Any valid IP address
+#
+# Related options:
+#
+#     force_snat_range
+#  (string value)
+#routing_source_ip=$my_ip
+
+#
+# The lifetime of a DHCP lease, in seconds. The default is 86400 (one day).
+#
+# Possible values:
+#
+#     Any positive integer value.
+#  (integer value)
+# Minimum value: 1
+#dhcp_lease_time=86400
+
+#
+# Despite the singular form of the name of this option, it is actually a list of
+# zero or more server addresses that dnsmasq will use for DNS nameservers. If
+# this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use
+# the servers specified in this option. If the option use_network_dns_servers is
+# True, the dns1 and dns2 servers from the network will be appended to this
+# list,
+# and will be used as DNS servers, too.
+#
+# Possible values:
+#
+#     A list of strings, where each string is either an IP address or a FQDN.
+#
+# Related options:
+#
+#     use_network_dns_servers
+#  (multi valued)
+#dns_server =
+
+#
+# When this option is set to True, the dns1 and dns2 servers for the network
+# specified by the user on boot will be used for DNS, as well as any specified
+# in
+# the `dns_server` option.
+#
+# Related options:
+#
+#     dns_server
+#  (boolean value)
+#use_network_dns_servers=false
+
+#
+# This option is a list of zero or more IP address ranges in your network's DMZ
+# that should be accepted.
+#
+# Possible values:
+#
+#     A list of strings, each of which should be a valid CIDR.
+#  (list value)
+#dmz_cidr =
+
+#
+# This is a list of zero or more IP ranges that traffic from the
+# `routing_source_ip` will be SNATted to. If the list is empty, then no SNAT
+# rules are created.
+#
+# Possible values:
+#
+#     A list of strings, each of which should be a valid CIDR.
+#
+# Related options:
+#
+#     routing_source_ip
+#  (multi valued)
+#force_snat_range =
+
+#
+# The path to the custom dnsmasq configuration file, if any.
+#
+# Possible values:
+#
+#     The full path to the configuration file, or an empty string if there is no
+#     custom dnsmasq configuration file.
+#  (string value)
+#dnsmasq_config_file =
+
+#
+# This is the class used as the ethernet device driver for linuxnet bridge
+# operations. The default value should be all you need for most cases, but if
+# you
+# wish to use a customized class, set this option to the full dot-separated
+# import path for that class.
+#
+# Possible values:
+#
+#     Any string representing a dot-separated class path that Nova can import.
+#  (string value)
+#linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
+
+#
+# The name of the Open vSwitch bridge that is used with linuxnet when connecting
+# with Open vSwitch."
+#
+# Possible values:
+#
+#     Any string representing a valid bridge name.
+#  (string value)
+#linuxnet_ovs_integration_bridge=br-int
+
+#
+# When True, when a device starts up, and upon binding floating IP addresses,
+# arp
+# messages will be sent to ensure that the arp caches on the compute hosts are
+# up-to-date.
+#
+# Related options:
+#
+#     send_arp_for_ha_count
+#  (boolean value)
+#send_arp_for_ha=false
+
+#
+# When arp messages are configured to be sent, they will be sent with the count
+# set to the value of this option. Of course, if this is set to zero, no arp
+# messages will be sent.
+#
+# Possible values:
+#
+#     Any integer greater than or equal to 0
+#
+# Related options:
+#
+#     send_arp_for_ha
+#  (integer value)
+#send_arp_for_ha_count=3
+
+#
+# When set to True, only the firt nic of a VM will get its default gateway from
+# the DHCP server.
+#  (boolean value)
+#use_single_default_gateway=false
+
+#
+# One or more interfaces that bridges can forward traffic to. If any of the
+# items
+# in this list is the special keyword 'all', then all traffic will be forwarded.
+#
+# Possible values:
+#
+#     A list of zero or more interface names, or the word 'all'.
+#  (multi valued)
+#forward_bridge_interface=all
+
+#
+# This option determines the IP address for the network metadata API server.
+#
+# Possible values:
+#
+#    * Any valid IP address. The default is the address of the Nova API server.
+#
+# Related options:
+#
+#     * metadata_port
+#  (string value)
+#metadata_host=$my_ip
+
+#
+# This option determines the port used for the metadata API server.
+#
+# Related options:
+#
+#     * metadata_host
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#metadata_port=8775
+
+#
+# This expression, if defined, will select any matching iptables rules and place
+# them at the top when applying metadata changes to the rules.
+#
+# Possible values:
+#
+#     * Any string representing a valid regular expression, or an empty string
+#
+# Related options:
+#
+#     * iptables_bottom_regex
+#  (string value)
+#iptables_top_regex =
+
+#
+# This expression, if defined, will select any matching iptables rules and place
+# them at the bottom when applying metadata changes to the rules.
+#
+# Possible values:
+#
+#     * Any string representing a valid regular expression, or an empty string
+#
+# Related options:
+#
+#     * iptables_top_regex
+#  (string value)
+#iptables_bottom_regex =
+
+#
+# By default, packets that do not pass the firewall are DROPped. In many cases,
+# though, an operator may find it more useful to change this from DROP to
+# REJECT,
+# so that the user issuing those packets may have a better idea as to what's
+# going on, or LOGDROP in order to record the blocked traffic before DROPping.
+#
+# Possible values:
+#
+#     * A string representing an iptables chain. The default is DROP.
+#  (string value)
+#iptables_drop_action=DROP
+
+#
+# This option represents the period of time, in seconds, that the ovs_vsctl
+# calls
+# will wait for a response from the database before timing out. A setting of 0
+# means that the utility should wait forever for a response.
+#
+# Possible values:
+#
+#     * Any positive integer if a limited timeout is desired, or zero if the
+#     calls should wait forever for a response.
+#  (integer value)
+# Minimum value: 0
+#ovs_vsctl_timeout=120
+
+#
+# This option is used mainly in testing to avoid calls to the underlying network
+# utilities.
+#  (boolean value)
+#fake_network=false
+
+#
+# This option determines the number of times to retry ebtables commands before
+# giving up. The minimum number of retries is 1.
+#
+# Possible values:
+#
+#     * Any positive integer
+#
+# Related options:
+#
+#     * ebtables_retry_interval
+#  (integer value)
+# Minimum value: 1
+#ebtables_exec_attempts=3
+
+#
+# This option determines the time, in seconds, that the system will sleep in
+# between ebtables retries. Note that each successive retry waits a multiple of
+# this value, so for example, if this is set to the default of 1.0 seconds, and
+# ebtables_exec_attempts is 4, after the first failure, the system will sleep
+# for
+# 1 * 1.0 seconds, after the second failure it will sleep 2 * 1.0 seconds, and
+# after the third failure it will sleep 3 * 1.0 seconds.
+#
+# Possible values:
+#
+#     * Any non-negative float or integer. Setting this to zero will result in
+# no
+#     waiting between attempts.
+#
+# Related options:
+#
+#     * ebtables_exec_attempts
+#  (floating point value)
+#ebtables_retry_interval=1.0
+
+#
+# This option determines whether the network setup information is injected into
+# the VM before it is booted. While it was originally designed to be used only
+# by
+# nova-network, it is also used by the vmware and xenapi virt drivers to control
+# whether network information is injected into a VM.
+#  (boolean value)
+#flat_injected=false
+
+# DEPRECATED:
+# This option determines the bridge used for simple network interfaces when no
+# bridge is specified in the VM creation request.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+#     Any string representing a valid network bridge, such as 'br100'
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#flat_network_bridge=<None>
+
+# DEPRECATED:
+# This is the address of the DNS server for a simple network. If this option is
+# not specified, the default of '8.8.4.4' is used.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+#     Any valid IP address.
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#flat_network_dns=8.8.4.4
+
+# DEPRECATED:
+# This option is the name of the virtual interface of the VM on which the bridge
+# will be built. While it was originally designed to be used only by
+# nova-network, it is also used by libvirt for the bridge interface name.
+#
+# Possible values:
+#
+#     Any valid virtual interface name, such as 'eth0'
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#flat_interface=<None>
+
+# DEPRECATED:
+# This is the VLAN number used for private networks. Note that the when creating
+# the networks, if the specified number has already been assigned, nova-network
+# will increment this number until it finds an available VLAN.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment. It also will be ignored if the configuration
+# option
+# for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager'.
+#
+# Possible values:
+#
+#     Any integer between 1 and 4094. Values outside of that range will raise a
+#     ValueError exception. Default = 100.
+#
+# Related options:
+#
+#     ``network_manager``, ``use_neutron``
+#  (integer value)
+# Minimum value: 1
+# Maximum value: 4094
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#vlan_start=100
+
+# DEPRECATED:
+# This option is the name of the virtual interface of the VM on which the VLAN
+# bridge will be built. While it was originally designed to be used only by
+# nova-network, it is also used by libvirt and xenapi for the bridge interface
+# name.
+#
+# Please note that this setting will be ignored in nova-network if the
+# configuration option for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager'.
+#
+# Possible values:
+#
+#     Any valid virtual interface name, such as 'eth0'
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options. While
+# this option has an effect when using neutron, it incorrectly override the
+# value
+# provided by neutron and should therefore not be used.
+#vlan_interface=<None>
+
+# DEPRECATED:
+# This option represents the number of networks to create if not explicitly
+# specified when the network is created. The only time this is used is if a CIDR
+# is specified, but an explicit network_size is not. In that case, the subnets
+# are created by diving the IP address space of the CIDR by num_networks. The
+# resulting subnet sizes cannot be larger than the configuration option
+# `network_size`; in that event, they are reduced to `network_size`, and a
+# warning is logged.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+#     Any positive integer is technically valid, although there are practical
+#     limits based upon available IP address space and virtual interfaces. The
+#     default is 1.
+#
+# Related options:
+#
+#     ``use_neutron``, ``network_size``
+#  (integer value)
+# Minimum value: 1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#num_networks=1
+
+# DEPRECATED:
+# This is the public IP address for the cloudpipe VPN servers. It defaults to
+# the
+# IP address of the host.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment. It also will be ignored if the configuration
+# option
+# for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager'.
+#
+# Possible values:
+#
+#     Any valid IP address. The default is $my_ip, the IP address of the VM.
+#
+# Related options:
+#
+#     ``network_manager``, ``use_neutron``, ``vpn_start``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#vpn_ip=$my_ip
+
+# DEPRECATED:
+# This is the port number to use as the first VPN port for private networks.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment. It also will be ignored if the configuration
+# option
+# for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager', or if you specify a value the 'vpn_start'
+# parameter when creating a network.
+#
+# Possible values:
+#
+#     Any integer representing a valid port number. The default is 1000.
+#
+# Related options:
+#
+#     ``use_neutron``, ``vpn_ip``, ``network_manager``
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#vpn_start=1000
+
+# DEPRECATED:
+# This option determines the number of addresses in each private subnet.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+#     Any positive integer that is less than or equal to the available network
+#     size. Note that if you are creating multiple networks, they must all fit
+# in
+#     the available IP address space. The default is 256.
+#
+# Related options:
+#
+#     ``use_neutron``, ``num_networks``
+#  (integer value)
+# Minimum value: 1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#network_size=256
+
+# DEPRECATED:
+# This option determines the fixed IPv6 address block when creating a network.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+#     Any valid IPv6 CIDR. The default value is "fd00::/48".
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#fixed_range_v6=fd00::/48
+
+# DEPRECATED:
+# This is the default IPv4 gateway. It is used only in the testing suite.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+#     Any valid IP address.
+#
+# Related options:
+#
+#     ``use_neutron``, ``gateway_v6``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#gateway=<None>
+
+# DEPRECATED:
+# This is the default IPv6 gateway. It is used only in the testing suite.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+#     Any valid IP address.
+#
+# Related options:
+#
+#     ``use_neutron``, ``gateway``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#gateway_v6=<None>
+
+# DEPRECATED:
+# This option represents the number of IP addresses to reserve at the top of the
+# address range for VPN clients. It also will be ignored if the configuration
+# option for `network_manager` is not set to the default of
+# 'nova.network.manager.VlanManager'.
+#
+# Possible values:
+#
+#     Any integer, 0 or greater. The default is 0.
+#
+# Related options:
+#
+#     ``use_neutron``, ``network_manager``
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#cnt_vpn_clients=0
+
+# DEPRECATED:
+# This is the number of seconds to wait before disassociating a deallocated
+# fixed
+# IP address. This is only used with the nova-network service, and has no effect
+# when using neutron for networking.
+#
+# Possible values:
+#
+#     Any integer, zero or greater. The default is 600 (10 minutes).
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#fixed_ip_disassociate_timeout=600
+
+# DEPRECATED:
+# This option determines how many times nova-network will attempt to create a
+# unique MAC address before giving up and raising a
+# `VirtualInterfaceMacAddressException` error.
+#
+# Possible values:
+#
+#     Any positive integer. The default is 5.
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (integer value)
+# Minimum value: 1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#create_unique_mac_address_attempts=5
+
+# DEPRECATED:
+# Determines whether unused gateway devices, both VLAN and bridge, are deleted
+# if
+# the network is in nova-network VLAN mode and is multi-hosted.
+#
+# Related options:
+#
+#     ``use_neutron``, ``vpn_ip``, ``fake_network``
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#teardown_unused_network_gateway=false
+
+# DEPRECATED:
+# When this option is True, a call is made to release the DHCP for the instance
+# when that instance is terminated.
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+force_dhcp_release=true
+
+# DEPRECATED:
+# When this option is True, whenever a DNS entry must be updated, a fanout cast
+# message is sent to all network hosts to update their DNS entries in multi-host
+# mode.
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#update_dns_entries=false
+
+# DEPRECATED:
+# This option determines the time, in seconds, to wait between refreshing DNS
+# entries for the network.
+#
+# Possible values:
+#
+#     Either -1 (default), or any positive integer. A negative value will
+# disable
+#     the updates.
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (integer value)
+# Minimum value: -1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#dns_update_periodic_interval=-1
+
+# DEPRECATED:
+# This option allows you to specify the domain for the DHCP server.
+#
+# Possible values:
+#
+#     Any string that is a valid domain name.
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#dhcp_domain=novalocal
+
+# DEPRECATED:
+# This option allows you to specify the L3 management library to be used.
+#
+# Possible values:
+#
+#     Any dot-separated string that represents the import path to an L3
+#     networking library.
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#l3_lib=nova.network.l3.LinuxNetL3
+
+# DEPRECATED:
+# THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK.
+#
+# If True in multi_host mode, all compute hosts share the same dhcp address. The
+# same IP address used for DHCP will be added on each nova-network node which is
+# only visible to the VMs on the same host.
+#
+# The use of this configuration has been deprecated and may be removed in any
+# release after Mitaka. It is recommended that instead of relying on this
+# option,
+# an explicit value should be passed to 'create_networks()' as a keyword
+# argument
+# with the name 'share_address'.
+#  (boolean value)
+# This option is deprecated for removal since 2014.2.
+# Its value may be silently ignored in the future.
+#share_dhcp_address=false
+
+# DEPRECATED: Whether to use Neutron or Nova Network as the back end for
+# networking. Defaults to False (indicating Nova network).Set to True to use
+# neutron. (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#use_neutron=true
+
+#
+# URL for LDAP server which will store DNS entries
+#
+# Possible values:
+#
+# * A valid LDAP URL representing the server
+#  (uri value)
+#ldap_dns_url=ldap://ldap.example.com:389
+
+# Bind user for LDAP server (string value)
+#ldap_dns_user=uid=admin,ou=people,dc=example,dc=org
+
+# Bind user's password for LDAP server (string value)
+#ldap_dns_password=password
+
+#
+# Hostmaster for LDAP DNS driver Statement of Authority
+#
+# Possible values:
+#
+# * Any valid string representing LDAP DNS hostmaster.
+#  (string value)
+#ldap_dns_soa_hostmaster=hostmaster@example.org
+
+#
+# DNS Servers for LDAP DNS driver
+#
+# Possible values:
+#
+# * A valid URL representing a DNS server
+#  (multi valued)
+#ldap_dns_servers=dns.example.org
+
+#
+# Base distinguished name for the LDAP search query
+#
+# This option helps to decide where to look up the host in LDAP.
+#  (string value)
+#ldap_dns_base_dn=ou=hosts,dc=example,dc=org
+
+#
+# Refresh interval (in seconds) for LDAP DNS driver Start of Authority
+#
+# Time interval, a secondary/slave DNS server waits before requesting for
+# primary DNS server's current SOA record. If the records are different,
+# secondary DNS server will request a zone transfer from primary.
+#
+# NOTE: Lower values would cause more traffic.
+#  (integer value)
+#ldap_dns_soa_refresh=1800
+
+#
+# Retry interval (in seconds) for LDAP DNS driver Start of Authority
+#
+# Time interval, a secondary/slave DNS server should wait, if an
+# attempt to transfer zone failed during the previous refresh interval.
+#  (integer value)
+#ldap_dns_soa_retry=3600
+
+#
+# Expiry interval (in seconds) for LDAP DNS driver Start of Authority
+#
+# Time interval, a secondary/slave DNS server holds the information
+# before it is no longer considered authoritative.
+#  (integer value)
+#ldap_dns_soa_expiry=86400
+
+#
+# Minimum interval (in seconds) for LDAP DNS driver Start of Authority
+#
+# It is Minimum time-to-live applies for all resource records in the
+# zone file. This value is supplied to other servers how long they
+# should keep the data in cache.
+#  (integer value)
+#ldap_dns_soa_minimum=7200
+
+# DEPRECATED: The topic network nodes listen on (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#network_topic=network
+
+# DEPRECATED:
+# Default value for multi_host in networks.
+#
+# nova-network service can operate in a multi-host or single-host mode.
+# In multi-host mode each compute node runs a copy of nova-network and the
+# instances on that compute node use the compute node as a gateway to the
+# Internet. Where as in single-host mode, a central server runs the nova-network
+# service. All compute nodes forward traffic from the instances to the
+# cloud controller which then forwards traffic to the Internet.
+#
+# If this options is set to true, some rpc network calls will be sent directly
+# to host.
+#
+# Note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Related options:
+#
+# * use_neutron
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#multi_host=false
+
+# DEPRECATED:
+# Driver to use for network creation.
+#
+# Network driver initializes (creates bridges and so on) only when the
+# first VM lands on a host node. All network managers configure the
+# network using network drivers. The driver is not tied to any particular
+# network manager.
+#
+# The default Linux driver implements vlans, bridges, and iptables rules
+# using linux utilities.
+#
+# Note that this option is only used when using nova-network instead
+# of Neutron in your deployment.
+#
+# Related options:
+#
+# * use_neutron
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#network_driver=nova.network.linux_net
+
+#
+# Firewall driver to use with ``nova-network`` service.
+#
+# This option only applies when using the ``nova-network`` service. When using
+# another networking services, such as Neutron, this should be to set to the
+# ``nova.virt.firewall.NoopFirewallDriver``.
+#
+# If unset (the default), this will default to the hypervisor-specified
+# default driver.
+#
+# Possible values:
+#
+# * nova.virt.firewall.IptablesFirewallDriver
+# * nova.virt.firewall.NoopFirewallDriver
+# * nova.virt.libvirt.firewall.IptablesFirewallDriver
+# * [...]
+#
+# Related options:
+#
+# * ``use_neutron``: This must be set to ``False`` to enable ``nova-network``
+#   networking
+#  (string value)
+#firewall_driver=<None>
+firewall_driver=nova.virt.firewall.NoopFirewallDriver
+
+#
+# Determine whether to allow network traffic from same network.
+#
+# When set to true, hosts on the same subnet are not filtered and are allowed
+# to pass all types of traffic between them. On a flat network, this allows
+# all instances from all projects unfiltered communication. With VLAN
+# networking, this allows access between instances within the same project.
+#
+# This option only applies when using the ``nova-network`` service. When using
+# another networking services, such as Neutron, security groups or other
+# approaches should be used.
+#
+# Possible values:
+#
+# * True: Network traffic should be allowed pass between all instances on the
+#   same network, regardless of their tenant and security policies
+# * False: Network traffic should not be allowed pass between instances unless
+#   it is unblocked in a security group
+#
+# Related options:
+#
+# * ``use_neutron``: This must be set to ``False`` to enable ``nova-network``
+#   networking
+# * ``firewall_driver``: This must be set to
+#   ``nova.virt.libvirt.firewall.IptablesFirewallDriver`` to ensure the
+#   libvirt firewall driver is enabled.
+#  (boolean value)
+#allow_same_net_traffic=true
+
+#
+# Filename that will be used for storing websocket frames received
+# and sent by a proxy service (like VNC, spice, serial) running on this host.
+# If this is not set, no recording will be done.
+#  (string value)
+#record=<None>
+
+# Run as a background process. (boolean value)
+#daemon=false
+
+# Disallow non-encrypted connections. (boolean value)
+#ssl_only=false
+
+# Set to True if source host is addressed with IPv6. (boolean value)
+#source_is_ipv6=false
+
+# Path to SSL certificate file. (string value)
+#cert=self.pem
+
+# SSL key file (if separate from cert). (string value)
+#key=<None>
+
+#
+# Path to directory with content which will be served by a web server.
+#  (string value)
+#web=/usr/share/spice-html5
+
+#
+# The directory where the Nova python modules are installed.
+#
+# This directory is used to store template files for networking and remote
+# console access. It is also the default path for other config options which
+# need to persist Nova internal data. It is very unlikely that you need to
+# change this option from its default value.
+#
+# Possible values:
+#
+# * The full path to a directory.
+#
+# Related options:
+#
+# * ``state_path``
+#  (string value)
+#pybasedir=/build/nova-elxmSs/nova-15.0.2
+
+#
+# The directory where the Nova binaries are installed.
+#
+# This option is only relevant if the networking capabilities from Nova are
+# used (see services below). Nova's networking capabilities are targeted to
+# be fully replaced by Neutron in the future. It is very unlikely that you need
+# to change this option from its default value.
+#
+# Possible values:
+#
+# * The full path to a directory.
+#  (string value)
+#bindir=/usr/local/bin
+
+#
+# The top-level directory for maintaining Nova's state.
+#
+# This directory is used to store Nova's internal state. It is used by a
+# variety of other config options which derive from this. In some scenarios
+# (for example migrations) it makes sense to use a storage location which is
+# shared between multiple compute hosts (for example via NFS). Unless the
+# option ``instances_path`` gets overwritten, this directory can grow very
+# large.
+#
+# Possible values:
+#
+# * The full path to a directory. Defaults to value provided in ``pybasedir``.
+#  (string value)
+state_path=/var/lib/nova
+
+#
+# Number of seconds indicating how frequently the state of services on a
+# given hypervisor is reported. Nova needs to know this to determine the
+# overall health of the deployment.
+#
+# Related Options:
+#
+# * service_down_time
+#   report_interval should be less than service_down_time. If service_down_time
+#   is less than report_interval, services will routinely be considered down,
+#   because they report in too rarely.
+#  (integer value)
+#report_interval=10
+report_interval=10
+
+#
+# Maximum time in seconds since last check-in for up service
+#
+# Each compute node periodically updates their database status based on the
+# specified report interval. If the compute node hasn't updated the status
+# for more than service_down_time, then the compute node is considered down.
+#
+# Related Options:
+#
+# * report_interval (service_down_time should not be less than report_interval)
+#  (integer value)
+service_down_time = 180
+
+#
+# Enable periodic tasks.
+#
+# If set to true, this option allows services to periodically run tasks
+# on the manager.
+#
+# In case of running multiple schedulers or conductors you may want to run
+# periodic tasks on only one host - in this case disable this option for all
+# hosts but one.
+#  (boolean value)
+#periodic_enable=true
+
+#
+# Number of seconds to randomly delay when starting the periodic task
+# scheduler to reduce stampeding.
+#
+# When compute workers are restarted in unison across a cluster,
+# they all end up running the periodic tasks at the same time
+# causing problems for the external services. To mitigate this
+# behavior, periodic_fuzzy_delay option allows you to introduce a
+# random initial delay when starting the periodic task scheduler.
+#
+# Possible Values:
+#
+# * Any positive integer (in seconds)
+# * 0 : disable the random delay
+#  (integer value)
+# Minimum value: 0
+#periodic_fuzzy_delay=60
+
+# List of APIs to be enabled by default. (list value)
+enabled_apis=osapi_compute,metadata
+
+#
+# List of APIs with enabled SSL.
+#
+# Nova provides SSL support for the API servers. enabled_ssl_apis option
+# allows configuring the SSL support.
+#  (list value)
+#enabled_ssl_apis =
+
+#
+# IP address on which the OpenStack API will listen.
+#
+# The OpenStack API service listens on this IP address for incoming
+# requests.
+#  (string value)
+#osapi_compute_listen=0.0.0.0
+osapi_compute_listen=10.167.4.13
+
+#
+# Port on which the OpenStack API will listen.
+#
+# The OpenStack API service listens on this port number for incoming
+# requests.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#osapi_compute_listen_port=8774
+
+#
+# Number of workers for OpenStack API service. The default will be the number
+# of CPUs available.
+#
+# OpenStack API services can be configured to run as multi-process (workers).
+# This overcomes the problem of reduction in throughput when API request
+# concurrency increases. OpenStack API service will run in the specified
+# number of processes.
+#
+# Possible Values:
+#
+# * Any positive integer
+# * None (default value)
+#  (integer value)
+# Minimum value: 1
+#osapi_compute_workers=<None>
+osapi_compute_workers = 8
+
+#
+# IP address on which the metadata API will listen.
+#
+# The metadata API service listens on this IP address for incoming
+# requests.
+#  (string value)
+#metadata_listen=0.0.0.0
+metadata_listen=10.167.4.13
+osapi_volume_listen=10.167.4.13
+
+#
+# Port on which the metadata API will listen.
+#
+# The metadata API service listens on this port number for incoming
+# requests.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#metadata_listen_port=8775
+
+#
+# Number of workers for metadata service. If not specified the number of
+# available CPUs will be used.
+#
+# The metadata service can be configured to run as multi-process (workers).
+# This overcomes the problem of reduction in throughput when API request
+# concurrency increases. The metadata service will run in the specified
+# number of processes.
+#
+# Possible Values:
+#
+# * Any positive integer
+# * None (default value)
+#  (integer value)
+# Minimum value: 1
+#metadata_workers=<None>
+metadata_workers = 8
+
+# Full class name for the Manager for network (string value)
+# Allowed values: nova.network.manager.FlatManager, nova.network.manager.FlatDHCPManager, nova.network.manager.VlanManager
+#network_manager=nova.network.manager.VlanManager
+
+#
+# This option specifies the driver to be used for the servicegroup service.
+#
+# ServiceGroup API in nova enables checking status of a compute node. When a
+# compute worker running the nova-compute daemon starts, it calls the join API
+# to join the compute group. Services like nova scheduler can query the
+# ServiceGroup API to check if a node is alive. Internally, the ServiceGroup
+# client driver automatically updates the compute worker status. There are
+# multiple backend implementations for this service: Database ServiceGroup
+# driver
+# and Memcache ServiceGroup driver.
+#
+# Possible Values:
+#
+#     * db : Database ServiceGroup driver
+#     * mc : Memcache ServiceGroup driver
+#
+# Related Options:
+#
+#     * service_down_time (maximum time since last check-in for up service)
+#  (string value)
+# Allowed values: db, mc
+#servicegroup_driver=db
 
 #
 # From oslo.log
@@ -7,13 +2756,15 @@
 # If set to true, the logging level will be set to DEBUG instead of the default
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
-#debug = false
+#debug=false
+debug=false
 
 # DEPRECATED: If set to false, the logging level will be set to WARNING instead
 # of the default INFO level. (boolean value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
-#verbose = true
+#verbose=true
+verbose=true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -23,90 +2774,90 @@
 # example, logging_context_format_string). (string value)
 # Note: This option can be changed without restarting.
 # Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
+#log_config_append=<None>
 
 # Defines the format string for %%(asctime)s in log records. Default:
 # %(default)s . This option is ignored if log_config_append is set. (string
 # value)
-#log_date_format = %Y-%m-%d %H:%M:%S
+#log_date_format=%Y-%m-%d %H:%M:%S
 
 # (Optional) Name of log file to send logging output to. If no default is set,
 # logging will go to stderr as defined by use_stderr. This option is ignored if
 # log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
+#log_file=<None>
 
 # (Optional) The base directory used for relative log_file  paths. This option
 # is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
+log_dir=/var/log/nova
 
 # Uses logging handler designed to watch file system. When log file is moved or
 # removed this handler will open a new log file with specified path
 # instantaneously. It makes sense only if log_file option is specified and Linux
 # platform is used. This option is ignored if log_config_append is set. (boolean
 # value)
-#watch_log_file = false
+#watch_log_file=false
 
 # Use syslog for logging. Existing syslog format is DEPRECATED and will be
 # changed later to honor RFC5424. This option is ignored if log_config_append is
 # set. (boolean value)
-#use_syslog = false
+#use_syslog=false
 
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
-#syslog_log_facility = LOG_USER
+#syslog_log_facility=LOG_USER
 
 # Log output to standard error. This option is ignored if log_config_append is
 # set. (boolean value)
-#use_stderr = false
+#use_stderr=false
 
 # Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
 # Format string to use for log messages when context is undefined. (string
 # value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
 
 # Additional data to append to log message when logging level for the message is
 # DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
 
 # Prefix each line of exception output with this format. (string value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
 # Defines the format string for %(user_identity)s that is used in
 # logging_context_format_string. (string value)
-#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+#logging_user_identity_format=%(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
 # List of package logging levels in logger=LEVEL pairs. This option is ignored
 # if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+#default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
-#publish_errors = false
+#publish_errors=false
 
 # The format for an instance that is passed with the log message. (string value)
-#instance_format = "[instance: %(uuid)s] "
+#instance_format="[instance: %(uuid)s] "
 
 # The format for an instance UUID that is passed with the log message. (string
 # value)
-#instance_uuid_format = "[instance: %(uuid)s] "
+#instance_uuid_format="[instance: %(uuid)s] "
 
 # Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
+#rate_limit_interval=0
 
 # Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
+#rate_limit_burst=0
 
 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
 # empty string. Logs with level greater or equal to rate_limit_except_level are
 # not filtered. An empty string means that all levels are filtered. (string
 # value)
-#rate_limit_except_level = CRITICAL
+#rate_limit_except_level=CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
+#fatal_deprecations=false
 
 #
 # From oslo.messaging
@@ -114,41 +2865,41 @@
 
 # Size of RPC connection pool. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
-#rpc_conn_pool_size = 30
+#rpc_conn_pool_size=30
 
 # The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
+#conn_pool_min_size=2
 
 # The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
+#conn_pool_ttl=1200
 
 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
 # The "host" option should point or resolve to this address. (string value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
-#rpc_zmq_bind_address = *
+#rpc_zmq_bind_address=*
 
 # MatchMaker driver. (string value)
 # Allowed values: redis, sentinel, dummy
 # Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
-#rpc_zmq_matchmaker = redis
+#rpc_zmq_matchmaker=redis
 
 # Number of ZeroMQ contexts, defaults to 1. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
-#rpc_zmq_contexts = 1
+#rpc_zmq_contexts=1
 
 # Maximum number of ingress messages to locally buffer per topic. Default is
 # unlimited. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
-#rpc_zmq_topic_backlog = <None>
+#rpc_zmq_topic_backlog=<None>
 
 # Directory for holding IPC sockets. (string value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
-#rpc_zmq_ipc_dir = /var/run/openstack
+#rpc_zmq_ipc_dir=/var/run/openstack
 
 # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
 # "host" option, if running Nova. (string value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_host
-#rpc_zmq_host = localhost
+#rpc_zmq_host=localhost
 
 # Number of seconds to wait before all pending messages will be sent after
 # closing a socket. The default value of -1 specifies an infinite linger period.
@@ -156,119 +2907,120 @@
 # immediately when the socket is closed. Positive values specify an upper bound
 # for the linger period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
+#zmq_linger=-1
+zmq_linger=30
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_poll_timeout
-#rpc_poll_timeout = 1
+#rpc_poll_timeout=1
 
 # Expiration timeout in seconds of a name service record about existing target (
 # < 0 means no timeout). (integer value)
 # Deprecated group/name - [DEFAULT]/zmq_target_expire
-#zmq_target_expire = 300
+#zmq_target_expire=300
 
 # Update period in seconds of a name service record about existing target.
 # (integer value)
 # Deprecated group/name - [DEFAULT]/zmq_target_update
-#zmq_target_update = 180
+#zmq_target_update=180
 
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
 # Deprecated group/name - [DEFAULT]/use_pub_sub
-#use_pub_sub = false
+#use_pub_sub=false
 
 # Use ROUTER remote proxy. (boolean value)
 # Deprecated group/name - [DEFAULT]/use_router_proxy
-#use_router_proxy = false
+#use_router_proxy=false
 
 # This option makes direct connections dynamic or static. It makes sense only
 # with use_router_proxy=False which means to use direct connections for direct
 # message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
+#use_dynamic_connections=false
 
 # How many additional connections to a host will be made for failover reasons.
 # This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
+#zmq_failover_connections=2
 
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
 # Maximum value: 65535
 # Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
-#rpc_zmq_min_port = 49153
+#rpc_zmq_min_port=49153
 
 # Maximal port number for random ports range. (integer value)
 # Minimum value: 1
 # Maximum value: 65536
 # Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
-#rpc_zmq_max_port = 65536
+#rpc_zmq_max_port=65536
 
 # Number of retries to find free port number before fail with ZMQBindError.
 # (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
-#rpc_zmq_bind_port_retries = 100
+#rpc_zmq_bind_port_retries=100
 
 # Default serialization mechanism for serializing/deserializing
 # outgoing/incoming messages (string value)
 # Allowed values: json, msgpack
 # Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
-#rpc_zmq_serialization = json
+#rpc_zmq_serialization=json
 
 # This option configures round-robin mode in zmq socket. True means not keeping
 # a queue when server side disconnects. False means to keep queue and messages
 # even if server is disconnected, when the server appears we send all
 # accumulated messages to it. (boolean value)
-#zmq_immediate = true
+#zmq_immediate=true
 
 # Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
 # other negative value) means to skip any overrides and leave it to OS default;
 # 0 and 1 (or any other positive value) mean to disable and enable the option
 # respectively. (integer value)
-#zmq_tcp_keepalive = -1
+#zmq_tcp_keepalive=-1
 
 # The duration between two keepalive transmissions in idle condition. The unit
 # is platform dependent, for example, seconds in Linux, milliseconds in Windows
 # etc. The default value of -1 (or any other negative value and 0) means to skip
 # any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
+#zmq_tcp_keepalive_idle=-1
 
 # The number of retransmissions to be carried out before declaring that remote
 # end is not available. The default value of -1 (or any other negative value and
 # 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
+#zmq_tcp_keepalive_cnt=-1
 
 # The duration between two successive keepalive retransmissions, if
 # acknowledgement to the previous keepalive transmission is not received. The
 # unit is platform dependent, for example, seconds in Linux, milliseconds in
 # Windows etc. The default value of -1 (or any other negative value and 0) means
 # to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
+#zmq_tcp_keepalive_intvl=-1
 
 # Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
+#rpc_thread_pool_size=100
 
 # Expiration timeout in seconds of a sent/received message after which it is not
 # tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
+#rpc_message_ttl=300
 
 # Wait for message acknowledgements from receivers. This mechanism works only
 # via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
+#rpc_use_acks=false
 
 # Number of seconds to wait for an ack from a cast/call. After each retry
 # attempt this timeout is multiplied by some specified multiplier. (integer
 # value)
-#rpc_ack_timeout_base = 15
+#rpc_ack_timeout_base=15
 
 # Number to multiply base ack timeout by after each retry attempt. (integer
 # value)
-#rpc_ack_timeout_multiplier = 2
+#rpc_ack_timeout_multiplier=2
 
 # Default number of message sending attempts in case of any problems occurred:
 # positive value N means at most N retries, 0 means no retries, None or -1 (or
 # any other negative values) mean to retry forever. This option is used only if
 # acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
+#rpc_retry_attempts=3
 
 # List of publisher hosts SubConsumer can subscribe on. This option has higher
 # priority then the default publishers list taken from the matchmaker. (list
@@ -277,25 +3029,25 @@
 
 # Size of executor thread pool. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
-#executor_thread_pool_size = 64
+#executor_thread_pool_size=64
 
 # Seconds to wait for a response from a call. (integer value)
-#rpc_response_timeout = 60
-
-# A URL representing the messaging driver to use and its full configuration.
-# (string value)
-#transport_url = <None>
+#rpc_response_timeout=60
+rpc_response_timeout=3600
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.41:5672,openstack:opnfv_secret@10.167.4.42:5672,openstack:opnfv_secret@10.167.4.43:5672//openstack
+
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rpc_backend = rabbit
+#rpc_backend=rabbit
+rpc_backend=rabbit
 
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
-#control_exchange = openstack
+#control_exchange=openstack
 
 #
 # From oslo.service.periodic_task
@@ -303,7 +3055,7 @@
 
 # Some periodic tasks can be run in a separate process. Should we run them here?
 # (boolean value)
-#run_external_periodic_tasks = true
+#run_external_periodic_tasks=true
 
 #
 # From oslo.service.service
@@ -315,21 +3067,1199 @@
 # is in use); and <start>:<end> results in listening on the smallest unused port
 # number within the specified range of port numbers.  The chosen port is
 # displayed in the service's log file. (string value)
-#backdoor_port = <None>
+#backdoor_port=<None>
 
 # Enable eventlet backdoor, using the provided path as a unix socket that can
 # receive connections. This option is mutually exclusive with 'backdoor_port' in
 # that only one should be provided. If both are provided then the existence of
 # this option overrides the usage of that option. (string value)
-#backdoor_socket = <None>
+#backdoor_socket=<None>
 
 # Enables or disables logging values of all registered options when starting a
 # service (at DEBUG level). (boolean value)
-#log_options = true
+#log_options=true
 
 # Specify a timeout after which a gracefully shutdown server will exit. Zero
 # value means endless wait. (integer value)
-#graceful_shutdown_timeout = 60
+#graceful_shutdown_timeout=60
+
+
+[api]
+#
+# Options under this group are used to define Nova API.
+
+#
+# From nova.conf
+#
+
+#
+# This determines the strategy to use for authentication: keystone or noauth2.
+# 'noauth2' is designed for testing only, as it does no actual credential
+# checking. 'noauth2' provides administrative credentials only if 'admin' is
+# specified as the username.
+#  (string value)
+# Allowed values: keystone, noauth2
+# Deprecated group/name - [DEFAULT]/auth_strategy
+#auth_strategy=keystone
+auth_strategy=keystone
+
+#
+# When True, the 'X-Forwarded-For' header is treated as the canonical remote
+# address. When False (the default), the 'remote_address' header is used.
+#
+# You should only enable this if you have an HTML sanitizing proxy.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/use_forwarded_for
+#use_forwarded_for=false
+use_forwarded_for=false
+
+#
+# When gathering the existing metadata for a config drive, the EC2-style
+# metadata is returned for all versions that don't appear in this option.
+# As of the Liberty release, the available versions are:
+#
+# * 1.0
+# * 2007-01-19
+# * 2007-03-01
+# * 2007-08-29
+# * 2007-10-10
+# * 2007-12-15
+# * 2008-02-01
+# * 2008-09-01
+# * 2009-04-04
+#
+# The option is in the format of a single string, with each version separated
+# by a space.
+#
+# Possible values:
+#
+# * Any string that represents zero or more versions, separated by spaces.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/config_drive_skip_versions
+#config_drive_skip_versions=1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
+
+#
+# A list of vendordata providers.
+#
+# vendordata providers are how deployers can provide metadata via configdrive
+# and metadata that is specific to their deployment. There are currently two
+# supported providers: StaticJSON and DynamicJSON.
+#
+# StaticJSON reads a JSON file configured by the flag vendordata_jsonfile_path
+# and places the JSON from that file into vendor_data.json and
+# vendor_data2.json.
+#
+# DynamicJSON is configured via the vendordata_dynamic_targets flag, which is
+# documented separately. For each of the endpoints specified in that flag, a
+# section is added to the vendor_data2.json.
+#
+# For more information on the requirements for implementing a vendordata
+# dynamic endpoint, please see the vendordata.rst file in the nova developer
+# reference.
+#
+# Possible values:
+#
+# * A list of vendordata providers, with StaticJSON and DynamicJSON being
+#   current options.
+#
+# Related options:
+#
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_ssl_certfile
+# * vendordata_dynamic_connect_timeout
+# * vendordata_dynamic_read_timeout
+# * vendordata_dynamic_failure_fatal
+#  (list value)
+# Deprecated group/name - [DEFAULT]/vendordata_providers
+#vendordata_providers =
+
+#
+# A list of targets for the dynamic vendordata provider. These targets are of
+# the form <name>@<url>.
+#
+# The dynamic vendordata provider collects metadata by contacting external REST
+# services and querying them for information about the instance. This behaviour
+# is documented in the vendordata.rst file in the nova developer reference.
+#  (list value)
+# Deprecated group/name - [DEFAULT]/vendordata_dynamic_targets
+#vendordata_dynamic_targets =
+
+#
+# Path to an optional certificate file or CA bundle to verify dynamic
+# vendordata REST services ssl certificates against.
+#
+# Possible values:
+#
+# * An empty string, or a path to a valid certificate file
+#
+# Related options:
+#
+# * vendordata_providers
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_connect_timeout
+# * vendordata_dynamic_read_timeout
+# * vendordata_dynamic_failure_fatal
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vendordata_dynamic_ssl_certfile
+#vendordata_dynamic_ssl_certfile =
+
+#
+# Maximum wait time for an external REST service to connect.
+#
+# Possible values:
+#
+# * Any integer with a value greater than three (the TCP packet retransmission
+#   timeout). Note that instance start may be blocked during this wait time,
+#   so this value should be kept small.
+#
+# Related options:
+#
+# * vendordata_providers
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_ssl_certfile
+# * vendordata_dynamic_read_timeout
+# * vendordata_dynamic_failure_fatal
+#  (integer value)
+# Minimum value: 3
+# Deprecated group/name - [DEFAULT]/vendordata_dynamic_connect_timeout
+#vendordata_dynamic_connect_timeout=5
+
+#
+# Maximum wait time for an external REST service to return data once connected.
+#
+# Possible values:
+#
+# * Any integer. Note that instance start is blocked during this wait time,
+#   so this value should be kept small.
+#
+# Related options:
+#
+# * vendordata_providers
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_ssl_certfile
+# * vendordata_dynamic_connect_timeout
+# * vendordata_dynamic_failure_fatal
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/vendordata_dynamic_read_timeout
+#vendordata_dynamic_read_timeout=5
+
+#
+# Should failures to fetch dynamic vendordata be fatal to instance boot?
+#
+# Related options:
+#
+# * vendordata_providers
+# * vendordata_dynamic_targets
+# * vendordata_dynamic_ssl_certfile
+# * vendordata_dynamic_connect_timeout
+# * vendordata_dynamic_read_timeout
+#  (boolean value)
+#vendordata_dynamic_failure_fatal=false
+
+#
+# This option is the time (in seconds) to cache metadata. When set to 0,
+# metadata caching is disabled entirely; this is generally not recommended for
+# performance reasons. Increasing this setting should improve response times
+# of the metadata API when under heavy load. Higher values may increase memory
+# usage, and result in longer times for host metadata changes to take effect.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/metadata_cache_expiration
+#metadata_cache_expiration=15
+
+#
+# Cloud providers may store custom data in vendor data file that will then be
+# available to the instances via the metadata service, and to the rendering of
+# config-drive. The default class for this, JsonFileVendorData, loads this
+# information from a JSON file, whose path is configured by this option. If
+# there is no path set by this option, the class returns an empty dictionary.
+#
+# Possible values:
+#
+# * Any string representing the path to the data file, or an empty string
+#     (default).
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vendordata_jsonfile_path
+#vendordata_jsonfile_path=<None>
+
+#
+# As a query can potentially return many thousands of items, you can limit the
+# maximum number of items in a single response by setting this option.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/osapi_max_limit
+#max_limit=1000
+max_limit=1000
+
+#
+# This string is prepended to the normal URL that is returned in links to the
+# OpenStack Compute API. If it is empty (the default), the URLs are returned
+# unchanged.
+#
+# Possible values:
+#
+# * Any string, including an empty string (the default).
+#  (string value)
+# Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
+#compute_link_prefix=<None>
+
+#
+# This string is prepended to the normal URL that is returned in links to
+# Glance resources. If it is empty (the default), the URLs are returned
+# unchanged.
+#
+# Possible values:
+#
+# * Any string, including an empty string (the default).
+#  (string value)
+# Deprecated group/name - [DEFAULT]/osapi_glance_link_prefix
+#glance_link_prefix=<None>
+
+#
+# Operators can turn off the ability for a user to take snapshots of their
+# instances by setting this option to False. When disabled, any attempt to
+# take a snapshot will result in a HTTP 400 response ("Bad Request").
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/allow_instance_snapshots
+#allow_instance_snapshots=true
+
+#
+# This option is a list of all instance states for which network address
+# information should not be returned from the API.
+#
+# Possible values:
+#
+#   A list of strings, where each string is a valid VM state, as defined in
+#   nova/compute/vm_states.py. As of the Newton release, they are:
+#
+# * "active"
+# * "building"
+# * "paused"
+# * "suspended"
+# * "stopped"
+# * "rescued"
+# * "resized"
+# * "soft-delete"
+# * "deleted"
+# * "error"
+# * "shelved"
+# * "shelved_offloaded"
+#  (list value)
+# Deprecated group/name - [DEFAULT]/osapi_hide_server_address_states
+#hide_server_address_states=building
+
+# The full path to the fping binary. (string value)
+# Deprecated group/name - [DEFAULT]/fping_path
+#fping_path=/usr/sbin/fping
+fping_path=/usr/sbin/fping
+
+#
+# When True, the TenantNetworkController will query the Neutron API to get the
+# default networks to use.
+#
+# Related options:
+#
+# * neutron_default_tenant_id
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/use_neutron_default_nets
+#use_neutron_default_nets=false
+
+#
+# Tenant ID for getting the default network from Neutron API (also referred in
+# some places as the 'project ID') to use.
+#
+# Related options:
+#
+# * use_neutron_default_nets
+#  (string value)
+# Deprecated group/name - [DEFAULT]/neutron_default_tenant_id
+#neutron_default_tenant_id=default
+
+#
+# Enables returning of the instance password by the relevant server API calls
+# such as create, rebuild, evacuate, or rescue. If the hypervisor does not
+# support password injection, then the password returned will not be correct,
+# so if your hypervisor does not support password injection, set this to False.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/enable_instance_password
+#enable_instance_password=true
+
+
+[api_database]
+#
+# The *Nova API Database* is a separate database which is used for information
+# which is used across *cells*. This database is mandatory since the Mitaka
+# release (13.0.0).
+
+#
+# From nova.conf
+#
+idle_timeout = 180
+min_pool_size = 100
+max_pool_size = 700
+max_overflow = 100
+retry_interval = 5
+max_retries = -1
+db_max_retries = 3
+db_retry_interval = 1
+connection_debug = 10
+pool_timeout = 120
+connection = mysql+pymysql://nova:opnfv_secret@10.167.4.50/nova_api?charset=utf8
+
+# The SQLAlchemy connection string to use to connect to the database. (string
+# value)
+#connection=sqlite:////var/lib/nova/nova.sqlite
+
+# If True, SQLite uses synchronous mode. (boolean value)
+#sqlite_synchronous=true
+
+# The SQLAlchemy connection string to use to connect to the slave database.
+# (string value)
+#slave_connection=<None>
+
+# The SQL mode to be used for MySQL sessions. This option, including the
+# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
+# the server configuration, set this to no value. Example: mysql_sql_mode=
+# (string value)
+#mysql_sql_mode=TRADITIONAL
+
+# Timeout before idle SQL connections are reaped. (integer value)
+#idle_timeout=3600
+
+# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
+# indicates no limit. (integer value)
+#max_pool_size=<None>
+
+# Maximum number of database connection retries during startup. Set to -1 to
+# specify an infinite retry count. (integer value)
+#max_retries=10
+
+# Interval between retries of opening a SQL connection. (integer value)
+#retry_interval=10
+
+# If set, use this value for max_overflow with SQLAlchemy. (integer value)
+#max_overflow=<None>
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
+# value)
+#connection_debug=0
+
+# Add Python stack traces to SQL as comment strings. (boolean value)
+#connection_trace=false
+
+# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
+#pool_timeout=<None>
+
+
+[barbican]
+
+#
+# From nova.conf
+#
+
+# Use this endpoint to connect to Barbican, for example:
+# "http://localhost:9311/" (string value)
+#barbican_endpoint=<None>
+
+# Version of the Barbican API, for example: "v1" (string value)
+#barbican_api_version=<None>
+
+# Use this endpoint to connect to Keystone (string value)
+#auth_endpoint=http://localhost:5000/v3
+
+# Number of seconds to wait before retrying poll for key creation completion
+# (integer value)
+#retry_delay=1
+
+# Number of times to retry poll for key creation completion (integer value)
+#number_of_retries=60
+
+
+[cache]
+
+#
+# From nova.conf
+#
+enabled = true
+backend = oslo_cache.memcache_pool
+memcache_servers=10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211
+# Prefix for building the configuration dictionary for the cache region. This
+# should not need to be changed unless there is another dogpile.cache region
+# with the same configuration name. (string value)
+#config_prefix=cache.oslo
+
+# Default TTL, in seconds, for any cached item in the dogpile.cache region. This
+# applies to any cached method that doesn't have an explicit cache expiration
+# time defined for it. (integer value)
+#expiration_time=600
+
+# Dogpile.cache backend module. It is recommended that Memcache or Redis
+# (dogpile.cache.redis) be used in production deployments. For eventlet-based or
+# highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is
+# recommended. For low thread servers, dogpile.cache.memcached is recommended.
+# Test environments with a single instance of the server can use the
+# dogpile.cache.memory backend. (string value)
+#backend=dogpile.cache.null
+
+# Arguments supplied to the backend module. Specify this option once per
+# argument to be passed to the dogpile.cache backend. Example format:
+# "<argname>:<value>". (multi valued)
+#backend_argument =
+
+# Proxy classes to import that will affect the way the dogpile.cache backend
+# functions. See the dogpile.cache documentation on changing-backend-behavior.
+# (list value)
+#proxies =
+
+# Global toggle for caching. (boolean value)
+#enabled=false
+
+# Extra debugging from the cache backend (cache keys, get/set/delete/etc calls).
+# This is only really useful if you need to see the specific cache-backend
+# get/set/delete calls with the keys/values.  Typically this should be left set
+# to false. (boolean value)
+#debug_cache_backend=false
+
+# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
+# oslo_cache.memcache_pool backends only). (list value)
+#memcache_servers=localhost:11211
+
+# Number of seconds memcached server is considered dead before it is tried
+# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
+# (integer value)
+#memcache_dead_retry=300
+
+# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
+# oslo_cache.memcache_pool backends only). (integer value)
+#memcache_socket_timeout=3
+
+# Max total number of open connections to every memcached server.
+# (oslo_cache.memcache_pool backend only). (integer value)
+#memcache_pool_maxsize=10
+
+# Number of seconds a connection to memcached is held unused in the pool before
+# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
+#memcache_pool_unused_timeout=60
+
+# Number of seconds that an operation will wait to get a memcache client
+# connection. (integer value)
+#memcache_pool_connection_get_timeout=10
+
+
+[cells]
+#
+# Cells options allow you to use cells functionality in openstack
+# deployment.
+#
+# Note that the options in this group are only for cells v1 functionality, which
+# is considered experimental and not recommended for new deployments. Cells v1
+# is being replaced with cells v2, which starting in the 15.0.0 Ocata release is
+# required and all Nova deployments will be at least a cells v2 cell of one.
+#
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# Topic.
+#
+# This is the message queue topic that cells nodes listen on. It is
+# used when the cells service is started up to configure the queue,
+# and whenever an RPC call to the scheduler is made.
+#
+# Possible values:
+#
+# * cells: This is the recommended and the default value.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# Configurable RPC topics provide little value and can result in a wide variety
+# of errors. They should not be used.
+#topic=cells
+
+#
+# Enable cell v1 functionality.
+#
+# Note that cells v1 is considered experimental and not recommended for new
+# Nova deployments. Cells v1 is being replaced by cells v2 which starting in
+# the 15.0.0 Ocata release, all Nova deployments are at least a cells v2 cell
+# of one. Setting this option, or any other options in the [cells] group, is
+# not required for cells v2.
+#
+# When this functionality is enabled, it lets you to scale an OpenStack
+# Compute cloud in a more distributed fashion without having to use
+# complicated technologies like database and message queue clustering.
+# Cells are configured as a tree. The top-level cell should have a host
+# that runs a nova-api service, but no nova-compute services. Each
+# child cell should run all of the typical nova-* services in a regular
+# Compute cloud except for nova-api. You can think of cells as a normal
+# Compute deployment in that each cell has its own database server and
+# message queue broker.
+#
+# Related options:
+#
+# * name: A unique cell name must be given when this functionality
+#   is enabled.
+# * cell_type: Cell type should be defined for all cells.
+#  (boolean value)
+enable=False
+
+#
+# Name of the current cell.
+#
+# This value must be unique for each cell. Name of a cell is used as
+# its id, leaving this option unset or setting the same name for
+# two or more cells may cause unexpected behaviour.
+#
+# Related options:
+#
+# * enabled: This option is meaningful only when cells service
+#   is enabled
+#  (string value)
+#name=nova
+
+#
+# Cell capabilities.
+#
+# List of arbitrary key=value pairs defining capabilities of the
+# current cell to be sent to the parent cells. These capabilities
+# are intended to be used in cells scheduler filters/weighers.
+#
+# Possible values:
+#
+# * key=value pairs list for example;
+#   ``hypervisor=xenserver;kvm,os=linux;windows``
+#  (list value)
+#capabilities=hypervisor=xenserver;kvm,os=linux;windows
+
+#
+# Call timeout.
+#
+# Cell messaging module waits for response(s) to be put into the
+# eventlet queue. This option defines the seconds waited for
+# response from a call to a cell.
+#
+# Possible values:
+#
+# * An integer, corresponding to the interval time in seconds.
+#  (integer value)
+# Minimum value: 0
+#call_timeout=60
+
+#
+# Reserve percentage
+#
+# Percentage of cell capacity to hold in reserve, so the minimum
+# amount of free resource is considered to be;
+#
+#     min_free = total * (reserve_percent / 100.0)
+#
+# This option affects both memory and disk utilization.
+#
+# The primary purpose of this reserve is to ensure some space is
+# available for users who want to resize their instance to be larger.
+# Note that currently once the capacity expands into this reserve
+# space this option is ignored.
+#
+# Possible values:
+#
+# * An integer or float, corresponding to the percentage of cell capacity to
+#   be held in reserve.
+#  (floating point value)
+#reserve_percent=10.0
+
+#
+# Type of cell.
+#
+# When cells feature is enabled the hosts in the OpenStack Compute
+# cloud are partitioned into groups. Cells are configured as a tree.
+# The top-level cell's cell_type must be set to ``api``. All other
+# cells are defined as a ``compute cell`` by default.
+#
+# Related option:
+#
+# * quota_driver: Disable quota checking for the child cells.
+#   (nova.quota.NoopQuotaDriver)
+#  (string value)
+# Allowed values: api, compute
+#cell_type=compute
+
+#
+# Mute child interval.
+#
+# Number of seconds after which a lack of capability and capacity
+# update the child cell is to be treated as a mute cell. Then the
+# child cell will be weighed as recommend highly that it be skipped.
+#
+# Possible values:
+#
+# * An integer, corresponding to the interval time in seconds.
+#  (integer value)
+#mute_child_interval=300
+
+#
+# Bandwidth update interval.
+#
+# Seconds between bandwidth usage cache updates for cells.
+#
+# Possible values:
+#
+# * An integer, corresponding to the interval time in seconds.
+#  (integer value)
+#bandwidth_update_interval=600
+
+#
+# Instance update sync database limit.
+#
+# Number of instances to pull from the database at one time for
+# a sync. If there are more instances to update the results will
+# be paged through.
+#
+# Possible values:
+#
+# * An integer, corresponding to a number of instances.
+#  (integer value)
+#instance_update_sync_database_limit=100
+
+#
+# Mute weight multiplier.
+#
+# Multiplier used to weigh mute children. Mute children cells are
+# recommended to be skipped so their weight is multiplied by this
+# negative value.
+#
+# Possible values:
+#
+# * Negative numeric number
+#  (floating point value)
+#mute_weight_multiplier=-10000.0
+
+#
+# Ram weight multiplier.
+#
+# Multiplier used for weighing ram. Negative numbers indicate that
+# Compute should stack VMs on one host instead of spreading out new
+# VMs to more hosts in the cell.
+#
+# Possible values:
+#
+# * Numeric multiplier
+#  (floating point value)
+#ram_weight_multiplier=10.0
+
+#
+# Offset weight multiplier
+#
+# Multiplier used to weigh offset weigher. Cells with higher
+# weight_offsets in the DB will be preferred. The weight_offset
+# is a property of a cell stored in the database. It can be used
+# by a deployer to have scheduling decisions favor or disfavor
+# cells based on the setting.
+#
+# Possible values:
+#
+# * Numeric multiplier
+#  (floating point value)
+#offset_weight_multiplier=1.0
+
+#
+# Instance updated at threshold
+#
+# Number of seconds after an instance was updated or deleted to
+# continue to update cells. This option lets cells manager to only
+# attempt to sync instances that have been updated recently.
+# i.e., a threshold of 3600 means to only update instances that
+# have modified in the last hour.
+#
+# Possible values:
+#
+# * Threshold in seconds
+#
+# Related options:
+#
+# * This value is used with the ``instance_update_num_instances``
+#   value in a periodic task run.
+#  (integer value)
+#instance_updated_at_threshold=3600
+
+#
+# Instance update num instances
+#
+# On every run of the periodic task, nova cells manager will attempt to
+# sync instance_updated_at_threshold number of instances. When the
+# manager gets the list of instances, it shuffles them so that multiple
+# nova-cells services do not attempt to sync the same instances in
+# lockstep.
+#
+# Possible values:
+#
+# * Positive integer number
+#
+# Related options:
+#
+# * This value is used with the ``instance_updated_at_threshold``
+#   value in a periodic task run.
+#  (integer value)
+#instance_update_num_instances=1
+
+#
+# Maximum hop count
+#
+# When processing a targeted message, if the local cell is not the
+# target, a route is defined between neighbouring cells. And the
+# message is processed across the whole routing path. This option
+# defines the maximum hop counts until reaching the target.
+#
+# Possible values:
+#
+# * Positive integer value
+#  (integer value)
+#max_hop_count=10
+
+#
+# Cells scheduler.
+#
+# The class of the driver used by the cells scheduler. This should be
+# the full Python path to the class to be used. If nothing is specified
+# in this option, the CellsScheduler is used.
+#  (string value)
+#scheduler=nova.cells.scheduler.CellsScheduler
+
+#
+# RPC driver queue base.
+#
+# When sending a message to another cell by JSON-ifying the message
+# and making an RPC cast to 'process_message', a base queue is used.
+# This option defines the base queue name to be used when communicating
+# between cells. Various topics by message type will be appended to this.
+#
+# Possible values:
+#
+# * The base queue name to be used when communicating between cells.
+#  (string value)
+#rpc_driver_queue_base=cells.intercell
+
+#
+# Scheduler filter classes.
+#
+# Filter classes the cells scheduler should use. An entry of
+# "nova.cells.filters.all_filters" maps to all cells filters
+# included with nova. As of the Mitaka release the following
+# filter classes are available:
+#
+# Different cell filter: A scheduler hint of 'different_cell'
+# with a value of a full cell name may be specified to route
+# a build away from a particular cell.
+#
+# Image properties filter: Image metadata named
+# 'hypervisor_version_requires' with a version specification
+# may be specified to ensure the build goes to a cell which
+# has hypervisors of the required version. If either the version
+# requirement on the image or the hypervisor capability of the
+# cell is not present, this filter returns without filtering out
+# the cells.
+#
+# Target cell filter: A scheduler hint of 'target_cell' with a
+# value of a full cell name may be specified to route a build to
+# a particular cell. No error handling is done as there's no way
+# to know whether the full path is a valid.
+#
+# As an admin user, you can also add a filter that directs builds
+# to a particular cell.
+#
+#  (list value)
+#scheduler_filter_classes=nova.cells.filters.all_filters
+
+#
+# Scheduler weight classes.
+#
+# Weigher classes the cells scheduler should use. An entry of
+# "nova.cells.weights.all_weighers" maps to all cell weighers
+# included with nova. As of the Mitaka release the following
+# weight classes are available:
+#
+# mute_child: Downgrades the likelihood of child cells being
+# chosen for scheduling requests, which haven't sent capacity
+# or capability updates in a while. Options include
+# mute_weight_multiplier (multiplier for mute children; value
+# should be negative).
+#
+# ram_by_instance_type: Select cells with the most RAM capacity
+# for the instance type being requested. Because higher weights
+# win, Compute returns the number of available units for the
+# instance type requested. The ram_weight_multiplier option defaults
+# to 10.0 that adds to the weight by a factor of 10. Use a negative
+# number to stack VMs on one host instead of spreading out new VMs
+# to more hosts in the cell.
+#
+# weight_offset: Allows modifying the database to weight a particular
+# cell. The highest weight will be the first cell to be scheduled for
+# launching an instance. When the weight_offset of a cell is set to 0,
+# it is unlikely to be picked but it could be picked if other cells
+# have a lower weight, like if they're full. And when the weight_offset
+# is set to a very high value (for example, '999999999999999'), it is
+# likely to be picked if another cell do not have a higher weight.
+#  (list value)
+#scheduler_weight_classes=nova.cells.weights.all_weighers
+
+#
+# Scheduler retries.
+#
+# How many retries when no cells are available. Specifies how many
+# times the scheduler tries to launch a new instance when no cells
+# are available.
+#
+# Possible values:
+#
+# * Positive integer value
+#
+# Related options:
+#
+# * This value is used with the ``scheduler_retry_delay`` value
+#   while retrying to find a suitable cell.
+#  (integer value)
+#scheduler_retries=10
+
+#
+# Scheduler retry delay.
+#
+# Specifies the delay (in seconds) between scheduling retries when no
+# cell can be found to place the new instance on. When the instance
+# could not be scheduled to a cell after ``scheduler_retries`` in
+# combination with ``scheduler_retry_delay``, then the scheduling
+# of the instance failed.
+#
+# Possible values:
+#
+# * Time in seconds.
+#
+# Related options:
+#
+# * This value is used with the ``scheduler_retries`` value
+#   while retrying to find a suitable cell.
+#  (integer value)
+#scheduler_retry_delay=2
+
+#
+# DB check interval.
+#
+# Cell state manager updates cell status for all cells from the DB
+# only after this particular interval time is passed. Otherwise cached
+# status are used. If this value is 0 or negative all cell status are
+# updated from the DB whenever a state is needed.
+#
+# Possible values:
+#
+# * Interval time, in seconds.
+#
+#  (integer value)
+#db_check_interval=60
+
+#
+# Optional cells configuration.
+#
+# Configuration file from which to read cells configuration. If given,
+# overrides reading cells from the database.
+#
+# Cells store all inter-cell communication data, including user names
+# and passwords, in the database. Because the cells data is not updated
+# very frequently, use this option to specify a JSON file to store
+# cells data. With this configuration, the database is no longer
+# consulted when reloading the cells data. The file must have columns
+# present in the Cell model (excluding common database fields and the
+# id column). You must specify the queue connection information through
+# a transport_url field, instead of username, password, and so on.
+#
+# The transport_url has the following form:
+# rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
+#
+# Possible values:
+#
+# The scheme can be either qpid or rabbit, the following sample shows
+# this optional configuration:
+#
+#     {
+#         "parent": {
+#             "name": "parent",
+#             "api_url": "http://api.example.com:8774",
+#             "transport_url": "rabbit://rabbit.example.com",
+#             "weight_offset": 0.0,
+#             "weight_scale": 1.0,
+#             "is_parent": true
+#         },
+#         "cell1": {
+#             "name": "cell1",
+#             "api_url": "http://api.example.com:8774",
+#             "transport_url": "rabbit://rabbit1.example.com",
+#             "weight_offset": 0.0,
+#             "weight_scale": 1.0,
+#             "is_parent": false
+#         },
+#         "cell2": {
+#             "name": "cell2",
+#             "api_url": "http://api.example.com:8774",
+#             "transport_url": "rabbit://rabbit2.example.com",
+#             "weight_offset": 0.0,
+#             "weight_scale": 1.0,
+#             "is_parent": false
+#         }
+#     }
+#
+#  (string value)
+#cells_config=<None>
+
+
+[cinder]
+
+#
+# From nova.conf
+#
+
+#
+# Info to match when looking for cinder in the service catalog.
+#
+# Possible values:
+#
+# * Format is separated values of the form:
+#   <service_type>:<service_name>:<endpoint_type>
+#
+# Note: Nova does not support the Cinder v1 API since the Nova 15.0.0 Ocata
+# release.
+#
+# Related options:
+#
+# * endpoint_template - Setting this option will override catalog_info
+#  (string value)
+#catalog_info=volumev2:cinderv2:publicURL
+catalog_info=volumev2:cinderv2:internalURL
+
+#
+# If this option is set then it will override service catalog lookup with
+# this template for cinder endpoint
+#
+# Possible values:
+#
+# * URL for cinder endpoint API
+#   e.g. http://localhost:8776/v2/%(project_id)s
+#
+# Note: Nova does not support the Cinder v1 API since the Nova 15.0.0 Ocata
+# release.
+#
+# Related options:
+#
+# * catalog_info - If endpoint_template is not set, catalog_info will be used.
+#  (string value)
+#endpoint_template=<None>
+
+#
+# Region name of this node. This is used when picking the URL in the service
+# catalog.
+#
+# Possible values:
+#
+# * Any string representing region name
+#  (string value)
+#os_region_name=<None>
+os_region_name = RegionOne
+
+#
+# Number of times cinderclient should retry on any failed http call.
+# 0 means connection is attempted only once. Setting it to any positive integer
+# means that on failure connection is retried that many times e.g. setting it
+# to 3 means total attempts to connect will be 4.
+#
+# Possible values:
+#
+# * Any integer value. 0 means connection is attempted only once
+#  (integer value)
+# Minimum value: 0
+#http_retries=3
+
+#
+# Allow attach between instance and volume in different availability zones.
+#
+# If False, volumes attached to an instance must be in the same availability
+# zone in Cinder as the instance availability zone in Nova.
+# This also means care should be taken when booting an instance from a volume
+# where source is not "volume" because Nova will attempt to create a volume
+# using
+# the same availability zone as what is assigned to the instance.
+# If that AZ is not in Cinder (or allow_availability_zone_fallback=False in
+# cinder.conf), the volume create request will fail and the instance will fail
+# the build request.
+# By default there is no availability zone restriction on volume attach.
+#  (boolean value)
+#cross_az_attach=true
+
+[cloudpipe]
+
+#
+# From nova.conf
+#
+
+#
+# Image ID used when starting up a cloudpipe VPN client.
+#
+# An empty instance is created and configured with OpenVPN using
+# boot_script_template. This instance would be snapshotted and stored
+# in glance. ID of the stored image is used in 'vpn_image_id' to
+# create cloudpipe VPN client.
+#
+# Possible values:
+#
+# * Any valid ID of a VPN image
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vpn_image_id
+#vpn_image_id=0
+
+#
+# Flavor for VPN instances.
+#
+# Possible values:
+#
+# * Any valid flavor name
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vpn_flavor
+#vpn_flavor=m1.tiny
+
+#
+# Template for cloudpipe instance boot script.
+#
+# Possible values:
+#
+# * Any valid path to a cloudpipe instance boot script template
+#
+# Related options:
+#
+# The following options are required to configure cloudpipe-managed
+# OpenVPN server.
+#
+# * dmz_net
+# * dmz_mask
+# * cnt_vpn_clients
+#  (string value)
+# Deprecated group/name - [DEFAULT]/boot_script_template
+#boot_script_template=$pybasedir/nova/cloudpipe/bootscript.template
+
+#
+# Network to push into OpenVPN config.
+#
+# Note: Above mentioned OpenVPN config can be found at
+# /etc/openvpn/server.conf.
+#
+# Possible values:
+#
+# * Any valid IPv4/IPV6 address
+#
+# Related options:
+#
+# * boot_script_template - dmz_net is pushed into bootscript.template
+#   to configure cloudpipe-managed OpenVPN server
+#  (IP address value)
+# Deprecated group/name - [DEFAULT]/dmz_net
+#dmz_net=10.0.0.0
+
+#
+# Netmask to push into OpenVPN config.
+#
+# Possible values:
+#
+# * Any valid IPv4/IPV6 netmask
+#
+# Related options:
+#
+# * dmz_net - dmz_net and dmz_mask is pushed into bootscript.template
+#   to configure cloudpipe-managed OpenVPN server
+# * boot_script_template
+#  (IP address value)
+# Deprecated group/name - [DEFAULT]/dmz_mask
+#dmz_mask=255.255.255.0
+
+#
+# Suffix to add to project name for VPN key and secgroups
+#
+# Possible values:
+#
+# * Any string value representing the VPN key suffix
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vpn_key_suffix
+#vpn_key_suffix=-vpn
+
+
+[conductor]
+#
+# Options under this group are used to define Conductor's communication,
+# which manager should be act as a proxy between computes and database,
+# and finally, how many worker processes will be used.
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# Topic exchange name on which conductor nodes listen.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#topic=conductor
+
+#
+# Number of workers for OpenStack Conductor service. The default will be the
+# number of CPUs available.
+#  (integer value)
+#workers=<None>
+workers = 8
+
+[console]
+#
+# Options under this group allow to tune the configuration of the console proxy
+# service.
+#
+# Note: in configuration of every compute is a ``console_host`` option,
+# which allows to select the console proxy service to connect to.
+
+#
+# From nova.conf
+#
+
+#
+# Adds list of allowed origins to the console websocket proxy to allow
+# connections from other origin hostnames.
+# Websocket proxy matches the host header with the origin header to
+# prevent cross-site requests. This list specifies if any there are
+# values other than host are allowed in the origin header.
+#
+# Possible values:
+#
+# * A list where each element is an allowed origin hostnames, else an empty list
+#  (list value)
+# Deprecated group/name - [DEFAULT]/console_allowed_origins
+#allowed_origins =
+
+
+[consoleauth]
+
+#
+# From nova.conf
+#
+
+#
+# The lifetime of a console auth token.
+#
+# A console auth token is used in authorizing console access for a user.
+# Once the auth token time to live count has elapsed, the token is
+# considered expired.  Expired tokens are then deleted.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/console_token_ttl
+#token_ttl=600
 
 
 [cors]
@@ -341,24 +4271,29 @@
 # Indicate whether this resource may be shared with the domain received in the
 # requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
 # slash. Example: https://horizon.example.com (list value)
-#allowed_origin = <None>
+#allowed_origin=<None>
+
 
 # Indicate that the actual request can include user credentials (boolean value)
-#allow_credentials = true
+#allow_credentials=true
+
 
 # Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
 # Headers. (list value)
-#expose_headers =
+#expose_headers=X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Service-Token
+
 
 # Maximum cache age of CORS preflight requests. (integer value)
-#max_age = 3600
+#max_age=3600
+
 
 # Indicate which methods can be used during the actual request. (list value)
-#allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH
+#allow_methods=GET,PUT,POST,DELETE,PATCH
+
 
 # Indicate which header field names may be used during the actual request. (list
 # value)
-#allow_headers =
+#allow_headers=X-Auth-Token,X-Openstack-Request-Id,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id
 
 
 [cors.subdomain]
@@ -370,24 +4305,106 @@
 # Indicate whether this resource may be shared with the domain received in the
 # requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
 # slash. Example: https://horizon.example.com (list value)
-#allowed_origin = <None>
+#allowed_origin=<None>
 
 # Indicate that the actual request can include user credentials (boolean value)
-#allow_credentials = true
+#allow_credentials=true
 
 # Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
 # Headers. (list value)
-#expose_headers =
+#expose_headers=X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Service-Token
 
 # Maximum cache age of CORS preflight requests. (integer value)
-#max_age = 3600
+#max_age=3600
 
 # Indicate which methods can be used during the actual request. (list value)
-#allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH
+#allow_methods=GET,PUT,POST,DELETE,PATCH
 
 # Indicate which header field names may be used during the actual request. (list
 # value)
-#allow_headers =
+#allow_headers=X-Auth-Token,X-Openstack-Request-Id,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id
+
+
+[crypto]
+
+#
+# From nova.conf
+#
+
+#
+# Filename of root CA (Certificate Authority). This is a container format
+# and includes root certificates.
+#
+# Possible values:
+#
+# * Any file name containing root CA, cacert.pem is default
+#
+# Related options:
+#
+# * ca_path
+#  (string value)
+# Deprecated group/name - [DEFAULT]/ca_file
+#ca_file=cacert.pem
+
+#
+# Filename of a private key.
+#
+# Related options:
+#
+# * keys_path
+#  (string value)
+# Deprecated group/name - [DEFAULT]/key_file
+#key_file=private/cakey.pem
+
+#
+# Filename of root Certificate Revocation List (CRL). This is a list of
+# certificates that have been revoked, and therefore, entities presenting
+# those (revoked) certificates should no longer be trusted.
+#
+# Related options:
+#
+# * ca_path
+#  (string value)
+# Deprecated group/name - [DEFAULT]/crl_file
+#crl_file=crl.pem
+
+#
+# Directory path where keys are located.
+#
+# Related options:
+#
+# * key_file
+#  (string value)
+# Deprecated group/name - [DEFAULT]/keys_path
+#keys_path=$state_path/keys
+
+#
+# Directory path where root CA is located.
+#
+# Related options:
+#
+# * ca_file
+#  (string value)
+# Deprecated group/name - [DEFAULT]/ca_path
+#ca_path=$state_path/CA
+
+# Option to enable/disable use of CA for each project. (boolean value)
+# Deprecated group/name - [DEFAULT]/use_project_ca
+#use_project_ca=false
+
+#
+# Subject for certificate for users, %s for
+# project, user, timestamp
+#  (string value)
+# Deprecated group/name - [DEFAULT]/user_cert_subject
+#user_cert_subject=/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s
+
+#
+# Subject for certificate for projects, %s for
+# project, timestamp
+#  (string value)
+# Deprecated group/name - [DEFAULT]/project_cert_subject
+#project_cert_subject=/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s
 
 
 [database]
@@ -402,99 +4419,110 @@
 # Its value may be silently ignored in the future.
 # Reason: Should use config option connection or slave_connection to connect the
 # database.
-#sqlite_db = oslo.sqlite
+#sqlite_db=oslo.sqlite
+idle_timeout = 180
+min_pool_size = 100
+max_pool_size = 700
+max_overflow = 100
+retry_interval = 5
+max_retries = -1
+db_max_retries = 3
+db_retry_interval = 1
+connection_debug = 10
+pool_timeout = 120
+connection = mysql+pymysql://nova:opnfv_secret@10.167.4.50/nova?charset=utf8
 
 # If True, SQLite uses synchronous mode. (boolean value)
 # Deprecated group/name - [DEFAULT]/sqlite_synchronous
-#sqlite_synchronous = true
+#sqlite_synchronous=true
 
 # The back end to use for the database. (string value)
 # Deprecated group/name - [DEFAULT]/db_backend
-#backend = sqlalchemy
+#backend=sqlalchemy
 
 # The SQLAlchemy connection string to use to connect to the database. (string
 # value)
 # Deprecated group/name - [DEFAULT]/sql_connection
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
-#connection = <None>
+#connection=<None>
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
-#slave_connection = <None>
+#slave_connection=<None>
 
 # The SQL mode to be used for MySQL sessions. This option, including the
 # default, overrides any server-set SQL mode. To use whatever SQL mode is set by
 # the server configuration, set this to no value. Example: mysql_sql_mode=
 # (string value)
-#mysql_sql_mode = TRADITIONAL
+#mysql_sql_mode=TRADITIONAL
 
 # Timeout before idle SQL connections are reaped. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_idle_timeout
 # Deprecated group/name - [DATABASE]/sql_idle_timeout
 # Deprecated group/name - [sql]/idle_timeout
-#idle_timeout = 3600
+#idle_timeout=3600
 
 # Minimum number of SQL connections to keep open in a pool. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
 # Deprecated group/name - [DATABASE]/sql_min_pool_size
-#min_pool_size = 1
+#min_pool_size=1
 
 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0
 # indicates no limit. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
-#max_pool_size = 5
+#max_pool_size=5
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
-#max_retries = 10
+#max_retries=10
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
 # Deprecated group/name - [DATABASE]/reconnect_interval
-#retry_interval = 10
+#retry_interval=10
 
 # If set, use this value for max_overflow with SQLAlchemy. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
-#max_overflow = 50
+#max_overflow=50
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
 # Minimum value: 0
 # Maximum value: 100
 # Deprecated group/name - [DEFAULT]/sql_connection_debug
-#connection_debug = 0
+#connection_debug=0
 
 # Add Python stack traces to SQL as comment strings. (boolean value)
 # Deprecated group/name - [DEFAULT]/sql_connection_trace
-#connection_trace = false
+#connection_trace=false
 
 # If set, use this value for pool_timeout with SQLAlchemy. (integer value)
 # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
-#pool_timeout = <None>
+#pool_timeout=<None>
 
 # Enable the experimental use of database reconnect on connection lost. (boolean
 # value)
-#use_db_reconnect = false
+#use_db_reconnect=false
 
 # Seconds between retries of a database transaction. (integer value)
-#db_retry_interval = 1
+#db_retry_interval=1
 
 # If True, increases the interval between retries of a database operation up to
 # db_max_retry_interval. (boolean value)
-#db_inc_retry_interval = true
+#db_inc_retry_interval=true
 
 # If db_inc_retry_interval is set, the maximum seconds between retries of a
 # database operation. (integer value)
-#db_max_retry_interval = 10
+#db_max_retry_interval=10
 
 # Maximum retries in case of connection error or deadlock error before error is
 # raised. Set to -1 to specify an infinite retry count. (integer value)
-#db_max_retries = 20
+#db_max_retries=20
 
 #
 # From oslo.db.concurrency
@@ -503,7 +4531,569 @@
 # Enable the experimental use of thread pooling for all DB API calls (boolean
 # value)
 # Deprecated group/name - [DEFAULT]/dbapi_use_tpool
-#use_tpool = false
+#use_tpool=false
+
+
+[ephemeral_storage_encryption]
+
+#
+# From nova.conf
+#
+
+#
+# Enables/disables LVM ephemeral storage encryption.
+#  (boolean value)
+#enabled=false
+
+#
+# Cipher-mode string to be used.
+#
+# The cipher and mode to be used to encrypt ephemeral storage. The set of
+# cipher-mode combinations available depends on kernel support.
+#
+# Possible values:
+#
+# * Any crypto option listed in ``/proc/crypto``.
+#  (string value)
+#cipher=aes-xts-plain64
+
+#
+# Encryption key length in bits.
+#
+# The bit length of the encryption key to be used to encrypt ephemeral storage.
+# In XTS mode only half of the bits are used for encryption key.
+#  (integer value)
+# Minimum value: 1
+#key_size=512
+
+
+[filter_scheduler]
+
+#
+# From nova.conf
+#
+
+#
+# Size of subset of best hosts selected by scheduler.
+#
+# New instances will be scheduled on a host chosen randomly from a subset of the
+# N best hosts, where N is the value set by this option.
+#
+# Setting this to a value greater than 1 will reduce the chance that multiple
+# scheduler processes handling similar requests will select the same host,
+# creating a potential race condition. By selecting a host randomly from the N
+# hosts that best fit the request, the chance of a conflict is reduced. However,
+# the higher you set this value, the less optimal the chosen host may be for a
+# given request.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * An integer, where the integer corresponds to the size of a host subset. Any
+#   integer is valid, although any value less than 1 will be treated as 1
+#  (integer value)
+# Minimum value: 1
+# Deprecated group/name - [DEFAULT]/scheduler_host_subset_size
+#host_subset_size=1
+host_subset_size=30
+
+#
+# The number of instances that can be actively performing IO on a host.
+#
+# Instances performing IO includes those in the following states: build, resize,
+# snapshot, migrate, rescue, unshelve.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'io_ops_filter' filter is enabled.
+#
+# Possible values:
+#
+# * An integer, where the integer corresponds to the max number of instances
+#   that can be actively performing IO on any given host.
+#  (integer value)
+# Deprecated group/name - [DEFAULT]/max_io_ops_per_host
+#max_io_ops_per_host=8
+max_io_ops_per_host=8
+
+#
+# Maximum number of instances that be active on a host.
+#
+# If you need to limit the number of instances on any given host, set this
+# option
+# to the maximum number of instances you want to allow. The num_instances_filter
+# will reject any host that has at least as many instances as this option's
+# value.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'num_instances_filter' filter is enabled.
+#
+# Possible values:
+#
+# * An integer, where the integer corresponds to the max instances that can be
+#   scheduled on a host.
+#  (integer value)
+# Deprecated group/name - [DEFAULT]/max_instances_per_host
+#max_instances_per_host=50
+max_instances_per_host=50
+
+#
+# Enable querying of individual hosts for instance information.
+#
+# The scheduler may need information about the instances on a host in order to
+# evaluate its filters and weighers. The most common need for this information
+# is
+# for the (anti-)affinity filters, which need to choose a host based on the
+# instances already running on a host.
+#
+# If the configured filters and weighers do not need this information, disabling
+# this option will improve performance. It may also be disabled when the
+# tracking
+# overhead proves too heavy, although this will cause classes requiring host
+# usage data to query the database on each request instead.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/scheduler_tracks_instance_changes
+#track_instance_changes=true
+
+#
+# Filters that the scheduler can use.
+#
+# An unordered list of the filter classes the nova scheduler may apply.  Only
+# the
+# filters specified in the 'scheduler_enabled_filters' option will be used, but
+# any filter appearing in that option must also be included in this list.
+#
+# By default, this is set to all filters that are included with nova.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more strings, where each string corresponds to the name of
+#   a filter that may be used for selecting a host
+#
+# Related options:
+#
+# * scheduler_enabled_filters
+#  (multi valued)
+# Deprecated group/name - [DEFAULT]/scheduler_available_filters
+#available_filters=nova.scheduler.filters.all_filters
+available_filters=nova.scheduler.filters.all_filters
+available_filters=nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
+
+
+#
+# Filters that the scheduler will use.
+#
+# An ordered list of filter class names that will be used for filtering
+# hosts. Ignore the word 'default' in the name of this option: these filters
+# will
+# *always* be applied, and they will be applied in the order they are listed so
+# place your most restrictive filters first to make the filtering process more
+# efficient.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more strings, where each string corresponds to the name of
+#   a filter to be used for selecting a host
+#
+# Related options:
+#
+# * All of the filters in this option *must* be present in the
+#   'scheduler_available_filters' option, or a SchedulerHostFilterNotFound
+#   exception will be raised.
+#  (list value)
+# Deprecated group/name - [DEFAULT]/scheduler_default_filters
+#enabled_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
+enabled_filters=DifferentHostFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
+
+#
+# Filters used for filtering baremetal hosts.
+#
+# Filters are applied in order, so place your most restrictive filters first to
+# make the filtering process more efficient.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more strings, where each string corresponds to the name of
+#   a filter to be used for selecting a baremetal host
+#
+# Related options:
+#
+# * If the 'scheduler_use_baremetal_filters' option is False, this option has
+#   no effect.
+#  (list value)
+# Deprecated group/name - [DEFAULT]/baremetal_scheduler_default_filters
+#baremetal_enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
+
+#
+# Enable baremetal filters.
+#
+# Set this to True to tell the nova scheduler that it should use the filters
+# specified in the 'baremetal_scheduler_enabled_filters' option. If you are not
+# scheduling baremetal nodes, leave this at the default setting of False.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Related options:
+#
+# * If this option is set to True, then the filters specified in the
+#   'baremetal_scheduler_enabled_filters' are used instead of the filters
+#   specified in 'scheduler_enabled_filters'.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/scheduler_use_baremetal_filters
+#use_baremetal_filters=false
+use_baremetal_filters=false
+
+#
+# Weighers that the scheduler will use.
+#
+# Only hosts which pass the filters are weighed. The weight for any host starts
+# at 0, and the weighers order these hosts by adding to or subtracting from the
+# weight assigned by the previous weigher. Weights may become negative. An
+# instance will be scheduled to one of the N most-weighted hosts, where N is
+# 'scheduler_host_subset_size'.
+#
+# By default, this is set to all weighers that are included with Nova.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more strings, where each string corresponds to the name of
+#   a weigher that will be used for selecting a host
+#  (list value)
+# Deprecated group/name - [DEFAULT]/scheduler_weight_classes
+#weight_classes=nova.scheduler.weights.all_weighers
+
+#
+# Ram weight multipler ratio.
+#
+# This option determines how hosts with more or less available RAM are weighed.
+# A
+# positive value will result in the scheduler preferring hosts with more
+# available RAM, and a negative number will result in the scheduler preferring
+# hosts with less available RAM. Another way to look at it is that positive
+# values for this option will tend to spread instances across many hosts, while
+# negative values will tend to fill up (stack) hosts as much as possible before
+# scheduling to a less-used host. The absolute value, whether positive or
+# negative, controls how strong the RAM weigher is relative to other weighers.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'ram' weigher is enabled.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the multipler
+#   ratio for this weigher.
+#  (floating point value)
+# Deprecated group/name - [DEFAULT]/ram_weight_multiplier
+#ram_weight_multiplier=1.0
+
+#
+# Disk weight multipler ratio.
+#
+# Multiplier used for weighing free disk space. Negative numbers mean to
+# stack vs spread.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'ram' weigher is enabled.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the multipler
+#   ratio for this weigher.
+#  (floating point value)
+# Deprecated group/name - [DEFAULT]/disk_weight_multiplier
+#disk_weight_multiplier=1.0
+
+#
+# IO operations weight multipler ratio.
+#
+# This option determines how hosts with differing workloads are weighed.
+# Negative
+# values, such as the default, will result in the scheduler preferring hosts
+# with
+# lighter workloads whereas positive values will prefer hosts with heavier
+# workloads. Another way to look at it is that positive values for this option
+# will tend to schedule instances onto hosts that are already busy, while
+# negative values will tend to distribute the workload across more hosts. The
+# absolute value, whether positive or negative, controls how strong the io_ops
+# weigher is relative to other weighers.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'io_ops' weigher is enabled.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the multipler
+#   ratio for this weigher.
+#  (floating point value)
+# Deprecated group/name - [DEFAULT]/io_ops_weight_multiplier
+#io_ops_weight_multiplier=-1.0
+
+#
+# Multiplier used for weighing hosts for group soft-affinity.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to weight multiplier
+#   for hosts with group soft affinity. Only a positive value are meaningful, as
+#   negative values would make this behave as a soft anti-affinity weigher.
+#  (floating point value)
+# Deprecated group/name - [DEFAULT]/soft_affinity_weight_multiplier
+#soft_affinity_weight_multiplier=1.0
+
+#
+# Multiplier used for weighing hosts for group soft-anti-affinity.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to weight multiplier
+#   for hosts with group soft anti-affinity. Only a positive value are
+#   meaningful, as negative values would make this behave as a soft affinity
+#   weigher.
+#  (floating point value)
+# Deprecated group/name - [DEFAULT]/soft_anti_affinity_weight_multiplier
+#soft_anti_affinity_weight_multiplier=1.0
+
+#
+# List of UUIDs for images that can only be run on certain hosts.
+#
+# If there is a need to restrict some images to only run on certain designated
+# hosts, list those image UUIDs here.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'IsolatedHostsFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A list of UUID strings, where each string corresponds to the UUID of an
+#   image
+#
+# Related options:
+#
+# * scheduler/isolated_hosts
+# * scheduler/restrict_isolated_hosts_to_isolated_images
+#  (list value)
+# Deprecated group/name - [DEFAULT]/isolated_images
+#isolated_images =
+
+#
+# List of hosts that can only run certain images.
+#
+# If there is a need to restrict some images to only run on certain designated
+# hosts, list those host names here.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'IsolatedHostsFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A list of strings, where each string corresponds to the name of a host
+#
+# Related options:
+#
+# * scheduler/isolated_images
+# * scheduler/restrict_isolated_hosts_to_isolated_images
+#  (list value)
+# Deprecated group/name - [DEFAULT]/isolated_hosts
+#isolated_hosts =
+
+#
+# Prevent non-isolated images from being built on isolated hosts.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. Even
+# then, this option doesn't affect the behavior of requests for isolated images,
+# which will *always* be restricted to isolated hosts.
+#
+# Related options:
+#
+# * scheduler/isolated_images
+# * scheduler/isolated_hosts
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/restrict_isolated_hosts_to_isolated_images
+#restrict_isolated_hosts_to_isolated_images=true
+
+#
+# Image property namespace for use in the host aggregate.
+#
+# Images and hosts can be configured so that certain images can only be
+# scheduled
+# to hosts in a particular aggregate. This is done with metadata values set on
+# the host aggregate that are identified by beginning with the value of this
+# option. If the host is part of an aggregate with such a metadata key, the
+# image
+# in the request spec must have the value of that metadata in its properties in
+# order for the scheduler to consider the host as acceptable.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'aggregate_image_properties_isolation' filter
+# is
+# enabled.
+#
+# Possible values:
+#
+# * A string, where the string corresponds to an image property namespace
+#
+# Related options:
+#
+# * aggregate_image_properties_isolation_separator
+#  (string value)
+# Deprecated group/name - [DEFAULT]/aggregate_image_properties_isolation_namespace
+#aggregate_image_properties_isolation_namespace=<None>
+
+#
+# Separator character(s) for image property namespace and name.
+#
+# When using the aggregate_image_properties_isolation filter, the relevant
+# metadata keys are prefixed with the namespace defined in the
+# aggregate_image_properties_isolation_namespace configuration option plus a
+# separator. This option defines the separator to be used.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'aggregate_image_properties_isolation' filter
+# is enabled.
+#
+# Possible values:
+#
+# * A string, where the string corresponds to an image property namespace
+#   separator character
+#
+# Related options:
+#
+# * aggregate_image_properties_isolation_namespace
+#  (string value)
+# Deprecated group/name - [DEFAULT]/aggregate_image_properties_isolation_separator
+#aggregate_image_properties_isolation_separator=.
+
+
+[glance]
+# Configuration options for the Image service
+
+#
+# From nova.conf
+#
+
+#
+# List of glance api servers endpoints available to nova.
+#
+# https is used for ssl-based glance api servers.
+#
+# Possible values:
+#
+# * A list of any fully qualified url of the form
+# "scheme://hostname:port[/path]"
+#   (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image").
+#  (list value)
+#api_servers=<None>
+api_servers = 10.167.4.10:9292
+
+#
+# Enable insecure SSL (https) requests to glance.
+#
+# This setting can be used to turn off verification of the glance server
+# certificate against the certificate authorities.
+#  (boolean value)
+#api_insecure=false
+
+#
+# Enable glance operation retries.
+#
+# Specifies the number of retries when uploading / downloading
+# an image to / from glance. 0 means no retries.
+#  (integer value)
+# Minimum value: 0
+#num_retries=0
+
+#
+# List of url schemes that can be directly accessed.
+#
+# This option specifies a list of url schemes that can be downloaded
+# directly via the direct_url. This direct_URL can be fetched from
+# Image metadata which can be used by nova to get the
+# image more efficiently. nova-compute could benefit from this by
+# invoking a copy when it has access to the same file system as glance.
+#
+# Possible values:
+#
+# * [file], Empty list (default)
+#  (list value)
+#allowed_direct_url_schemes =
+
+#
+# Enable image signature verification.
+#
+# nova uses the image signature metadata from glance and verifies the signature
+# of a signed image while downloading that image. If the image signature cannot
+# be verified or if the image signature metadata is either incomplete or
+# unavailable, then nova will not boot the image and instead will place the
+# instance into an error state. This provides end users with stronger assurances
+# of the integrity of the image data they are using to create servers.
+#
+# Related options:
+#
+# * The options in the `key_manager` group, as the key_manager is used
+#   for the signature validation.
+#  (boolean value)
+#verify_glance_signatures=false
+
+# Enable or disable debug logging with glanceclient. (boolean value)
+#debug=false
+
+
+[guestfs]
+#
+# libguestfs is a set of tools for accessing and modifying virtual
+# machine (VM) disk images. You can use this for viewing and editing
+# files inside guests, scripting changes to VMs, monitoring disk
+# used/free statistics, creating guests, P2V, V2V, performing backups,
+# cloning VMs, building VMs, formatting disks and resizing disks.
+
+#
+# From nova.conf
+#
+
+#
+# Enable/disables guestfs logging.
+#
+# This configures guestfs to debug messages and push them to Openstack
+# logging system. When set to True, it traces libguestfs API calls and
+# enable verbose debug messages. In order to use the above feature,
+# "libguestfs" package must be installed.
+#
+# Related options:
+# Since libguestfs access and modifies VM's managed by libvirt, below options
+# should be set to give access to those VM's.
+#     * libvirt.inject_key
+#     * libvirt.inject_partition
+#     * libvirt.inject_password
+#  (boolean value)
+#debug=false
 
 
 [healthcheck]
@@ -515,10 +5105,10 @@
 # DEPRECATED: The path to respond to healtcheck requests on. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
-#path = /healthcheck
+#path=/healthcheck
 
 # Show more detailed information as part of the response (boolean value)
-#detailed = false
+#detailed=false
 
 # Additional backends that can perform health checks and report that information
 # back as part of a request. (list value)
@@ -526,7 +5116,7 @@
 
 # Check the presence of a file to determine if an application is running on a
 # port. Used by DisableByFileHealthcheck plugin. (string value)
-#disable_by_file_path = <None>
+#disable_by_file_path=<None>
 
 # Check the presence of a file based on a port to determine if an application is
 # running on a port. Expects a "port:path" list of strings. Used by
@@ -534,12 +5124,550 @@
 #disable_by_file_paths =
 
 
+[hyperv]
+#
+# The hyperv feature allows you to configure the Hyper-V hypervisor
+# driver to be used within an OpenStack deployment.
+
+#
+# From nova.conf
+#
+
+#
+# Dynamic memory ratio
+#
+# Enables dynamic memory allocation (ballooning) when set to a value
+# greater than 1. The value expresses the ratio between the total RAM
+# assigned to an instance and its startup RAM amount. For example a
+# ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of
+# RAM allocated at startup.
+#
+# Possible values:
+#
+# * 1.0: Disables dynamic memory allocation (Default).
+# * Float values greater than 1.0: Enables allocation of total implied
+#   RAM divided by this value for startup.
+#  (floating point value)
+#dynamic_memory_ratio=1.0
+
+#
+# Enable instance metrics collection
+#
+# Enables metrics collections for an instance by using Hyper-V's
+# metric APIs. Collected data can by retrieved by other apps and
+# services, e.g.: Ceilometer.
+#  (boolean value)
+#enable_instance_metrics_collection=false
+
+#
+# Instances path share
+#
+# The name of a Windows share mapped to the "instances_path" dir
+# and used by the resize feature to copy files to the target host.
+# If left blank, an administrative share (hidden network share) will
+# be used, looking for the same "instances_path" used locally.
+#
+# Possible values:
+#
+# * "": An administrative share will be used (Default).
+# * Name of a Windows share.
+#
+# Related options:
+#
+# * "instances_path": The directory which will be used if this option
+#   here is left blank.
+#  (string value)
+#instances_path_share =
+
+#
+# Limit CPU features
+#
+# This flag is needed to support live migration to hosts with
+# different CPU features and checked during instance creation
+# in order to limit the CPU features used by the instance.
+#  (boolean value)
+#limit_cpu_features=false
+
+#
+# Mounted disk query retry count
+#
+# The number of times to retry checking for a mounted disk.
+# The query runs until the device can be found or the retry
+# count is reached.
+#
+# Possible values:
+#
+# * Positive integer values. Values greater than 1 is recommended
+#   (Default: 10).
+#
+# Related options:
+#
+# * Time interval between disk mount retries is declared with
+#   "mounted_disk_query_retry_interval" option.
+#  (integer value)
+# Minimum value: 0
+#mounted_disk_query_retry_count=10
+
+#
+# Mounted disk query retry interval
+#
+# Interval between checks for a mounted disk, in seconds.
+#
+# Possible values:
+#
+# * Time in seconds (Default: 5).
+#
+# Related options:
+#
+# * This option is meaningful when the mounted_disk_query_retry_count
+#   is greater than 1.
+# * The retry loop runs with mounted_disk_query_retry_count and
+#   mounted_disk_query_retry_interval configuration options.
+#  (integer value)
+# Minimum value: 0
+#mounted_disk_query_retry_interval=5
+
+#
+# Power state check timeframe
+#
+# The timeframe to be checked for instance power state changes.
+# This option is used to fetch the state of the instance from Hyper-V
+# through the WMI interface, within the specified timeframe.
+#
+# Possible values:
+#
+# * Timeframe in seconds (Default: 60).
+#  (integer value)
+# Minimum value: 0
+#power_state_check_timeframe=60
+
+#
+# Power state event polling interval
+#
+# Instance power state change event polling frequency. Sets the
+# listener interval for power state events to the given value.
+# This option enhances the internal lifecycle notifications of
+# instances that reboot themselves. It is unlikely that an operator
+# has to change this value.
+#
+# Possible values:
+#
+# * Time in seconds (Default: 2).
+#  (integer value)
+# Minimum value: 0
+#power_state_event_polling_interval=2
+
+#
+# qemu-img command
+#
+# qemu-img is required for some of the image related operations
+# like converting between different image types. You can get it
+# from here: (http://qemu.weilnetz.de/) or you can install the
+# Cloudbase OpenStack Hyper-V Compute Driver
+# (https://cloudbase.it/openstack-hyperv-driver/) which automatically
+# sets the proper path for this config option. You can either give the
+# full path of qemu-img.exe or set its path in the PATH environment
+# variable and leave this option to the default value.
+#
+# Possible values:
+#
+# * Name of the qemu-img executable, in case it is in the same
+#   directory as the nova-compute service or its path is in the
+#   PATH environment variable (Default).
+# * Path of qemu-img command (DRIVELETTER:\PATH\TO\QEMU-IMG\COMMAND).
+#
+# Related options:
+#
+# * If the config_drive_cdrom option is False, qemu-img will be used to
+#   convert the ISO to a VHD, otherwise the configuration drive will
+#   remain an ISO. To use configuration drive with Hyper-V, you must
+#   set the mkisofs_cmd value to the full path to an mkisofs.exe
+#   installation.
+#  (string value)
+#qemu_img_cmd=qemu-img.exe
+
+#
+# External virtual switch name
+#
+# The Hyper-V Virtual Switch is a software-based layer-2 Ethernet
+# network switch that is available with the installation of the
+# Hyper-V server role. The switch includes programmatically managed
+# and extensible capabilities to connect virtual machines to both
+# virtual networks and the physical network. In addition, Hyper-V
+# Virtual Switch provides policy enforcement for security, isolation,
+# and service levels. The vSwitch represented by this config option
+# must be an external one (not internal or private).
+#
+# Possible values:
+#
+# * If not provided, the first of a list of available vswitches
+#   is used. This list is queried using WQL.
+# * Virtual switch name.
+#  (string value)
+#vswitch_name=<None>
+
+#
+# Wait soft reboot seconds
+#
+# Number of seconds to wait for instance to shut down after soft
+# reboot request is made. We fall back to hard reboot if instance
+# does not shutdown within this window.
+#
+# Possible values:
+#
+# * Time in seconds (Default: 60).
+#  (integer value)
+# Minimum value: 0
+#wait_soft_reboot_seconds=60
+
+#
+# Configuration drive cdrom
+#
+# OpenStack can be configured to write instance metadata to
+# a configuration drive, which is then attached to the
+# instance before it boots. The configuration drive can be
+# attached as a disk drive (default) or as a CD drive.
+#
+# Possible values:
+#
+# * True: Attach the configuration drive image as a CD drive.
+# * False: Attach the configuration drive image as a disk drive (Default).
+#
+# Related options:
+#
+# * This option is meaningful with force_config_drive option set to 'True'
+#   or when the REST API call to create an instance will have
+#   '--config-drive=True' flag.
+# * config_drive_format option must be set to 'iso9660' in order to use
+#   CD drive as the configuration drive image.
+# * To use configuration drive with Hyper-V, you must set the
+#   mkisofs_cmd value to the full path to an mkisofs.exe installation.
+#   Additionally, you must set the qemu_img_cmd value to the full path
+#   to an qemu-img command installation.
+# * You can configure the Compute service to always create a configuration
+#   drive by setting the force_config_drive option to 'True'.
+#  (boolean value)
+#config_drive_cdrom=false
+
+#
+# Configuration drive inject password
+#
+# Enables setting the admin password in the configuration drive image.
+#
+# Related options:
+#
+# * This option is meaningful when used with other options that enable
+#   configuration drive usage with Hyper-V, such as force_config_drive.
+# * Currently, the only accepted config_drive_format is 'iso9660'.
+#  (boolean value)
+#config_drive_inject_password=false
+
+#
+# Volume attach retry count
+#
+# The number of times to retry attaching a volume. Volume attachment
+# is retried until success or the given retry count is reached.
+#
+# Possible values:
+#
+# * Positive integer values (Default: 10).
+#
+# Related options:
+#
+# * Time interval between attachment attempts is declared with
+#   volume_attach_retry_interval option.
+#  (integer value)
+# Minimum value: 0
+#volume_attach_retry_count=10
+
+#
+# Volume attach retry interval
+#
+# Interval between volume attachment attempts, in seconds.
+#
+# Possible values:
+#
+# * Time in seconds (Default: 5).
+#
+# Related options:
+#
+# * This options is meaningful when volume_attach_retry_count
+#   is greater than 1.
+# * The retry loop runs with volume_attach_retry_count and
+#   volume_attach_retry_interval configuration options.
+#  (integer value)
+# Minimum value: 0
+#volume_attach_retry_interval=5
+
+#
+# Enable RemoteFX feature
+#
+# This requires at least one DirectX 11 capable graphics adapter for
+# Windows / Hyper-V Server 2012 R2 or newer and RDS-Virtualization
+# feature has to be enabled.
+#
+# Instances with RemoteFX can be requested with the following flavor
+# extra specs:
+#
+# **os:resolution**. Guest VM screen resolution size. Acceptable values::
+#
+#     1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160
+#
+# ``3840x2160`` is only available on Windows / Hyper-V Server 2016.
+#
+# **os:monitors**. Guest VM number of monitors. Acceptable values::
+#
+#     [1, 4] - Windows / Hyper-V Server 2012 R2
+#     [1, 8] - Windows / Hyper-V Server 2016
+#
+# **os:vram**. Guest VM VRAM amount. Only available on
+# Windows / Hyper-V Server 2016. Acceptable values::
+#
+#     64, 128, 256, 512, 1024
+#  (boolean value)
+#enable_remotefx=false
+
+#
+# Use multipath connections when attaching iSCSI or FC disks.
+#
+# This requires the Multipath IO Windows feature to be enabled. MPIO must be
+# configured to claim such devices.
+#  (boolean value)
+#use_multipath_io=false
+
+#
+# List of iSCSI initiators that will be used for estabilishing iSCSI sessions.
+#
+# If none are specified, the Microsoft iSCSI initiator service will choose the
+# initiator.
+#  (list value)
+#iscsi_initiator_list =
+
+
+[image_file_url]
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# List of file systems that are configured in this file in the
+# image_file_url:<list entry name> sections
+#  (list value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The feature to download images from glance via filesystem is not used and will
+# be removed in the future.
+#filesystems =
+
+
+[ironic]
+#
+# Configuration options for Ironic driver (Bare Metal).
+# If using the Ironic driver following options must be set:
+# * auth_type
+# * auth_url
+# * project_name
+# * username
+# * password
+# * project_domain_id or project_domain_name
+# * user_domain_id or user_domain_name
+
+#
+# From nova.conf
+#
+
+# URL override for the Ironic API endpoint. (string value)
+#api_endpoint=http://ironic.example.org:6385/
+
+#
+# The number of times to retry when a request conflicts.
+# If set to 0, only try once, no retries.
+#
+# Related options:
+#
+# * api_retry_interval
+#  (integer value)
+# Minimum value: 0
+#api_max_retries=60
+
+#
+# The number of seconds to wait before retrying the request.
+#
+# Related options:
+#
+# * api_max_retries
+#  (integer value)
+# Minimum value: 0
+#api_retry_interval=2
+
+# Timeout (seconds) to wait for node serial console state changed. Set to 0 to
+# disable timeout. (integer value)
+# Minimum value: 0
+#serial_console_state_timeout=10
+
+# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# (string value)
+#cafile=<None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile=<None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile=<None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure=false
+
+# Timeout value for http requests (integer value)
+#timeout=<None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [ironic]/auth_plugin
+#auth_type=<None>
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section=<None>
+
+# Authentication URL (string value)
+#auth_url=<None>
+
+# Domain ID to scope to (string value)
+#domain_id=<None>
+
+# Domain name to scope to (string value)
+#domain_name=<None>
+
+# Project ID to scope to (string value)
+#project_id=<None>
+
+# Project name to scope to (string value)
+#project_name=<None>
+
+# Domain ID containing project (string value)
+#project_domain_id=<None>
+
+# Domain name containing project (string value)
+#project_domain_name=<None>
+
+# Trust ID (string value)
+#trust_id=<None>
+
+# User ID (string value)
+#user_id=<None>
+
+# Username (string value)
+# Deprecated group/name - [ironic]/user-name
+#username=<None>
+
+# User's domain id (string value)
+#user_domain_id=<None>
+
+# User's domain name (string value)
+#user_domain_name=<None>
+
+# User's password (string value)
+#password=<None>
+
+
+[key_manager]
+
+#
+# From nova.conf
+#
+
+#
+# Fixed key returned by key manager, specified in hex.
+#
+# Possible values:
+#
+# * Empty string or a key in hex value
+#  (string value)
+# Deprecated group/name - [keymgr]/fixed_key
+#fixed_key=<None>
+
+# The full class name of the key manager API class (string value)
+#api_class=castellan.key_manager.barbican_key_manager.BarbicanKeyManager
+
+# The type of authentication credential to create. Possible values are 'token',
+# 'password', 'keystone_token', and 'keystone_password'. Required if no context
+# is passed to the credential factory. (string value)
+#auth_type=<None>
+
+# Token for authentication. Required for 'token' and 'keystone_token' auth_type
+# if no context is passed to the credential factory. (string value)
+#token=<None>
+
+# Username for authentication. Required for 'password' auth_type. Optional for
+# the 'keystone_password' auth_type. (string value)
+#username=<None>
+
+# Password for authentication. Required for 'password' and 'keystone_password'
+# auth_type. (string value)
+#password=<None>
+
+# User ID for authentication. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#user_id=<None>
+
+# User's domain ID for authentication. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#user_domain_id=<None>
+
+# User's domain name for authentication. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#user_domain_name=<None>
+
+# Trust ID for trust scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#trust_id=<None>
+
+# Domain ID for domain scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#domain_id=<None>
+
+# Domain name for domain scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#domain_name=<None>
+
+# Project ID for project scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#project_id=<None>
+
+# Project name for project scoping. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#project_name=<None>
+
+# Project's domain ID for project. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#project_domain_id=<None>
+
+# Project's domain name for project. Optional for 'keystone_token' and
+# 'keystone_password' auth_type. (string value)
+#project_domain_name=<None>
+
+# Allow fetching a new token if the current one is going to expire. Optional for
+# 'keystone_token' and 'keystone_password' auth_type. (boolean value)
+#reauthenticate=true
+
+
 [keystone_authtoken]
 
 #
 # From keystonemiddleware.auth_token
 #
-
+revocation_cache_time = 10
+signing_dir=/tmp/keystone-signing-nova
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = nova
+password = opnfv_secret
+auth_uri=http://10.167.4.10:5000
+auth_url=http://10.167.4.10:35357
+memcached_servers=10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211
 # Complete "public" Identity API endpoint. This endpoint should not be an
 # "admin" endpoint, as it should be accessible by all end users. Unauthenticated
 # clients are redirected to this endpoint to authenticate. Although this
@@ -547,44 +5675,44 @@
 # If you're using a versioned v2 endpoint here, then this  should *not* be the
 # same endpoint the service user utilizes  for validating tokens, because normal
 # end users may not be  able to reach that endpoint. (string value)
-#auth_uri = <None>
+#auth_uri=<None>
 
 # API version of the admin Identity API endpoint. (string value)
-#auth_version = <None>
+#auth_version=<None>
 
 # Do not handle authorization requests within the middleware, but delegate the
 # authorization decision to downstream WSGI components. (boolean value)
-#delay_auth_decision = false
+#delay_auth_decision=false
 
 # Request timeout value for communicating with Identity API server. (integer
 # value)
-#http_connect_timeout = <None>
+#http_connect_timeout=<None>
 
 # How many times are we trying to reconnect when communicating with Identity API
 # Server. (integer value)
-#http_request_max_retries = 3
+#http_request_max_retries=3
 
 # Request environment key where the Swift cache object is stored. When
 # auth_token middleware is deployed with a Swift cache, use this option to have
 # the middleware share a caching backend with swift. Otherwise, use the
 # ``memcached_servers`` option instead. (string value)
-#cache = <None>
+#cache=<None>
 
 # Required if identity server requires client certificate (string value)
-#certfile = <None>
+#certfile=<None>
 
 # Required if identity server requires client certificate (string value)
-#keyfile = <None>
+#keyfile=<None>
 
 # A PEM encoded Certificate Authority to use when verifying HTTPs connections.
 # Defaults to system CAs. (string value)
-#cafile = <None>
+#cafile=<None>
 
 # Verify HTTPS connections. (boolean value)
-#insecure = false
+#insecure=false
 
 # The region in which the identity server can be found. (string value)
-#region_name = <None>
+#region_name=<None>
 
 # DEPRECATED: Directory used to cache files related to PKI tokens. This option
 # has been deprecated in the Ocata release and will be removed in the P release.
@@ -592,17 +5720,17 @@
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
-#signing_dir = <None>
+#signing_dir=<None>
 
 # Optionally specify a list of memcached server(s) to use for caching. If left
 # undefined, tokens will instead be cached in-process. (list value)
 # Deprecated group/name - [keystone_authtoken]/memcache_servers
-#memcached_servers = <None>
+#memcached_servers=<None>
 
 # In order to prevent excessive effort spent validating tokens, the middleware
 # caches previously-seen tokens for a configurable duration (in seconds). Set to
 # -1 to disable caching completely. (integer value)
-#token_cache_time = 300
+#token_cache_time=300
 
 # DEPRECATED: Determines the frequency at which the list of revoked tokens is
 # retrieved from the Identity service (in seconds). A high number of revocation
@@ -612,7 +5740,7 @@
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
-#revocation_cache_time = 10
+#revocation_cache_time=10
 
 # (Optional) If defined, indicate whether token data should be authenticated or
 # authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
@@ -620,40 +5748,40 @@
 # cache. If the value is not one of these options or empty, auth_token will
 # raise an exception on initialization. (string value)
 # Allowed values: None, MAC, ENCRYPT
-#memcache_security_strategy = None
+#memcache_security_strategy=None
 
 # (Optional, mandatory if memcache_security_strategy is defined) This string is
 # used for key derivation. (string value)
-#memcache_secret_key = <None>
+#memcache_secret_key=<None>
 
 # (Optional) Number of seconds memcached server is considered dead before it is
 # tried again. (integer value)
-#memcache_pool_dead_retry = 300
+#memcache_pool_dead_retry=300
 
 # (Optional) Maximum total number of open connections to every memcached server.
 # (integer value)
-#memcache_pool_maxsize = 10
+#memcache_pool_maxsize=10
 
 # (Optional) Socket timeout in seconds for communicating with a memcached
 # server. (integer value)
-#memcache_pool_socket_timeout = 3
+#memcache_pool_socket_timeout=3
 
 # (Optional) Number of seconds a connection to memcached is held unused in the
 # pool before it is closed. (integer value)
-#memcache_pool_unused_timeout = 60
+#memcache_pool_unused_timeout=60
 
 # (Optional) Number of seconds that an operation will wait to get a memcached
 # client connection from the pool. (integer value)
-#memcache_pool_conn_get_timeout = 10
+#memcache_pool_conn_get_timeout=10
 
 # (Optional) Use the advanced (eventlet safe) memcached client pool. The
 # advanced pool will only work under python 2.x. (boolean value)
-#memcache_use_advanced_pool = false
+#memcache_use_advanced_pool=false
 
 # (Optional) Indicate whether to set the X-Service-Catalog header. If False,
 # middleware will not ask for service catalog on token validation and will not
 # set the X-Service-Catalog header. (boolean value)
-#include_service_catalog = true
+#include_service_catalog=true
 
 # Used to control the use and type of token binding. Can be set to: "disabled"
 # to not check token binding. "permissive" (default) to validate binding
@@ -662,7 +5790,7 @@
 # be rejected. "required" any form of token binding is needed to be allowed.
 # Finally the name of a binding method that must be present in tokens. (string
 # value)
-#enforce_token_bind = permissive
+#enforce_token_bind=permissive
 
 # DEPRECATED: If true, the revocation list will be checked for cached tokens.
 # This requires that PKI tokens are configured on the identity server. (boolean
@@ -670,7 +5798,7 @@
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
-#check_revocations_for_cached = false
+#check_revocations_for_cached=false
 
 # DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
 # single algorithm or multiple. The algorithms are those supported by Python
@@ -683,7 +5811,7 @@
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
-#hash_algorithms = md5
+#hash_algorithms=md5
 
 # A choice of roles that must be present in a service token. Service tokens are
 # allowed to request that an expired token can be used and so this check should
@@ -691,20 +5819,1015 @@
 # here are applied as an ANY check so any role in this list must be present. For
 # backwards compatibility reasons this currently only affects the allow_expired
 # check. (list value)
-#service_token_roles = service
+#service_token_roles=service
 
 # For backwards compatibility reasons we must let valid service tokens pass that
 # don't pass the service_token_roles check as valid. Setting this true will
 # become the default in a future release and should be enabled if possible.
 # (boolean value)
-#service_token_roles_required = false
+#service_token_roles_required=false
+
+# Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
+# (string value)
+#auth_admin_prefix =
+
+# Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
+# (string value)
+#auth_host=127.0.0.1
+
+# Port of the admin Identity API endpoint. Deprecated, use identity_uri.
+# (integer value)
+#auth_port=35357
+
+# Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
+# (string value)
+# Allowed values: http, https
+#auth_protocol=https
+
+# Complete admin Identity API endpoint. This should specify the unversioned root
+# endpoint e.g. https://localhost:35357/ (string value)
+#identity_uri=<None>
+
+# This option is deprecated and may be removed in a future release. Single
+# shared secret with the Keystone configuration used for bootstrapping a
+# Keystone installation, or otherwise bypassing the normal authentication
+# process. This option should not be used, use `admin_user` and `admin_password`
+# instead. (string value)
+#admin_token=<None>
+
+# Service username. (string value)
+#admin_user=<None>
+
+# Service user password. (string value)
+#admin_password=<None>
+
+# Service tenant name. (string value)
+#admin_tenant_name=admin
 
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
-#auth_type = <None>
+#auth_type=<None>
 
 # Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#auth_section=<None>
+
+
+[libvirt]
+#
+# Libvirt options allows cloud administrator to configure related
+# libvirt hypervisor driver to be used within an OpenStack deployment.
+#
+# Almost all of the libvirt config options are influence by ``virt_type`` config
+# which describes the virtualization type (or so called domain type) libvirt
+# should use for specific features such as live migration, snapshot.
+
+#
+# From nova.conf
+#
+
+#
+# The ID of the image to boot from to rescue data from a corrupted instance.
+#
+# If the rescue REST API operation doesn't provide an ID of an image to
+# use, the image which is referenced by this ID is used. If this
+# option is not set, the image from the instance is used.
+#
+# Possible values:
+#
+# * An ID of an image or nothing. If it points to an *Amazon Machine
+#   Image* (AMI), consider to set the config options ``rescue_kernel_id``
+#   and ``rescue_ramdisk_id`` too. If nothing is set, the image of the instance
+#   is used.
+#
+# Related options:
+#
+# * ``rescue_kernel_id``: If the chosen rescue image allows the separate
+#   definition of its kernel disk, the value of this option is used,
+#   if specified. This is the case when *Amazon*'s AMI/AKI/ARI image
+#   format is used for the rescue image.
+# * ``rescue_ramdisk_id``: If the chosen rescue image allows the separate
+#   definition of its RAM disk, the value of this option is used if,
+#   specified. This is the case when *Amazon*'s AMI/AKI/ARI image
+#   format is used for the rescue image.
+#  (string value)
+#rescue_image_id=<None>
+
+#
+# The ID of the kernel (AKI) image to use with the rescue image.
+#
+# If the chosen rescue image allows the separate definition of its kernel
+# disk, the value of this option is used, if specified. This is the case
+# when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image.
+#
+# Possible values:
+#
+# * An ID of an kernel image or nothing. If nothing is specified, the kernel
+#   disk from the instance is used if it was launched with one.
+#
+# Related options:
+#
+# * ``rescue_image_id``: If that option points to an image in *Amazon*'s
+#   AMI/AKI/ARI image format, it's useful to use ``rescue_kernel_id`` too.
+#  (string value)
+#rescue_kernel_id=<None>
+
+#
+# The ID of the RAM disk (ARI) image to use with the rescue image.
+#
+# If the chosen rescue image allows the separate definition of its RAM
+# disk, the value of this option is used, if specified. This is the case
+# when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image.
+#
+# Possible values:
+#
+# * An ID of a RAM disk image or nothing. If nothing is specified, the RAM
+#   disk from the instance is used if it was launched with one.
+#
+# Related options:
+#
+# * ``rescue_image_id``: If that option points to an image in *Amazon*'s
+#   AMI/AKI/ARI image format, it's useful to use ``rescue_ramdisk_id`` too.
+#  (string value)
+#rescue_ramdisk_id=<None>
+
+#
+# Describes the virtualization type (or so called domain type) libvirt should
+# use.
+#
+# The choice of this type must match the underlying virtualization strategy
+# you have chosen for this host.
+#
+# Possible values:
+#
+# * See the predefined set of case-sensitive values.
+#
+# Related options:
+#
+# * ``connection_uri``: depends on this
+# * ``disk_prefix``: depends on this
+# * ``cpu_mode``: depends on this
+# * ``cpu_model``: depends on this
+#  (string value)
+# Allowed values: kvm, lxc, qemu, uml, xen, parallels
+#virt_type=kvm
+virt_type=kvm
+
+#
+# Overrides the default libvirt URI of the chosen virtualization type.
+#
+# If set, Nova will use this URI to connect to libvirt.
+#
+# Possible values:
+#
+# * An URI like ``qemu:///system`` or ``xen+ssh://oirase/`` for example.
+#   This is only necessary if the URI differs to the commonly known URIs
+#   for the chosen virtualization type.
+#
+# Related options:
+#
+# * ``virt_type``: Influences what is used as default value here.
+#  (string value)
+#connection_uri =
+
+#
+# Allow the injection of an admin password for instance only at ``create`` and
+# ``rebuild`` process.
+#
+# There is no agent needed within the image to do this. If *libguestfs* is
+# available on the host, it will be used. Otherwise *nbd* is used. The file
+# system of the image will be mounted and the admin password, which is provided
+# in the REST API call will be injected as password for the root user. If no
+# root user is available, the instance won't be launched and an error is thrown.
+# Be aware that the injection is *not* possible when the instance gets launched
+# from a volume.
+#
+# Possible values:
+#
+# * True: Allows the injection.
+# * False (default): Disallows the injection. Any via the REST API provided
+# admin password will be silently ignored.
+#
+# Related options:
+#
+# * ``inject_partition``: That option will decide about the discovery and usage
+#   of the file system. It also can disable the injection at all.
+#  (boolean value)
+#inject_password=false
+
+#
+# Allow the injection of an SSH key at boot time.
+#
+# There is no agent needed within the image to do this. If *libguestfs* is
+# available on the host, it will be used. Otherwise *nbd* is used. The file
+# system of the image will be mounted and the SSH key, which is provided
+# in the REST API call will be injected as SSH key for the root user and
+# appended to the ``authorized_keys`` of that user. The SELinux context will
+# be set if necessary. Be aware that the injection is *not* possible when the
+# instance gets launched from a volume.
+#
+# This config option will enable directly modifying the instance disk and does
+# not affect what cloud-init may do using data from config_drive option or the
+# metadata service.
+#
+# Related options:
+#
+# * ``inject_partition``: That option will decide about the discovery and usage
+#   of the file system. It also can disable the injection at all.
+#  (boolean value)
+#inject_key=false
+
+#
+# Determines the way how the file system is chosen to inject data into it.
+#
+# *libguestfs* will be used a first solution to inject data. If that's not
+# available on the host, the image will be locally mounted on the host as a
+# fallback solution. If libguestfs is not able to determine the root partition
+# (because there are more or less than one root partition) or cannot mount the
+# file system it will result in an error and the instance won't be boot.
+#
+# Possible values:
+#
+# * -2 => disable the injection of data.
+# * -1 => find the root partition with the file system to mount with libguestfs
+# *  0 => The image is not partitioned
+# * >0 => The number of the partition to use for the injection
+#
+# Related options:
+#
+# * ``inject_key``: If this option allows the injection of a SSH key it depends
+#   on value greater or equal to -1 for ``inject_partition``.
+# * ``inject_password``: If this option allows the injection of an admin
+# password
+#   it depends on value greater or equal to -1 for ``inject_partition``.
+# * ``guestfs`` You can enable the debug log level of libguestfs with this
+#   config option. A more verbose output will help in debugging issues.
+# * ``virt_type``: If you use ``lxc`` as virt_type it will be treated as a
+#   single partition image
+#  (integer value)
+# Minimum value: -2
+#inject_partition=-2
+inject_partition = -1
+
+# DEPRECATED:
+# Enable a mouse cursor within a graphical VNC or SPICE sessions.
+#
+# This will only be taken into account if the VM is fully virtualized and VNC
+# and/or SPICE is enabled. If the node doesn't support a graphical framebuffer,
+# then it is valid to set this to False.
+#
+# Related options:
+# * ``[vnc]enabled``: If VNC is enabled, ``use_usb_tablet`` will have an effect.
+# * ``[spice]enabled`` + ``[spice].agent_enabled``: If SPICE is enabled and the
+#   spice agent is disabled, the config value of ``use_usb_tablet`` will have
+#   an effect.
+#  (boolean value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: This option is being replaced by the 'pointer_model' option.
+#use_usb_tablet=true
+use_usb_tablet=true
+
+#
+# The IP address or hostname to be used as the target for live migration
+# traffic.
+#
+# If this option is set to None, the hostname of the migration target compute
+# node will be used.
+#
+# This option is useful in environments where the live-migration traffic can
+# impact the network plane significantly. A separate network for live-migration
+# traffic can then use this config option and avoids the impact on the
+# management network.
+#
+# Possible values:
+#
+# * A valid IP address or hostname, else None.
+#  (string value)
+#live_migration_inbound_addr=<None>
+
+# DEPRECATED:
+# Live migration target URI to use.
+#
+# Override the default libvirt live migration target URI (which is dependent
+# on virt_type). Any included "%s" is replaced with the migration target
+# hostname.
+#
+# If this option is set to None (which is the default), Nova will automatically
+# generate the `live_migration_uri` value based on only 3 supported `virt_type`
+# in following list:
+# * 'kvm': 'qemu+tcp://%s/system'
+# * 'qemu': 'qemu+tcp://%s/system'
+# * 'xen': 'xenmigr://%s/system'
+#
+# Related options:
+# * ``live_migration_inbound_addr``: If ``live_migration_inbound_addr`` value
+#   is not None, the ip/hostname address of target compute node is used instead
+#   of ``live_migration_uri`` as the uri for live migration.
+# * ``live_migration_scheme``: If ``live_migration_uri`` is not set, the scheme
+#   used for live migration is taken from ``live_migration_scheme`` instead.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# live_migration_uri is deprecated for removal in favor of two other options
+# that
+# allow to change live migration scheme and target URI:
+# ``live_migration_scheme``
+# and ``live_migration_inbound_addr`` respectively.
+#live_migration_uri=<None>
+
+#
+# Schema used for live migration.
+#
+# Override the default libvirt live migration scheme (which is dependant on
+# virt_type). If this option is set to None, nova will automatically choose a
+# sensible default based on the hypervisor. It is not recommended that you
+# change
+# this unless you are very sure that hypervisor supports a particular scheme.
+#
+# Related options:
+# * ``virt_type``: This option is meaningful only when ``virt_type`` is set to
+#   `kvm` or `qemu`.
+# * ``live_migration_uri``: If ``live_migration_uri`` value is not None, the
+#   scheme used for live migration is taken from ``live_migration_uri`` instead.
+#  (string value)
+#live_migration_scheme=<None>
+
+#
+# Enable tunnelled migration.
+#
+# This option enables the tunnelled migration feature, where migration data is
+# transported over the libvirtd connection. If enabled, we use the
+# VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure
+# the network to allow direct hypervisor to hypervisor communication.
+# If False, use the native transport. If not set, Nova will choose a
+# sensible default based on, for example the availability of native
+# encryption support in the hypervisor. Enable this option will definitely
+# impact performance massively.
+#
+# Note that this option is NOT compatible with use of block migration.
+#
+# Possible values:
+#
+# * Supersedes and (if set) overrides the deprecated 'live_migration_flag' and
+#   'block_migration_flag' to enable tunneled migration.
+#  (boolean value)
+#live_migration_tunnelled=false
+
+#
+# Maximum bandwidth(in MiB/s) to be used during migration.
+#
+# If set to 0, the hypervisor will choose a suitable default. Some hypervisors
+# do not support this feature and will return an error if bandwidth is not 0.
+# Please refer to the libvirt documentation for further details.
+#  (integer value)
+#live_migration_bandwidth=0
+
+#
+# Maximum permitted downtime, in milliseconds, for live migration
+# switchover.
+#
+# Will be rounded up to a minimum of 100ms. You can increase this value
+# if you want to allow live-migrations to complete faster, or avoid
+# live-migration timeout errors by allowing the guest to be paused for
+# longer during the live-migration switch over.
+#
+# Related options:
+#
+# * live_migration_completion_timeout
+#  (integer value)
+#live_migration_downtime=500
+
+#
+# Number of incremental steps to reach max downtime value.
+#
+# Will be rounded up to a minimum of 3 steps.
+#  (integer value)
+#live_migration_downtime_steps=10
+
+#
+# Time to wait, in seconds, between each step increase of the migration
+# downtime.
+#
+# Minimum delay is 10 seconds. Value is per GiB of guest RAM + disk to be
+# transferred, with lower bound of a minimum of 2 GiB per device.
+#  (integer value)
+#live_migration_downtime_delay=75
+
+#
+# Time to wait, in seconds, for migration to successfully complete transferring
+# data before aborting the operation.
+#
+# Value is per GiB of guest RAM + disk to be transferred, with lower bound of
+# a minimum of 2 GiB. Should usually be larger than downtime delay * downtime
+# steps. Set to 0 to disable timeouts.
+#
+# Related options:
+#
+# * live_migration_downtime
+# * live_migration_downtime_steps
+# * live_migration_downtime_delay
+#  (integer value)
+# Note: This option can be changed without restarting.
+#live_migration_completion_timeout=800
+
+# DEPRECATED:
+# Time to wait, in seconds, for migration to make forward progress in
+# transferring data before aborting the operation.
+#
+# Set to 0 to disable timeouts.
+#
+# This is deprecated, and now disabled by default because we have found serious
+# bugs in this feature that caused false live-migration timeout failures. This
+# feature will be removed or replaced in a future release.
+#  (integer value)
+# Note: This option can be changed without restarting.
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Serious bugs found in this feature.
+#live_migration_progress_timeout=0
+
+#
+# This option allows nova to switch an on-going live migration to post-copy
+# mode, i.e., switch the active VM to the one on the destination node before the
+# migration is complete, therefore ensuring an upper bound on the memory that
+# needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.
+#
+# When permitted, post-copy mode will be automatically activated if a
+# live-migration memory copy iteration does not make percentage increase of at
+# least 10% over the last iteration.
+#
+# The live-migration force complete API also uses post-copy when permitted. If
+# post-copy mode is not available, force complete falls back to pausing the VM
+# to ensure the live-migration operation will complete.
+#
+# When using post-copy mode, if the source and destination hosts loose network
+# connectivity, the VM being live-migrated will need to be rebooted. For more
+# details, please see the Administration guide.
+#
+# Related options:
+#
+#     * live_migration_permit_auto_converge
+#  (boolean value)
+#live_migration_permit_post_copy=false
+
+#
+# This option allows nova to start live migration with auto converge on.
+#
+# Auto converge throttles down CPU if a progress of on-going live migration
+# is slow. Auto converge will only be used if this flag is set to True and
+# post copy is not permitted or post copy is unavailable due to the version
+# of libvirt and QEMU in use. Auto converge requires libvirt>=1.2.3 and
+# QEMU>=1.6.0.
+#
+# Related options:
+#
+#     * live_migration_permit_post_copy
+#  (boolean value)
+#live_migration_permit_auto_converge=false
+
+#
+# Determine the snapshot image format when sending to the image service.
+#
+# If set, this decides what format is used when sending the snapshot to the
+# image service.
+# If not set, defaults to same type as source image.
+#
+# Possible values:
+#
+# * ``raw``: RAW disk format
+# * ``qcow2``: KVM default disk format
+# * ``vmdk``: VMWare default disk format
+# * ``vdi``: VirtualBox default disk format
+# * If not set, defaults to same type as source image.
+#  (string value)
+# Allowed values: raw, qcow2, vmdk, vdi
+#snapshot_image_format=<None>
+
+#
+# Override the default disk prefix for the devices attached to an instance.
+#
+# If set, this is used to identify a free disk device name for a bus.
+#
+# Possible values:
+#
+# * Any prefix which will result in a valid disk device name like 'sda' or 'hda'
+#   for example. This is only necessary if the device names differ to the
+#   commonly known device name prefixes for a virtualization type such as: sd,
+#   xvd, uvd, vd.
+#
+# Related options:
+#
+# * ``virt_type``: Influences which device type is used, which determines
+#   the default disk prefix.
+#  (string value)
+#disk_prefix=<None>
+
+# Number of seconds to wait for instance to shut down after soft reboot request
+# is made. We fall back to hard reboot if instance does not shutdown within this
+# window. (integer value)
+#wait_soft_reboot_seconds=120
+
+#
+# Is used to set the CPU mode an instance should have.
+#
+# If virt_type="kvm|qemu", it will default to "host-model", otherwise it will
+# default to "none".
+#
+# Possible values:
+#
+# * ``host-model``: Clones the host CPU feature flags.
+# * ``host-passthrough``: Use the host CPU model exactly;
+# * ``custom``: Use a named CPU model;
+# * ``none``: Not set any CPU model.
+#
+# Related options:
+#
+# * ``cpu_model``: If ``custom`` is used for ``cpu_mode``, set this config
+#   option too, otherwise this would result in an error and the instance won't
+#   be launched.
+#  (string value)
+# Allowed values: host-model, host-passthrough, custom, none
+#cpu_mode=<None>
+cpu_mode=host-passthrough
+
+#
+# Set the name of the libvirt CPU model the instance should use.
+#
+# Possible values:
+#
+# * The names listed in /usr/share/libvirt/cpu_map.xml
+#
+# Related options:
+#
+# * ``cpu_mode``: Don't set this when ``cpu_mode`` is NOT set to ``custom``.
+#   This would result in an error and the instance won't be launched.
+# * ``virt_type``: Only the virtualization types ``kvm`` and ``qemu`` use this.
+#  (string value)
+#cpu_model=<None>
+
+# Location where libvirt driver will store snapshots before uploading them to
+# image service (string value)
+#snapshots_directory=$instances_path/snapshots
+
+# Location where the Xen hvmloader is kept (string value)
+#xen_hvmloader_path=/usr/lib/xen/boot/hvmloader
+
+# Specific cachemodes to use for different disk types e.g:
+# file=directsync,block=none (list value)
+#disk_cachemodes =
+
+# A path to a device that will be used as source of entropy on the host.
+# Permitted options are: /dev/random or /dev/hwrng (string value)
+#rng_dev_path=<None>
+
+# For qemu or KVM guests, set this option to specify a default machine type per
+# host architecture. You can find a list of supported machine types in your
+# environment by checking the output of the "virsh capabilities"command. The
+# format of the value for this config option is host-arch=machine-type. For
+# example: x86_64=machinetype1,armv7l=machinetype2 (list value)
+#hw_machine_type=<None>
+
+# The data source used to the populate the host "serial" UUID exposed to guest
+# in the virtual BIOS. (string value)
+# Allowed values: none, os, hardware, auto
+#sysinfo_serial=auto
+
+# A number of seconds to memory usage statistics period. Zero or negative value
+# mean to disable memory usage statistics. (integer value)
+#mem_stats_period_seconds=10
+
+# List of uid targets and ranges.Syntax is guest-uid:host-uid:countMaximum of 5
+# allowed. (list value)
+#uid_maps =
+
+# List of guid targets and ranges.Syntax is guest-gid:host-gid:countMaximum of 5
+# allowed. (list value)
+#gid_maps =
+
+# In a realtime host context vCPUs for guest will run in that scheduling
+# priority. Priority depends on the host kernel (usually 1-99) (integer value)
+#realtime_scheduler_priority=1
+
+#
+# This is a performance event list which could be used as monitor. These events
+# will be passed to libvirt domain xml while creating a new instances.
+# Then event statistics data can be collected from libvirt.  The minimum
+# libvirt version is 2.0.0. For more information about `Performance monitoring
+# events`, refer https://libvirt.org/formatdomain.html#elementsPerf .
+#
+# Possible values:
+# * A string list. For example: ``enabled_perf_events = cmt, mbml, mbmt``
+#   The supported events list can be found in
+#   https://libvirt.org/html/libvirt-libvirt-domain.html ,
+#   which you may need to search key words ``VIR_PERF_PARAM_*``
+#  (list value)
+#enabled_perf_events =
+
+#
+# VM Images format.
+#
+# If default is specified, then use_cow_images flag is used instead of this
+# one.
+#
+# Related options:
+#
+# * virt.use_cow_images
+# * images_volume_group
+#  (string value)
+# Allowed values: raw, flat, qcow2, lvm, rbd, ploop, default
+#images_type=default
+
+#
+# LVM Volume Group that is used for VM images, when you specify images_type=lvm
+#
+# Related options:
+#
+# * images_type
+#  (string value)
+#images_volume_group=<None>
+
+#
+# Create sparse logical volumes (with virtualsize) if this flag is set to True.
+#  (boolean value)
+#sparse_logical_volumes=false
+
+# The RADOS pool in which rbd volumes are stored (string value)
+#images_rbd_pool=rbd
+
+# Path to the ceph configuration file to use (string value)
+#images_rbd_ceph_conf =
+
+#
+# Discard option for nova managed disks.
+#
+# Requires:
+#
+# * Libvirt >= 1.0.6
+# * Qemu >= 1.5 (raw format)
+# * Qemu >= 1.6 (qcow2 format)
+#  (string value)
+# Allowed values: ignore, unmap
+#hw_disk_discard=<None>
+
+# DEPRECATED: Allows image information files to be stored in non-standard
+# locations (string value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Image info files are no longer used by the image cache
+#image_info_filename_pattern=$instances_path/$image_cache_subdirectory_name/%(image)s.info
+
+# Unused resized base images younger than this will not be removed (integer
+# value)
+#remove_unused_resized_minimum_age_seconds=3600
+
+# DEPRECATED: Write a checksum for files in _base to disk (boolean value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: The image cache no longer periodically calculates checksums of stored
+# images. Data integrity can be checked at the block or filesystem level.
+#checksum_base_images=false
+
+# DEPRECATED: How frequently to checksum base images (integer value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason: The image cache no longer periodically calculates checksums of stored
+# images. Data integrity can be checked at the block or filesystem level.
+#checksum_interval_seconds=3600
+
+#
+# Method used to wipe ephemeral disks when they are deleted. Only takes effect
+# if LVM is set as backing storage.
+#
+# Possible values:
+#
+# * none - do not wipe deleted volumes
+# * zero - overwrite volumes with zeroes
+# * shred - overwrite volume repeatedly
+#
+# Related options:
+#
+# * images_type - must be set to ``lvm``
+# * volume_clear_size
+#  (string value)
+# Allowed values: none, zero, shred
+#volume_clear=zero
+
+#
+# Size of area in MiB, counting from the beginning of the allocated volume,
+# that will be cleared using method set in ``volume_clear`` option.
+#
+# Possible values:
+#
+# * 0 - clear whole volume
+# * >0 - clear specified amount of MiB
+#
+# Related options:
+#
+# * images_type - must be set to ``lvm``
+# * volume_clear - must be set and the value must be different than ``none``
+#   for this option to have any impact
+#  (integer value)
+# Minimum value: 0
+#volume_clear_size=0
+
+#
+# Enable snapshot compression for ``qcow2`` images.
+#
+# Note: you can set ``snapshot_image_format`` to ``qcow2`` to force all
+# snapshots to be in ``qcow2`` format, independently from their original image
+# type.
+#
+# Related options:
+#
+# * snapshot_image_format
+#  (boolean value)
+#snapshot_compression=false
+
+# Use virtio for bridge interfaces with KVM/QEMU (boolean value)
+#use_virtio_for_bridges=true
+use_virtio_for_bridges=true
+
+#
+# Protocols listed here will be accessed directly from QEMU.
+#
+# If gluster is present in qemu_allowed_storage_drivers, glusterfs's backend
+# will
+# pass a disk configuration to QEMU. This allows QEMU to access the volume using
+# libgfapi rather than mounting GlusterFS via fuse.
+#
+# Possible values:
+#
+# * [gluster]
+#  (list value)
+#qemu_allowed_storage_drivers =
+
+#
+# Use multipath connection of the iSCSI or FC volume
+#
+# Volumes can be connected in the LibVirt as multipath devices. This will
+# provide high availability and fault tolerance.
+#  (boolean value)
+# Deprecated group/name - [libvirt]/iscsi_use_multipath
+#volume_use_multipath=false
+
+#
+# Number of times to rediscover AoE target to find volume.
+#
+# Nova provides support for block storage attaching to hosts via AOE (ATA over
+# Ethernet). This option allows the user to specify the maximum number of retry
+# attempts that can be made to discover the AoE device.
+#  (integer value)
+#num_aoe_discover_tries=3
+
+#
+# Absolute path to the directory where the glusterfs volume is mounted on the
+# compute node.
+#  (string value)
+#glusterfs_mount_point_base=$state_path/mnt
+
+#
+# Number of times to scan iSCSI target to find volume.
+#  (integer value)
+#num_iscsi_scan_tries=5
+
+#
+# The iSCSI transport iface to use to connect to target in case offload support
+# is desired.
+#
+# Default format is of the form <transport_name>.<hwaddress> where
+# <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and
+# <hwaddress> is the MAC address of the interface and can be generated via the
+# iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be
+# provided here with the actual transport name.
+#  (string value)
+# Deprecated group/name - [libvirt]/iscsi_transport
+#iscsi_iface=<None>
+
+#
+# Number of times to scan iSER target to find volume.
+#
+# iSER is a server network protocol that extends iSCSI protocol to use Remote
+# Direct Memory Access (RDMA). This option allows the user to specify the
+# maximum
+# number of scan attempts that can be made to find iSER volume.
+#  (integer value)
+#num_iser_scan_tries=5
+
+#
+# Use multipath connection of the iSER volume.
+#
+# iSER volumes can be connected as multipath devices. This will provide high
+# availability and fault tolerance.
+#  (boolean value)
+#iser_use_multipath=false
+
+#
+# The RADOS client name for accessing rbd(RADOS Block Devices) volumes.
+#
+# Libvirt will refer to this user when connecting and authenticating with
+# the Ceph RBD server.
+#  (string value)
+#rbd_user=<None>
+
+#
+# The libvirt UUID of the secret for the rbd_user volumes.
+#  (string value)
+#rbd_secret_uuid=<None>
+
+#
+# Directory where the NFS volume is mounted on the compute node.
+# The default is 'mnt' directory of the location where nova's Python module
+# is installed.
+#
+# NFS provides shared storage for the OpenStack Block Storage service.
+#
+# Possible values:
+#
+# * A string representing absolute path of mount point.
+#  (string value)
+#nfs_mount_point_base=$state_path/mnt
+
+#
+# Mount options passed to the NFS client. See section of the nfs man page
+# for details.
+#
+# Mount options controls the way the filesystem is mounted and how the
+# NFS client behaves when accessing files on this mount point.
+#
+# Possible values:
+#
+# * Any string representing mount options separated by commas.
+# * Example string: vers=3,lookupcache=pos
+#  (string value)
+#nfs_mount_options=<None>
+
+#
+# Directory where the Quobyte volume is mounted on the compute node.
+#
+# Nova supports Quobyte volume driver that enables storing Block Storage
+# service volumes on a Quobyte storage back end. This Option sepcifies the
+# path of the directory where Quobyte volume is mounted.
+#
+# Possible values:
+#
+# * A string representing absolute path of mount point.
+#  (string value)
+#quobyte_mount_point_base=$state_path/mnt
+
+# Path to a Quobyte Client configuration file. (string value)
+#quobyte_client_cfg=<None>
+
+#
+# Path or URL to Scality SOFS(Scale-Out File Server) configuration file.
+#
+# The Scality SOFS provides OpenStack users the option of storing their
+# data on a high capacity, replicated, highly available Scality Ring object
+# storage cluster.
+#  (string value)
+#scality_sofs_config=<None>
+
+#
+# Base dir where Scality SOFS shall be mounted.
+#
+# The Scality volume driver in Nova mounts SOFS and lets the hypervisor access
+# the volumes.
+#
+# Possible values:
+#
+# * $state_path/scality where state_path is a config option that specifies
+#   the top-level directory for maintaining nova's state or Any string
+#   containing the full directory path.
+#  (string value)
+#scality_sofs_mount_point=$state_path/scality
+
+#
+# Directory where the SMBFS shares are mounted on the compute node.
+#  (string value)
+#smbfs_mount_point_base=$state_path/mnt
+
+#
+# Mount options passed to the SMBFS client.
+#
+# Provide SMBFS options as a single string containing all parameters.
+# See mount.cifs man page for  details. Note that the libvirt-qemu ``uid``
+# and ``gid`` must be specified.
+#  (string value)
+#smbfs_mount_options =
+
+#
+# libvirt's transport method for remote file operations.
+#
+# Because libvirt cannot use RPC to copy files over network to/from other
+# compute nodes, other method must be used for:
+#
+# * creating directory on remote host
+# * creating file on remote host
+# * removing file from remote host
+# * copying file to remote host
+#  (string value)
+# Allowed values: ssh, rsync
+#remote_filesystem_transport=ssh
+
+#
+# Directory where the Virtuozzo Storage clusters are mounted on the compute
+# node.
+#
+# This option defines non-standard mountpoint for Vzstorage cluster.
+#
+# Related options:
+#
+# * vzstorage_mount_* group of parameters
+#  (string value)
+#vzstorage_mount_point_base=$state_path/mnt
+
+#
+# Mount owner user name.
+#
+# This option defines the owner user of Vzstorage cluster mountpoint.
+#
+# Related options:
+#
+# * vzstorage_mount_* group of parameters
+#  (string value)
+#vzstorage_mount_user=stack
+
+#
+# Mount owner group name.
+#
+# This option defines the owner group of Vzstorage cluster mountpoint.
+#
+# Related options:
+#
+# * vzstorage_mount_* group of parameters
+#  (string value)
+#vzstorage_mount_group=qemu
+
+#
+# Mount access mode.
+#
+# This option defines the access bits of Vzstorage cluster mountpoint,
+# in the format similar to one of chmod(1) utility, like this: 0770.
+# It consists of one to four digits ranging from 0 to 7, with missing
+# lead digits assumed to be 0's.
+#
+# Related options:
+#
+# * vzstorage_mount_* group of parameters
+#  (string value)
+#vzstorage_mount_perms=0770
+
+#
+# Path to vzstorage client log.
+#
+# This option defines the log of cluster operations,
+# it should include "%(cluster_name)s" template to separate
+# logs from multiple shares.
+#
+# Related options:
+#
+# * vzstorage_mount_opts may include more detailed logging options.
+#  (string value)
+#vzstorage_log_path=/var/log/pstorage/%(cluster_name)s/nova.log.gz
+
+#
+# Path to the SSD cache file.
+#
+# You can attach an SSD drive to a client and configure the drive to store
+# a local cache of frequently accessed data. By having a local cache on a
+# client's SSD drive, you can increase the overall cluster performance by
+# up to 10 and more times.
+# WARNING! There is a lot of SSD models which are not server grade and
+# may loose arbitrary set of data changes on power loss.
+# Such SSDs should not be used in Vstorage and are dangerous as may lead
+# to data corruptions and inconsistencies. Please consult with the manual
+# on which SSD models are known to be safe or verify it using
+# vstorage-hwflush-check(1) utility.
+#
+# This option defines the path which should include "%(cluster_name)s"
+# template to separate caches from multiple shares.
+#
+# Related options:
+#
+# * vzstorage_mount_opts may include more detailed cache options.
+#  (string value)
+#vzstorage_cache_path=<None>
+
+#
+# Extra mount options for pstorage-mount
+#
+# For full description of them, see
+# https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html
+# Format is a python string representation of arguments list, like:
+# "['-v', '-R', '500']"
+# Shouldn't include -c, -l, -C, -u, -g and -m as those have
+# explicit vzstorage_* options.
+#
+# Related options:
+#
+# * All other vzstorage_* options
+#  (list value)
+#vzstorage_mount_opts =
 
 
 [matchmaker_redis]
@@ -717,7 +6840,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
+#host=127.0.0.1
 
 # DEPRECATED: Use this port to connect to redis host. (port value)
 # Minimum value: 0
@@ -725,7 +6848,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
+#port=6379
 
 # DEPRECATED: Password for Redis server (optional). (string value)
 # This option is deprecated for removal.
@@ -741,16 +6864,484 @@
 #sentinel_hosts =
 
 # Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
+#sentinel_group_name=oslo-messaging-zeromq
 
 # Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
+#wait_timeout=2000
 
 # Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
+#check_timeout=20000
 
 # Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
+#socket_timeout=10000
+
+
+[metrics]
+#
+# Configuration options for metrics
+#
+# Options under this group allow to adjust how values assigned to metrics are
+# calculated.
+
+#
+# From nova.conf
+#
+
+#
+# When using metrics to weight the suitability of a host, you can use this
+# option
+# to change how the calculated weight influences the weight assigned to a host
+# as
+# follows:
+#
+# * >1.0: increases the effect of the metric on overall weight
+# * 1.0: no change to the calculated weight
+# * >0.0,<1.0: reduces the effect of the metric on overall weight
+# * 0.0: the metric value is ignored, and the value of the
+#   'weight_of_unavailable' option is returned instead
+# * >-1.0,<0.0: the effect is reduced and reversed
+# * -1.0: the effect is reversed
+# * <-1.0: the effect is increased proportionally and reversed
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the multipler
+#   ratio for this weigher.
+#
+# Related options:
+#
+# * weight_of_unavailable
+#  (floating point value)
+#weight_multiplier=1.0
+
+#
+# This setting specifies the metrics to be weighed and the relative ratios for
+# each metric. This should be a single string value, consisting of a series of
+# one or more 'name=ratio' pairs, separated by commas, where 'name' is the name
+# of the metric to be weighed, and 'ratio' is the relative weight for that
+# metric.
+#
+# Note that if the ratio is set to 0, the metric value is ignored, and instead
+# the weight will be set to the value of the 'weight_of_unavailable' option.
+#
+# As an example, let's consider the case where this option is set to:
+#
+#     ``name1=1.0, name2=-1.3``
+#
+# The final weight will be:
+#
+#     ``(name1.value * 1.0) + (name2.value * -1.3)``
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * A list of zero or more key/value pairs separated by commas, where the key is
+#   a string representing the name of a metric and the value is a numeric weight
+#   for that metric. If any value is set to 0, the value is ignored and the
+#   weight will be set to the value of the 'weight_of_unavailable' option.
+#
+# Related options:
+#
+# * weight_of_unavailable
+#  (list value)
+#weight_setting =
+
+#
+# This setting determines how any unavailable metrics are treated. If this
+# option
+# is set to True, any hosts for which a metric is unavailable will raise an
+# exception, so it is recommended to also use the MetricFilter to filter out
+# those hosts before weighing.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * True or False, where False ensures any metric being unavailable for a host
+#   will set the host weight to 'weight_of_unavailable'.
+#
+# Related options:
+#
+# * weight_of_unavailable
+#  (boolean value)
+#required=true
+
+#
+# When any of the following conditions are met, this value will be used in place
+# of any actual metric value:
+#
+# * One of the metrics named in 'weight_setting' is not available for a host,
+#   and the value of 'required' is False
+# * The ratio specified for a metric in 'weight_setting' is 0
+# * The 'weight_multiplier' option is set to 0
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect.
+#
+# Possible values:
+#
+# * An integer or float value, where the value corresponds to the multipler
+#   ratio for this weigher.
+#
+# Related options:
+#
+# * weight_setting
+# * required
+# * weight_multiplier
+#  (floating point value)
+#weight_of_unavailable=-10000.0
+
+
+[mks]
+#
+# Nova compute node uses WebMKS, a desktop sharing protocol to provide
+# instance console access to VM's created by VMware hypervisors.
+#
+# Related options:
+# Following options must be set to provide console access.
+# * mksproxy_base_url
+# * enabled
+
+#
+# From nova.conf
+#
+
+#
+# Location of MKS web console proxy
+#
+# The URL in the response points to a WebMKS proxy which
+# starts proxying between client and corresponding vCenter
+# server where instance runs. In order to use the web based
+# console access, WebMKS proxy should be installed and configured
+#
+# Possible values:
+#
+# * Must be a valid URL of the form:``http://host:port/``
+#  (string value)
+#mksproxy_base_url=http://127.0.0.1:6090/
+
+#
+# Enables graphical console access for virtual machines.
+#  (boolean value)
+#enabled=false
+
+
+[neutron]
+#
+# Configuration options for neutron (network connectivity as a service).
+
+#
+# From nova.conf
+#
+auth_type=v3password
+project_domain_name = Default
+user_domain_name = Default
+auth_url = http://10.167.4.10:35357/v3
+
+password=opnfv_secret
+project_name=service
+username=neutron
+region_name= RegionOne
+url=http://10.167.4.10:9696
+metadata_proxy_shared_secret=opnfv_secret
+service_metadata_proxy=True
+#
+# This option specifies the URL for connecting to Neutron.
+#
+# Possible values:
+#
+# * Any valid URL that points to the Neutron API service is appropriate here.
+#   This typically matches the URL returned for the 'network' service type
+#   from the Keystone service catalog.
+#  (uri value)
+#url=http://127.0.0.1:9696
+
+#
+# Region name for connecting to Neutron in admin context.
+#
+# This option is used in multi-region setups. If there are two Neutron
+# servers running in two regions in two different machines, then two
+# services need to be created in Keystone with two different regions and
+# associate corresponding endpoints to those services. When requests are made
+# to Keystone, the Keystone service uses the region_name to determine the
+# region the request is coming from.
+#  (string value)
+#region_name=RegionOne
+
+#
+# Specifies the name of an integration bridge interface used by OpenvSwitch.
+# This option is used only if Neutron does not specify the OVS bridge name.
+#
+# Possible values:
+#
+# * Any string representing OVS bridge name.
+#  (string value)
+#ovs_bridge=br-int
+
+#
+# Integer value representing the number of seconds to wait before querying
+# Neutron for extensions.  After this number of seconds the next time Nova
+# needs to create a resource in Neutron it will requery Neutron for the
+# extensions that it has loaded.  Setting value to 0 will refresh the
+# extensions with no wait.
+#  (integer value)
+# Minimum value: 0
+#extension_sync_interval=600
+
+#
+# When set to True, this option indicates that Neutron will be used to proxy
+# metadata requests and resolve instance ids. Otherwise, the instance ID must be
+# passed to the metadata request in the 'X-Instance-ID' header.
+#
+# Related options:
+#
+# * metadata_proxy_shared_secret
+#  (boolean value)
+#service_metadata_proxy=false
+
+#
+# This option holds the shared secret string used to validate proxy requests to
+# Neutron metadata requests. In order to be used, the
+# 'X-Metadata-Provider-Signature' header must be supplied in the request.
+#
+# Related options:
+#
+# * service_metadata_proxy
+#  (string value)
+#metadata_proxy_shared_secret =
+
+# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# (string value)
+#cafile=<None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile=<None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile=<None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure=false
+
+# Timeout value for http requests (integer value)
+#timeout=<None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [neutron]/auth_plugin
+#auth_type=<None>
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section=<None>
+
+# Authentication URL (string value)
+#auth_url=<None>
+
+# Domain ID to scope to (string value)
+#domain_id=<None>
+
+# Domain name to scope to (string value)
+#domain_name=<None>
+
+# Project ID to scope to (string value)
+#project_id=<None>
+
+# Project name to scope to (string value)
+#project_name=<None>
+
+# Domain ID containing project (string value)
+#project_domain_id=<None>
+
+# Domain name containing project (string value)
+#project_domain_name=<None>
+
+# Trust ID (string value)
+#trust_id=<None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id=<None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name=<None>
+
+# User ID (string value)
+#user_id=<None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user-name
+#username=<None>
+
+# User's domain id (string value)
+#user_domain_id=<None>
+
+# User's domain name (string value)
+#user_domain_name=<None>
+
+# User's password (string value)
+#password=<None>
+
+# Tenant ID (string value)
+#tenant_id=<None>
+
+# Tenant Name (string value)
+#tenant_name=<None>
+
+
+[notifications]
+#
+# Most of the actions in Nova which manipulate the system state generate
+# notifications which are posted to the messaging component (e.g. RabbitMQ) and
+# can be consumed by any service outside the Openstack. More technical details
+# at http://docs.openstack.org/developer/nova/notifications.html
+
+#
+# From nova.conf
+#
+
+#
+# If set, send compute.instance.update notifications on instance state
+# changes.
+#
+# Please refer to https://wiki.openstack.org/wiki/SystemUsageData for
+# additional information on notifications.
+#
+# Possible values:
+#
+# * None - no notifications
+# * "vm_state" - notifications on VM state changes
+# * "vm_and_task_state" - notifications on VM and task state changes
+#  (string value)
+# Allowed values: <None>, vm_state, vm_and_task_state
+# Deprecated group/name - [DEFAULT]/notify_on_state_change
+#notify_on_state_change=<None>
+
+#
+# If enabled, send api.fault notifications on caught exceptions in the
+# API service.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/notify_api_faults
+#notify_on_api_faults=false
+notify_on_api_faults=false
+
+# Default notification level for outgoing notifications. (string value)
+# Allowed values: DEBUG, INFO, WARN, ERROR, CRITICAL
+# Deprecated group/name - [DEFAULT]/default_notification_level
+#default_level=INFO
+
+#
+# Default publisher_id for outgoing notifications. If you consider routing
+# notifications using different publisher, change this value accordingly.
+#
+# Possible values:
+#
+# * Defaults to the IPv4 address of this host, but it can be any valid
+#   oslo.messaging publisher_id
+#
+# Related options:
+#
+# *  my_ip - IP address of this host
+#  (string value)
+# Deprecated group/name - [DEFAULT]/default_publisher_id
+#default_publisher_id=$my_ip
+
+#
+# Specifies which notification format shall be used by nova.
+#
+# The default value is fine for most deployments and rarely needs to be changed.
+# This value can be set to 'versioned' once the infrastructure moves closer to
+# consuming the newer format of notifications. After this occurs, this option
+# will be removed (possibly in the "P" release).
+#
+# Possible values:
+# * unversioned: Only the legacy unversioned notifications are emitted.
+# * versioned: Only the new versioned notifications are emitted.
+# * both: Both the legacy unversioned and the new versioned notifications are
+#   emitted. (Default)
+#
+# The list of versioned notifications is visible in
+# http://docs.openstack.org/developer/nova/notifications.html
+#  (string value)
+# Allowed values: unversioned, versioned, both
+# Deprecated group/name - [DEFAULT]/notification_format
+#notification_format=both
+
+
+[osapi_v21]
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# This option is a list of all of the v2.1 API extensions to never load.
+#
+# Possible values:
+#
+# * A list of strings, each being the alias of an extension that you do not
+#   wish to load.
+#
+# Related options:
+#
+# * enabled
+# * extensions_whitelist
+#  (list value)
+# This option is deprecated for removal since 12.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# API extensions are now part of the standard API. API extensions should be
+# disabled using policy, rather than via these configuration options.
+#extensions_blacklist =
+
+# DEPRECATED:
+# This is a list of extensions. If it is empty, then *all* extensions except
+# those specified in the extensions_blacklist option will be loaded. If it is
+# not
+# empty, then only those extensions in this list will be loaded, provided that
+# they are also not in the extensions_blacklist option.
+#
+# Possible values:
+#
+# * A list of strings, each being the alias of an extension that you wish to
+#   load, or an empty list, which indicates that all extensions are to be run.
+#
+# Related options:
+#
+# * enabled
+# * extensions_blacklist
+#  (list value)
+# This option is deprecated for removal since 12.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# API extensions are now part of the standard API. API extensions should be
+# disabled using policy, rather than via these configuration options.
+#extensions_whitelist =
+
+# DEPRECATED:
+# This option is a string representing a regular expression (regex) that matches
+# the project_id as contained in URLs. If not set, it will match normal UUIDs
+# created by keystone.
+#
+# Possible values:
+#
+# * A string representing any legal regular expression
+#  (string value)
+# This option is deprecated for removal since 13.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# Recent versions of nova constrain project IDs to hexadecimal characters and
+# dashes. If your installation uses IDs outside of this range, you should use
+# this option to provide your own regex and give you time to migrate offending
+# projects to valid IDs before the next release.
+#project_id_regex=<None>
 
 
 [oslo_concurrency]
@@ -761,14 +7352,15 @@
 
 # Enables or disables inter-process locks. (boolean value)
 # Deprecated group/name - [DEFAULT]/disable_process_locking
-#disable_process_locking = false
+#disable_process_locking=false
 
 # Directory to use for lock files.  For security, the specified directory should
 # only be writable by the user running the processes that need locking. Defaults
-# to environment variable OSLO_LOCK_PATH. If external locks are used, a lock
-# path must be set. (string value)
+# to environment variable OSLO_LOCK_PATH. If OSLO_LOCK_PATH is not set in the
+# environment, use the Python tempfile.gettempdir function to find a suitable
+# location. If external locks are used, a lock path must be set. (string value)
 # Deprecated group/name - [DEFAULT]/lock_path
-#lock_path = <None>
+lock_path=/var/lib/nova/tmp
 
 
 [oslo_messaging_amqp]
@@ -780,15 +7372,15 @@
 # Name for the AMQP container. must be globally unique. Defaults to a generated
 # UUID (string value)
 # Deprecated group/name - [amqp1]/container_name
-#container_name = <None>
+#container_name=<None>
 
 # Timeout for inactive connections (in seconds) (integer value)
 # Deprecated group/name - [amqp1]/idle_timeout
-#idle_timeout = 0
+#idle_timeout=0
 
 # Debug: dump AMQP frames to stdout (boolean value)
 # Deprecated group/name - [amqp1]/trace
-#trace = false
+#trace=false
 
 # CA certificate PEM file used to verify the server's certificate (string value)
 # Deprecated group/name - [amqp1]/ssl_ca_file
@@ -805,14 +7397,14 @@
 
 # Password for decrypting ssl_key_file (if encrypted) (string value)
 # Deprecated group/name - [amqp1]/ssl_key_password
-#ssl_key_password = <None>
+#ssl_key_password=<None>
 
 # DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
 # Deprecated group/name - [amqp1]/allow_insecure_clients
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
+#allow_insecure_clients=false
 
 # Space separated list of acceptable SASL mechanisms (string value)
 # Deprecated group/name - [amqp1]/sasl_mechanisms
@@ -836,46 +7428,46 @@
 
 # Seconds to pause before attempting to re-connect. (integer value)
 # Minimum value: 1
-#connection_retry_interval = 1
+#connection_retry_interval=1
 
 # Increase the connection_retry_interval by this many seconds after each
 # unsuccessful failover attempt. (integer value)
 # Minimum value: 0
-#connection_retry_backoff = 2
+#connection_retry_backoff=2
 
 # Maximum limit for connection_retry_interval + connection_retry_backoff
 # (integer value)
 # Minimum value: 1
-#connection_retry_interval_max = 30
+#connection_retry_interval_max=30
 
 # Time to pause between re-connecting an AMQP 1.0 link that failed due to a
 # recoverable error. (integer value)
 # Minimum value: 1
-#link_retry_delay = 10
+#link_retry_delay=10
 
 # The maximum number of attempts to re-send a reply message which failed due to
 # a recoverable error. (integer value)
 # Minimum value: -1
-#default_reply_retry = 0
+#default_reply_retry=0
 
 # The deadline for an rpc reply message delivery. (integer value)
 # Minimum value: 5
-#default_reply_timeout = 30
+#default_reply_timeout=30
 
 # The deadline for an rpc cast or call message delivery. Only used when caller
 # does not provide a timeout expiry. (integer value)
 # Minimum value: 5
-#default_send_timeout = 30
+#default_send_timeout=30
 
 # The deadline for a sent notification message delivery. Only used when caller
 # does not provide a timeout expiry. (integer value)
 # Minimum value: 5
-#default_notify_timeout = 30
+#default_notify_timeout=30
 
 # The duration to schedule a purge of idle sender links. Detach link after
 # expiry. (integer value)
 # Minimum value: 1
-#default_sender_link_timeout = 600
+#default_sender_link_timeout=600
 
 # Indicates the addressing mode used by the driver.
 # Permitted values:
@@ -883,39 +7475,39 @@
 # 'routable' - use routable addresses
 # 'dynamic'  - use legacy addresses if the message bus does not support routing
 # otherwise use routable addressing (string value)
-#addressing_mode = dynamic
+#addressing_mode=dynamic
 
 # address prefix used when sending to a specific server (string value)
 # Deprecated group/name - [amqp1]/server_request_prefix
-#server_request_prefix = exclusive
+#server_request_prefix=exclusive
 
 # address prefix used when broadcasting to all servers (string value)
 # Deprecated group/name - [amqp1]/broadcast_prefix
-#broadcast_prefix = broadcast
+#broadcast_prefix=broadcast
 
 # address prefix when sending to any server in group (string value)
 # Deprecated group/name - [amqp1]/group_request_prefix
-#group_request_prefix = unicast
+#group_request_prefix=unicast
 
 # Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
+#rpc_address_prefix=openstack.org/om/rpc
 
 # Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
+#notify_address_prefix=openstack.org/om/notify
 
 # Appended to the address prefix when sending a fanout message. Used by the
 # message bus to identify fanout messages. (string value)
-#multicast_address = multicast
+#multicast_address=multicast
 
 # Appended to the address prefix when sending to a particular RPC/Notification
 # server. Used by the message bus to identify messages sent to a single
 # destination. (string value)
-#unicast_address = unicast
+#unicast_address=unicast
 
 # Appended to the address prefix when sending to a group of consumers. Used by
 # the message bus to identify messages that should be delivered in a round-robin
 # fashion across consumers. (string value)
-#anycast_address = anycast
+#anycast_address=anycast
 
 # Exchange name used in notification addresses.
 # Exchange name resolution precedence:
@@ -923,7 +7515,7 @@
 # else default_notification_exchange if set
 # else control_exchange if set
 # else 'notify' (string value)
-#default_notification_exchange = <None>
+#default_notification_exchange=<None>
 
 # Exchange name used in RPC addresses.
 # Exchange name resolution precedence:
@@ -931,19 +7523,19 @@
 # else default_rpc_exchange if set
 # else control_exchange if set
 # else 'rpc' (string value)
-#default_rpc_exchange = <None>
+#default_rpc_exchange=<None>
 
 # Window size for incoming RPC Reply messages. (integer value)
 # Minimum value: 1
-#reply_link_credit = 200
+#reply_link_credit=200
 
 # Window size for incoming RPC Request messages (integer value)
 # Minimum value: 1
-#rpc_server_credit = 100
+#rpc_server_credit=100
 
 # Window size for incoming Notification messages (integer value)
 # Minimum value: 1
-#notify_server_credit = 100
+#notify_server_credit=100
 
 # Send messages of this type pre-settled.
 # Pre-settled messages will not receive acknowledgement
@@ -955,8 +7547,8 @@
 # 'rpc-cast' - Send RPC Casts pre-settled
 # 'notify'   - Send Notifications pre-settled
 #  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
+#pre_settled=rpc-cast
+#pre_settled=rpc-reply
 
 
 [oslo_messaging_kafka]
@@ -969,7 +7561,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
+#kafka_default_host=localhost
 
 # DEPRECATED: Default Kafka broker Port (port value)
 # Minimum value: 0
@@ -977,33 +7569,33 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
+#kafka_default_port=9092
 
 # Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
+#kafka_max_fetch_bytes=1048576
 
 # Default timeout(s) for Kafka consumers (integer value)
-#kafka_consumer_timeout = 1.0
+#kafka_consumer_timeout=1.0
 
 # Pool Size for Kafka Consumers (integer value)
-#pool_size = 10
+#pool_size=10
 
 # The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
+#conn_pool_min_size=2
 
 # The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
+#conn_pool_ttl=1200
 
 # Group id for Kafka consumer. Consumers in one group will coordinate message
 # consumption (string value)
-#consumer_group = oslo_messaging_consumer
+#consumer_group=oslo_messaging_consumer
 
 # Upper bound on the delay for KafkaProducer batching in seconds (floating point
 # value)
-#producer_batch_timeout = 0.0
+#producer_batch_timeout=0.0
 
 # Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
+#producer_batch_size=16384
 
 
 [oslo_messaging_notifications]
@@ -1016,16 +7608,17 @@
 # messagingv2, routing, log, test, noop (multi valued)
 # Deprecated group/name - [DEFAULT]/notification_driver
 #driver =
+driver = messagingv2
 
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
 # Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
+#transport_url=<None>
 
 # AMQP topic used for OpenStack notifications. (list value)
 # Deprecated group/name - [rpc_notifier2]/topics
 # Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
+#topics=notifications
 
 
 [oslo_messaging_rabbit]
@@ -1033,15 +7626,17 @@
 #
 # From oslo.messaging
 #
-
+rabbit_retry_interval = 1
+rabbit_retry_backoff = 2
+rpc_conn_pool_size = 300
 # Use durable queues in AMQP. (boolean value)
 # Deprecated group/name - [DEFAULT]/amqp_durable_queues
 # Deprecated group/name - [DEFAULT]/rabbit_durable_queues
-#amqp_durable_queues = false
+#amqp_durable_queues=false
 
 # Auto-delete queues in AMQP. (boolean value)
 # Deprecated group/name - [DEFAULT]/amqp_auto_delete
-#amqp_auto_delete = false
+#amqp_auto_delete=false
 
 # SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
 # SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
@@ -1064,22 +7659,22 @@
 # How long to wait before reconnecting in response to an AMQP consumer cancel
 # notification. (floating point value)
 # Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
-#kombu_reconnect_delay = 1.0
+#kombu_reconnect_delay=1.0
 
 # EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
 # be used. This option may not be available in future versions. (string value)
-#kombu_compression = <None>
+#kombu_compression=<None>
 
 # How long to wait a missing client before abandoning to send it its replies.
 # This value should not be longer than rpc_response_timeout. (integer value)
 # Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
-#kombu_missing_consumer_retry_timeout = 60
+#kombu_missing_consumer_retry_timeout=60
 
 # Determines how the next RabbitMQ node is chosen in case the one we are
 # currently connected to becomes unavailable. Takes effect only if more than one
 # RabbitMQ node is provided in config. (string value)
 # Allowed values: round-robin, shuffle
-#kombu_failover_strategy = round-robin
+#kombu_failover_strategy=round-robin
 
 # DEPRECATED: The RabbitMQ broker address where a single node is used. (string
 # value)
@@ -1087,7 +7682,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_host = localhost
+#rabbit_host=localhost
 
 # DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
 # Minimum value: 0
@@ -1096,63 +7691,63 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_port = 5672
+#rabbit_port=5672
 
 # DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
 # Deprecated group/name - [DEFAULT]/rabbit_hosts
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_hosts = $rabbit_host:$rabbit_port
+#rabbit_hosts=$rabbit_host:$rabbit_port
 
 # Connect over SSL for RabbitMQ. (boolean value)
 # Deprecated group/name - [DEFAULT]/rabbit_use_ssl
-#rabbit_use_ssl = false
+#rabbit_use_ssl=false
 
 # DEPRECATED: The RabbitMQ userid. (string value)
 # Deprecated group/name - [DEFAULT]/rabbit_userid
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_userid = guest
+#rabbit_userid=guest
 
 # DEPRECATED: The RabbitMQ password. (string value)
 # Deprecated group/name - [DEFAULT]/rabbit_password
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_password = guest
+#rabbit_password=guest
 
 # The RabbitMQ login method. (string value)
 # Allowed values: PLAIN, AMQPLAIN, RABBIT-CR-DEMO
 # Deprecated group/name - [DEFAULT]/rabbit_login_method
-#rabbit_login_method = AMQPLAIN
+#rabbit_login_method=AMQPLAIN
 
 # DEPRECATED: The RabbitMQ virtual host. (string value)
 # Deprecated group/name - [DEFAULT]/rabbit_virtual_host
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_virtual_host = /
+#rabbit_virtual_host=/
 
 # How frequently to retry connecting with RabbitMQ. (integer value)
-#rabbit_retry_interval = 1
+#rabbit_retry_interval=1
 
 # How long to backoff for between retries when connecting to RabbitMQ. (integer
 # value)
 # Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
-#rabbit_retry_backoff = 2
+#rabbit_retry_backoff=2
 
 # Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
 # (integer value)
-#rabbit_interval_max = 30
+#rabbit_interval_max=30
 
 # DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
 # (infinite retry count). (integer value)
 # Deprecated group/name - [DEFAULT]/rabbit_max_retries
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
-#rabbit_max_retries = 0
+#rabbit_max_retries=0
 
 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
 # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
@@ -1161,135 +7756,136 @@
 # names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
 # '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
 # Deprecated group/name - [DEFAULT]/rabbit_ha_queues
-#rabbit_ha_queues = false
+#rabbit_ha_queues=false
 
 # Positive integer representing duration in seconds for queue TTL (x-expires).
 # Queues which are unused for the duration of the TTL are automatically deleted.
 # The parameter affects only reply and fanout queues. (integer value)
 # Minimum value: 1
-#rabbit_transient_queues_ttl = 1800
+#rabbit_transient_queues_ttl=1800
 
 # Specifies the number of messages to prefetch. Setting to zero allows unlimited
 # messages. (integer value)
-#rabbit_qos_prefetch_count = 64
+#rabbit_qos_prefetch_count=0
 
 # Number of seconds after which the Rabbit broker is considered down if
 # heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
 # value)
-#heartbeat_timeout_threshold = 60
+#heartbeat_timeout_threshold=60
 
 # How often times during the heartbeat_timeout_threshold we check the heartbeat.
 # (integer value)
-#heartbeat_rate = 2
+#heartbeat_rate=2
 
 # Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
 # Deprecated group/name - [DEFAULT]/fake_rabbit
-#fake_rabbit = false
+#fake_rabbit=false
 
 # Maximum number of channels to allow (integer value)
-#channel_max = <None>
+#channel_max=<None>
 
 # The maximum byte size for an AMQP frame (integer value)
-#frame_max = <None>
+#frame_max=<None>
 
 # How often to send heartbeats for consumer's connections (integer value)
-#heartbeat_interval = 3
+#heartbeat_interval=3
 
 # Enable SSL (boolean value)
-#ssl = <None>
+#ssl=<None>
 
 # Arguments passed to ssl.wrap_socket (dict value)
-#ssl_options = <None>
+#ssl_options=<None>
 
 # Set socket timeout in seconds for connection's socket (floating point value)
-#socket_timeout = 0.25
+#socket_timeout=0.25
 
 # Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value)
-#tcp_user_timeout = 0.25
+#tcp_user_timeout=0.25
 
 # Set delay for reconnection to some host which has connection error (floating
 # point value)
-#host_connection_reconnect_delay = 0.25
+#host_connection_reconnect_delay=0.25
 
 # Connection factory implementation (string value)
 # Allowed values: new, single, read_write
-#connection_factory = single
+#connection_factory=single
 
 # Maximum number of connections to keep queued. (integer value)
-#pool_max_size = 30
+#pool_max_size=30
 
 # Maximum number of connections to create above `pool_max_size`. (integer value)
-#pool_max_overflow = 0
+#pool_max_overflow=0
 
 # Default number of seconds to wait for a connections to available (integer
 # value)
-#pool_timeout = 30
+#pool_timeout=30
 
 # Lifetime of a connection (since creation) in seconds or None for no recycling.
 # Expired connections are closed on acquire. (integer value)
-#pool_recycle = 600
+#pool_recycle=600
 
 # Threshold at which inactive (since release) connections are considered stale
 # in seconds or None for no staleness. Stale connections are closed on acquire.
 # (integer value)
-#pool_stale = 60
+#pool_stale=60
 
 # Default serialization mechanism for serializing/deserializing
 # outgoing/incoming messages (string value)
 # Allowed values: json, msgpack
-#default_serializer_type = json
+#default_serializer_type=json
 
 # Persist notification messages. (boolean value)
-#notification_persistence = false
+#notification_persistence=false
 
 # Exchange name for sending notifications (string value)
-#default_notification_exchange = ${control_exchange}_notification
+#default_notification_exchange=${control_exchange}_notification
 
 # Max number of not acknowledged message which RabbitMQ can send to notification
 # listener. (integer value)
-#notification_listener_prefetch_count = 100
+#notification_listener_prefetch_count=100
 
 # Reconnecting retry count in case of connectivity problem during sending
 # notification, -1 means infinite retry. (integer value)
-#default_notification_retry_attempts = -1
+#default_notification_retry_attempts=-1
 
 # Reconnecting retry delay in case of connectivity problem during sending
 # notification message (floating point value)
-#notification_retry_delay = 0.25
+#notification_retry_delay=0.25
 
 # Time to live for rpc queues without consumers in seconds. (integer value)
-#rpc_queue_expiration = 60
+#rpc_queue_expiration=60
 
 # Exchange name for sending RPC messages (string value)
-#default_rpc_exchange = ${control_exchange}_rpc
+#default_rpc_exchange=${control_exchange}_rpc
 
 # Exchange name for receiving RPC replies (string value)
-#rpc_reply_exchange = ${control_exchange}_rpc_reply
+#rpc_reply_exchange=${control_exchange}_rpc_reply
 
 # Max number of not acknowledged message which RabbitMQ can send to rpc
 # listener. (integer value)
-#rpc_listener_prefetch_count = 100
+#rpc_listener_prefetch_count=100
 
 # Max number of not acknowledged message which RabbitMQ can send to rpc reply
 # listener. (integer value)
-#rpc_reply_listener_prefetch_count = 100
+#rpc_reply_listener_prefetch_count=100
 
 # Reconnecting retry count in case of connectivity problem during sending reply.
 # -1 means infinite retry during rpc_timeout (integer value)
-#rpc_reply_retry_attempts = -1
+#rpc_reply_retry_attempts=-1
 
 # Reconnecting retry delay in case of connectivity problem during sending reply.
 # (floating point value)
-#rpc_reply_retry_delay = 0.25
+#rpc_reply_retry_delay=0.25
 
 # Reconnecting retry count in case of connectivity problem during sending RPC
 # message, -1 means infinite retry. If actual retry attempts in not 0 the rpc
 # request could be processed more than one time (integer value)
-#default_rpc_retry_attempts = -1
+#default_rpc_retry_attempts=-1
 
 # Reconnecting retry delay in case of connectivity problem during sending RPC
 # message (floating point value)
-#rpc_retry_delay = 0.25
+#rpc_retry_delay=0.25
+
 
 
 [oslo_messaging_zmq]
@@ -1301,30 +7897,30 @@
 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
 # The "host" option should point or resolve to this address. (string value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
-#rpc_zmq_bind_address = *
+#rpc_zmq_bind_address=*
 
 # MatchMaker driver. (string value)
 # Allowed values: redis, sentinel, dummy
 # Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
-#rpc_zmq_matchmaker = redis
+#rpc_zmq_matchmaker=redis
 
 # Number of ZeroMQ contexts, defaults to 1. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
-#rpc_zmq_contexts = 1
+#rpc_zmq_contexts=1
 
 # Maximum number of ingress messages to locally buffer per topic. Default is
 # unlimited. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
-#rpc_zmq_topic_backlog = <None>
+#rpc_zmq_topic_backlog=<None>
 
 # Directory for holding IPC sockets. (string value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
-#rpc_zmq_ipc_dir = /var/run/openstack
+#rpc_zmq_ipc_dir=/var/run/openstack
 
 # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
 # "host" option, if running Nova. (string value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_host
-#rpc_zmq_host = localhost
+#rpc_zmq_host=localhost
 
 # Number of seconds to wait before all pending messages will be sent after
 # closing a socket. The default value of -1 specifies an infinite linger period.
@@ -1332,119 +7928,120 @@
 # immediately when the socket is closed. Positive values specify an upper bound
 # for the linger period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
+#zmq_linger=-1
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_poll_timeout
-#rpc_poll_timeout = 1
+#rpc_poll_timeout=1
 
 # Expiration timeout in seconds of a name service record about existing target (
 # < 0 means no timeout). (integer value)
 # Deprecated group/name - [DEFAULT]/zmq_target_expire
-#zmq_target_expire = 300
+#zmq_target_expire=300
 
 # Update period in seconds of a name service record about existing target.
 # (integer value)
 # Deprecated group/name - [DEFAULT]/zmq_target_update
-#zmq_target_update = 180
+#zmq_target_update=180
 
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
 # Deprecated group/name - [DEFAULT]/use_pub_sub
-#use_pub_sub = false
+#use_pub_sub=false
 
 # Use ROUTER remote proxy. (boolean value)
 # Deprecated group/name - [DEFAULT]/use_router_proxy
-#use_router_proxy = false
+#use_router_proxy=false
 
 # This option makes direct connections dynamic or static. It makes sense only
 # with use_router_proxy=False which means to use direct connections for direct
 # message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
+#use_dynamic_connections=false
 
 # How many additional connections to a host will be made for failover reasons.
 # This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
+#zmq_failover_connections=2
 
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
 # Maximum value: 65535
 # Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
-#rpc_zmq_min_port = 49153
+#rpc_zmq_min_port=49153
 
 # Maximal port number for random ports range. (integer value)
 # Minimum value: 1
 # Maximum value: 65536
 # Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
-#rpc_zmq_max_port = 65536
+#rpc_zmq_max_port=65536
 
 # Number of retries to find free port number before fail with ZMQBindError.
 # (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
-#rpc_zmq_bind_port_retries = 100
+#rpc_zmq_bind_port_retries=100
 
 # Default serialization mechanism for serializing/deserializing
 # outgoing/incoming messages (string value)
 # Allowed values: json, msgpack
 # Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
-#rpc_zmq_serialization = json
+#rpc_zmq_serialization=json
 
 # This option configures round-robin mode in zmq socket. True means not keeping
 # a queue when server side disconnects. False means to keep queue and messages
 # even if server is disconnected, when the server appears we send all
 # accumulated messages to it. (boolean value)
-#zmq_immediate = true
+#zmq_immediate=true
 
 # Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
 # other negative value) means to skip any overrides and leave it to OS default;
 # 0 and 1 (or any other positive value) mean to disable and enable the option
 # respectively. (integer value)
-#zmq_tcp_keepalive = -1
+#zmq_tcp_keepalive=-1
 
 # The duration between two keepalive transmissions in idle condition. The unit
 # is platform dependent, for example, seconds in Linux, milliseconds in Windows
 # etc. The default value of -1 (or any other negative value and 0) means to skip
 # any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
+#zmq_tcp_keepalive_idle=-1
 
 # The number of retransmissions to be carried out before declaring that remote
 # end is not available. The default value of -1 (or any other negative value and
 # 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
+#zmq_tcp_keepalive_cnt=-1
 
 # The duration between two successive keepalive retransmissions, if
 # acknowledgement to the previous keepalive transmission is not received. The
 # unit is platform dependent, for example, seconds in Linux, milliseconds in
 # Windows etc. The default value of -1 (or any other negative value and 0) means
 # to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
+#zmq_tcp_keepalive_intvl=-1
 
 # Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
+#rpc_thread_pool_size=100
+rpc_thread_pool_size=70
 
 # Expiration timeout in seconds of a sent/received message after which it is not
 # tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
+#rpc_message_ttl=300
 
 # Wait for message acknowledgements from receivers. This mechanism works only
 # via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
+#rpc_use_acks=false
 
 # Number of seconds to wait for an ack from a cast/call. After each retry
 # attempt this timeout is multiplied by some specified multiplier. (integer
 # value)
-#rpc_ack_timeout_base = 15
+#rpc_ack_timeout_base=15
 
 # Number to multiply base ack timeout by after each retry attempt. (integer
 # value)
-#rpc_ack_timeout_multiplier = 2
+#rpc_ack_timeout_multiplier=2
 
 # Default number of message sending attempts in case of any problems occurred:
 # positive value N means at most N retries, 0 means no retries, None or -1 (or
 # any other negative values) mean to retry forever. This option is used only if
 # acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
+#rpc_retry_attempts=3
 
 # List of publisher hosts SubConsumer can subscribe on. This option has higher
 # priority then the default publishers list taken from the matchmaker. (list
@@ -1461,19 +8058,19 @@
 # The maximum body size for each  request, in bytes. (integer value)
 # Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
 # Deprecated group/name - [DEFAULT]/max_request_body_size
-#max_request_body_size = 114688
+#max_request_body_size=114688
 
 # DEPRECATED: The HTTP Header that will be used to determine what the original
 # request protocol scheme was, even if it was hidden by a SSL termination proxy.
 # (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
-#secure_proxy_ssl_header = X-Forwarded-Proto
+#secure_proxy_ssl_header=X-Forwarded-Proto
 
 # Whether the application is behind a proxy or not. This determines if the
 # middleware should parse the headers or not. (boolean value)
-#enable_proxy_headers_parsing = false
-
+#enable_proxy_headers_parsing=false
+enable_proxy_headers_parsing=True
 
 [oslo_policy]
 
@@ -1483,11 +8080,11 @@
 
 # The file that defines policies. (string value)
 # Deprecated group/name - [DEFAULT]/policy_file
-#policy_file = policy.json
+#policy_file=policy.json
 
 # Default rule. Enforced when a requested rule is not found. (string value)
 # Deprecated group/name - [DEFAULT]/policy_default_rule
-#policy_default_rule = default
+#policy_default_rule=default
 
 # Directories where policy configuration files are stored. They can be relative
 # to any directory in the search path defined by the config_dir option, or
@@ -1495,4 +8092,2767 @@
 # directories to be searched.  Missing or empty directories are ignored. (multi
 # valued)
 # Deprecated group/name - [DEFAULT]/policy_dirs
-#policy_dirs = policy.d
+#policy_dirs=policy.d
+
+
+[pci]
+
+#
+# From nova.conf
+#
+
+#
+# An alias for a PCI passthrough device requirement.
+#
+# This allows users to specify the alias in the extra_spec for a flavor, without
+# needing to repeat all the PCI property requirements.
+#
+# Possible Values:
+#
+# * A list of JSON values which describe the aliases. For example:
+#
+#     alias = {
+#       "name": "QuickAssist",
+#       "product_id": "0443",
+#       "vendor_id": "8086",
+#       "device_type": "type-PCI"
+#     }
+#
+#   defines an alias for the Intel QuickAssist card. (multi valued). Valid key
+#   values are :
+#
+#   * "name": Name of the PCI alias.
+#   * "product_id": Product ID of the device in hexadecimal.
+#   * "vendor_id": Vendor ID of the device in hexadecimal.
+#   * "device_type": Type of PCI device. Valid values are: "type-PCI",
+#     "type-PF" and "type-VF".
+#  (multi valued)
+# Deprecated group/name - [DEFAULT]/pci_alias
+#alias =
+
+#
+# White list of PCI devices available to VMs.
+#
+# Possible values:
+#
+# * A JSON dictionary which describe a whitelisted PCI device. It should take
+#   the following format:
+#
+#     ["vendor_id": "<id>",] ["product_id": "<id>",]
+#     ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
+#      "devname": "<name>",]
+#     {"<tag>": "<tag_value>",}
+#
+#   Where '[' indicates zero or one occurrences, '{' indicates zero or multiple
+#   occurrences, and '|' mutually exclusive options. Note that any missing
+#   fields are automatically wildcarded.
+#
+#   Valid key values are :
+#
+#   * "vendor_id": Vendor ID of the device in hexadecimal.
+#   * "product_id": Product ID of the device in hexadecimal.
+#   * "address": PCI address of the device.
+#   * "devname": Device name of the device (for e.g. interface name). Not all
+#     PCI devices have a name.
+#   * "<tag>": Additional <tag> and <tag_value> used for matching PCI devices.
+#     Supported <tag>: "physical_network".
+#
+#   The address key supports traditional glob style and regular expression
+#   syntax. Valid examples are:
+#
+#     passthrough_whitelist = {"devname":"eth0",
+#                              "physical_network":"physnet"}
+#     passthrough_whitelist = {"address":"*:0a:00.*"}
+#     passthrough_whitelist = {"address":":0a:00.",
+#                              "physical_network":"physnet1"}
+#     passthrough_whitelist = {"vendor_id":"1137",
+#                              "product_id":"0071"}
+#     passthrough_whitelist = {"vendor_id":"1137",
+#                              "product_id":"0071",
+#                              "address": "0000:0a:00.1",
+#                              "physical_network":"physnet1"}
+#     passthrough_whitelist = {"address":{"domain": ".*",
+#                                         "bus": "02", "slot": "01",
+#                                         "function": "[2-7]"},
+#                              "physical_network":"physnet1"}
+#     passthrough_whitelist = {"address":{"domain": ".*",
+#                                         "bus": "02", "slot": "0[1-2]",
+#                                         "function": ".*"},
+#                              "physical_network":"physnet1"}
+#
+#   The following are invalid, as they specify mutually exclusive options:
+#
+#     passthrough_whitelist = {"devname":"eth0",
+#                              "physical_network":"physnet",
+#                              "address":"*:0a:00.*"}
+#
+# * A JSON list of JSON dictionaries corresponding to the above format. For
+#   example:
+#
+#     passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"},
+#                              {"product_id":"0002", "vendor_id":"8086"}]
+#  (multi valued)
+# Deprecated group/name - [DEFAULT]/pci_passthrough_whitelist
+#passthrough_whitelist =
+
+
+[placement]
+
+#
+# From nova.conf
+#
+
+#
+# Region name of this node. This is used when picking the URL in the service
+# catalog.
+#
+# Possible values:
+#
+# * Any string representing region name
+#  (string value)
+os_region_name = RegionOne
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = nova
+password = opnfv_secret
+auth_url=http://10.167.4.10:35357/v3
+os_interface = internal
+
+#
+# Endpoint interface for this node. This is used when picking the URL in the
+# service catalog.
+#  (string value)
+#os_interface=<None>
+
+# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# (string value)
+#cafile=<None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile=<None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile=<None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure=false
+
+# Timeout value for http requests (integer value)
+#timeout=<None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [placement]/auth_plugin
+#auth_type=<None>
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section=<None>
+
+# Authentication URL (string value)
+#auth_url=<None>
+
+# Domain ID to scope to (string value)
+#domain_id=<None>
+
+# Domain name to scope to (string value)
+#domain_name=<None>
+
+# Project ID to scope to (string value)
+#project_id=<None>
+
+# Project name to scope to (string value)
+#project_name=<None>
+
+# Domain ID containing project (string value)
+#project_domain_id=<None>
+
+# Domain name containing project (string value)
+#project_domain_name=<None>
+
+# Trust ID (string value)
+#trust_id=<None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id=<None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name=<None>
+
+# User ID (string value)
+#user_id=<None>
+
+# Username (string value)
+# Deprecated group/name - [placement]/user-name
+#username=<None>
+
+# User's domain id (string value)
+#user_domain_id=<None>
+
+# User's domain name (string value)
+#user_domain_name=<None>
+
+# User's password (string value)
+#password=<None>
+
+# Tenant ID (string value)
+#tenant_id=<None>
+
+# Tenant Name (string value)
+#tenant_name=<None>
+
+
+[quota]
+#
+# Quota options allow to manage quotas in openstack deployment.
+
+#
+# From nova.conf
+#
+
+#
+# The number of instances allowed per project.
+#
+# Possible Values
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_instances
+#instances=10
+
+#
+# The number of instance cores or vCPUs allowed per project.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_cores
+#cores=20
+
+#
+# The number of megabytes of instance RAM allowed per project.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_ram
+#ram=51200
+
+# DEPRECATED:
+# The number of floating IPs allowed per project.
+#
+# Floating IPs are not allocated to instances by default. Users need to select
+# them from the pool configured by the OpenStack administrator to attach to
+# their
+# instances.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_floating_ips
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#floating_ips=10
+
+# DEPRECATED:
+# The number of fixed IPs allowed per project.
+#
+# Unlike floating IPs, fixed IPs are allocated dynamically by the network
+# component when instances boot up.  This quota value should be at least the
+# number of instances allowed
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_fixed_ips
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#fixed_ips=-1
+
+#
+# The number of metadata items allowed per instance.
+#
+# Users can associate metadata with an instance during instance creation. This
+# metadata takes the form of key-value pairs.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_metadata_items
+#metadata_items=128
+
+#
+# The number of injected files allowed.
+#
+# File injection allows users to customize the personality of an instance by
+# injecting data into it upon boot. Only text file injection is permitted:
+# binary
+# or ZIP files are not accepted. During file injection, any existing files that
+# match specified files are renamed to include ``.bak`` extension appended with
+# a
+# timestamp.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_injected_files
+#injected_files=5
+
+#
+# The number of bytes allowed per injected file.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_injected_file_content_bytes
+#injected_file_content_bytes=10240
+
+#
+# The maximum allowed injected file path length.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_injected_file_path_length
+#injected_file_path_length=255
+
+# DEPRECATED:
+# The number of security groups per project.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_security_groups
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#security_groups=10
+
+# DEPRECATED:
+# The number of security rules per security group.
+#
+# The associated rules in each security group control the traffic to instances
+# in
+# the group.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_security_group_rules
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#security_group_rules=20
+
+#
+# The maximum number of key pairs allowed per user.
+#
+# Users can create at least one key pair for each project and use the key pair
+# for multiple instances that belong to that project.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_key_pairs
+#key_pairs=100
+
+#
+# The maxiumum number of server groups per project.
+#
+# Server groups are used to control the affinity and anti-affinity scheduling
+# policy for a group of servers or instances. Reducing the quota will not affect
+# any existing group, but new servers will not be allowed into groups that have
+# become over quota.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_server_groups
+#server_groups=10
+
+#
+# The maximum number of servers per server group.
+#
+# Possible values:
+#
+# * A positive integer or 0.
+# * -1 to disable the quota.
+#  (integer value)
+# Minimum value: -1
+# Deprecated group/name - [DEFAULT]/quota_server_group_members
+#server_group_members=10
+
+#
+# The number of seconds until a reservation expires.
+#
+# This quota represents the time period for invalidating quota reservations.
+#  (integer value)
+# Deprecated group/name - [DEFAULT]/reservation_expire
+#reservation_expire=86400
+reservation_expire=86400
+
+#
+# The count of reservations until usage is refreshed.
+#
+# This defaults to 0 (off) to avoid additional load but it is useful to turn on
+# to help keep quota usage up-to-date and reduce the impact of out of sync usage
+# issues.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/until_refresh
+#until_refresh=0
+until_refresh=0
+
+#
+# The number of seconds between subsequent usage refreshes.
+#
+# This defaults to 0 (off) to avoid additional load but it is useful to turn on
+# to help keep quota usage up-to-date and reduce the impact of out of sync usage
+# issues. Note that quotas are not updated on a periodic task, they will update
+# on a new reservation if max_age has passed since the last reservation.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/max_age
+#max_age=0
+
+# DEPRECATED:
+# The quota enforcer driver.
+#
+# Provides abstraction for quota checks. Users can configure a specific
+# driver to use for quota checks.
+#
+# Possible values:
+#
+# * nova.quota.DbQuotaDriver (default) or any string representing fully
+#   qualified class name.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/quota_driver
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+#driver=nova.quota.DbQuotaDriver
+
+
+[rdp]
+#
+# Options under this group enable and configure Remote Desktop Protocol (
+# RDP) related features.
+#
+# This group is only relevant to Hyper-V users.
+
+#
+# From nova.conf
+#
+
+#
+# Enable Remote Desktop Protocol (RDP) related features.
+#
+# Hyper-V, unlike the majority of the hypervisors employed on Nova compute
+# nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to
+# provide instance console access. This option enables RDP for graphical
+# console access for virtual machines created by Hyper-V.
+#
+# **Note:** RDP should only be enabled on compute nodes that support the Hyper-V
+# virtualization platform.
+#
+# Related options:
+#
+# * ``compute_driver``: Must be hyperv.
+#
+#  (boolean value)
+#enabled=false
+
+#
+# The URL an end user would use to connect to the RDP HTML5 console proxy.
+# The console proxy service is called with this token-embedded URL and
+# establishes the connection to the proper instance.
+#
+# An RDP HTML5 console proxy service will need to be configured to listen on the
+# address configured here. Typically the console proxy service would be run on a
+# controller node. The localhost address used as default would only work in a
+# single node environment i.e. devstack.
+#
+# An RDP HTML5 proxy allows a user to access via the web the text or graphical
+# console of any Windows server or workstation using RDP. RDP HTML5 console
+# proxy services include FreeRDP, wsgate.
+# See https://github.com/FreeRDP/FreeRDP-WebConnect
+#
+# Possible values:
+#
+# * <scheme>://<ip-address>:<port-number>/
+#
+#   The scheme must be identical to the scheme configured for the RDP HTML5
+#   console proxy service.
+#
+#   The IP address must be identical to the address on which the RDP HTML5
+#   console proxy service is listening.
+#
+#   The port must be identical to the port on which the RDP HTML5 console proxy
+#   service is listening.
+#
+# Related options:
+#
+# * ``rdp.enabled``: Must be set to ``True`` for ``html5_proxy_base_url`` to be
+#   effective.
+#  (string value)
+#html5_proxy_base_url=http://127.0.0.1:6083/
+
+
+[remote_debug]
+
+#
+# From nova.conf
+#
+
+#
+# Debug host (IP or name) to connect to. This command line parameter is used
+# when
+# you want to connect to a nova service via a debugger running on a different
+# host.
+#
+# Note that using the remote debug option changes how Nova uses the eventlet
+# library to support async IO. This could result in failures that do not occur
+# under normal operation. Use at your own risk.
+#
+# Possible Values:
+#
+#    * IP address of a remote host as a command line parameter
+#      to a nova service. For Example:
+#
+#     /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
+#     --remote_debug-host <IP address where the debugger is running>
+#  (string value)
+#host=<None>
+
+#
+# Debug port to connect to. This command line parameter allows you to specify
+# the port you want to use to connect to a nova service via a debugger running
+# on different host.
+#
+# Note that using the remote debug option changes how Nova uses the eventlet
+# library to support async IO. This could result in failures that do not occur
+# under normal operation. Use at your own risk.
+#
+# Possible Values:
+#
+#    * Port number you want to use as a command line parameter
+#      to a nova service. For Example:
+#
+#     /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
+#     --remote_debug-host <IP address where the debugger is running>
+#     --remote_debug-port <port> it's listening on>.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#port=<None>
+
+
+[scheduler]
+
+#
+# From nova.conf
+#
+
+#
+# The scheduler host manager to use.
+#
+# The host manager manages the in-memory picture of the hosts that the scheduler
+# uses. The options values are chosen from the entry points under the namespace
+# 'nova.scheduler.host_manager' in 'setup.cfg'.
+#  (string value)
+# Allowed values: host_manager, ironic_host_manager
+# Deprecated group/name - [DEFAULT]/scheduler_host_manager
+#host_manager=host_manager
+host_manager=host_manager
+
+#
+# The class of the driver used by the scheduler.
+#
+# The options are chosen from the entry points under the namespace
+# 'nova.scheduler.driver' in 'setup.cfg'.
+#
+# Possible values:
+#
+# * A string, where the string corresponds to the class name of a scheduler
+#   driver. There are a number of options available:
+# ** 'caching_scheduler', which aggressively caches the system state for better
+#    individual scheduler performance at the risk of more retries when running
+#    multiple schedulers
+# ** 'chance_scheduler', which simply picks a host at random
+# ** 'fake_scheduler', which is used for testing
+# ** A custom scheduler driver. In this case, you will be responsible for
+#    creating and maintaining the entry point in your 'setup.cfg' file
+#  (string value)
+# Allowed values: filter_scheduler, caching_scheduler, chance_scheduler, fake_scheduler
+# Deprecated group/name - [DEFAULT]/scheduler_driver
+#driver=filter_scheduler
+driver=filter_scheduler
+
+#
+# Periodic task interval.
+#
+# This value controls how often (in seconds) to run periodic tasks in the
+# scheduler. The specific tasks that are run for each period are determined by
+# the particular scheduler being used.
+#
+# If this is larger than the nova-service 'service_down_time' setting, Nova may
+# report the scheduler service as down. This is because the scheduler driver is
+# responsible for sending a heartbeat and it will only do that as often as this
+# option allows. As each scheduler can work a little differently than the
+# others,
+# be sure to test this with your selected scheduler.
+#
+# Possible values:
+#
+# * An integer, where the integer corresponds to periodic task interval in
+#   seconds. 0 uses the default interval (60 seconds). A negative value disables
+#   periodic tasks.
+#
+# Related options:
+#
+# * ``nova-service service_down_time``
+#  (integer value)
+# Deprecated group/name - [DEFAULT]/scheduler_driver_task_period
+#periodic_task_interval=60
+
+#
+# Maximum number of schedule attempts for a chosen host.
+#
+# This is the maximum number of attempts that will be made to schedule an
+# instance before it is assumed that the failures aren't due to normal
+# occasional
+# race conflicts, but rather some other problem. When this is reached a
+# MaxRetriesExceeded exception is raised, and the instance is set to an error
+# state.
+#
+# Possible values:
+#
+# * A positive integer, where the integer corresponds to the max number of
+#   attempts that can be made when scheduling an instance.
+#          (integer value)
+# Minimum value: 1
+# Deprecated group/name - [DEFAULT]/scheduler_max_attempts
+#max_attempts=3
+max_attempts=3
+
+#
+# Periodic task interval.
+#
+# This value controls how often (in seconds) the scheduler should attempt
+# to discover new hosts that have been added to cells. If negative (the
+# default), no automatic discovery will occur.
+#
+# Small deployments may want this periodic task enabled, as surveying the
+# cells for new hosts is likely to be lightweight enough to not cause undue
+# burdon to the scheduler. However, larger clouds (and those that are not
+# adding hosts regularly) will likely want to disable this automatic
+# behavior and instead use the `nova-manage cell_v2 discover_hosts` command
+# when hosts have been added to a cell.
+#  (integer value)
+# Minimum value: -1
+#discover_hosts_in_cells_interval=-1
+discover_hosts_in_cells_interval=300
+
+
+[serial_console]
+#
+# The serial console feature allows you to connect to a guest in case a
+# graphical console like VNC, RDP or SPICE is not available. This is only
+# currently supported for the libvirt, Ironic and hyper-v drivers.
+
+#
+# From nova.conf
+#
+
+#
+# Enable the serial console feature.
+#
+# In order to use this feature, the service ``nova-serialproxy`` needs to run.
+# This service is typically executed on the controller node.
+#  (boolean value)
+#enabled=false
+
+#
+# A range of TCP ports a guest can use for its backend.
+#
+# Each instance which gets created will use one port out of this range. If the
+# range is not big enough to provide another port for an new instance, this
+# instance won't get launched.
+#
+# Possible values:
+#
+# * Each string which passes the regex ``\d+:\d+`` For example ``10000:20000``.
+#   Be sure that the first port number is lower than the second port number
+#   and that both are in range from 0 to 65535.
+#  (string value)
+#port_range=10000:20000
+
+#
+# The URL an end user would use to connect to the ``nova-serialproxy`` service.
+#
+# The ``nova-serialproxy`` service is called with this token enriched URL
+# and establishes the connection to the proper instance.
+#
+# Related options:
+#
+# * The IP address must be identical to the address to which the
+#   ``nova-serialproxy`` service is listening (see option ``serialproxy_host``
+#   in this section).
+# * The port must be the same as in the option ``serialproxy_port`` of this
+#   section.
+# * If you choose to use a secured websocket connection, then start this option
+#   with ``wss://`` instead of the unsecured ``ws://``. The options ``cert``
+#   and ``key`` in the ``[DEFAULT]`` section have to be set for that.
+#  (uri value)
+#base_url=ws://127.0.0.1:6083/
+
+#
+# The IP address to which proxy clients (like ``nova-serialproxy``) should
+# connect to get the serial console of an instance.
+#
+# This is typically the IP address of the host of a ``nova-compute`` service.
+#  (string value)
+#proxyclient_address=127.0.0.1
+
+#
+# The IP address which is used by the ``nova-serialproxy`` service to listen
+# for incoming requests.
+#
+# The ``nova-serialproxy`` service listens on this IP address for incoming
+# connection requests to instances which expose serial console.
+#
+# Related options:
+#
+# * Ensure that this is the same IP address which is defined in the option
+#   ``base_url`` of this section or use ``0.0.0.0`` to listen on all addresses.
+#  (string value)
+#serialproxy_host=0.0.0.0
+
+#
+# The port number which is used by the ``nova-serialproxy`` service to listen
+# for incoming requests.
+#
+# The ``nova-serialproxy`` service listens on this port number for incoming
+# connection requests to instances which expose serial console.
+#
+# Related options:
+#
+# * Ensure that this is the same port number which is defined in the option
+#   ``base_url`` of this section.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#serialproxy_port=6083
+
+
+[service_user]
+#
+# Configuration options for service to service authentication using a service
+# token. These options allow to send a service token along with the
+# user's token when contacting external REST APIs.
+
+#
+# From nova.conf
+#
+
+#
+# When True, if sending a user token to an REST API, also send a service token.
+#
+# Nova often reuses the user token provided to the nova-api to talk to other
+# REST APIs, such as Cinder and Neutron. It is possible that while the
+# user token was valid when the request was made to Nova, the token may expire
+# before it reaches the other service. To avoid any failures, and to
+# make it clear it is Nova calling the service on the users behalf, we include
+# a server token along with the user token. Should the user's token have
+# expired, a valid service token ensures the REST API request will still be
+# accepted by the keystone middleware.
+#
+# This feature is currently experimental, and as such is turned off by default
+# while full testing and performance tuning of this feature is completed.
+#  (boolean value)
+#send_service_user_token=false
+
+# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# (string value)
+#cafile=<None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile=<None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile=<None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure=false
+
+# Timeout value for http requests (integer value)
+#timeout=<None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [service_user]/auth_plugin
+#auth_type=<None>
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section=<None>
+
+# Authentication URL (string value)
+#auth_url=<None>
+
+# Domain ID to scope to (string value)
+#domain_id=<None>
+
+# Domain name to scope to (string value)
+#domain_name=<None>
+
+# Project ID to scope to (string value)
+#project_id=<None>
+
+# Project name to scope to (string value)
+#project_name=<None>
+
+# Domain ID containing project (string value)
+#project_domain_id=<None>
+
+# Domain name containing project (string value)
+#project_domain_name=<None>
+
+# Trust ID (string value)
+#trust_id=<None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id=<None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name=<None>
+
+# User ID (string value)
+#user_id=<None>
+
+# Username (string value)
+# Deprecated group/name - [service_user]/user-name
+#username=<None>
+
+# User's domain id (string value)
+#user_domain_id=<None>
+
+# User's domain name (string value)
+#user_domain_name=<None>
+
+# User's password (string value)
+#password=<None>
+
+# Tenant ID (string value)
+#tenant_id=<None>
+
+# Tenant Name (string value)
+#tenant_name=<None>
+
+
+[spice]
+#
+# SPICE console feature allows you to connect to a guest virtual machine.
+# SPICE is a replacement for fairly limited VNC protocol.
+#
+# Following requirements must be met in order to use SPICE:
+#
+# * Virtualization driver must be libvirt
+# * spice.enabled set to True
+# * vnc.enabled set to False
+# * update html5proxy_base_url
+# * update server_proxyclient_address
+enabled = false
+html5proxy_base_url = https://10.167.4.80:6080/spice_auto.html
+#
+# From nova.conf
+#
+
+#
+# Enable SPICE related features.
+#
+# Related options:
+#
+# * VNC must be explicitly disabled to get access to the SPICE console. Set the
+#   enabled option to False in the [vnc] section to disable the VNC console.
+#  (boolean value)
+#enabled=false
+
+#
+# Enable the SPICE guest agent support on the instances.
+#
+# The Spice agent works with the Spice protocol to offer a better guest console
+# experience. However, the Spice console can still be used without the Spice
+# Agent. With the Spice agent installed the following features are enabled:
+#
+# * Copy & Paste of text and images between the guest and client machine
+# * Automatic adjustment of resolution when the client screen changes - e.g.
+#   if you make the Spice console full screen the guest resolution will adjust
+# to
+#   match it rather than letterboxing.
+# * Better mouse integration - The mouse can be captured and released without
+#   needing to click inside the console or press keys to release it. The
+#   performance of mouse movement is also improved.
+#  (boolean value)
+#agent_enabled=true
+
+#
+# Location of the SPICE HTML5 console proxy.
+#
+# End user would use this URL to connect to the `nova-spicehtml5proxy``
+# service. This service will forward request to the console of an instance.
+#
+# In order to use SPICE console, the service ``nova-spicehtml5proxy`` should be
+# running. This service is typically launched on the controller node.
+#
+# Possible values:
+#
+# * Must be a valid URL of the form:  ``http://host:port/spice_auto.html``
+#   where host is the node running ``nova-spicehtml5proxy`` and the port is
+#   typically 6082. Consider not using default value as it is not well defined
+#   for any real deployment.
+#
+# Related options:
+#
+# * This option depends on ``html5proxy_host`` and ``html5proxy_port`` options.
+#   The access URL returned by the compute node must have the host
+#   and port where the ``nova-spicehtml5proxy`` service is listening.
+#  (uri value)
+#html5proxy_base_url=http://127.0.0.1:6082/spice_auto.html
+
+#
+# The  address where the SPICE server running on the instances should listen.
+#
+# Typically, the ``nova-spicehtml5proxy`` proxy client runs on the controller
+# node and connects over the private network to this address on the compute
+# node(s).
+#
+# Possible values:
+#
+# * IP address to listen on.
+#  (string value)
+#server_listen=127.0.0.1
+
+#
+# The address used by ``nova-spicehtml5proxy`` client to connect to instance
+# console.
+#
+# Typically, the ``nova-spicehtml5proxy`` proxy client runs on the
+# controller node and connects over the private network to this address on the
+# compute node(s).
+#
+# Possible values:
+#
+# * Any valid IP address on the compute node.
+#
+# Related options:
+#
+# * This option depends on the ``server_listen`` option.
+#   The proxy client must be able to access the address specified in
+#   ``server_listen`` using the value of this option.
+#  (string value)
+#server_proxyclient_address=127.0.0.1
+
+#
+# A keyboard layout which is supported by the underlying hypervisor on this
+# node.
+#
+# Possible values:
+# * This is usually an 'IETF language tag' (default is 'en-us'). If you
+#   use QEMU as hypervisor, you should find the list of supported keyboard
+#   layouts at /usr/share/qemu/keymaps.
+#  (string value)
+#keymap=en-us
+
+#
+# IP address or a hostname on which the ``nova-spicehtml5proxy`` service
+# listens for incoming requests.
+#
+# Related options:
+#
+# * This option depends on the ``html5proxy_base_url`` option.
+#   The ``nova-spicehtml5proxy`` service must be listening on a host that is
+#   accessible from the HTML5 client.
+#  (string value)
+#html5proxy_host=0.0.0.0
+
+#
+# Port on which the ``nova-spicehtml5proxy`` service listens for incoming
+# requests.
+#
+# Related options:
+#
+# * This option depends on the ``html5proxy_base_url`` option.
+#   The ``nova-spicehtml5proxy`` service must be listening on a port that is
+#   accessible from the HTML5 client.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#html5proxy_port=6082
+
+
+[ssl]
+
+#
+# From nova.conf
+#
+
+# CA certificate file to use to verify connecting clients. (string value)
+# Deprecated group/name - [DEFAULT]/ssl_ca_file
+#ca_file=<None>
+
+# Certificate file to use when starting the server securely. (string value)
+# Deprecated group/name - [DEFAULT]/ssl_cert_file
+#cert_file=<None>
+
+# Private key file to use when starting the server securely. (string value)
+# Deprecated group/name - [DEFAULT]/ssl_key_file
+#key_file=<None>
+
+# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
+# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
+# distributions. (string value)
+#version=<None>
+
+# Sets the list of available ciphers. value should be a string in the OpenSSL
+# cipher list format. (string value)
+#ciphers=<None>
+
+
+[trusted_computing]
+#
+# Configuration options for enabling Trusted Platform Module.
+
+#
+# From nova.conf
+#
+
+#
+# The host to use as the attestation server.
+#
+# Cloud computing pools can involve thousands of compute nodes located at
+# different geographical locations, making it difficult for cloud providers to
+# identify a node's trustworthiness. When using the Trusted filter, users can
+# request that their VMs only be placed on nodes that have been verified by the
+# attestation server specified in this option.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'TrustedFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A string representing the host name or IP address of the attestation server,
+#   or an empty string.
+#
+# Related options:
+#
+# * attestation_server_ca_file
+# * attestation_port
+# * attestation_api_url
+# * attestation_auth_blob
+# * attestation_auth_timeout
+# * attestation_insecure_ssl
+#  (string value)
+#attestation_server=<None>
+
+#
+# The absolute path to the certificate to use for authentication when connecting
+# to the attestation server. See the `attestation_server` help text for more
+# information about host verification.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'TrustedFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A string representing the path to the authentication certificate for the
+#   attestation server, or an empty string.
+#
+# Related options:
+#
+# * attestation_server
+# * attestation_port
+# * attestation_api_url
+# * attestation_auth_blob
+# * attestation_auth_timeout
+# * attestation_insecure_ssl
+#  (string value)
+#attestation_server_ca_file=<None>
+
+#
+# The port to use when connecting to the attestation server. See the
+# `attestation_server` help text for more information about host verification.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'TrustedFilter' filter is enabled.
+#
+# Related options:
+#
+# * attestation_server
+# * attestation_server_ca_file
+# * attestation_api_url
+# * attestation_auth_blob
+# * attestation_auth_timeout
+# * attestation_insecure_ssl
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#attestation_port=8443
+
+#
+# The URL on the attestation server to use. See the `attestation_server` help
+# text for more information about host verification.
+#
+# This value must be just that path portion of the full URL, as it will be
+# joined
+# to the host specified in the attestation_server option.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'TrustedFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A valid URL string of the attestation server, or an empty string.
+#
+# Related options:
+#
+# * attestation_server
+# * attestation_server_ca_file
+# * attestation_port
+# * attestation_auth_blob
+# * attestation_auth_timeout
+# * attestation_insecure_ssl
+#  (string value)
+#attestation_api_url=/OpenAttestationWebServices/V1.0
+
+#
+# Attestation servers require a specific blob that is used to authenticate. The
+# content and format of the blob are determined by the particular attestation
+# server being used. There is no default value; you must supply the value as
+# specified by your attestation service. See the `attestation_server` help text
+# for more information about host verification.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'TrustedFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A string containing the specific blob required by the attestation server, or
+#   an empty string.
+#
+# Related options:
+#
+# * attestation_server
+# * attestation_server_ca_file
+# * attestation_port
+# * attestation_api_url
+# * attestation_auth_timeout
+# * attestation_insecure_ssl
+#  (string value)
+#attestation_auth_blob=<None>
+
+#
+# This value controls how long a successful attestation is cached. Once this
+# period has elapsed, a new attestation request will be made. See the
+# `attestation_server` help text for more information about host verification.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'TrustedFilter' filter is enabled.
+#
+# Possible values:
+#
+# * A integer value, corresponding to the timeout interval for attestations in
+#   seconds. Any integer is valid, although setting this to zero or negative
+#   values can greatly impact performance when using an attestation service.
+#
+# Related options:
+#
+# * attestation_server
+# * attestation_server_ca_file
+# * attestation_port
+# * attestation_api_url
+# * attestation_auth_blob
+# * attestation_insecure_ssl
+#  (integer value)
+#attestation_auth_timeout=60
+
+#
+# When set to True, the SSL certificate verification is skipped for the
+# attestation service. See the `attestation_server` help text for more
+# information about host verification.
+#
+# This option is only used by the FilterScheduler and its subclasses; if you use
+# a different scheduler, this option has no effect. Also note that this setting
+# only affects scheduling if the 'TrustedFilter' filter is enabled.
+#
+# Related options:
+#
+# * attestation_server
+# * attestation_server_ca_file
+# * attestation_port
+# * attestation_api_url
+# * attestation_auth_blob
+# * attestation_auth_timeout
+#  (boolean value)
+#attestation_insecure_ssl=false
+
+
+[upgrade_levels]
+#
+# upgrade_levels options are used to set version cap for RPC
+# messages sent between different nova services.
+#
+# By default all services send messages using the latest version
+# they know about.
+#
+# The compute upgrade level is an important part of rolling upgrades
+# where old and new nova-compute services run side by side.
+#
+# The other options can largely be ignored, and are only kept to
+# help with a possible future backport issue.
+
+#
+# From nova.conf
+#
+
+#
+# Compute RPC API version cap.
+#
+# By default, we always send messages using the most recent version
+# the client knows about.
+#
+# Where you have old and new compute services running, you should set
+# this to the lowest deployed version. This is to guarantee that all
+# services never send messages that one of the compute nodes can't
+# understand. Note that we only support upgrading from release N to
+# release N+1.
+#
+# Set this option to "auto" if you want to let the compute RPC module
+# automatically determine what version to use based on the service
+# versions in the deployment.
+#
+# Possible values:
+#
+# * By default send the latest version the client knows about
+# * 'auto': Automatically determines what version to use based on
+#   the service versions in the deployment.
+# * A string representing a version number in the format 'N.N';
+#   for example, possible values might be '1.12' or '2.0'.
+# * An OpenStack release name, in lower case, such as 'mitaka' or
+#   'liberty'.
+#  (string value)
+#compute=<None>
+
+# Cells RPC API version cap (string value)
+#cells=<None>
+
+# Intercell RPC API version cap (string value)
+#intercell=<None>
+
+# Cert RPC API version cap (string value)
+#cert=<None>
+
+# Scheduler RPC API version cap (string value)
+#scheduler=<None>
+
+# Conductor RPC API version cap (string value)
+#conductor=<None>
+
+# Console RPC API version cap (string value)
+#console=<None>
+
+# Consoleauth RPC API version cap (string value)
+#consoleauth=<None>
+
+# Network RPC API version cap (string value)
+#network=<None>
+
+# Base API RPC API version cap (string value)
+#baseapi=<None>
+
+
+[vendordata_dynamic_auth]
+#
+# Options within this group control the authentication of the vendordata
+# subsystem of the metadata API server (and config drive) with external systems.
+
+#
+# From nova.conf
+#
+
+# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# (string value)
+#cafile=<None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile=<None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile=<None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure=false
+
+# Timeout value for http requests (integer value)
+#timeout=<None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [vendordata_dynamic_auth]/auth_plugin
+#auth_type=<None>
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section=<None>
+
+# Authentication URL (string value)
+#auth_url=<None>
+
+# Domain ID to scope to (string value)
+#domain_id=<None>
+
+# Domain name to scope to (string value)
+#domain_name=<None>
+
+# Project ID to scope to (string value)
+#project_id=<None>
+
+# Project name to scope to (string value)
+#project_name=<None>
+
+# Domain ID containing project (string value)
+#project_domain_id=<None>
+
+# Domain name containing project (string value)
+#project_domain_name=<None>
+
+# Trust ID (string value)
+#trust_id=<None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id=<None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name=<None>
+
+# User ID (string value)
+#user_id=<None>
+
+# Username (string value)
+# Deprecated group/name - [vendordata_dynamic_auth]/user-name
+#username=<None>
+
+# User's domain id (string value)
+#user_domain_id=<None>
+
+# User's domain name (string value)
+#user_domain_name=<None>
+
+# User's password (string value)
+#password=<None>
+
+# Tenant ID (string value)
+#tenant_id=<None>
+
+# Tenant Name (string value)
+#tenant_name=<None>
+
+
+[vmware]
+#
+# Related options:
+# Following options must be set in order to launch VMware-based
+# virtual machines.
+#
+# * compute_driver: Must use vmwareapi.VMwareVCDriver.
+# * vmware.host_username
+# * vmware.host_password
+# * vmware.cluster_name
+
+#
+# From nova.conf
+#
+
+#
+# This option specifies the physical ethernet adapter name for VLAN
+# networking.
+#
+# Set the vlan_interface configuration option to match the ESX host
+# interface that handles VLAN-tagged VM traffic.
+#
+# Possible values:
+#
+# * Any valid string representing VLAN interface name
+#  (string value)
+#vlan_interface=vmnic0
+
+#
+# This option should be configured only when using the NSX-MH Neutron
+# plugin. This is the name of the integration bridge on the ESXi server
+# or host. This should not be set for any other Neutron plugin. Hence
+# the default value is not set.
+#
+# Possible values:
+#
+# * Any valid string representing the name of the integration bridge
+#  (string value)
+#integration_bridge=<None>
+
+#
+# Set this value if affected by an increased network latency causing
+# repeated characters when typing in a remote console.
+#  (integer value)
+# Minimum value: 0
+#console_delay_seconds=<None>
+
+#
+# Identifies the remote system where the serial port traffic will
+# be sent.
+#
+# This option adds a virtual serial port which sends console output to
+# a configurable service URI. At the service URI address there will be
+# virtual serial port concentrator that will collect console logs.
+# If this is not set, no serial ports will be added to the created VMs.
+#
+# Possible values:
+#
+# * Any valid URI
+#  (string value)
+#serial_port_service_uri=<None>
+
+#
+# Identifies a proxy service that provides network access to the
+# serial_port_service_uri.
+#
+# Possible values:
+#
+# * Any valid URI
+#
+# Related options:
+# This option is ignored if serial_port_service_uri is not specified.
+# * serial_port_service_uri
+#  (string value)
+#serial_port_proxy_uri=<None>
+
+#
+# Hostname or IP address for connection to VMware vCenter host. (string value)
+#host_ip=<None>
+
+# Port for connection to VMware vCenter host. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#host_port=443
+
+# Username for connection to VMware vCenter host. (string value)
+#host_username=<None>
+
+# Password for connection to VMware vCenter host. (string value)
+#host_password=<None>
+
+#
+# Specifies the CA bundle file to be used in verifying the vCenter
+# server certificate.
+#  (string value)
+#ca_file=<None>
+
+#
+# If true, the vCenter server certificate is not verified. If false,
+# then the default CA truststore is used for verification.
+#
+# Related options:
+# * ca_file: This option is ignored if "ca_file" is set.
+#  (boolean value)
+#insecure=false
+
+# Name of a VMware Cluster ComputeResource. (string value)
+#cluster_name=<None>
+
+#
+# Regular expression pattern to match the name of datastore.
+#
+# The datastore_regex setting specifies the datastores to use with
+# Compute. For example, datastore_regex="nas.*" selects all the data
+# stores that have a name starting with "nas".
+#
+# NOTE: If no regex is given, it just picks the datastore with the
+# most freespace.
+#
+# Possible values:
+#
+# * Any matching regular expression to a datastore must be given
+#  (string value)
+#datastore_regex=<None>
+
+#
+# Time interval in seconds to poll remote tasks invoked on
+# VMware VC server.
+#  (floating point value)
+#task_poll_interval=0.5
+
+#
+# Number of times VMware vCenter server API must be retried on connection
+# failures, e.g. socket error, etc.
+#  (integer value)
+# Minimum value: 0
+#api_retry_count=10
+
+#
+# This option specifies VNC starting port.
+#
+# Every VM created by ESX host has an option of enabling VNC client
+# for remote connection. Above option 'vnc_port' helps you to set
+# default starting port for the VNC client.
+#
+# Possible values:
+#
+# * Any valid port number within 5900 -(5900 + vnc_port_total)
+#
+# Related options:
+# Below options should be set to enable VNC client.
+# * vnc.enabled = True
+# * vnc_port_total
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#vnc_port=5900
+
+#
+# Total number of VNC ports.
+#  (integer value)
+# Minimum value: 0
+#vnc_port_total=10000
+
+#
+# This option enables/disables the use of linked clone.
+#
+# The ESX hypervisor requires a copy of the VMDK file in order to boot
+# up a virtual machine. The compute driver must download the VMDK via
+# HTTP from the OpenStack Image service to a datastore that is visible
+# to the hypervisor and cache it. Subsequent virtual machines that need
+# the VMDK use the cached version and don't have to copy the file again
+# from the OpenStack Image service.
+#
+# If set to false, even with a cached VMDK, there is still a copy
+# operation from the cache location to the hypervisor file directory
+# in the shared datastore. If set to true, the above copy operation
+# is avoided as it creates copy of the virtual machine that shares
+# virtual disks with its parent VM.
+#  (boolean value)
+#use_linked_clone=true
+
+# DEPRECATED:
+# This option specifies VIM Service WSDL Location
+#
+# If vSphere API versions 5.1 and later is being used, this section can
+# be ignored. If version is less than 5.1, WSDL files must be hosted
+# locally and their location must be specified in the above section.
+#
+# Optional over-ride to default location for bug work-arounds.
+#
+# Possible values:
+#
+# * http://<server>/vimService.wsdl
+# * file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Only vCenter versions earlier than 5.1 require this option and the
+# current minimum version is 5.1.
+#wsdl_location=<None>
+
+#
+# This option enables or disables storage policy based placement
+# of instances.
+#
+# Related options:
+#
+# * pbm_default_policy
+#  (boolean value)
+#pbm_enabled=false
+
+#
+# This option specifies the PBM service WSDL file location URL.
+#
+# Setting this will disable storage policy based placement
+# of instances.
+#
+# Possible values:
+#
+# * Any valid file path
+#   e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl
+#  (string value)
+#pbm_wsdl_location=<None>
+
+#
+# This option specifies the default policy to be used.
+#
+# If pbm_enabled is set and there is no defined storage policy for the
+# specific request, then this policy will be used.
+#
+# Possible values:
+#
+# * Any valid storage policy such as VSAN default storage policy
+#
+# Related options:
+#
+# * pbm_enabled
+#  (string value)
+#pbm_default_policy=<None>
+
+#
+# This option specifies the limit on the maximum number of objects to
+# return in a single result.
+#
+# A positive value will cause the operation to suspend the retrieval
+# when the count of objects reaches the specified limit. The server may
+# still limit the count to something less than the configured value.
+# Any remaining objects may be retrieved with additional requests.
+#  (integer value)
+# Minimum value: 0
+#maximum_objects=100
+
+#
+# This option adds a prefix to the folder where cached images are stored
+#
+# This is not the full path - just a folder prefix. This should only be
+# used when a datastore cache is shared between compute nodes.
+#
+# Note: This should only be used when the compute nodes are running on same
+# host or they have a shared file system.
+#
+# Possible values:
+#
+# * Any string representing the cache prefix to the folder
+#  (string value)
+#cache_prefix=<None>
+
+
+[vnc]
+#
+# Virtual Network Computer (VNC) can be used to provide remote desktop
+# console access to instances for tenants and/or administrators.
+
+#
+# From nova.conf
+#
+
+#
+# Enable VNC related features.
+#
+# Guests will get created with graphical devices to support this. Clients
+# (for example Horizon) can then establish a VNC connection to the guest.
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/vnc_enabled
+#enabled=true
+enabled=true
+
+#
+# Keymap for VNC.
+#
+# The keyboard mapping (keymap) determines which keyboard layout a VNC
+# session should use by default.
+#
+# Possible values:
+#
+# * A keyboard layout which is supported by the underlying hypervisor on
+#   this node. This is usually an 'IETF language tag' (for example
+#   'en-us').  If you use QEMU as hypervisor, you should find the  list
+#   of supported keyboard layouts at ``/usr/share/qemu/keymaps``.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vnc_keymap
+#keymap=en-us
+
+#
+# The IP address or hostname on which an instance should listen to for
+# incoming VNC connection requests on this node.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vncserver_listen
+#vncserver_listen=127.0.0.1
+
+#
+# Private, internal IP address or hostname of VNC console proxy.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients.
+#
+# This option sets the private address to which proxy clients, such as
+# ``nova-xvpvncproxy``, should connect to.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vncserver_proxyclient_address
+#vncserver_proxyclient_address=127.0.0.1
+
+#
+# Public address of noVNC VNC console proxy.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. noVNC provides
+# VNC support through a websocket-based client.
+#
+# This option sets the public base URL to which client systems will
+# connect. noVNC clients can use this address to connect to the noVNC
+# instance and, by extension, the VNC sessions.
+#
+# Related options:
+#
+# * novncproxy_host
+# * novncproxy_port
+#  (uri value)
+# Deprecated group/name - [DEFAULT]/novncproxy_base_url
+#novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html
+enabled = true
+novncproxy_host = 10.167.4.13
+novncproxy_base_url = https://10.167.4.80:6080/vnc_auto.html
+novncproxy_port=6080
+vncserver_listen=10.167.4.13
+keymap = en-us
+
+#
+# IP address or hostname that the XVP VNC console proxy should bind to.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. Xen provides
+# the Xenserver VNC Proxy, or XVP, as an alternative to the
+# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
+# XVP clients are Java-based.
+#
+# This option sets the private address to which the XVP VNC console proxy
+# service should bind to.
+#
+# Related options:
+#
+# * xvpvncproxy_port
+# * xvpvncproxy_base_url
+#  (string value)
+# Deprecated group/name - [DEFAULT]/xvpvncproxy_host
+#xvpvncproxy_host=0.0.0.0
+
+#
+# Port that the XVP VNC console proxy should bind to.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. Xen provides
+# the Xenserver VNC Proxy, or XVP, as an alternative to the
+# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
+# XVP clients are Java-based.
+#
+# This option sets the private port to which the XVP VNC console proxy
+# service should bind to.
+#
+# Related options:
+#
+# * xvpvncproxy_host
+# * xvpvncproxy_base_url
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/xvpvncproxy_port
+#xvpvncproxy_port=6081
+
+#
+# Public URL address of XVP VNC console proxy.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. Xen provides
+# the Xenserver VNC Proxy, or XVP, as an alternative to the
+# websocket-based noVNC proxy used by Libvirt. In contrast to noVNC,
+# XVP clients are Java-based.
+#
+# This option sets the public base URL to which client systems will
+# connect. XVP clients can use this address to connect to the XVP
+# instance and, by extension, the VNC sessions.
+#
+# Related options:
+#
+# * xvpvncproxy_host
+# * xvpvncproxy_port
+#  (uri value)
+# Deprecated group/name - [DEFAULT]/xvpvncproxy_base_url
+#xvpvncproxy_base_url=http://127.0.0.1:6081/console
+
+#
+# IP address that the noVNC console proxy should bind to.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. noVNC provides
+# VNC support through a websocket-based client.
+#
+# This option sets the private address to which the noVNC console proxy
+# service should bind to.
+#
+# Related options:
+#
+# * novncproxy_port
+# * novncproxy_base_url
+#  (string value)
+# Deprecated group/name - [DEFAULT]/novncproxy_host
+#novncproxy_host=0.0.0.0
+
+#
+# Port that the noVNC console proxy should bind to.
+#
+# The VNC proxy is an OpenStack component that enables compute service
+# users to access their instances through VNC clients. noVNC provides
+# VNC support through a websocket-based client.
+#
+# This option sets the private port to which the noVNC console proxy
+# service should bind to.
+#
+# Related options:
+#
+# * novncproxy_host
+# * novncproxy_base_url
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/novncproxy_port
+#novncproxy_port=6080
+
+
+[workarounds]
+#
+# A collection of workarounds used to mitigate bugs or issues found in system
+# tools (e.g. Libvirt or QEMU) or Nova itself under certain conditions. These
+# should only be enabled in exceptional circumstances. All options are linked
+# against bug IDs, where more information on the issue can be found.
+
+#
+# From nova.conf
+#
+
+#
+# Use sudo instead of rootwrap.
+#
+# Allow fallback to sudo for performance reasons.
+#
+# For more information, refer to the bug report:
+#
+#   https://bugs.launchpad.net/nova/+bug/1415106
+#
+# Possible values:
+#
+# * True: Use sudo instead of rootwrap
+# * False: Use rootwrap as usual
+#
+# Interdependencies to other options:
+#
+# * Any options that affect 'rootwrap' will be ignored.
+#  (boolean value)
+#disable_rootwrap=false
+
+#
+# Disable live snapshots when using the libvirt driver.
+#
+# Live snapshots allow the snapshot of the disk to happen without an
+# interruption to the guest, using coordination with a guest agent to
+# quiesce the filesystem.
+#
+# When using libvirt 1.2.2 live snapshots fail intermittently under load
+# (likely related to concurrent libvirt/qemu operations). This config
+# option provides a mechanism to disable live snapshot, in favor of cold
+# snapshot, while this is resolved. Cold snapshot causes an instance
+# outage while the guest is going through the snapshotting process.
+#
+# For more information, refer to the bug report:
+#
+#   https://bugs.launchpad.net/nova/+bug/1334398
+#
+# Possible values:
+#
+# * True: Live snapshot is disabled when using libvirt
+# * False: Live snapshots are always used when snapshotting (as long as
+#   there is a new enough libvirt and the backend storage supports it)
+#  (boolean value)
+#disable_libvirt_livesnapshot=true
+
+#
+# Enable handling of events emitted from compute drivers.
+#
+# Many compute drivers emit lifecycle events, which are events that occur when,
+# for example, an instance is starting or stopping. If the instance is going
+# through task state changes due to an API operation, like resize, the events
+# are ignored.
+#
+# This is an advanced feature which allows the hypervisor to signal to the
+# compute service that an unexpected state change has occurred in an instance
+# and that the instance can be shutdown automatically. Unfortunately, this can
+# race in some conditions, for example in reboot operations or when the compute
+# service or when host is rebooted (planned or due to an outage). If such races
+# are common, then it is advisable to disable this feature.
+#
+# Care should be taken when this feature is disabled and
+# 'sync_power_state_interval' is set to a negative value. In this case, any
+# instances that get out of sync between the hypervisor and the Nova database
+# will have to be synchronized manually.
+#
+# For more information, refer to the bug report:
+#
+#   https://bugs.launchpad.net/bugs/1444630
+#
+# Interdependencies to other options:
+#
+# * If ``sync_power_state_interval`` is negative and this feature is disabled,
+#   then instances that get out of sync between the hypervisor and the Nova
+#   database will have to be synchronized manually.
+#  (boolean value)
+#handle_virt_lifecycle_events=true
+
+
+[wsgi]
+#
+# Options under this group are used to configure WSGI (Web Server Gateway
+# Interface). WSGI is used to serve API requests.
+
+#
+# From nova.conf
+#
+
+#
+# This option represents a file name for the paste.deploy config for nova-api.
+#
+# Possible values:
+#
+# * A string representing file name for the paste.deploy config.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/api_paste_config
+api_paste_config=/etc/nova/api-paste.ini
+
+#
+# It represents a python format string that is used as the template to generate
+# log lines. The following values can be formatted into it: client_ip,
+# date_time, request_line, status_code, body_length, wall_seconds.
+#
+# This option is used for building custom request loglines.
+#
+# Possible values:
+#
+# * '%(client_ip)s "%(request_line)s" status: %(status_code)s'
+#   'len: %(body_length)s time: %(wall_seconds).7f' (default)
+# * Any formatted string formed by specific values.
+#  (string value)
+# Deprecated group/name - [DEFAULT]/wsgi_log_format
+#wsgi_log_format=%(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
+
+#
+# This option specifies the HTTP header used to determine the protocol scheme
+# for the original request, even if it was removed by a SSL terminating proxy.
+#
+# Possible values:
+#
+# * None (default) - the request scheme is not influenced by any HTTP headers.
+# * Valid HTTP header, like HTTP_X_FORWARDED_PROTO
+#  (string value)
+# Deprecated group/name - [DEFAULT]/secure_proxy_ssl_header
+#secure_proxy_ssl_header=<None>
+
+#
+# This option allows setting path to the CA certificate file that should be used
+# to verify connecting clients.
+#
+# Possible values:
+#
+# * String representing path to the CA certificate file.
+#
+# Related options:
+#
+# * enabled_ssl_apis
+#  (string value)
+# Deprecated group/name - [DEFAULT]/ssl_ca_file
+#ssl_ca_file=<None>
+
+#
+# This option allows setting path to the SSL certificate of API server.
+#
+# Possible values:
+#
+# * String representing path to the SSL certificate.
+#
+# Related options:
+#
+# * enabled_ssl_apis
+#  (string value)
+# Deprecated group/name - [DEFAULT]/ssl_cert_file
+#ssl_cert_file=<None>
+
+#
+# This option specifies the path to the file where SSL private key of API
+# server is stored when SSL is in effect.
+#
+# Possible values:
+#
+# * String representing path to the SSL private key.
+#
+# Related options:
+#
+# * enabled_ssl_apis
+#  (string value)
+# Deprecated group/name - [DEFAULT]/ssl_key_file
+#ssl_key_file=<None>
+
+#
+# This option sets the value of TCP_KEEPIDLE in seconds for each server socket.
+# It specifies the duration of time to keep connection active. TCP generates a
+# KEEPALIVE transmission for an application that requests to keep connection
+# active. Not supported on OS X.
+#
+# Related options:
+#
+# * keep_alive
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/tcp_keepidle
+#tcp_keepidle=600
+
+#
+# This option specifies the size of the pool of greenthreads used by wsgi.
+# It is possible to limit the number of concurrent connections using this
+# option.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/wsgi_default_pool_size
+#default_pool_size=1000
+
+#
+# This option specifies the maximum line size of message headers to be accepted.
+# max_header_line may need to be increased when using large tokens (typically
+# those generated by the Keystone v3 API with big service catalogs).
+#
+# Since TCP is a stream based protocol, in order to reuse a connection, the HTTP
+# has to have a way to indicate the end of the previous response and beginning
+# of the next. Hence, in a keep_alive case, all messages must have a
+# self-defined message length.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/max_header_line
+#max_header_line=16384
+
+#
+# This option allows using the same TCP connection to send and receive multiple
+# HTTP requests/responses, as opposed to opening a new one for every single
+# request/response pair. HTTP keep-alive indicates HTTP connection reuse.
+#
+# Possible values:
+#
+# * True : reuse HTTP connection.
+# * False : closes the client socket connection explicitly.
+#
+# Related options:
+#
+# * tcp_keepidle
+#  (boolean value)
+# Deprecated group/name - [DEFAULT]/wsgi_keep_alive
+#keep_alive=true
+
+#
+# This option specifies the timeout for client connections' socket operations.
+# If an incoming connection is idle for this number of seconds it will be
+# closed. It indicates timeout on individual read/writes on the socket
+# connection. To wait forever set to 0.
+#  (integer value)
+# Minimum value: 0
+# Deprecated group/name - [DEFAULT]/client_socket_timeout
+#client_socket_timeout=900
+
+
+[xenserver]
+#
+# XenServer options are used when the compute_driver is set to use
+# XenServer (compute_driver=xenapi.XenAPIDriver).
+#
+# Must specify connection_url, connection_password and ovs_integration_bridge to
+# use compute_driver=xenapi.XenAPIDriver.
+
+#
+# From nova.conf
+#
+
+#
+# Number of seconds to wait for agent's reply to a request.
+#
+# Nova configures/performs certain administrative actions on a server with the
+# help of an agent that's installed on the server. The communication between
+# Nova and the agent is achieved via sharing messages, called records, over
+# xenstore, a shared storage across all the domains on a Xenserver host.
+# Operations performed by the agent on behalf of nova are: 'version','
+# key_init',
+# 'password','resetnetwork','inject_file', and 'agentupdate'.
+#
+# To perform one of the above operations, the xapi 'agent' plugin writes the
+# command and its associated parameters to a certain location known to the
+# domain
+# and awaits response. On being notified of the message, the agent performs
+# appropriate actions on the server and writes the result back to xenstore. This
+# result is then read by the xapi 'agent' plugin to determine the
+# success/failure
+# of the operation.
+#
+# This config option determines how long the xapi 'agent' plugin shall wait to
+# read the response off of xenstore for a given request/command. If the agent on
+# the instance fails to write the result in this time period, the operation is
+# considered to have timed out.
+#
+# Related options:
+#
+# * ``agent_version_timeout``
+# * ``agent_resetnetwork_timeout``
+#
+#  (integer value)
+# Minimum value: 0
+#agent_timeout=30
+
+#
+# Number of seconds to wait for agent't reply to version request.
+#
+# This indicates the amount of time xapi 'agent' plugin waits for the agent to
+# respond to the 'version' request specifically. The generic timeout for agent
+# communication ``agent_timeout`` is ignored in this case.
+#
+# During the build process the 'version' request is used to determine if the
+# agent is available/operational to perform other requests such as
+# 'resetnetwork', 'password', 'key_init' and 'inject_file'. If the 'version'
+# call
+# fails, the other configuration is skipped. So, this configuration option can
+# also be interpreted as time in which agent is expected to be fully
+# operational.
+#  (integer value)
+# Minimum value: 0
+#agent_version_timeout=300
+
+#
+# Number of seconds to wait for agent's reply to resetnetwork
+# request.
+#
+# This indicates the amount of time xapi 'agent' plugin waits for the agent to
+# respond to the 'resetnetwork' request specifically. The generic timeout for
+# agent communication ``agent_timeout`` is ignored in this case.
+#  (integer value)
+# Minimum value: 0
+#agent_resetnetwork_timeout=60
+
+#
+# Path to locate guest agent on the server.
+#
+# Specifies the path in which the XenAPI guest agent should be located. If the
+# agent is present, network configuration is not injected into the image.
+#
+# Related options:
+#
+# For this option to have an effect:
+# * ``flat_injected`` should be set to ``True``
+# * ``compute_driver`` should be set to ``xenapi.XenAPIDriver``
+#
+#  (string value)
+#agent_path=usr/sbin/xe-update-networking
+
+#
+# Disables the use of XenAPI agent.
+#
+# This configuration option suggests whether the use of agent should be enabled
+# or not regardless of what image properties are present. Image properties have
+# an effect only when this is set to ``True``. Read description of config option
+# ``use_agent_default`` for more information.
+#
+# Related options:
+#
+# * ``use_agent_default``
+#
+#  (boolean value)
+#disable_agent=false
+
+#
+# Whether or not to use the agent by default when its usage is enabled but not
+# indicated by the image.
+#
+# The use of XenAPI agent can be disabled altogether using the configuration
+# option ``disable_agent``. However, if it is not disabled, the use of an agent
+# can still be controlled by the image in use through one of its properties,
+# ``xenapi_use_agent``. If this property is either not present or specified
+# incorrectly on the image, the use of agent is determined by this configuration
+# option.
+#
+# Note that if this configuration is set to ``True`` when the agent is not
+# present, the boot times will increase significantly.
+#
+# Related options:
+#
+# * ``disable_agent``
+#
+#  (boolean value)
+#use_agent_default=false
+
+# Timeout in seconds for XenAPI login. (integer value)
+# Minimum value: 0
+#login_timeout=10
+
+#
+# Maximum number of concurrent XenAPI connections.
+#
+# In nova, multiple XenAPI requests can happen at a time.
+# Configuring this option will parallelize access to the XenAPI
+# session, which allows you to make concurrent XenAPI connections.
+#  (integer value)
+# Minimum value: 1
+#connection_concurrent=5
+
+# DEPRECATED:
+# Base URL for torrent files; must contain a slash character (see RFC 1808,
+# step 6).
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_base_url=<None>
+
+# DEPRECATED: Probability that peer will become a seeder (1.0 = 100%) (floating
+# point value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_seed_chance=1.0
+
+# DEPRECATED:
+# Number of seconds after downloading an image via BitTorrent that it should
+# be seeded for other peers.'
+#  (integer value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_seed_duration=3600
+
+# DEPRECATED:
+# Cached torrent files not accessed within this number of seconds can be reaped.
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_max_last_accessed=86400
+
+# DEPRECATED: Beginning of port range to listen on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_listen_port_start=6881
+
+# DEPRECATED: End of port range to listen on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_listen_port_end=6891
+
+# DEPRECATED:
+# Number of seconds a download can remain at the same progress percentage w/o
+# being considered a stall.
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_download_stall_cutoff=600
+
+# DEPRECATED:
+# Maximum number of seeder processes to run concurrently within a given dom0
+# (-1 = no limit).
+#  (integer value)
+# Minimum value: -1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_max_seeder_processes_per_host=1
+
+#
+# Cache glance images locally.
+#
+# The value for this option must be chosen from the choices listed
+# here. Configuring a value other than these will default to 'all'.
+#
+# Note: There is nothing that deletes these images.
+#
+# Possible values:
+#
+# * `all`: will cache all images.
+# * `some`: will only cache images that have the
+#   image_property `cache_in_nova=True`.
+# * `none`: turns off caching entirely.
+#  (string value)
+# Allowed values: all, some, none
+#cache_images=all
+
+#
+# Compression level for images.
+#
+# By setting this option we can configure the gzip compression level.
+# This option sets GZIP environment variable before spawning tar -cz
+# to force the compression level. It defaults to none, which means the
+# GZIP environment variable is not set and the default (usually -6)
+# is used.
+#
+# Possible values:
+#
+# * Range is 1-9, e.g., 9 for gzip -9, 9 being most
+#   compressed but most CPU intensive on dom0.
+# * Any values out of this range will default to None.
+#  (integer value)
+# Minimum value: 1
+# Maximum value: 9
+#image_compression_level=<None>
+
+# Default OS type used when uploading an image to glance (string value)
+#default_os_type=linux
+
+# Time in secs to wait for a block device to be created (integer value)
+# Minimum value: 1
+#block_device_creation_timeout=10
+
+#
+# Maximum size in bytes of kernel or ramdisk images.
+#
+# Specifying the maximum size of kernel or ramdisk will avoid copying
+# large files to dom0 and fill up /boot/guest.
+#  (integer value)
+#max_kernel_ramdisk_size=16777216
+
+#
+# Filter for finding the SR to be used to install guest instances on.
+#
+# Possible values:
+#
+# * To use the Local Storage in default XenServer/XCP installations
+#   set this flag to other-config:i18n-key=local-storage.
+# * To select an SR with a different matching criteria, you could
+#   set it to other-config:my_favorite_sr=true.
+# * To fall back on the Default SR, as displayed by XenCenter,
+#   set this flag to: default-sr:true.
+#  (string value)
+#sr_matching_filter=default-sr:true
+
+#
+# Whether to use sparse_copy for copying data on a resize down.
+# (False will use standard dd). This speeds up resizes down
+# considerably since large runs of zeros won't have to be rsynced.
+#  (boolean value)
+#sparse_copy=true
+
+#
+# Maximum number of retries to unplug VBD.
+# If set to 0, should try once, no retries.
+#  (integer value)
+# Minimum value: 0
+#num_vbd_unplug_retries=10
+
+#
+# Whether or not to download images via Bit Torrent.
+#
+# The value for this option must be chosen from the choices listed
+# here. Configuring a value other than these will default to 'none'.
+#
+# Possible values:
+#
+# * `all`: will download all images.
+# * `some`: will only download images that have the image_property
+#           `bittorrent=true`.
+# * `none`: will turnoff downloading images via Bit Torrent.
+#  (string value)
+# Allowed values: all, some, none
+#torrent_images=none
+
+#
+# Name of network to use for booting iPXE ISOs.
+#
+# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
+# This feature gives a means to roll your own image.
+#
+# By default this option is not set. Enable this option to
+# boot an iPXE ISO.
+#
+# Related Options:
+#
+# * `ipxe_boot_menu_url`
+# * `ipxe_mkisofs_cmd`
+#  (string value)
+#ipxe_network_name=<None>
+
+#
+# URL to the iPXE boot menu.
+#
+# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
+# This feature gives a means to roll your own image.
+#
+# By default this option is not set. Enable this option to
+# boot an iPXE ISO.
+#
+# Related Options:
+#
+# * `ipxe_network_name`
+# * `ipxe_mkisofs_cmd`
+#  (string value)
+#ipxe_boot_menu_url=<None>
+
+#
+# Name and optionally path of the tool used for ISO image creation.
+#
+# An iPXE ISO is a specially crafted ISO which supports iPXE booting.
+# This feature gives a means to roll your own image.
+#
+# Note: By default `mkisofs` is not present in the Dom0, so the
+# package can either be manually added to Dom0 or include the
+# `mkisofs` binary in the image itself.
+#
+# Related Options:
+#
+# * `ipxe_network_name`
+# * `ipxe_boot_menu_url`
+#  (string value)
+#ipxe_mkisofs_cmd=mkisofs
+
+#
+# URL for connection to XenServer/Xen Cloud Platform. A special value
+# of unix://local can be used to connect to the local unix socket.
+#
+# Possible values:
+#
+# * Any string that represents a URL. The connection_url is
+#   generally the management network IP address of the XenServer.
+# * This option must be set if you chose the XenServer driver.
+#  (string value)
+#connection_url=<None>
+
+# Username for connection to XenServer/Xen Cloud Platform (string value)
+#connection_username=root
+
+# Password for connection to XenServer/Xen Cloud Platform (string value)
+#connection_password=<None>
+
+#
+# The interval used for polling of coalescing vhds.
+#
+# This is the interval after which the task of coalesce VHD is
+# performed, until it reaches the max attempts that is set by
+# vhd_coalesce_max_attempts.
+#
+# Related options:
+#
+# * `vhd_coalesce_max_attempts`
+#  (floating point value)
+# Minimum value: 0
+#vhd_coalesce_poll_interval=5.0
+
+#
+# Ensure compute service is running on host XenAPI connects to.
+# This option must be set to false if the 'independent_compute'
+# option is set to true.
+#
+# Possible values:
+#
+# * Setting this option to true will make sure that compute service
+#   is running on the same host that is specified by connection_url.
+# * Setting this option to false, doesn't perform the check.
+#
+# Related options:
+#
+# * `independent_compute`
+#  (boolean value)
+#check_host=true
+
+#
+# Max number of times to poll for VHD to coalesce.
+#
+# This option determines the maximum number of attempts that can be
+# made for coalescing the VHD before giving up.
+#
+# Related opitons:
+#
+# * `vhd_coalesce_poll_interval`
+#  (integer value)
+# Minimum value: 0
+#vhd_coalesce_max_attempts=20
+
+# Base path to the storage repository on the XenServer host. (string value)
+#sr_base_path=/var/run/sr-mount
+
+#
+# The iSCSI Target Host.
+#
+# This option represents the hostname or ip of the iSCSI Target.
+# If the target host is not present in the connection information from
+# the volume provider then the value from this option is taken.
+#
+# Possible values:
+#
+# * Any string that represents hostname/ip of Target.
+#  (string value)
+#target_host=<None>
+
+#
+# The iSCSI Target Port.
+#
+# This option represents the port of the iSCSI Target. If the
+# target port is not present in the connection information from the
+# volume provider then the value from this option is taken.
+#  (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#target_port=3260
+
+# DEPRECATED:
+# Used to enable the remapping of VBD dev.
+# (Works around an issue in Ubuntu Maverick)
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This option provided a workaround for issues in Ubuntu Maverick, which
+# was released in April 2010 and was dropped from support in April 2012.
+# There's no reason to continue supporting this option.
+#remap_vbd_dev=false
+
+#
+# Specify prefix to remap VBD dev to (ex. /dev/xvdb -> /dev/sdb).
+#
+# Related options:
+#
+# * If `remap_vbd_dev` is set to False this option has no impact.
+#  (string value)
+#remap_vbd_dev_prefix=sd
+
+#
+# Used to prevent attempts to attach VBDs locally, so Nova can
+# be run in a VM on a different host.
+#
+# Related options:
+#
+# * ``CONF.flat_injected`` (Must be False)
+# * ``CONF.xenserver.check_host`` (Must be False)
+# * ``CONF.default_ephemeral_format`` (Must be unset or 'ext3')
+# * Joining host aggregates (will error if attempted)
+# * Swap disks for Windows VMs (will error if attempted)
+# * Nova-based auto_configure_disk (will error if attempted)
+#  (boolean value)
+#independent_compute=false
+
+#
+# Wait time for instances to go to running state.
+#
+# Provide an integer value representing time in seconds to set the
+# wait time for an instance to go to running state.
+#
+# When a request to create an instance is received by nova-api and
+# communicated to nova-compute, the creation of the instance occurs
+# through interaction with Xen via XenAPI in the compute node. Once
+# the node on which the instance(s) are to be launched is decided by
+# nova-schedule and the launch is triggered, a certain amount of wait
+# time is involved until the instance(s) can become available and
+# 'running'. This wait time is defined by running_timeout. If the
+# instances do not go to running state within this specified wait
+# time, the launch expires and the instance(s) are set to 'error'
+# state.
+#  (integer value)
+# Minimum value: 0
+#running_timeout=60
+
+# DEPRECATED:
+# The XenAPI VIF driver using XenServer Network APIs.
+#
+# Provide a string value representing the VIF XenAPI vif driver to use for
+# plugging virtual network interfaces.
+#
+# Xen configuration uses bridging within the backend domain to allow
+# all VMs to appear on the network as individual hosts. Bridge
+# interfaces are used to create a XenServer VLAN network in which
+# the VIFs for the VM instances are plugged. If no VIF bridge driver
+# is plugged, the bridge is not made available. This configuration
+# option takes in a value for the VIF driver.
+#
+# Possible values:
+#
+# * nova.virt.xenapi.vif.XenAPIOpenVswitchDriver (default)
+# * nova.virt.xenapi.vif.XenAPIBridgeDriver (deprecated)
+#
+# Related options:
+#
+# * ``vlan_interface``
+# * ``ovs_integration_bridge``
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There are only two in-tree vif drivers for XenServer. XenAPIBridgeDriver is
+# for
+# nova-network which is deprecated and XenAPIOpenVswitchDriver is for Neutron
+# which is the default configuration for Nova since the 15.0.0 Ocata release. In
+# the future the "use_neutron" configuration option will be used to determine
+# which vif driver to use.
+#vif_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
+
+#
+# Dom0 plugin driver used to handle image uploads.
+#
+# Provide a string value representing a plugin driver required to
+# handle the image uploading to GlanceStore.
+#
+# Images, and snapshots from XenServer need to be uploaded to the data
+# store for use. image_upload_handler takes in a value for the Dom0
+# plugin driver. This driver is then called to uplaod images to the
+# GlanceStore.
+#  (string value)
+#image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
+
+#
+# Number of seconds to wait for SR to settle if the VDI
+# does not exist when first introduced.
+#
+# Some SRs, particularly iSCSI connections are slow to see the VDIs
+# right after they got introduced. Setting this option to a
+# time interval will make the SR to wait for that time period
+# before raising VDI not found exception.
+#  (integer value)
+# Minimum value: 0
+#introduce_vdi_retry_wait=20
+
+#
+# The name of the integration Bridge that is used with xenapi
+# when connecting with Open vSwitch.
+#
+# Note: The value of this config option is dependent on the
+# environment, therefore this configuration value must be set
+# accordingly if you are using XenAPI.
+#
+# Possible values:
+#
+# * Any string that represents a bridge name.
+#  (string value)
+#ovs_integration_bridge=<None>
+
+#
+# When adding new host to a pool, this will append a --force flag to the
+# command, forcing hosts to join a pool, even if they have different CPUs.
+#
+# Since XenServer version 5.6 it is possible to create a pool of hosts that have
+# different CPU capabilities. To accommodate CPU differences, XenServer limited
+# features it uses to determine CPU compatibility to only the ones that are
+# exposed by CPU and support for CPU masking was added.
+# Despite this effort to level differences between CPUs, it is still possible
+# that adding new host will fail, thus option to force join was introduced.
+#  (boolean value)
+#use_join_force=true
+
+#
+# Publicly visible name for this console host.
+#
+# Possible values:
+#
+# * A string representing a valid hostname
+#  (string value)
+# Deprecated group/name - [DEFAULT]/console_public_hostname
+#console_public_hostname=lcy01-22
+
+
+[xvp]
+#
+# Configuration options for XVP.
+#
+# xvp (Xen VNC Proxy) is a proxy server providing password-protected VNC-based
+# access to the consoles of virtual machines hosted on Citrix XenServer.
+
+#
+# From nova.conf
+#
+
+# XVP conf template (string value)
+# Deprecated group/name - [DEFAULT]/console_xvp_conf_template
+#console_xvp_conf_template=$pybasedir/nova/console/xvp.conf.template
+
+# Generated XVP conf file (string value)
+# Deprecated group/name - [DEFAULT]/console_xvp_conf
+#console_xvp_conf=/etc/xvp.conf
+
+# XVP master process pid file (string value)
+# Deprecated group/name - [DEFAULT]/console_xvp_pid
+#console_xvp_pid=/var/run/xvp.pid
+
+# XVP log file (string value)
+# Deprecated group/name - [DEFAULT]/console_xvp_log
+#console_xvp_log=/var/log/xvp.log
+
+# Port for XVP to multiplex VNC connections on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/console_xvp_multiplex_port
+#console_xvp_multiplex_port=5900

2017-09-27 10:08:52,636 [salt.state       ][INFO    ][12805] Completed state [/etc/nova/nova.conf] at time 10:08:52.635744 duration_in_ms=367.528
2017-09-27 10:08:52,636 [salt.state       ][INFO    ][12805] Running state [/etc/nova/api-paste.ini] at time 10:08:52.636272
2017-09-27 10:08:52,636 [salt.state       ][INFO    ][12805] Executing state file.managed for /etc/nova/api-paste.ini
2017-09-27 10:08:52,667 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/files/ocata/api-paste.ini.Debian'
2017-09-27 10:08:52,689 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/map.jinja'
2017-09-27 10:08:52,734 [salt.state       ][INFO    ][12805] File changed:
--- 
+++ 
@@ -68,7 +68,6 @@
 
 [app:oscomputeversionapp]
 paste.app_factory = nova.api.openstack.compute.versions:Versions.factory
-
 ##########
 # Shared #
 ##########

2017-09-27 10:08:52,734 [salt.state       ][INFO    ][12805] Completed state [/etc/nova/api-paste.ini] at time 10:08:52.733820 duration_in_ms=97.547
2017-09-27 10:08:53,187 [salt.state       ][INFO    ][12805] Running state [nova_controller_api_db_sync_version_20] at time 10:08:53.187279
2017-09-27 10:08:53,189 [salt.state       ][ERROR   ][12805] State 'novang.api_db_version_present' was not found in SLS 'nova.controller'
Reason: 'novang.api_db_version_present' is not available.

2017-09-27 10:08:53,190 [salt.state       ][INFO    ][12805] Running state [nova_controller_db_sync_version_334] at time 10:08:53.190473
2017-09-27 10:08:53,192 [salt.state       ][ERROR   ][12805] State 'novang.db_version_present' was not found in SLS 'nova.controller'
Reason: 'novang.db_version_present' is not available.

2017-09-27 10:08:53,195 [salt.state       ][INFO    ][12805] Running state [nova-manage cell_v2 map_cell0] at time 10:08:53.194648
2017-09-27 10:08:53,195 [salt.state       ][INFO    ][12805] Executing state cmd.run for nova-manage cell_v2 map_cell0
2017-09-27 10:08:53,195 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'nova-manage cell_v2 map_cell0' in directory '/root'
2017-09-27 10:08:55,830 [salt.state       ][INFO    ][12805] {'pid': 25908, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:08:55,830 [salt.state       ][INFO    ][12805] Completed state [nova-manage cell_v2 map_cell0] at time 10:08:55.830091 duration_in_ms=2635.442
2017-09-27 10:08:55,830 [salt.state       ][INFO    ][12805] Running state [nova-manage cell_v2 create_cell --name=cell1] at time 10:08:55.830355
2017-09-27 10:08:55,831 [salt.state       ][INFO    ][12805] Executing state cmd.run for nova-manage cell_v2 create_cell --name=cell1
2017-09-27 10:08:55,831 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'nova-manage cell_v2 list_cells | grep cell1' in directory '/root'
2017-09-27 10:08:58,449 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'nova-manage cell_v2 create_cell --name=cell1' in directory '/root'
2017-09-27 10:08:58,793 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100858782870
2017-09-27 10:08:58,817 [salt.minion      ][INFO    ][25975] Starting a new job with PID 25975
2017-09-27 10:08:58,831 [salt.minion      ][INFO    ][25975] Returning information for job: 20170927100858782870
2017-09-27 10:09:01,292 [salt.state       ][INFO    ][12805] {'pid': 25970, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:09:01,293 [salt.state       ][INFO    ][12805] Completed state [nova-manage cell_v2 create_cell --name=cell1] at time 10:09:01.292609 duration_in_ms=5462.25
2017-09-27 10:09:01,293 [salt.state       ][INFO    ][12805] Running state [/etc/systemd/system/nova-placement-api.service] at time 10:09:01.293046
2017-09-27 10:09:01,293 [salt.state       ][INFO    ][12805] Executing state file.symlink for /etc/systemd/system/nova-placement-api.service
2017-09-27 10:09:01,296 [salt.state       ][INFO    ][12805] {'new': '/etc/systemd/system/nova-placement-api.service'}
2017-09-27 10:09:01,296 [salt.state       ][INFO    ][12805] Completed state [/etc/systemd/system/nova-placement-api.service] at time 10:09:01.295755 duration_in_ms=2.709
2017-09-27 10:09:01,297 [salt.state       ][INFO    ][12805] Running state [nova-placement-api] at time 10:09:01.296886
2017-09-27 10:09:01,297 [salt.state       ][INFO    ][12805] Executing state pkg.installed for nova-placement-api
2017-09-27 10:09:01,315 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-placement-api'] in directory '/root'
2017-09-27 10:09:03,222 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:09:03,269 [salt.state       ][INFO    ][12805] Made the following changes:
'nova-placement-api' changed from 'absent' to '2:15.0.7-1~u16.04+mcp9'

2017-09-27 10:09:03,283 [salt.state       ][INFO    ][12805] Loading fresh modules for state activity
2017-09-27 10:09:03,296 [salt.state       ][INFO    ][12805] Completed state [nova-placement-api] at time 10:09:03.296223 duration_in_ms=1999.338
2017-09-27 10:09:03,299 [salt.state       ][INFO    ][12805] Running state [/etc/apache2/sites-available/nova-placement-api.conf] at time 10:09:03.298519
2017-09-27 10:09:03,299 [salt.state       ][INFO    ][12805] Executing state file.managed for /etc/apache2/sites-available/nova-placement-api.conf
2017-09-27 10:09:03,322 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/files/ocata/nova-placement-api.conf'
2017-09-27 10:09:03,345 [salt.fileclient  ][INFO    ][12805] Fetching file from saltenv 'base', ** done ** 'nova/map.jinja'
2017-09-27 10:09:03,395 [salt.state       ][INFO    ][12805] File changed:
New file
2017-09-27 10:09:03,396 [salt.state       ][INFO    ][12805] Completed state [/etc/apache2/sites-available/nova-placement-api.conf] at time 10:09:03.395534 duration_in_ms=97.013
2017-09-27 10:09:03,396 [salt.state       ][INFO    ][12805] Running state [/etc/apache2/sites-enabled/nova-placement-api.conf] at time 10:09:03.396187
2017-09-27 10:09:03,397 [salt.state       ][INFO    ][12805] Executing state file.symlink for /etc/apache2/sites-enabled/nova-placement-api.conf
2017-09-27 10:09:03,403 [salt.state       ][INFO    ][12805] {'new': '/etc/apache2/sites-enabled/nova-placement-api.conf'}
2017-09-27 10:09:03,403 [salt.state       ][INFO    ][12805] Completed state [/etc/apache2/sites-enabled/nova-placement-api.conf] at time 10:09:03.403324 duration_in_ms=7.135
2017-09-27 10:09:03,406 [salt.state       ][INFO    ][12805] Running state [nova-manage cell_v2 discover_hosts] at time 10:09:03.405701
2017-09-27 10:09:03,406 [salt.state       ][INFO    ][12805] Executing state cmd.run for nova-manage cell_v2 discover_hosts
2017-09-27 10:09:03,409 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'nova-manage cell_v2 discover_hosts' in directory '/root'
2017-09-27 10:09:06,074 [salt.state       ][INFO    ][12805] {'pid': 26037, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:09:06,075 [salt.state       ][INFO    ][12805] Completed state [nova-manage cell_v2 discover_hosts] at time 10:09:06.074708 duration_in_ms=2668.966
2017-09-27 10:09:06,440 [salt.state       ][INFO    ][12805] Running state [cell1] at time 10:09:06.440248
2017-09-27 10:09:06,442 [salt.state       ][ERROR   ][12805] State 'novang.map_instances' was not found in SLS 'nova.controller'
Reason: 'novang.map_instances' is not available.

2017-09-27 10:09:06,442 [salt.state       ][INFO    ][12805] Running state [nova-manage api_db sync] at time 10:09:06.442259
2017-09-27 10:09:06,442 [salt.state       ][INFO    ][12805] Executing state cmd.run for nova-manage api_db sync
2017-09-27 10:09:06,443 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'nova-manage api_db sync' in directory '/root'
2017-09-27 10:09:08,885 [salt.state       ][INFO    ][12805] {'pid': 26071, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:09:08,885 [salt.state       ][INFO    ][12805] Completed state [nova-manage api_db sync] at time 10:09:08.885326 duration_in_ms=2443.066
2017-09-27 10:09:08,886 [salt.state       ][INFO    ][12805] Running state [nova-manage db sync] at time 10:09:08.886063
2017-09-27 10:09:08,886 [salt.state       ][INFO    ][12805] Executing state cmd.run for nova-manage db sync
2017-09-27 10:09:08,887 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'nova-manage db sync' in directory '/root'
2017-09-27 10:09:08,980 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100908969305
2017-09-27 10:09:08,999 [salt.minion      ][INFO    ][26086] Starting a new job with PID 26086
2017-09-27 10:09:09,014 [salt.minion      ][INFO    ][26086] Returning information for job: 20170927100908969305
2017-09-27 10:09:19,165 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100919154677
2017-09-27 10:09:19,188 [salt.minion      ][INFO    ][26110] Starting a new job with PID 26110
2017-09-27 10:09:19,205 [salt.minion      ][INFO    ][26110] Returning information for job: 20170927100919154677
2017-09-27 10:09:29,363 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100929351505
2017-09-27 10:09:29,388 [salt.minion      ][INFO    ][26127] Starting a new job with PID 26127
2017-09-27 10:09:29,407 [salt.minion      ][INFO    ][26127] Returning information for job: 20170927100929351505
2017-09-27 10:09:39,570 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100939559812
2017-09-27 10:09:39,594 [salt.minion      ][INFO    ][26150] Starting a new job with PID 26150
2017-09-27 10:09:39,612 [salt.minion      ][INFO    ][26150] Returning information for job: 20170927100939559812
2017-09-27 10:09:49,777 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100949765902
2017-09-27 10:09:49,804 [salt.minion      ][INFO    ][26167] Starting a new job with PID 26167
2017-09-27 10:09:49,825 [salt.minion      ][INFO    ][26167] Returning information for job: 20170927100949765902
2017-09-27 10:09:59,989 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927100959978951
2017-09-27 10:10:00,016 [salt.minion      ][INFO    ][26185] Starting a new job with PID 26185
2017-09-27 10:10:00,036 [salt.minion      ][INFO    ][26185] Returning information for job: 20170927100959978951
2017-09-27 10:10:10,208 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101010197360
2017-09-27 10:10:10,231 [salt.minion      ][INFO    ][26207] Starting a new job with PID 26207
2017-09-27 10:10:10,248 [salt.minion      ][INFO    ][26207] Returning information for job: 20170927101010197360
2017-09-27 10:10:14,563 [salt.state       ][INFO    ][12805] {'pid': 26082, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.\n/usr/lib/python2.7/dist-packages/pymysql/cursors.py:167: Warning: (1831, u"Duplicate index \'block_device_mapping_instance_uuid_virtual_name_device_name_idx\' defined on the table \'nova_cell0.block_device_mapping\'. This is deprecated and will be disallowed in a future release.")\n  result = self._query(query)\n/usr/lib/python2.7/dist-packages/pymysql/cursors.py:167: Warning: (1831, u"Duplicate index \'uniq_instances0uuid\' defined on the table \'nova_cell0.instances\'. This is deprecated and will be disallowed in a future release.")\n  result = self._query(query)', 'stdout': ''}
2017-09-27 10:10:14,564 [salt.state       ][INFO    ][12805] Completed state [nova-manage db sync] at time 10:10:14.564015 duration_in_ms=65677.948
2017-09-27 10:10:14,567 [salt.state       ][INFO    ][12805] Running state [nova-manage db online_data_migrations] at time 10:10:14.567085
2017-09-27 10:10:14,568 [salt.state       ][INFO    ][12805] Executing state cmd.run for nova-manage db online_data_migrations
2017-09-27 10:10:14,570 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command 'nova-manage db online_data_migrations' in directory '/root'
2017-09-27 10:10:17,342 [salt.state       ][INFO    ][12805] {'pid': 26215, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': 'Running batches of 50 until complete\n+---------------------------------------------+--------------+-----------+\n|                  Migration                  | Total Needed | Completed |\n+---------------------------------------------+--------------+-----------+\n|    aggregate_uuids_online_data_migration    |      0       |     0     |\n| delete_build_requests_with_no_instance_uuid |      0       |     0     |\n|    migrate_aggregate_reset_autoincrement    |      0       |     0     |\n|              migrate_aggregates             |      0       |     0     |\n|      migrate_flavor_reset_autoincrement     |      0       |     0     |\n|               migrate_flavors               |      0       |     0     |\n|      migrate_instance_groups_to_api_db      |      0       |     0     |\n|          migrate_instance_keypairs          |      0       |     0     |\n|      migrate_instances_add_request_spec     |      0       |     0     |\n|          migrate_keypairs_to_api_db         |      0       |     0     |\n+---------------------------------------------+--------------+-----------+'}
2017-09-27 10:10:17,343 [salt.state       ][INFO    ][12805] Completed state [nova-manage db online_data_migrations] at time 10:10:17.342590 duration_in_ms=2775.502
2017-09-27 10:10:17,346 [salt.state       ][INFO    ][12805] Running state [apache2] at time 10:10:17.345842
2017-09-27 10:10:17,346 [salt.state       ][INFO    ][12805] Executing state service.running for apache2
2017-09-27 10:10:17,348 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'status', 'apache2.service', '-n', '0'] in directory '/root'
2017-09-27 10:10:17,371 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'apache2.service'] in directory '/root'
2017-09-27 10:10:17,387 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 10:10:17,406 [salt.state       ][INFO    ][12805] The service apache2 is already running
2017-09-27 10:10:17,406 [salt.state       ][INFO    ][12805] Completed state [apache2] at time 10:10:17.406296 duration_in_ms=60.452
2017-09-27 10:10:17,407 [salt.state       ][INFO    ][12805] Running state [apache2] at time 10:10:17.406702
2017-09-27 10:10:17,407 [salt.state       ][INFO    ][12805] Executing state service.mod_watch for apache2
2017-09-27 10:10:17,408 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'apache2.service'] in directory '/root'
2017-09-27 10:10:17,426 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 10:10:17,448 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'apache2.service'] in directory '/root'
2017-09-27 10:10:19,717 [salt.state       ][INFO    ][12805] {'apache2': True}
2017-09-27 10:10:19,718 [salt.state       ][INFO    ][12805] Completed state [apache2] at time 10:10:19.717924 duration_in_ms=2311.221
2017-09-27 10:10:19,719 [salt.state       ][INFO    ][12805] Running state [nova-novncproxy] at time 10:10:19.718669
2017-09-27 10:10:19,719 [salt.state       ][INFO    ][12805] Executing state service.running for nova-novncproxy
2017-09-27 10:10:19,720 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'status', 'nova-novncproxy.service', '-n', '0'] in directory '/root'
2017-09-27 10:10:19,737 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-novncproxy.service'] in directory '/root'
2017-09-27 10:10:19,755 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-novncproxy.service'] in directory '/root'
2017-09-27 10:10:19,769 [salt.state       ][INFO    ][12805] The service nova-novncproxy is already running
2017-09-27 10:10:19,769 [salt.state       ][INFO    ][12805] Completed state [nova-novncproxy] at time 10:10:19.769319 duration_in_ms=50.649
2017-09-27 10:10:19,770 [salt.state       ][INFO    ][12805] Running state [nova-novncproxy] at time 10:10:19.769533
2017-09-27 10:10:19,770 [salt.state       ][INFO    ][12805] Executing state service.mod_watch for nova-novncproxy
2017-09-27 10:10:19,770 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-novncproxy.service'] in directory '/root'
2017-09-27 10:10:19,791 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-novncproxy.service'] in directory '/root'
2017-09-27 10:10:19,812 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'nova-novncproxy.service'] in directory '/root'
2017-09-27 10:10:19,883 [salt.state       ][INFO    ][12805] {'nova-novncproxy': True}
2017-09-27 10:10:19,883 [salt.state       ][INFO    ][12805] Completed state [nova-novncproxy] at time 10:10:19.883113 duration_in_ms=113.578
2017-09-27 10:10:19,884 [salt.state       ][INFO    ][12805] Running state [nova-cert] at time 10:10:19.884017
2017-09-27 10:10:19,884 [salt.state       ][INFO    ][12805] Executing state service.running for nova-cert
2017-09-27 10:10:19,885 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'status', 'nova-cert.service', '-n', '0'] in directory '/root'
2017-09-27 10:10:19,909 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-cert.service'] in directory '/root'
2017-09-27 10:10:19,939 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-cert.service'] in directory '/root'
2017-09-27 10:10:19,964 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-cert.service'] in directory '/root'
2017-09-27 10:10:19,984 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'nova-cert.service'] in directory '/root'
2017-09-27 10:10:20,037 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-cert.service'] in directory '/root'
2017-09-27 10:10:20,056 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-cert.service'] in directory '/root'
2017-09-27 10:10:20,072 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-cert.service'] in directory '/root'
2017-09-27 10:10:20,092 [salt.state       ][INFO    ][12805] {'nova-cert': True}
2017-09-27 10:10:20,093 [salt.state       ][INFO    ][12805] Completed state [nova-cert] at time 10:10:20.092586 duration_in_ms=208.568
2017-09-27 10:10:20,093 [salt.state       ][INFO    ][12805] Running state [nova-conductor] at time 10:10:20.093297
2017-09-27 10:10:20,094 [salt.state       ][INFO    ][12805] Executing state service.running for nova-conductor
2017-09-27 10:10:20,094 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'status', 'nova-conductor.service', '-n', '0'] in directory '/root'
2017-09-27 10:10:20,112 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-conductor.service'] in directory '/root'
2017-09-27 10:10:20,131 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-conductor.service'] in directory '/root'
2017-09-27 10:10:20,150 [salt.state       ][INFO    ][12805] The service nova-conductor is already running
2017-09-27 10:10:20,151 [salt.state       ][INFO    ][12805] Completed state [nova-conductor] at time 10:10:20.150473 duration_in_ms=57.175
2017-09-27 10:10:20,151 [salt.state       ][INFO    ][12805] Running state [nova-conductor] at time 10:10:20.150645
2017-09-27 10:10:20,151 [salt.state       ][INFO    ][12805] Executing state service.mod_watch for nova-conductor
2017-09-27 10:10:20,151 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-conductor.service'] in directory '/root'
2017-09-27 10:10:20,164 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-conductor.service'] in directory '/root'
2017-09-27 10:10:20,181 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'nova-conductor.service'] in directory '/root'
2017-09-27 10:10:20,422 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101020412851
2017-09-27 10:10:20,445 [salt.minion      ][INFO    ][26583] Starting a new job with PID 26583
2017-09-27 10:10:20,454 [salt.minion      ][INFO    ][26583] Returning information for job: 20170927101020412851
2017-09-27 10:10:21,007 [salt.state       ][INFO    ][12805] {'nova-conductor': True}
2017-09-27 10:10:21,008 [salt.state       ][INFO    ][12805] Completed state [nova-conductor] at time 10:10:21.007700 duration_in_ms=857.054
2017-09-27 10:10:21,008 [salt.state       ][INFO    ][12805] Running state [nova-api] at time 10:10:21.008331
2017-09-27 10:10:21,009 [salt.state       ][INFO    ][12805] Executing state service.running for nova-api
2017-09-27 10:10:21,009 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'status', 'nova-api.service', '-n', '0'] in directory '/root'
2017-09-27 10:10:21,024 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-api.service'] in directory '/root'
2017-09-27 10:10:21,040 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-api.service'] in directory '/root'
2017-09-27 10:10:21,058 [salt.state       ][INFO    ][12805] The service nova-api is already running
2017-09-27 10:10:21,059 [salt.state       ][INFO    ][12805] Completed state [nova-api] at time 10:10:21.058585 duration_in_ms=50.253
2017-09-27 10:10:21,059 [salt.state       ][INFO    ][12805] Running state [nova-api] at time 10:10:21.058743
2017-09-27 10:10:21,059 [salt.state       ][INFO    ][12805] Executing state service.mod_watch for nova-api
2017-09-27 10:10:21,059 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-api.service'] in directory '/root'
2017-09-27 10:10:21,075 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-api.service'] in directory '/root'
2017-09-27 10:10:21,098 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'nova-api.service'] in directory '/root'
2017-09-27 10:10:24,412 [salt.state       ][INFO    ][12805] {'nova-api': True}
2017-09-27 10:10:24,412 [salt.state       ][INFO    ][12805] Completed state [nova-api] at time 10:10:24.412166 duration_in_ms=3353.421
2017-09-27 10:10:24,413 [salt.state       ][INFO    ][12805] Running state [nova-consoleauth] at time 10:10:24.412803
2017-09-27 10:10:24,413 [salt.state       ][INFO    ][12805] Executing state service.running for nova-consoleauth
2017-09-27 10:10:24,413 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'status', 'nova-consoleauth.service', '-n', '0'] in directory '/root'
2017-09-27 10:10:24,429 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-consoleauth.service'] in directory '/root'
2017-09-27 10:10:24,451 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-consoleauth.service'] in directory '/root'
2017-09-27 10:10:24,467 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-consoleauth.service'] in directory '/root'
2017-09-27 10:10:24,487 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'nova-consoleauth.service'] in directory '/root'
2017-09-27 10:10:24,531 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-consoleauth.service'] in directory '/root'
2017-09-27 10:10:24,554 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-consoleauth.service'] in directory '/root'
2017-09-27 10:10:24,575 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-consoleauth.service'] in directory '/root'
2017-09-27 10:10:24,590 [salt.state       ][INFO    ][12805] {'nova-consoleauth': True}
2017-09-27 10:10:24,591 [salt.state       ][INFO    ][12805] Completed state [nova-consoleauth] at time 10:10:24.590672 duration_in_ms=177.866
2017-09-27 10:10:24,591 [salt.state       ][INFO    ][12805] Running state [nova-scheduler] at time 10:10:24.591277
2017-09-27 10:10:24,591 [salt.state       ][INFO    ][12805] Executing state service.running for nova-scheduler
2017-09-27 10:10:24,592 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'status', 'nova-scheduler.service', '-n', '0'] in directory '/root'
2017-09-27 10:10:24,619 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-scheduler.service'] in directory '/root'
2017-09-27 10:10:24,634 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-scheduler.service'] in directory '/root'
2017-09-27 10:10:24,645 [salt.state       ][INFO    ][12805] The service nova-scheduler is already running
2017-09-27 10:10:24,645 [salt.state       ][INFO    ][12805] Completed state [nova-scheduler] at time 10:10:24.644977 duration_in_ms=53.698
2017-09-27 10:10:24,645 [salt.state       ][INFO    ][12805] Running state [nova-scheduler] at time 10:10:24.645122
2017-09-27 10:10:24,645 [salt.state       ][INFO    ][12805] Executing state service.mod_watch for nova-scheduler
2017-09-27 10:10:24,646 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-active', 'nova-scheduler.service'] in directory '/root'
2017-09-27 10:10:24,663 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemctl', 'is-enabled', 'nova-scheduler.service'] in directory '/root'
2017-09-27 10:10:24,683 [salt.loaded.int.module.cmdmod][INFO    ][12805] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'nova-scheduler.service'] in directory '/root'
2017-09-27 10:10:26,566 [salt.state       ][INFO    ][12805] {'nova-scheduler': True}
2017-09-27 10:10:26,566 [salt.state       ][INFO    ][12805] Completed state [nova-scheduler] at time 10:10:26.566218 duration_in_ms=1921.095
2017-09-27 10:10:26,568 [salt.minion      ][INFO    ][12805] Returning information for job: 20170927100656963529
2017-09-27 10:12:51,018 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command test.ping with jid 20170927101251008188
2017-09-27 10:12:51,050 [salt.minion      ][INFO    ][26987] Starting a new job with PID 26987
2017-09-27 10:12:51,096 [salt.minion      ][INFO    ][26987] Returning information for job: 20170927101251008188
2017-09-27 10:13:41,188 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927101341177011
2017-09-27 10:13:41,211 [salt.minion      ][INFO    ][27002] Starting a new job with PID 27002
2017-09-27 10:13:42,934 [salt.state       ][INFO    ][27002] Loading fresh modules for state activity
2017-09-27 10:13:42,961 [salt.fileclient  ][INFO    ][27002] Fetching file from saltenv 'base', ** done ** 'heat/init.sls'
2017-09-27 10:13:42,982 [salt.fileclient  ][INFO    ][27002] Fetching file from saltenv 'base', ** done ** 'heat/server.sls'
2017-09-27 10:13:43,012 [salt.fileclient  ][INFO    ][27002] Fetching file from saltenv 'base', ** done ** 'heat/map.jinja'
2017-09-27 10:13:44,469 [salt.state       ][INFO    ][27002] Running state [heat-api-cloudwatch] at time 10:13:44.469371
2017-09-27 10:13:44,470 [salt.state       ][INFO    ][27002] Executing state pkg.installed for heat-api-cloudwatch
2017-09-27 10:13:44,470 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:13:45,007 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 10:13:47,691 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'heat-api-cloudwatch'] in directory '/root'
2017-09-27 10:13:51,230 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101351219378
2017-09-27 10:13:51,254 [salt.minion      ][INFO    ][27604] Starting a new job with PID 27604
2017-09-27 10:13:51,275 [salt.minion      ][INFO    ][27604] Returning information for job: 20170927101351219378
2017-09-27 10:14:01,392 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101401380149
2017-09-27 10:14:01,418 [salt.minion      ][INFO    ][27801] Starting a new job with PID 27801
2017-09-27 10:14:01,437 [salt.minion      ][INFO    ][27801] Returning information for job: 20170927101401380149
2017-09-27 10:14:07,159 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:14:07,211 [salt.state       ][INFO    ][27002] Made the following changes:
'python-magnumclient' changed from 'absent' to '2.5.0-1~u16.04+mcp5'
'python-ceilometerclient' changed from 'absent' to '2.8.1-1~u16.04+mcp0'
'python-heatclient' changed from 'absent' to '1.8.0-1~u16.04+mcp2'
'python-manilaclient' changed from 'absent' to '1.14.0-1~u16.04+mcp2'
'python-senlinclient' changed from 'absent' to '1.2.0-1~u16.04+mcp7'
'python-barbicanclient' changed from 'absent' to '4.2.0-1~u16.04+mcp2'
'heat-api-cloudwatch' changed from 'absent' to '1:8.0.1-1~u16.04+mcp9'
'python-aodhclient' changed from 'absent' to '0.8.0-1~u16.04+mcp1'
'python-monascaclient' changed from 'absent' to '1.5.0-1~u16.04+mcp2'
'python-croniter' changed from 'absent' to '0.3.8-1'
'python-mistralclient' changed from 'absent' to '1:3.0.0-1~u16.04+mcp5'
'heat-common' changed from 'absent' to '1:8.0.1-1~u16.04+mcp9'
'python-manilaclient-doc' changed from 'absent' to '1.14.0-1~u16.04+mcp2'
'python-troveclient' changed from 'absent' to '1:2.8.0-1~u16.04+mcp0'
'python-heat' changed from 'absent' to '1:8.0.1-1~u16.04+mcp9'
'python-saharaclient' changed from 'absent' to '1.1.0-1~u16.04+mcp4'
'python-yaql' changed from 'absent' to '1.1.0-0ubuntu1'
'python-zaqarclient' changed from 'absent' to '1.4.0-1~u16.04+mcp0'

2017-09-27 10:14:07,239 [salt.state       ][INFO    ][27002] Loading fresh modules for state activity
2017-09-27 10:14:07,288 [salt.state       ][INFO    ][27002] Completed state [heat-api-cloudwatch] at time 10:14:07.288421 duration_in_ms=22819.05
2017-09-27 10:14:07,297 [salt.state       ][INFO    ][27002] Running state [heat-api-cfn] at time 10:14:07.296985
2017-09-27 10:14:07,297 [salt.state       ][INFO    ][27002] Executing state pkg.installed for heat-api-cfn
2017-09-27 10:14:07,515 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'heat-api-cfn'] in directory '/root'
2017-09-27 10:14:11,279 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:14:11,312 [salt.state       ][INFO    ][27002] Made the following changes:
'heat-api-cfn' changed from 'absent' to '1:8.0.1-1~u16.04+mcp9'

2017-09-27 10:14:11,324 [salt.state       ][INFO    ][27002] Loading fresh modules for state activity
2017-09-27 10:14:11,336 [salt.state       ][INFO    ][27002] Completed state [heat-api-cfn] at time 10:14:11.336223 duration_in_ms=4039.238
2017-09-27 10:14:11,341 [salt.state       ][INFO    ][27002] Running state [python-heatclient] at time 10:14:11.341381
2017-09-27 10:14:11,342 [salt.state       ][INFO    ][27002] Executing state pkg.installed for python-heatclient
2017-09-27 10:14:11,578 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101411545326
2017-09-27 10:14:11,593 [salt.minion      ][INFO    ][28226] Starting a new job with PID 28226
2017-09-27 10:14:11,597 [salt.state       ][INFO    ][27002] Package python-heatclient is already installed
2017-09-27 10:14:11,597 [salt.state       ][INFO    ][27002] Completed state [python-heatclient] at time 10:14:11.597292 duration_in_ms=255.91
2017-09-27 10:14:11,598 [salt.state       ][INFO    ][27002] Running state [heat-common] at time 10:14:11.597620
2017-09-27 10:14:11,598 [salt.state       ][INFO    ][27002] Executing state pkg.installed for heat-common
2017-09-27 10:14:11,601 [salt.state       ][INFO    ][27002] Package heat-common is already installed
2017-09-27 10:14:11,601 [salt.state       ][INFO    ][27002] Completed state [heat-common] at time 10:14:11.600893 duration_in_ms=3.273
2017-09-27 10:14:11,601 [salt.state       ][INFO    ][27002] Running state [gettext-base] at time 10:14:11.601195
2017-09-27 10:14:11,602 [salt.state       ][INFO    ][27002] Executing state pkg.installed for gettext-base
2017-09-27 10:14:11,604 [salt.minion      ][INFO    ][28226] Returning information for job: 20170927101411545326
2017-09-27 10:14:11,604 [salt.state       ][INFO    ][27002] Package gettext-base is already installed
2017-09-27 10:14:11,605 [salt.state       ][INFO    ][27002] Completed state [gettext-base] at time 10:14:11.604618 duration_in_ms=3.423
2017-09-27 10:14:11,605 [salt.state       ][INFO    ][27002] Running state [heat-engine] at time 10:14:11.604913
2017-09-27 10:14:11,605 [salt.state       ][INFO    ][27002] Executing state pkg.installed for heat-engine
2017-09-27 10:14:11,613 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'heat-engine'] in directory '/root'
2017-09-27 10:14:15,316 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:14:15,359 [salt.state       ][INFO    ][27002] Made the following changes:
'heat-engine' changed from 'absent' to '1:8.0.1-1~u16.04+mcp9'

2017-09-27 10:14:15,374 [salt.state       ][INFO    ][27002] Loading fresh modules for state activity
2017-09-27 10:14:15,391 [salt.state       ][INFO    ][27002] Completed state [heat-engine] at time 10:14:15.391134 duration_in_ms=3786.22
2017-09-27 10:14:15,401 [salt.state       ][INFO    ][27002] Running state [heat-api] at time 10:14:15.401024
2017-09-27 10:14:15,402 [salt.state       ][INFO    ][27002] Executing state pkg.installed for heat-api
2017-09-27 10:14:15,708 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'heat-api'] in directory '/root'
2017-09-27 10:14:19,365 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:14:19,405 [salt.state       ][INFO    ][27002] Made the following changes:
'heat-api' changed from 'absent' to '1:8.0.1-1~u16.04+mcp9'

2017-09-27 10:14:19,432 [salt.state       ][INFO    ][27002] Loading fresh modules for state activity
2017-09-27 10:14:19,462 [salt.state       ][INFO    ][27002] Completed state [heat-api] at time 10:14:19.462046 duration_in_ms=4061.021
2017-09-27 10:14:19,466 [salt.state       ][INFO    ][27002] Running state [/etc/heat/heat.conf] at time 10:14:19.466080
2017-09-27 10:14:19,467 [salt.state       ][INFO    ][27002] Executing state file.managed for /etc/heat/heat.conf
2017-09-27 10:14:19,505 [salt.fileclient  ][INFO    ][27002] Fetching file from saltenv 'base', ** done ** 'heat/files/ocata/heat.conf.Debian'
2017-09-27 10:14:19,618 [salt.fileclient  ][INFO    ][27002] Fetching file from saltenv 'base', ** done ** 'heat/map.jinja'
2017-09-27 10:14:19,684 [salt.state       ][INFO    ][27002] File changed:
--- 
+++ 
@@ -1,8 +1,10 @@
+
 [DEFAULT]
 
 #
 # From heat.api.middleware.ssl
 #
+region_name_for_services=RegionOne
 
 # The HTTP Header that will be used to determine what the original request
 # protocol scheme was, even if it was removed by an SSL terminator proxy.
@@ -16,7 +18,7 @@
 
 # Name of the engine node. This can be an opaque identifier. It is not
 # necessarily a hostname, FQDN, or IP address. (string value)
-#host = 04b90e956415
+#host = lgw01-02
 
 # List of directories to search for plug-ins. (list value)
 #plugin_dirs = /usr/lib64/heat,/usr/lib/heat,/usr/local/lib/heat,/usr/local/lib64/heat
@@ -166,10 +168,6 @@
 # resource properties before storing them in database. (boolean value)
 #encrypt_parameters_and_properties = false
 
-# Number of tasks, running at a time. If option equals 0, batching is disabled.
-# (integer value)
-#task_batch_size = 0
-
 # Seconds between running periodic tasks. (integer value)
 #periodic_interval = 60
 
@@ -177,15 +175,19 @@
 # require instances to use a different endpoint than in the keystone catalog
 # (string value)
 #heat_metadata_server_url = <None>
+heat_metadata_server_url=https://10.167.4.80:8000
 
 # URL of the Heat waitcondition server. (string value)
 #heat_waitcondition_server_url = <None>
+heat_waitcondition_server_url=https://10.167.4.80:8000/v1/waitcondition
 
 # URL of the Heat CloudWatch server. (string value)
 #heat_watch_server_url =
+heat_watch_server_url=https://10.167.4.80:8003
 
 # Instance connection to CFN/CW API via https. (string value)
 #instance_connection_is_secure = 0
+instance_connection_is_secure=0
 
 # Instance connection to CFN/CW API validate certs if SSL is used. (string
 # value)
@@ -206,12 +208,15 @@
 # `stack_user_domain_id` option is set, this option is ignored. (string value)
 #stack_user_domain_name = <None>
 
-# Keystone username, a user with roles sufficient to manage users and projects
-# in the stack_user_domain. (string value)
-#stack_domain_admin = <None>
-
-# Keystone password for stack_domain_admin user. (string value)
-#stack_domain_admin_password = <None>
+# Keystone username, a user with roles sufficient to manage
+# users and projects in the stack_user_domain. (string value)
+stack_domain_admin = heat_domain_admin
+
+# Keystone password for stack_domain_admin user. (string
+# value)
+stack_domain_admin_password=opnfv_secret
+
+stack_user_domain_name = heat_user_domain
 
 # Maximum raw byte size of any template. (integer value)
 #max_template_size = 524288
@@ -277,6 +282,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #verbose = true
+verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -298,6 +304,7 @@
 # log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
+log_file=/var/log/heat/heat.log
 
 # (Optional) The base directory used for relative log_file  paths. This option
 # is ignored if log_config_append is set. (string value)
@@ -546,11 +553,12 @@
 
 # Seconds to wait for a response from a call. (integer value)
 #rpc_response_timeout = 60
+rpc_response_timeout = 600
 
 # A URL representing the messaging driver to use and its full configuration.
 # (string value)
 #transport_url = <None>
-
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.41:5672,openstack:opnfv_secret@10.167.4.42:5672,openstack:opnfv_secret@10.167.4.43:5672//openstack
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
 # This option is deprecated for removal.
@@ -595,7 +603,9 @@
 # Specify a timeout after which a gracefully shutdown server will exit. Zero
 # value means endless wait. (integer value)
 #graceful_shutdown_timeout = 60
-
+max_resources_per_stack=20000
+max_json_body_size=10880000
+max_template_size=5440000
 
 [auth_password]
 
@@ -1104,6 +1114,7 @@
 
 # Maximum cache age of CORS preflight requests. (integer value)
 #max_age = 3600
+
 
 # Indicate which methods can be used during the actual request. (list value)
 #allow_methods = GET,PUT,POST,DELETE,PATCH
@@ -1170,6 +1181,7 @@
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
 #connection = <None>
+connection = mysql+pymysql://heat:opnfv_secret@10.167.4.50/heat?charset=utf8
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
@@ -1197,12 +1209,14 @@
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
 #max_pool_size = 5
+max_pool_size = 30
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
 #max_retries = 10
+max_retries = -1
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
@@ -1213,6 +1227,7 @@
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
 #max_overflow = 50
+max_overflow = 60
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
@@ -1252,30 +1267,29 @@
 [ec2authtoken]
 
 #
-# From heat.api.aws.ec2token
+# Options defined in heat.api.aws.ec2token
 #
 
 # Authentication Endpoint URI. (string value)
-#auth_uri = <None>
+auth_uri=http://10.167.4.10:5000/v2.0
 
 # Allow orchestration of multiple clouds. (boolean value)
-#multi_cloud = false
-
-# Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least
-# one endpoint needs to be specified. (list value)
-#allowed_auth_uris =
-
-# Optional PEM-formatted certificate chain file. (string value)
-#cert_file = <None>
-
-# Optional PEM-formatted file that contains the private key. (string value)
-#key_file = <None>
-
-# Optional CA cert file to use in SSL connections. (string value)
-#ca_file = <None>
-
-# If set, then the server's certificate will not be verified. (boolean value)
-#insecure = false
+#multi_cloud=false
+
+# Allowed keystone endpoints for auth_uri when multi_cloud is
+# enabled. At least one endpoint needs to be specified. (list
+# value)
+#allowed_auth_uris=
+
+keystone_ec2_uri=http://10.167.4.10:5000/v2.0/ec2tokens
+
+[clients]
+endpoint_type = internalURL
+
+[clients_heat]
+endpoint_type = publicURL
+[clients_keystone]
+auth_uri=http://10.167.4.10:35357
 
 
 [eventlet_opts]
@@ -1331,6 +1345,7 @@
 # interface. (IP address value)
 # Deprecated group/name - [DEFAULT]/bind_host
 #bind_host = 0.0.0.0
+bind_host = 10.167.4.13
 
 # The port on which the server will listen. (port value)
 # Minimum value: 0
@@ -1355,6 +1370,7 @@
 # Minimum value: 0
 # Deprecated group/name - [DEFAULT]/workers
 #workers = 0
+workers=4
 
 # Maximum line size of message headers to be accepted. max_header_line may need
 # to be increased when using large tokens (typically those generated by the
@@ -1377,6 +1393,7 @@
 # interface. (IP address value)
 # Deprecated group/name - [DEFAULT]/bind_host
 #bind_host = 0.0.0.0
+bind_host = 10.167.4.13
 
 # The port on which the server will listen. (port value)
 # Minimum value: 0
@@ -1422,6 +1439,7 @@
 # interface. (IP address value)
 # Deprecated group/name - [DEFAULT]/bind_host
 #bind_host = 0.0.0.0
+bind_host = 10.167.4.13
 
 # The port on which the server will listen. (port value)
 # Minimum value: 0
@@ -1623,13 +1641,61 @@
 # possible. (boolean value)
 #service_token_roles_required = false
 
+# Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
+# (string value)
+#auth_admin_prefix =
+
+# Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
+# (string value)
+#auth_host = 127.0.0.1
+
+# Port of the admin Identity API endpoint. Deprecated, use identity_uri.
+# (integer value)
+#auth_port = 35357
+
+# Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
+# (string value)
+# Allowed values: http, https
+#auth_protocol = https
+
+# Complete admin Identity API endpoint. This should specify the unversioned
+# root endpoint e.g. https://localhost:35357/ (string value)
+#identity_uri = <None>
+
+# This option is deprecated and may be removed in a future release. Single
+# shared secret with the Keystone configuration used for bootstrapping a
+# Keystone installation, or otherwise bypassing the normal authentication
+# process. This option should not be used, use `admin_user` and
+# `admin_password` instead. (string value)
+#admin_token = <None>
+
+# Service username. (string value)
+#admin_user = <None>
+
+# Service user password. (string value)
+#admin_password = <None>
+
+# Service tenant name. (string value)
+#admin_tenant_name = admin
+
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
 #auth_type = <None>
 
 # Config Section from which to load plugin specific options (string value)
 #auth_section = <None>
-
+auth_type = password
+auth_uri=http://10.167.4.10:5000/v2.0
+#identity_uri=http://10.167.4.10:35357
+#admin_user=heat
+#admin_password=opnfv_secret
+#admin_tenant_name=service
+auth_url=http://10.167.4.10:35357
+username = heat
+password = opnfv_secret
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
 
 [matchmaker_redis]
 
@@ -1924,6 +1990,7 @@
 # messaging, messagingv2, routing, log, test, noop (multi valued)
 # Deprecated group/name - [DEFAULT]/notification_driver
 #driver =
+driver = messagingv2
 
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
@@ -2080,7 +2147,7 @@
 
 # Specifies the number of messages to prefetch. Setting to zero allows
 # unlimited messages. (integer value)
-#rabbit_qos_prefetch_count = 64
+#rabbit_qos_prefetch_count = 0
 
 # Number of seconds after which the Rabbit broker is considered down if
 # heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
@@ -2369,6 +2436,7 @@
 #
 # From oslo.middleware
 #
+enable_proxy_headers_parsing = True
 
 # The maximum body size for each  request, in bytes. (integer value)
 # Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
@@ -2482,8 +2550,43 @@
 # Examples of possible values:
 #
 # * messaging://: use oslo_messaging driver for sending notifications.
+# * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications.
+# * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending
+# notifications.
 #  (string value)
 #connection_string = messaging://
+
+#
+# Document type for notification indexing in elasticsearch.
+#  (string value)
+#es_doc_type = notification
+
+#
+# This parameter is a time value parameter (for example: es_scroll_time=2m),
+# indicating for how long the nodes that participate in the search will
+# maintain
+# relevant resources in order to continue and support it.
+#  (string value)
+#es_scroll_time = 2m
+
+#
+# Elasticsearch splits large requests in batches. This parameter defines
+# maximum size of each batch (for example: es_scroll_size=10000).
+#  (integer value)
+#es_scroll_size = 10000
+
+#
+# Redissentinel provides a timeout option on the connections.
+# This parameter defines that timeout (for example: socket_timeout=0.1).
+#  (floating point value)
+#socket_timeout = 0.1
+
+#
+# Redissentinel uses a service name to identify a master redis service.
+# This parameter defines the name (for example:
+# sentinal_service_name=mymaster).
+#  (string value)
+#sentinel_service_name = mymaster
 
 
 [revision]
@@ -2531,7 +2634,13 @@
 #
 # From heat.common.context
 #
-
+auth_plugin = password
+auth_url = http://10.167.4.10:35357
+username = heat
+password = opnfv_secret
+user_domain_name = default
+project_domain_id = default
+user_domain_id = default
 # Authentication type to load (string value)
 # Deprecated group/name - [trustee]/auth_plugin
 #auth_type = <None>

2017-09-27 10:14:19,684 [salt.state       ][INFO    ][27002] Completed state [/etc/heat/heat.conf] at time 10:14:19.684416 duration_in_ms=218.334
2017-09-27 10:14:19,685 [salt.state       ][INFO    ][27002] Running state [/etc/heat/api-paste.ini] at time 10:14:19.684891
2017-09-27 10:14:19,685 [salt.state       ][INFO    ][27002] Executing state file.managed for /etc/heat/api-paste.ini
2017-09-27 10:14:19,706 [salt.fileclient  ][INFO    ][27002] Fetching file from saltenv 'base', ** done ** 'heat/files/ocata/api-paste.ini'
2017-09-27 10:14:19,710 [salt.state       ][INFO    ][27002] File /etc/heat/api-paste.ini is in the correct state
2017-09-27 10:14:19,710 [salt.state       ][INFO    ][27002] Completed state [/etc/heat/api-paste.ini] at time 10:14:19.709846 duration_in_ms=24.954
2017-09-27 10:14:19,711 [salt.state       ][INFO    ][27002] Running state [source /root/keystonercv3; heat-keystone-setup-domain --stack-user-domain-name heat_user_domain --stack-domain-admin heat_domain_admin --stack-domain-admin-password opnfv_secret] at time 10:14:19.711444
2017-09-27 10:14:19,712 [salt.state       ][INFO    ][27002] Executing state cmd.run for source /root/keystonercv3; heat-keystone-setup-domain --stack-user-domain-name heat_user_domain --stack-domain-admin heat_domain_admin --stack-domain-admin-password opnfv_secret
2017-09-27 10:14:19,714 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command 'source /root/keystonercv3; heat-keystone-setup-domain --stack-user-domain-name heat_user_domain --stack-domain-admin heat_domain_admin --stack-domain-admin-password opnfv_secret' in directory '/root'
2017-09-27 10:14:21,721 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101421710072
2017-09-27 10:14:21,745 [salt.minion      ][INFO    ][28617] Starting a new job with PID 28617
2017-09-27 10:14:21,765 [salt.minion      ][INFO    ][28617] Returning information for job: 20170927101421710072
2017-09-27 10:14:21,975 [salt.state       ][INFO    ][27002] {'pid': 28603, 'retcode': 0, 'stderr': "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py:136: UserWarning: Using keystoneclient sessions has been deprecated. Please update your software to use keystoneauth1.\n  warnings.warn('Using keystoneclient sessions has been deprecated. '", 'stdout': '2017-09-27 10:14:20.886 28605 WARNING oslo_config.cfg [-] Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.\n2017-09-27 10:14:21.376 28605 INFO __main__ [-] Creating domain heat_user_domain\n2017-09-27 10:14:21.424 28605 WARNING __main__ [-] Domain heat_user_domain already exists\n2017-09-27 10:14:21.540 28605 WARNING __main__ [-] User heat_domain_admin already exists\n\nPlease update your heat.conf with the following in [DEFAULT]\n\nstack_user_domain_id=0e0a22119508417ea555692f9001cb0b\nstack_domain_admin=heat_domain_admin\nstack_domain_admin_password=opnfv_secret'}
2017-09-27 10:14:21,975 [salt.state       ][INFO    ][27002] Completed state [source /root/keystonercv3; heat-keystone-setup-domain --stack-user-domain-name heat_user_domain --stack-domain-admin heat_domain_admin --stack-domain-admin-password opnfv_secret] at time 10:14:21.975214 duration_in_ms=2263.768
2017-09-27 10:14:21,977 [salt.state       ][INFO    ][27002] Running state [heat-manage db_sync] at time 10:14:21.976913
2017-09-27 10:14:21,977 [salt.state       ][INFO    ][27002] Executing state cmd.run for heat-manage db_sync
2017-09-27 10:14:21,978 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command 'heat-manage db_sync' in directory '/root'
2017-09-27 10:14:22,848 [salt.state       ][INFO    ][27002] {'pid': 28620, 'retcode': 0, 'stderr': '', 'stdout': '2017-09-27 10:14:22.703 28621 WARNING oslo_config.cfg [-] Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.'}
2017-09-27 10:14:22,848 [salt.state       ][INFO    ][27002] Completed state [heat-manage db_sync] at time 10:14:22.847998 duration_in_ms=871.084
2017-09-27 10:14:22,849 [salt.state       ][INFO    ][27002] Running state [chown heat:heat /var/log/heat/ -R] at time 10:14:22.848537
2017-09-27 10:14:22,849 [salt.state       ][INFO    ][27002] Executing state cmd.run for chown heat:heat /var/log/heat/ -R
2017-09-27 10:14:22,849 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command 'chown heat:heat /var/log/heat/ -R' in directory '/root'
2017-09-27 10:14:22,864 [salt.state       ][INFO    ][27002] {'pid': 28626, 'retcode': 0, 'stderr': '', 'stdout': ''}
2017-09-27 10:14:22,864 [salt.state       ][INFO    ][27002] Completed state [chown heat:heat /var/log/heat/ -R] at time 10:14:22.863995 duration_in_ms=15.456
2017-09-27 10:14:22,965 [salt.state       ][INFO    ][27002] Running state [heat-api-cfn] at time 10:14:22.964521
2017-09-27 10:14:22,965 [salt.state       ][INFO    ][27002] Executing state service.running for heat-api-cfn
2017-09-27 10:14:22,966 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'status', 'heat-api-cfn.service', '-n', '0'] in directory '/root'
2017-09-27 10:14:22,985 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-active', 'heat-api-cfn.service'] in directory '/root'
2017-09-27 10:14:23,003 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-enabled', 'heat-api-cfn.service'] in directory '/root'
2017-09-27 10:14:23,019 [salt.state       ][INFO    ][27002] The service heat-api-cfn is already running
2017-09-27 10:14:23,019 [salt.state       ][INFO    ][27002] Completed state [heat-api-cfn] at time 10:14:23.019076 duration_in_ms=54.551
2017-09-27 10:14:23,020 [salt.state       ][INFO    ][27002] Running state [heat-api-cfn] at time 10:14:23.019456
2017-09-27 10:14:23,020 [salt.state       ][INFO    ][27002] Executing state service.mod_watch for heat-api-cfn
2017-09-27 10:14:23,021 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-active', 'heat-api-cfn.service'] in directory '/root'
2017-09-27 10:14:23,040 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-enabled', 'heat-api-cfn.service'] in directory '/root'
2017-09-27 10:14:23,057 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'heat-api-cfn.service'] in directory '/root'
2017-09-27 10:14:23,167 [salt.state       ][INFO    ][27002] {'heat-api-cfn': True}
2017-09-27 10:14:23,168 [salt.state       ][INFO    ][27002] Completed state [heat-api-cfn] at time 10:14:23.167587 duration_in_ms=148.128
2017-09-27 10:14:23,169 [salt.state       ][INFO    ][27002] Running state [heat-engine] at time 10:14:23.168816
2017-09-27 10:14:23,169 [salt.state       ][INFO    ][27002] Executing state service.running for heat-engine
2017-09-27 10:14:23,171 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'status', 'heat-engine.service', '-n', '0'] in directory '/root'
2017-09-27 10:14:23,193 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-active', 'heat-engine.service'] in directory '/root'
2017-09-27 10:14:23,211 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-enabled', 'heat-engine.service'] in directory '/root'
2017-09-27 10:14:23,227 [salt.state       ][INFO    ][27002] The service heat-engine is already running
2017-09-27 10:14:23,228 [salt.state       ][INFO    ][27002] Completed state [heat-engine] at time 10:14:23.227750 duration_in_ms=58.932
2017-09-27 10:14:23,228 [salt.state       ][INFO    ][27002] Running state [heat-engine] at time 10:14:23.228103
2017-09-27 10:14:23,229 [salt.state       ][INFO    ][27002] Executing state service.mod_watch for heat-engine
2017-09-27 10:14:23,229 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-active', 'heat-engine.service'] in directory '/root'
2017-09-27 10:14:23,248 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-enabled', 'heat-engine.service'] in directory '/root'
2017-09-27 10:14:23,267 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'heat-engine.service'] in directory '/root'
2017-09-27 10:14:23,489 [salt.state       ][INFO    ][27002] {'heat-engine': True}
2017-09-27 10:14:23,489 [salt.state       ][INFO    ][27002] Completed state [heat-engine] at time 10:14:23.489339 duration_in_ms=261.235
2017-09-27 10:14:23,490 [salt.state       ][INFO    ][27002] Running state [heat-api] at time 10:14:23.489982
2017-09-27 10:14:23,490 [salt.state       ][INFO    ][27002] Executing state service.running for heat-api
2017-09-27 10:14:23,491 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'status', 'heat-api.service', '-n', '0'] in directory '/root'
2017-09-27 10:14:23,520 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-active', 'heat-api.service'] in directory '/root'
2017-09-27 10:14:23,534 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-enabled', 'heat-api.service'] in directory '/root'
2017-09-27 10:14:23,547 [salt.state       ][INFO    ][27002] The service heat-api is already running
2017-09-27 10:14:23,547 [salt.state       ][INFO    ][27002] Completed state [heat-api] at time 10:14:23.547155 duration_in_ms=57.172
2017-09-27 10:14:23,547 [salt.state       ][INFO    ][27002] Running state [heat-api] at time 10:14:23.547328
2017-09-27 10:14:23,548 [salt.state       ][INFO    ][27002] Executing state service.mod_watch for heat-api
2017-09-27 10:14:23,548 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-active', 'heat-api.service'] in directory '/root'
2017-09-27 10:14:23,562 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-enabled', 'heat-api.service'] in directory '/root'
2017-09-27 10:14:23,576 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'heat-api.service'] in directory '/root'
2017-09-27 10:14:23,635 [salt.state       ][INFO    ][27002] {'heat-api': True}
2017-09-27 10:14:23,636 [salt.state       ][INFO    ][27002] Completed state [heat-api] at time 10:14:23.635469 duration_in_ms=88.138
2017-09-27 10:14:23,636 [salt.state       ][INFO    ][27002] Running state [heat-api-cloudwatch] at time 10:14:23.636338
2017-09-27 10:14:23,637 [salt.state       ][INFO    ][27002] Executing state service.running for heat-api-cloudwatch
2017-09-27 10:14:23,638 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'status', 'heat-api-cloudwatch.service', '-n', '0'] in directory '/root'
2017-09-27 10:14:23,653 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-active', 'heat-api-cloudwatch.service'] in directory '/root'
2017-09-27 10:14:23,673 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-enabled', 'heat-api-cloudwatch.service'] in directory '/root'
2017-09-27 10:14:23,688 [salt.state       ][INFO    ][27002] The service heat-api-cloudwatch is already running
2017-09-27 10:14:23,689 [salt.state       ][INFO    ][27002] Completed state [heat-api-cloudwatch] at time 10:14:23.688496 duration_in_ms=52.156
2017-09-27 10:14:23,689 [salt.state       ][INFO    ][27002] Running state [heat-api-cloudwatch] at time 10:14:23.688765
2017-09-27 10:14:23,689 [salt.state       ][INFO    ][27002] Executing state service.mod_watch for heat-api-cloudwatch
2017-09-27 10:14:23,690 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-active', 'heat-api-cloudwatch.service'] in directory '/root'
2017-09-27 10:14:23,704 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemctl', 'is-enabled', 'heat-api-cloudwatch.service'] in directory '/root'
2017-09-27 10:14:23,719 [salt.loaded.int.module.cmdmod][INFO    ][27002] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'heat-api-cloudwatch.service'] in directory '/root'
2017-09-27 10:14:23,778 [salt.state       ][INFO    ][27002] {'heat-api-cloudwatch': True}
2017-09-27 10:14:23,778 [salt.state       ][INFO    ][27002] Completed state [heat-api-cloudwatch] at time 10:14:23.778296 duration_in_ms=89.529
2017-09-27 10:14:23,780 [salt.minion      ][INFO    ][27002] Returning information for job: 20170927101341177011
2017-09-27 10:15:09,254 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command test.ping with jid 20170927101509245528
2017-09-27 10:15:09,283 [salt.minion      ][INFO    ][28912] Starting a new job with PID 28912
2017-09-27 10:15:09,339 [salt.minion      ][INFO    ][28912] Returning information for job: 20170927101509245528
2017-09-27 10:16:55,021 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927101655010845
2017-09-27 10:16:55,046 [salt.minion      ][INFO    ][28937] Starting a new job with PID 28937
2017-09-27 10:16:56,654 [salt.state       ][INFO    ][28937] Loading fresh modules for state activity
2017-09-27 10:16:56,698 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/init.sls'
2017-09-27 10:16:56,727 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/controller.sls'
2017-09-27 10:16:56,793 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:16:56,836 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/user.sls'
2017-09-27 10:16:56,876 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/volume.sls'
2017-09-27 10:16:56,920 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:16:56,940 [salt.state       ][INFO    ][28937] Running state [cinder] at time 10:16:56.940227
2017-09-27 10:16:56,941 [salt.state       ][INFO    ][28937] Executing state group.present for cinder
2017-09-27 10:16:56,943 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command 'groupadd -g 304 -r cinder' in directory '/root'
2017-09-27 10:16:57,084 [salt.state       ][INFO    ][28937] {'passwd': 'x', 'gid': 304, 'name': 'cinder', 'members': []}
2017-09-27 10:16:57,085 [salt.state       ][INFO    ][28937] Completed state [cinder] at time 10:16:57.085031 duration_in_ms=144.804
2017-09-27 10:16:57,086 [salt.state       ][INFO    ][28937] Running state [cinder] at time 10:16:57.085673
2017-09-27 10:16:57,086 [salt.state       ][INFO    ][28937] Executing state user.present for cinder
2017-09-27 10:16:57,088 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['useradd', '-s', '/bin/false', '-u', '304', '-g', '304', '-m', '-d', '/var/lib/cinder', '-r', 'cinder'] in directory '/root'
2017-09-27 10:16:57,189 [salt.state       ][INFO    ][28937] {'shell': '/bin/false', 'workphone': '', 'uid': 304, 'passwd': 'x', 'roomnumber': '', 'groups': ['cinder'], 'home': '/var/lib/cinder', 'name': 'cinder', 'gid': 304, 'fullname': '', 'homephone': ''}
2017-09-27 10:16:57,190 [salt.state       ][INFO    ][28937] Completed state [cinder] at time 10:16:57.190223 duration_in_ms=104.546
2017-09-27 10:16:58,792 [salt.state       ][INFO    ][28937] Running state [python-cinder] at time 10:16:58.791903
2017-09-27 10:16:58,792 [salt.state       ][INFO    ][28937] Executing state pkg.installed for python-cinder
2017-09-27 10:16:58,793 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:16:59,424 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 10:17:02,036 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-cinder'] in directory '/root'
2017-09-27 10:17:05,059 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101705048800
2017-09-27 10:17:05,102 [salt.minion      ][INFO    ][29502] Starting a new job with PID 29502
2017-09-27 10:17:05,120 [salt.minion      ][INFO    ][29502] Returning information for job: 20170927101705048800
2017-09-27 10:17:15,236 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101715225377
2017-09-27 10:17:15,259 [salt.minion      ][INFO    ][29647] Starting a new job with PID 29647
2017-09-27 10:17:15,276 [salt.minion      ][INFO    ][29647] Returning information for job: 20170927101715225377
2017-09-27 10:17:25,397 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101725385966
2017-09-27 10:17:25,427 [salt.minion      ][INFO    ][29809] Starting a new job with PID 29809
2017-09-27 10:17:25,449 [salt.minion      ][INFO    ][29809] Returning information for job: 20170927101725385966
2017-09-27 10:17:35,543 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:17:35,573 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101735566144
2017-09-27 10:17:35,598 [salt.minion      ][INFO    ][29876] Starting a new job with PID 29876
2017-09-27 10:17:35,602 [salt.state       ][INFO    ][28937] Made the following changes:
'python-redis' changed from 'absent' to '2.10.5-1ubuntu1'
'python-uritemplate' changed from 'absent' to '0.6-1ubuntu1'
'python-googleapi' changed from 'absent' to '1.4.2-1ubuntu1.1'
'python-oslo.vmware' changed from 'absent' to '2.17.0-1~u16.04+mcp4'
'python-tooz' changed from 'absent' to '1.47.0-1~u16.04+mcp1'
'python2.7-googleapi' changed from 'absent' to '1'
'python-cinder' changed from 'absent' to '2:10.0.3-1~u16.04+mcp12'
'python-pymemcache' changed from 'absent' to '1.3.2-2ubuntu1'
'python-pyasn1-modules' changed from 'absent' to '0.0.7-0.1'
'python-rtslib-fb' changed from 'absent' to '2.1.57+debian-3'
'python-zake' changed from 'absent' to '0.1.6-1'
'python-oauth2client' changed from 'absent' to '2.0.1-1'
'python-suds' changed from 'absent' to '0.7~git20150727.94664dd-3'
'python-voluptuous' changed from 'absent' to '0.9.3-1~u16.04+mcp1'
'python-rsa' changed from 'absent' to '3.2.3-1.1'
'python2.7-rtslib-fb' changed from 'absent' to '1'

2017-09-27 10:17:35,610 [salt.minion      ][INFO    ][29876] Returning information for job: 20170927101735566144
2017-09-27 10:17:35,626 [salt.state       ][INFO    ][28937] Loading fresh modules for state activity
2017-09-27 10:17:35,670 [salt.state       ][INFO    ][28937] Completed state [python-cinder] at time 10:17:35.669579 duration_in_ms=36877.676
2017-09-27 10:17:35,678 [salt.state       ][INFO    ][28937] Running state [lvm2] at time 10:17:35.677768
2017-09-27 10:17:35,678 [salt.state       ][INFO    ][28937] Executing state pkg.installed for lvm2
2017-09-27 10:17:35,949 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'lvm2'] in directory '/root'
2017-09-27 10:17:45,730 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101745718184
2017-09-27 10:17:45,750 [salt.minion      ][INFO    ][31477] Starting a new job with PID 31477
2017-09-27 10:17:45,769 [salt.minion      ][INFO    ][31477] Returning information for job: 20170927101745718184
2017-09-27 10:17:55,891 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101755880812
2017-09-27 10:17:55,914 [salt.minion      ][INFO    ][2955] Starting a new job with PID 2955
2017-09-27 10:17:55,935 [salt.minion      ][INFO    ][2955] Returning information for job: 20170927101755880812
2017-09-27 10:17:56,369 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:17:56,428 [salt.state       ][INFO    ][28937] Made the following changes:
'libreadline5' changed from 'absent' to '5.2+dfsg-3build1'
'dmsetup' changed from 'absent' to '2:1.02.110-1ubuntu10'
'dmeventd' changed from 'absent' to '2:1.02.110-1ubuntu10'
'liblvm2cmd2.02' changed from 'absent' to '2.02.133-1ubuntu10'
'lvm2' changed from 'absent' to '2.02.133-1ubuntu10'

2017-09-27 10:17:56,455 [salt.state       ][INFO    ][28937] Loading fresh modules for state activity
2017-09-27 10:17:56,486 [salt.state       ][INFO    ][28937] Completed state [lvm2] at time 10:17:56.486290 duration_in_ms=20808.519
2017-09-27 10:17:56,499 [salt.state       ][INFO    ][28937] Running state [cinder-api] at time 10:17:56.499422
2017-09-27 10:17:56,500 [salt.state       ][INFO    ][28937] Executing state pkg.installed for cinder-api
2017-09-27 10:17:56,783 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cinder-api'] in directory '/root'
2017-09-27 10:18:06,061 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101806051904
2017-09-27 10:18:06,084 [salt.minion      ][INFO    ][3058] Starting a new job with PID 3058
2017-09-27 10:18:06,103 [salt.minion      ][INFO    ][3058] Returning information for job: 20170927101806051904
2017-09-27 10:18:08,529 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:18:08,574 [salt.state       ][INFO    ][28937] Made the following changes:
'cinder-api' changed from 'absent' to '2:10.0.3-1~u16.04+mcp12'
'cinder-common' changed from 'absent' to '2:10.0.3-1~u16.04+mcp12'

2017-09-27 10:18:08,594 [salt.state       ][INFO    ][28937] Loading fresh modules for state activity
2017-09-27 10:18:08,607 [salt.state       ][INFO    ][28937] Completed state [cinder-api] at time 10:18:08.607301 duration_in_ms=12107.878
2017-09-27 10:18:08,613 [salt.state       ][INFO    ][28937] Running state [python-memcache] at time 10:18:08.612815
2017-09-27 10:18:08,613 [salt.state       ][INFO    ][28937] Executing state pkg.installed for python-memcache
2017-09-27 10:18:08,813 [salt.state       ][INFO    ][28937] Package python-memcache is already installed
2017-09-27 10:18:08,814 [salt.state       ][INFO    ][28937] Completed state [python-memcache] at time 10:18:08.813682 duration_in_ms=200.868
2017-09-27 10:18:08,814 [salt.state       ][INFO    ][28937] Running state [python-pycadf] at time 10:18:08.813944
2017-09-27 10:18:08,814 [salt.state       ][INFO    ][28937] Executing state pkg.installed for python-pycadf
2017-09-27 10:18:08,817 [salt.state       ][INFO    ][28937] Package python-pycadf is already installed
2017-09-27 10:18:08,817 [salt.state       ][INFO    ][28937] Completed state [python-pycadf] at time 10:18:08.816795 duration_in_ms=2.852
2017-09-27 10:18:08,817 [salt.state       ][INFO    ][28937] Running state [gettext-base] at time 10:18:08.817027
2017-09-27 10:18:08,817 [salt.state       ][INFO    ][28937] Executing state pkg.installed for gettext-base
2017-09-27 10:18:08,820 [salt.state       ][INFO    ][28937] Package gettext-base is already installed
2017-09-27 10:18:08,820 [salt.state       ][INFO    ][28937] Completed state [gettext-base] at time 10:18:08.819849 duration_in_ms=2.822
2017-09-27 10:18:08,820 [salt.state       ][INFO    ][28937] Running state [cinder-scheduler] at time 10:18:08.820056
2017-09-27 10:18:08,820 [salt.state       ][INFO    ][28937] Executing state pkg.installed for cinder-scheduler
2017-09-27 10:18:08,828 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cinder-scheduler'] in directory '/root'
2017-09-27 10:18:12,616 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:18:12,652 [salt.state       ][INFO    ][28937] Made the following changes:
'cinder-scheduler' changed from 'absent' to '2:10.0.3-1~u16.04+mcp12'

2017-09-27 10:18:12,666 [salt.state       ][INFO    ][28937] Loading fresh modules for state activity
2017-09-27 10:18:12,677 [salt.state       ][INFO    ][28937] Completed state [cinder-scheduler] at time 10:18:12.676972 duration_in_ms=3856.916
2017-09-27 10:18:12,679 [salt.state       ][INFO    ][28937] Running state [/etc/cinder/cinder.conf] at time 10:18:12.678979
2017-09-27 10:18:12,679 [salt.state       ][INFO    ][28937] Executing state file.managed for /etc/cinder/cinder.conf
2017-09-27 10:18:12,710 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/files/ocata/cinder.conf.controller.Debian'
2017-09-27 10:18:12,826 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:18:12,864 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/files/backend/_lvm.conf'
2017-09-27 10:18:12,867 [salt.state       ][INFO    ][28937] File changed:
--- 
+++ 
@@ -1,11 +1,157 @@
+
+
 [DEFAULT]
 rootwrap_config = /etc/cinder/rootwrap.conf
 api_paste_confg = /etc/cinder/api-paste.ini
+
 iscsi_helper = tgtadm
 volume_name_template = volume-%s
-volume_group = cinder-volumes
+#volume_group = cinder
+
 verbose = True
+
+osapi_volume_workers = 4
+
 auth_strategy = keystone
+
 state_path = /var/lib/cinder
-lock_path = /var/lock/cinder
+
+use_syslog=False
+
+glance_num_retries=0
+debug=False
+
+os_region_name=RegionOne
+allow_availability_zone_fallback = True
+
+#glance_api_ssl_compression=False
+#glance_api_insecure=False
+
+osapi_volume_listen=10.167.4.13
+
+glance_api_servers = http://10.167.4.10:9292
+
+glance_host=10.167.4.10
+glance_port=9292
+glance_api_version=2
+
+enable_v3_api = True
+
+os_privileged_user_name=cinder
+os_privileged_user_password=opnfv_secret
+os_privileged_user_tenant=service
+os_privileged_user_auth_url=http://10.167.4.10:5000/v3/
+
+volume_backend_name=DEFAULT
+
+default_volume_type=lvm-driver
+
+enabled_backends=lvm-driver
+
+
+#RPC response timeout recommended by Hitachi
+rpc_response_timeout=3600
+
+#Rabbit
+control_exchange=cinder
+
+
+volume_clear=none
+
+
+
+volume_name_template = volume-%s
+
+#volume_group = vg_cinder_volume
+
 volumes_dir = /var/lib/cinder/volumes
+log_dir=/var/log/cinder
+
+# Use syslog for logging. (boolean value)
+#use_syslog=false
+
+use_syslog=false
+verbose=True
+lock_path=/var/lock/cinder
+
+nova_catalog_admin_info = compute:nova:adminURL
+nova_catalog_info = compute:nova:internalURL
+
+osapi_volume_extension = cinder.api.contrib.standard_extensions
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.41:5672,openstack:opnfv_secret@10.167.4.42:5672,openstack:opnfv_secret@10.167.4.43:5672//openstack
+
+[oslo_messaging_notifications]
+driver = messagingv2
+
+[oslo_concurrency]
+
+lock_path=/var/lock/cinder
+
+[oslo_middleware]
+
+enable_proxy_headers_parsing = True
+
+
+[keystone_authtoken]
+signing_dir=/tmp/keystone-signing-cinder
+revocation_cache_time = 10
+auth_type = password
+user_domain_name = Default
+project_domain_name = Default
+project_name = service
+username = cinder
+password = opnfv_secret
+
+auth_uri=http://10.167.4.10:5000
+auth_url=http://10.167.4.10:35357
+# Temporary disabled for backward compataiblity
+#auth_uri=http://10.167.4.10/identity
+#auth_url=http://10.167.4.10/identity_v2_admin
+memcached_servers=10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211,10.167.4.11:11211,10.167.4.12:11211,10.167.4.13:11211
+auth_version = v3
+
+[barbican]
+auth_endpoint=http://10.167.4.10:5000
+
+[database]
+idle_timeout=3600
+max_pool_size=30
+max_retries=-1
+max_overflow=40
+connection = mysql+pymysql://cinder:opnfv_secret@10.167.4.50/cinder?charset=utf8
+[lvm-driver]
+host=ctl03
+volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
+volume_backend_name=lvm-driver
+lvm_type = default
+iscsi_helper = tgtadm
+volume_group = cinder-volume
+
+[cors]
+
+#
+# From oslo.middleware.cors
+#
+
+# Indicate whether this resource may be shared with the domain
+# received in the requests "origin" header. (list value)
+#allowed_origin = <None>
+
+# Indicate that the actual request can include user credentials
+# (boolean value)
+#allow_credentials = true
+
+# Indicate which headers are safe to expose to the API. Defaults to
+# HTTP Simple Headers. (list value)
+#expose_headers = X-Image-Meta-Checksum,X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID
+
+# Maximum cache age of CORS preflight requests. (integer value)
+#max_age = 3600
+
+# Indicate which methods can be used during the actual request. (list
+# value)
+#allow_methods = GET,PUT,POST,DELETE,PATCH
+
+# Indicate which header field names may be used during the actual
+# request. (list value)
+#allow_headers = Content-MD5,X-Image-Meta-Checksum,X-Storage-Token,Accept-Encoding,X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID

2017-09-27 10:18:12,867 [salt.state       ][INFO    ][28937] Completed state [/etc/cinder/cinder.conf] at time 10:18:12.867397 duration_in_ms=188.418
2017-09-27 10:18:12,868 [salt.state       ][INFO    ][28937] Running state [/etc/cinder/api-paste.ini] at time 10:18:12.868117
2017-09-27 10:18:12,868 [salt.state       ][INFO    ][28937] Executing state file.managed for /etc/cinder/api-paste.ini
2017-09-27 10:18:12,902 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/files/ocata/api-paste.ini.controller.Debian'
2017-09-27 10:18:12,935 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:18:12,975 [salt.state       ][INFO    ][28937] File changed:
--- 
+++ 
@@ -73,3 +73,4 @@
 
 [filter:authtoken]
 paste.filter_factory = keystonemiddleware.auth_token:filter_factory
+

2017-09-27 10:18:12,976 [salt.state       ][INFO    ][28937] Completed state [/etc/cinder/api-paste.ini] at time 10:18:12.975861 duration_in_ms=107.744
2017-09-27 10:18:12,976 [salt.state       ][INFO    ][28937] Running state [/etc/apache2/conf-available/cinder-wsgi.conf] at time 10:18:12.976264
2017-09-27 10:18:12,977 [salt.state       ][INFO    ][28937] Executing state file.managed for /etc/apache2/conf-available/cinder-wsgi.conf
2017-09-27 10:18:12,992 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/files/ocata/cinder-wsgi.conf'
2017-09-27 10:18:13,009 [salt.fileclient  ][INFO    ][28937] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:18:13,049 [salt.state       ][INFO    ][28937] File changed:
--- 
+++ 
@@ -1,7 +1,7 @@
-Listen 8776
-LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %D(us)" cinder_combined
 
-<VirtualHost *:8776>
+Listen 10.167.4.13:8776
+
+<VirtualHost 10.167.4.13:8776>
     WSGIDaemonProcess cinder-wsgi processes=5 threads=1 user=cinder display-name=%{GROUP}
     WSGIProcessGroup cinder-wsgi
     WSGIScriptAlias / /usr/bin/cinder-wsgi
@@ -12,10 +12,10 @@
     </IfVersion>
 
     ErrorLog /var/log/apache2/cinder_error.log
-    CustomLog /var/log/apache2/cinder.log cinder_combined
+    CustomLog /var/log/apache2/cinder.log "%v:%p %h %l %u %t \"%r\" %>s %D %O \"%{Referer}i\" \"%{User-Agent}i\""
 
     <Directory /usr/bin>
-	<IfVersion >= 2.4>
+        <IfVersion >= 2.4>
             Require all granted
         </IfVersion>
         <IfVersion < 2.4>

2017-09-27 10:18:13,051 [salt.state       ][INFO    ][28937] Completed state [/etc/apache2/conf-available/cinder-wsgi.conf] at time 10:18:13.050522 duration_in_ms=74.258
2017-09-27 10:18:13,125 [salt.state       ][INFO    ][28937] Running state [apache2] at time 10:18:13.124506
2017-09-27 10:18:13,130 [salt.state       ][INFO    ][28937] Executing state service.running for apache2
2017-09-27 10:18:13,133 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'status', 'apache2.service', '-n', '0'] in directory '/root'
2017-09-27 10:18:13,143 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'is-active', 'apache2.service'] in directory '/root'
2017-09-27 10:18:13,151 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 10:18:13,164 [salt.state       ][INFO    ][28937] The service apache2 is already running
2017-09-27 10:18:13,164 [salt.state       ][INFO    ][28937] Completed state [apache2] at time 10:18:13.164356 duration_in_ms=39.85
2017-09-27 10:18:13,165 [salt.state       ][INFO    ][28937] Running state [apache2] at time 10:18:13.164510
2017-09-27 10:18:13,165 [salt.state       ][INFO    ][28937] Executing state service.mod_watch for apache2
2017-09-27 10:18:13,165 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'is-active', 'apache2.service'] in directory '/root'
2017-09-27 10:18:13,173 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 10:18:13,183 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'apache2.service'] in directory '/root'
2017-09-27 10:18:16,230 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101816220223
2017-09-27 10:18:16,253 [salt.minion      ][INFO    ][3698] Starting a new job with PID 3698
2017-09-27 10:18:16,275 [salt.minion      ][INFO    ][3698] Returning information for job: 20170927101816220223
2017-09-27 10:18:17,453 [salt.state       ][INFO    ][28937] {'apache2': True}
2017-09-27 10:18:17,453 [salt.state       ][INFO    ][28937] Completed state [apache2] at time 10:18:17.453451 duration_in_ms=4288.939
2017-09-27 10:18:17,454 [salt.state       ][INFO    ][28937] Running state [cinder-scheduler] at time 10:18:17.453945
2017-09-27 10:18:17,454 [salt.state       ][INFO    ][28937] Executing state service.running for cinder-scheduler
2017-09-27 10:18:17,455 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'status', 'cinder-scheduler.service', '-n', '0'] in directory '/root'
2017-09-27 10:18:17,468 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'is-active', 'cinder-scheduler.service'] in directory '/root'
2017-09-27 10:18:17,484 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'is-enabled', 'cinder-scheduler.service'] in directory '/root'
2017-09-27 10:18:17,497 [salt.state       ][INFO    ][28937] The service cinder-scheduler is already running
2017-09-27 10:18:17,498 [salt.state       ][INFO    ][28937] Completed state [cinder-scheduler] at time 10:18:17.497658 duration_in_ms=43.712
2017-09-27 10:18:17,498 [salt.state       ][INFO    ][28937] Running state [cinder-scheduler] at time 10:18:17.498093
2017-09-27 10:18:17,499 [salt.state       ][INFO    ][28937] Executing state service.mod_watch for cinder-scheduler
2017-09-27 10:18:17,499 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'is-active', 'cinder-scheduler.service'] in directory '/root'
2017-09-27 10:18:17,511 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemctl', 'is-enabled', 'cinder-scheduler.service'] in directory '/root'
2017-09-27 10:18:17,523 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'cinder-scheduler.service'] in directory '/root'
2017-09-27 10:18:21,265 [salt.state       ][INFO    ][28937] {'cinder-scheduler': True}
2017-09-27 10:18:21,265 [salt.state       ][INFO    ][28937] Completed state [cinder-scheduler] at time 10:18:21.265115 duration_in_ms=3767.02
2017-09-27 10:18:21,267 [salt.state       ][INFO    ][28937] Running state [cinder-manage db sync; sleep 5;] at time 10:18:21.266967
2017-09-27 10:18:21,267 [salt.state       ][INFO    ][28937] Executing state cmd.run for cinder-manage db sync; sleep 5;
2017-09-27 10:18:21,268 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command 'cinder-manage db sync; sleep 5;' in directory '/root'
2017-09-27 10:18:26,403 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927101826393104
2017-09-27 10:18:26,427 [salt.minion      ][INFO    ][4057] Starting a new job with PID 4057
2017-09-27 10:18:26,446 [salt.minion      ][INFO    ][4057] Returning information for job: 20170927101826393104
2017-09-27 10:18:28,033 [salt.state       ][INFO    ][28937] {'pid': 4010, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:18:28,034 [salt.state       ][INFO    ][28937] Completed state [cinder-manage db sync; sleep 5;] at time 10:18:28.033398 duration_in_ms=6766.425
2017-09-27 10:18:28,043 [salt.state       ][INFO    ][28937] Running state [source /root/keystonerc; cinder type-create lvm-driver] at time 10:18:28.042653
2017-09-27 10:18:28,043 [salt.state       ][INFO    ][28937] Executing state cmd.run for source /root/keystonerc; cinder type-create lvm-driver
2017-09-27 10:18:28,045 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command 'source /root/keystonerc; cinder type-list | grep lvm-driver' in directory '/root'
2017-09-27 10:18:29,684 [salt.state       ][INFO    ][28937] unless execution succeeded
2017-09-27 10:18:29,684 [salt.state       ][INFO    ][28937] Completed state [source /root/keystonerc; cinder type-create lvm-driver] at time 10:18:29.684235 duration_in_ms=1641.582
2017-09-27 10:18:29,685 [salt.state       ][INFO    ][28937] Running state [source /root/keystonerc; cinder type-key lvm-driver set volume_backend_name=lvm-driver] at time 10:18:29.684787
2017-09-27 10:18:29,685 [salt.state       ][INFO    ][28937] Executing state cmd.run for source /root/keystonerc; cinder type-key lvm-driver set volume_backend_name=lvm-driver
2017-09-27 10:18:29,685 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command 'source /root/keystonerc; cinder extra-specs-list | grep "{u'volume_backend_name': u'lvm-driver'}"' in directory '/root'
2017-09-27 10:18:31,674 [salt.loaded.int.module.cmdmod][INFO    ][28937] Executing command 'source /root/keystonerc; cinder type-key lvm-driver set volume_backend_name=lvm-driver' in directory '/root'
2017-09-27 10:18:33,232 [salt.state       ][INFO    ][28937] {'pid': 4084, 'retcode': 0, 'stderr': '', 'stdout': ''}
2017-09-27 10:18:33,233 [salt.state       ][INFO    ][28937] Completed state [source /root/keystonerc; cinder type-key lvm-driver set volume_backend_name=lvm-driver] at time 10:18:33.233367 duration_in_ms=3548.578
2017-09-27 10:18:33,236 [salt.minion      ][INFO    ][28937] Returning information for job: 20170927101655010845
2017-09-27 10:20:14,271 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927102014256888
2017-09-27 10:20:14,298 [salt.minion      ][INFO    ][4279] Starting a new job with PID 4279
2017-09-27 10:20:16,100 [salt.state       ][INFO    ][4279] Loading fresh modules for state activity
2017-09-27 10:20:16,145 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/init.sls'
2017-09-27 10:20:16,170 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/controller.sls'
2017-09-27 10:20:16,224 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:20:16,262 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/user.sls'
2017-09-27 10:20:16,318 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/volume.sls'
2017-09-27 10:20:16,380 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:20:17,869 [salt.state       ][INFO    ][4279] Running state [python-cinder] at time 10:20:17.869120
2017-09-27 10:20:17,870 [salt.state       ][INFO    ][4279] Executing state pkg.installed for python-cinder
2017-09-27 10:20:17,870 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:20:18,371 [salt.state       ][INFO    ][4279] Package python-cinder is already installed
2017-09-27 10:20:18,372 [salt.state       ][INFO    ][4279] Completed state [python-cinder] at time 10:20:18.371483 duration_in_ms=502.363
2017-09-27 10:20:18,372 [salt.state       ][INFO    ][4279] Running state [lvm2] at time 10:20:18.371798
2017-09-27 10:20:18,372 [salt.state       ][INFO    ][4279] Executing state pkg.installed for lvm2
2017-09-27 10:20:18,375 [salt.state       ][INFO    ][4279] Package lvm2 is already installed
2017-09-27 10:20:18,375 [salt.state       ][INFO    ][4279] Completed state [lvm2] at time 10:20:18.374903 duration_in_ms=3.105
2017-09-27 10:20:18,375 [salt.state       ][INFO    ][4279] Running state [cinder-api] at time 10:20:18.375181
2017-09-27 10:20:18,375 [salt.state       ][INFO    ][4279] Executing state pkg.installed for cinder-api
2017-09-27 10:20:18,378 [salt.state       ][INFO    ][4279] Package cinder-api is already installed
2017-09-27 10:20:18,378 [salt.state       ][INFO    ][4279] Completed state [cinder-api] at time 10:20:18.378186 duration_in_ms=3.005
2017-09-27 10:20:18,378 [salt.state       ][INFO    ][4279] Running state [python-memcache] at time 10:20:18.378463
2017-09-27 10:20:18,379 [salt.state       ][INFO    ][4279] Executing state pkg.installed for python-memcache
2017-09-27 10:20:18,381 [salt.state       ][INFO    ][4279] Package python-memcache is already installed
2017-09-27 10:20:18,382 [salt.state       ][INFO    ][4279] Completed state [python-memcache] at time 10:20:18.381522 duration_in_ms=3.059
2017-09-27 10:20:18,382 [salt.state       ][INFO    ][4279] Running state [python-pycadf] at time 10:20:18.381795
2017-09-27 10:20:18,382 [salt.state       ][INFO    ][4279] Executing state pkg.installed for python-pycadf
2017-09-27 10:20:18,385 [salt.state       ][INFO    ][4279] Package python-pycadf is already installed
2017-09-27 10:20:18,385 [salt.state       ][INFO    ][4279] Completed state [python-pycadf] at time 10:20:18.384788 duration_in_ms=2.992
2017-09-27 10:20:18,385 [salt.state       ][INFO    ][4279] Running state [gettext-base] at time 10:20:18.385062
2017-09-27 10:20:18,385 [salt.state       ][INFO    ][4279] Executing state pkg.installed for gettext-base
2017-09-27 10:20:18,388 [salt.state       ][INFO    ][4279] Package gettext-base is already installed
2017-09-27 10:20:18,388 [salt.state       ][INFO    ][4279] Completed state [gettext-base] at time 10:20:18.388088 duration_in_ms=3.026
2017-09-27 10:20:18,388 [salt.state       ][INFO    ][4279] Running state [cinder-scheduler] at time 10:20:18.388360
2017-09-27 10:20:18,389 [salt.state       ][INFO    ][4279] Executing state pkg.installed for cinder-scheduler
2017-09-27 10:20:18,391 [salt.state       ][INFO    ][4279] Package cinder-scheduler is already installed
2017-09-27 10:20:18,391 [salt.state       ][INFO    ][4279] Completed state [cinder-scheduler] at time 10:20:18.391418 duration_in_ms=3.057
2017-09-27 10:20:18,393 [salt.state       ][INFO    ][4279] Running state [/etc/cinder/cinder.conf] at time 10:20:18.393051
2017-09-27 10:20:18,393 [salt.state       ][INFO    ][4279] Executing state file.managed for /etc/cinder/cinder.conf
2017-09-27 10:20:18,482 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/files/ocata/cinder.conf.controller.Debian'
2017-09-27 10:20:18,573 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:20:18,615 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/files/backend/_lvm.conf'
2017-09-27 10:20:18,622 [salt.state       ][INFO    ][4279] File /etc/cinder/cinder.conf is in the correct state
2017-09-27 10:20:18,623 [salt.state       ][INFO    ][4279] Completed state [/etc/cinder/cinder.conf] at time 10:20:18.622901 duration_in_ms=229.848
2017-09-27 10:20:18,624 [salt.state       ][INFO    ][4279] Running state [/etc/cinder/api-paste.ini] at time 10:20:18.623855
2017-09-27 10:20:18,625 [salt.state       ][INFO    ][4279] Executing state file.managed for /etc/cinder/api-paste.ini
2017-09-27 10:20:18,648 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/files/ocata/api-paste.ini.controller.Debian'
2017-09-27 10:20:18,681 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:20:18,715 [salt.state       ][INFO    ][4279] File /etc/cinder/api-paste.ini is in the correct state
2017-09-27 10:20:18,716 [salt.state       ][INFO    ][4279] Completed state [/etc/cinder/api-paste.ini] at time 10:20:18.715656 duration_in_ms=91.801
2017-09-27 10:20:18,716 [salt.state       ][INFO    ][4279] Running state [/etc/apache2/conf-available/cinder-wsgi.conf] at time 10:20:18.716388
2017-09-27 10:20:18,717 [salt.state       ][INFO    ][4279] Executing state file.managed for /etc/apache2/conf-available/cinder-wsgi.conf
2017-09-27 10:20:18,737 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/files/ocata/cinder-wsgi.conf'
2017-09-27 10:20:18,758 [salt.fileclient  ][INFO    ][4279] Fetching file from saltenv 'base', ** done ** 'cinder/map.jinja'
2017-09-27 10:20:18,787 [salt.state       ][INFO    ][4279] File /etc/apache2/conf-available/cinder-wsgi.conf is in the correct state
2017-09-27 10:20:18,788 [salt.state       ][INFO    ][4279] Completed state [/etc/apache2/conf-available/cinder-wsgi.conf] at time 10:20:18.787888 duration_in_ms=71.501
2017-09-27 10:20:18,790 [salt.state       ][INFO    ][4279] Running state [apache2] at time 10:20:18.789995
2017-09-27 10:20:18,791 [salt.state       ][INFO    ][4279] Executing state service.running for apache2
2017-09-27 10:20:18,792 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command ['systemctl', 'status', 'apache2.service', '-n', '0'] in directory '/root'
2017-09-27 10:20:18,825 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command ['systemctl', 'is-active', 'apache2.service'] in directory '/root'
2017-09-27 10:20:18,841 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command ['systemctl', 'is-enabled', 'apache2.service'] in directory '/root'
2017-09-27 10:20:18,861 [salt.state       ][INFO    ][4279] The service apache2 is already running
2017-09-27 10:20:18,862 [salt.state       ][INFO    ][4279] Completed state [apache2] at time 10:20:18.862377 duration_in_ms=72.378
2017-09-27 10:20:18,864 [salt.state       ][INFO    ][4279] Running state [cinder-scheduler] at time 10:20:18.863615
2017-09-27 10:20:18,864 [salt.state       ][INFO    ][4279] Executing state service.running for cinder-scheduler
2017-09-27 10:20:18,866 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command ['systemctl', 'status', 'cinder-scheduler.service', '-n', '0'] in directory '/root'
2017-09-27 10:20:18,882 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command ['systemctl', 'is-active', 'cinder-scheduler.service'] in directory '/root'
2017-09-27 10:20:18,898 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command ['systemctl', 'is-enabled', 'cinder-scheduler.service'] in directory '/root'
2017-09-27 10:20:18,914 [salt.state       ][INFO    ][4279] The service cinder-scheduler is already running
2017-09-27 10:20:18,915 [salt.state       ][INFO    ][4279] Completed state [cinder-scheduler] at time 10:20:18.914797 duration_in_ms=51.179
2017-09-27 10:20:18,918 [salt.state       ][INFO    ][4279] Running state [cinder-manage db sync; sleep 5;] at time 10:20:18.918350
2017-09-27 10:20:18,919 [salt.state       ][INFO    ][4279] Executing state cmd.run for cinder-manage db sync; sleep 5;
2017-09-27 10:20:18,921 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command 'cinder-manage db sync; sleep 5;' in directory '/root'
2017-09-27 10:20:24,366 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927102024351901
2017-09-27 10:20:24,391 [salt.minion      ][INFO    ][4329] Starting a new job with PID 4329
2017-09-27 10:20:24,412 [salt.minion      ][INFO    ][4329] Returning information for job: 20170927102024351901
2017-09-27 10:20:25,841 [salt.state       ][INFO    ][4279] {'pid': 4318, 'retcode': 0, 'stderr': 'Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.', 'stdout': ''}
2017-09-27 10:20:25,842 [salt.state       ][INFO    ][4279] Completed state [cinder-manage db sync; sleep 5;] at time 10:20:25.841789 duration_in_ms=6923.436
2017-09-27 10:20:25,843 [salt.state       ][INFO    ][4279] Running state [source /root/keystonerc; cinder type-create lvm-driver] at time 10:20:25.842634
2017-09-27 10:20:25,843 [salt.state       ][INFO    ][4279] Executing state cmd.run for source /root/keystonerc; cinder type-create lvm-driver
2017-09-27 10:20:25,844 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command 'source /root/keystonerc; cinder type-list | grep lvm-driver' in directory '/root'
2017-09-27 10:20:29,715 [salt.state       ][INFO    ][4279] unless execution succeeded
2017-09-27 10:20:29,715 [salt.state       ][INFO    ][4279] Completed state [source /root/keystonerc; cinder type-create lvm-driver] at time 10:20:29.715119 duration_in_ms=3872.482
2017-09-27 10:20:29,717 [salt.state       ][INFO    ][4279] Running state [source /root/keystonerc; cinder type-key lvm-driver set volume_backend_name=lvm-driver] at time 10:20:29.716516
2017-09-27 10:20:29,717 [salt.state       ][INFO    ][4279] Executing state cmd.run for source /root/keystonerc; cinder type-key lvm-driver set volume_backend_name=lvm-driver
2017-09-27 10:20:29,719 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command 'source /root/keystonerc; cinder extra-specs-list | grep "{u'volume_backend_name': u'lvm-driver'}"' in directory '/root'
2017-09-27 10:20:31,414 [salt.loaded.int.module.cmdmod][INFO    ][4279] Executing command 'source /root/keystonerc; cinder type-key lvm-driver set volume_backend_name=lvm-driver' in directory '/root'
2017-09-27 10:20:33,419 [salt.state       ][INFO    ][4279] {'pid': 4347, 'retcode': 0, 'stderr': '', 'stdout': ''}
2017-09-27 10:20:33,420 [salt.state       ][INFO    ][4279] Completed state [source /root/keystonerc; cinder type-key lvm-driver set volume_backend_name=lvm-driver] at time 10:20:33.420141 duration_in_ms=3703.622
2017-09-27 10:20:33,423 [salt.minion      ][INFO    ][4279] Returning information for job: 20170927102014256888
2017-09-27 10:22:00,175 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command test.ping with jid 20170927102200163659
2017-09-27 10:22:00,201 [salt.minion      ][INFO    ][4373] Starting a new job with PID 4373
2017-09-27 10:22:00,253 [salt.minion      ][INFO    ][4373] Returning information for job: 20170927102200163659
2017-09-27 10:23:40,288 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command state.sls with jid 20170927102340276298
2017-09-27 10:23:40,312 [salt.minion      ][INFO    ][4398] Starting a new job with PID 4398
2017-09-27 10:23:42,046 [salt.state       ][INFO    ][4398] Loading fresh modules for state activity
2017-09-27 10:23:42,088 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/init.sls'
2017-09-27 10:23:42,117 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/server.sls'
2017-09-27 10:23:42,177 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2017-09-27 10:23:42,233 [salt.state       ][INFO    ][4398] Running state [/usr/sbin/policy-rc.d] at time 10:23:42.232770
2017-09-27 10:23:42,233 [salt.state       ][INFO    ][4398] Executing state file.managed for /usr/sbin/policy-rc.d
2017-09-27 10:23:42,246 [salt.state       ][INFO    ][4398] File changed:
New file
2017-09-27 10:23:42,247 [salt.state       ][INFO    ][4398] Completed state [/usr/sbin/policy-rc.d] at time 10:23:42.246548 duration_in_ms=13.779
2017-09-27 10:23:43,736 [salt.state       ][INFO    ][4398] Running state [neutron-server] at time 10:23:43.736421
2017-09-27 10:23:43,737 [salt.state       ][INFO    ][4398] Executing state pkg.installed for neutron-server
2017-09-27 10:23:43,737 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:23:44,316 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2017-09-27 10:23:46,967 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-server'] in directory '/root'
2017-09-27 10:23:50,329 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927102350316531
2017-09-27 10:23:50,353 [salt.minion      ][INFO    ][4995] Starting a new job with PID 4995
2017-09-27 10:23:50,378 [salt.minion      ][INFO    ][4995] Returning information for job: 20170927102350316531
2017-09-27 10:23:55,924 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:23:55,961 [salt.state       ][INFO    ][4398] Made the following changes:
'ipset-6.29' changed from 'absent' to '1'
'neutron-plugin-ml2' changed from 'absent' to '2:10.0.3-1~u16.04+mcp28'
'ipset' changed from 'absent' to '6.29-1'
'neutron-server' changed from 'absent' to '2:10.0.3-1~u16.04+mcp28'
'python-neutron-fwaas' changed from 'absent' to '2:10.1.0-1~u16.04+mcp0'
'neutron-plugin' changed from 'absent' to '1'
'libipset3' changed from 'absent' to '6.29-1'
'neutron-common' changed from 'absent' to '2:10.0.3-1~u16.04+mcp28'

2017-09-27 10:23:55,978 [salt.state       ][INFO    ][4398] Loading fresh modules for state activity
2017-09-27 10:23:56,008 [salt.state       ][INFO    ][4398] Completed state [neutron-server] at time 10:23:56.007680 duration_in_ms=12271.259
2017-09-27 10:23:56,014 [salt.state       ][INFO    ][4398] Running state [python-neutron-lbaas] at time 10:23:56.013477
2017-09-27 10:23:56,014 [salt.state       ][INFO    ][4398] Executing state pkg.installed for python-neutron-lbaas
2017-09-27 10:23:56,214 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-neutron-lbaas'] in directory '/root'
2017-09-27 10:23:58,934 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2017-09-27 10:23:58,996 [salt.state       ][INFO    ][4398] Made the following changes:
'python-neutron-lbaas' changed from 'absent' to '2:10.0.1-1~u16.04+mcp3'

2017-09-27 10:23:59,019 [salt.state       ][INFO    ][4398] Loading fresh modules for state activity
2017-09-27 10:23:59,050 [salt.state       ][INFO    ][4398] Completed state [python-neutron-lbaas] at time 10:23:59.049993 duration_in_ms=3036.515
2017-09-27 10:23:59,058 [salt.state       ][INFO    ][4398] Running state [python-pycadf] at time 10:23:59.058274
2017-09-27 10:23:59,059 [salt.state       ][INFO    ][4398] Executing state pkg.installed for python-pycadf
2017-09-27 10:23:59,280 [salt.state       ][INFO    ][4398] Package python-pycadf is already installed
2017-09-27 10:23:59,280 [salt.state       ][INFO    ][4398] Completed state [python-pycadf] at time 10:23:59.280367 duration_in_ms=222.093
2017-09-27 10:23:59,281 [salt.state       ][INFO    ][4398] Running state [gettext-base] at time 10:23:59.280615
2017-09-27 10:23:59,281 [salt.state       ][INFO    ][4398] Executing state pkg.installed for gettext-base
2017-09-27 10:23:59,283 [salt.state       ][INFO    ][4398] Package gettext-base is already installed
2017-09-27 10:23:59,284 [salt.state       ][INFO    ][4398] Completed state [gettext-base] at time 10:23:59.283512 duration_in_ms=2.897
2017-09-27 10:23:59,284 [salt.state       ][INFO    ][4398] Running state [/usr/sbin/policy-rc.d] at time 10:23:59.284361
2017-09-27 10:23:59,285 [salt.state       ][INFO    ][4398] Executing state file.absent for /usr/sbin/policy-rc.d
2017-09-27 10:23:59,285 [salt.state       ][INFO    ][4398] {'removed': '/usr/sbin/policy-rc.d'}
2017-09-27 10:23:59,285 [salt.state       ][INFO    ][4398] Completed state [/usr/sbin/policy-rc.d] at time 10:23:59.284947 duration_in_ms=0.587
2017-09-27 10:23:59,285 [salt.state       ][INFO    ][4398] Running state [/etc/neutron/plugins/ml2/ml2_conf.ini] at time 10:23:59.285243
2017-09-27 10:23:59,285 [salt.state       ][INFO    ][4398] Executing state file.managed for /etc/neutron/plugins/ml2/ml2_conf.ini
2017-09-27 10:23:59,313 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/files/ocata/ml2_conf.ini'
2017-09-27 10:23:59,362 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2017-09-27 10:23:59,400 [salt.state       ][INFO    ][4398] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -119,32 +120,38 @@
 # List of network type driver entrypoints to be loaded from the
 # neutron.ml2.type_drivers namespace. (list value)
 #type_drivers = local,flat,vlan,gre,vxlan,geneve
+type_drivers = local,flat,vlan,gre,vxlan
 
 # Ordered list of network_types to allocate as tenant networks. The default
 # value 'local' is useful for single-box testing but provides no connectivity
 # between hosts. (list value)
 #tenant_network_types = local
+tenant_network_types = flat,vxlan
 
 # An ordered list of networking mechanism driver entrypoints to be loaded from
 # the neutron.ml2.mechanism_drivers namespace. (list value)
 #mechanism_drivers =
+mechanism_drivers =opendaylight_v2,l2population
 
 # An ordered list of extension driver entrypoints to be loaded from the
 # neutron.ml2.extension_drivers namespace. For example: extension_drivers =
 # port_security,qos (list value)
 #extension_drivers =
+extension_drivers=port_security
 
 # Maximum size of an IP packet (MTU) that can traverse the underlying physical
 # network infrastructure without fragmentation when using an overlay/tunnel
 # protocol. This option allows specifying a physical network MTU value that
 # differs from the default global_physnet_mtu value. (integer value)
 #path_mtu = 0
+path_mtu = 1500
 
 # A list of mappings of physical networks to MTU values. The format of the
 # mapping is <physnet>:<mtu val>. This mapping allows specifying a physical
 # network MTU value that differs from the default global_physnet_mtu value.
 # (list value)
 #physical_network_mtus =
+physical_network_mtus =physnet1:1500
 
 # Default network type for external networks when no provider attributes are
 # specified. By default it is None, which means that if provider attributes are
@@ -169,6 +176,7 @@
 # default '*' to allow flat networks with arbitrary physical_network names. Use
 # an empty list to disable flat networks. (list value)
 #flat_networks = *
+flat_networks = *
 
 
 [ml2_type_geneve]
@@ -198,6 +206,7 @@
 # Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE
 # tunnel IDs that are available for tenant network allocation (list value)
 #tunnel_id_ranges =
+tunnel_id_ranges =2:65535
 
 
 [ml2_type_vlan]
@@ -211,6 +220,7 @@
 # networks, as well as ranges of VLAN tags on each available for allocation to
 # tenant networks. (list value)
 #network_vlan_ranges =
+network_vlan_ranges = physnet1
 
 
 [ml2_type_vxlan]
@@ -222,11 +232,13 @@
 # Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of
 # VXLAN VNI IDs that are available for tenant network allocation (list value)
 #vni_ranges =
+vni_ranges = 2:65535
 
 # Multicast group for VXLAN. When configured, will enable sending all broadcast
 # traffic to this multicast group. When left unconfigured, will disable
 # multicast VXLAN mode. (string value)
 #vxlan_group = <None>
+vxlan_group = 224.0.0.1
 
 
 [securitygroup]
@@ -242,7 +254,14 @@
 # should be false when using no security groups or using the nova security
 # group API. (boolean value)
 #enable_security_group = true
+firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
+enable_security_group = True
 
 # Use ipset to speed-up the iptables based security groups. Enabling ipset
 # support requires that ipset is installed on L2 agent node. (boolean value)
 #enable_ipset = true
+[ml2_odl]
+port_binding_controller = network-topology
+url = http://10.167.4.111:8282/controller/nb/v2/neutron
+username = admin
+password = admin

2017-09-27 10:23:59,401 [salt.state       ][INFO    ][4398] Completed state [/etc/neutron/plugins/ml2/ml2_conf.ini] at time 10:23:59.400678 duration_in_ms=115.433
2017-09-27 10:23:59,402 [salt.state       ][INFO    ][4398] Running state [ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini] at time 10:23:59.402403
2017-09-27 10:23:59,403 [salt.state       ][INFO    ][4398] Executing state cmd.run for ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2017-09-27 10:23:59,403 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command 'test -e /etc/neutron/plugin.ini' in directory '/root'
2017-09-27 10:23:59,416 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command 'ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini' in directory '/root'
2017-09-27 10:23:59,428 [salt.state       ][INFO    ][4398] {'pid': 5307, 'retcode': 0, 'stderr': '', 'stdout': ''}
2017-09-27 10:23:59,428 [salt.state       ][INFO    ][4398] Completed state [ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini] at time 10:23:59.428309 duration_in_ms=25.904
2017-09-27 10:23:59,430 [salt.state       ][INFO    ][4398] Running state [/etc/neutron/neutron.conf] at time 10:23:59.429437
2017-09-27 10:23:59,430 [salt.state       ][INFO    ][4398] Executing state file.managed for /etc/neutron/neutron.conf
2017-09-27 10:23:59,455 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/files/ocata/neutron-server.conf.Debian'
2017-09-27 10:23:59,552 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2017-09-27 10:23:59,581 [salt.state       ][INFO    ][4398] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -7,14 +8,17 @@
 # Where to store Neutron state files. This directory must be writable by the
 # agent. (string value)
 #state_path = /var/lib/neutron
+state_path = /var/lib/neutron
 
 # The host IP to bind to (string value)
 #bind_host = 0.0.0.0
+bind_host = 10.167.4.13
 
 # The port to bind to (port value)
 # Minimum value: 0
 # Maximum value: 65535
 #bind_port = 9696
+bind_port = 9696
 
 # The path for API extensions. Note that this can be a colon-separated list of
 # paths. For example: api_extensions_path =
@@ -22,15 +26,23 @@
 # neutron.extensions is appended to this, so if your extensions are in there
 # you don't need to specify them here. (string value)
 #api_extensions_path =
+agent_down_time = 30
 
 # The type of authentication to use (string value)
 #auth_strategy = keystone
-
-# The core plugin Neutron will use (string value)
-#core_plugin = <None>
+auth_strategy = keystone
+
+
+
+core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
+
+service_plugins = odl-router_v2,metering
 
 # The service plugins Neutron will use (list value)
 #service_plugins =
+
+allow_pagination = False
+
 
 # The base MAC address Neutron will use for VIFs. The first 3 octets will
 # remain unchanged. If the 4th octet is not 00, it will also be used. The
@@ -43,6 +55,7 @@
 # The maximum number of items returned in a single response, value was
 # 'infinite' or negative integer means no limit (string value)
 #pagination_max_limit = -1
+pagination_max_limit = -1
 
 # Default value of availability zone hints. The availability zone aware
 # schedulers use this when the resources availability_zone_hints is empty.
@@ -75,9 +88,11 @@
 # DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite
 # lease times. (integer value)
 #dhcp_lease_duration = 86400
+dhcp_lease_duration = 600
 
 # Domain to use for building the hostnames (string value)
 #dns_domain = openstacklocal
+dns_domain = novalocal
 
 # Driver for external DNS integration. (string value)
 #external_dns_driver = <None>
@@ -89,6 +104,7 @@
 # MUST be set to False if Neutron is being used in conjunction with Nova
 # security groups. (boolean value)
 #allow_overlapping_ips = false
+allow_overlapping_ips = True
 
 # Hostname to be used by the Neutron server, agents and services running on
 # this machine. All the agents and services running on this machine must use
@@ -97,10 +113,12 @@
 
 # Send notification to nova when port status changes (boolean value)
 #notify_nova_on_port_status_changes = true
+notify_nova_on_port_status_changes = true
 
 # Send notification to nova when port data (fixed_ips/floatingip) changes so
 # nova can update its cache. (boolean value)
 #notify_nova_on_port_data_changes = true
+notify_nova_on_port_data_changes = true
 
 # Number of seconds between sending events to nova if there are any events to
 # send. (integer value)
@@ -126,6 +144,7 @@
 # value. Defaults to 1500, the standard value for Ethernet. (integer value)
 # Deprecated group/name - [ml2]/segment_mtu
 #global_physnet_mtu = 1500
+global_physnet_mtu = 1500
 
 # Number of backlog requests to configure the socket with (integer value)
 #backlog = 4096
@@ -146,10 +165,12 @@
 
 # Number of RPC worker processes for service. (integer value)
 #rpc_workers = 1
+rpc_workers = 4
 
 # Number of RPC worker processes dedicated to state reports queue. (integer
 # value)
 #rpc_state_report_workers = 1
+rpc_state_report_workers = 4
 
 # Range of seconds to randomly delay when starting the periodic task scheduler
 # to reduce stampeding. (Disable by setting to 0) (integer value)
@@ -224,6 +245,7 @@
 # a given tenant network, providing high availability for DHCP service.
 # (integer value)
 #dhcp_agents_per_network = 1
+dhcp_agents_per_network = 2
 
 # Enable services on an agent with admin_state_up False. If this option is
 # False, when admin_state_up of an agent is turned False, services on it will
@@ -243,9 +265,11 @@
 # System-wide flag to determine the type of router that tenants can create.
 # Only admin can override. (boolean value)
 #router_distributed = false
+router_distributed = False
 
 # Driver to use for scheduling router to a default L3 agent (string value)
 #router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
+router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
 
 # Allow auto scheduling of routers to L3 agent. (boolean value)
 #router_auto_schedule = true
@@ -253,13 +277,16 @@
 # Automatically reschedule routers from offline L3 agents to online L3 agents.
 # (boolean value)
 #allow_automatic_l3agent_failover = false
+allow_automatic_l3agent_failover = true
 
 # Enable HA mode for virtual routers. (boolean value)
 #l3_ha = false
+l3_ha = True
 
 # Maximum number of L3 agents which a HA router will be scheduled on. If it is
 # set to 0 then the router will be scheduled on every agent. (integer value)
 #max_l3_agents_per_router = 3
+max_l3_agents_per_router = 0
 
 # Subnet used for the l3 HA admin network. (string value)
 #l3_ha_net_cidr = 169.254.192.0/18
@@ -295,6 +322,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #verbose = true
+verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -439,6 +467,7 @@
 # upper bound for the linger period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
 #zmq_linger = -1
+zmq_linger = 30
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
@@ -561,13 +590,16 @@
 # Size of executor thread pool. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
 #executor_thread_pool_size = 64
+executor_thread_pool_size = 70
 
 # Seconds to wait for a response from a call. (integer value)
 #rpc_response_timeout = 60
+rpc_response_timeout=120
 
 # A URL representing the messaging driver to use and its full configuration.
 # (string value)
 #transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.41:5672,openstack:opnfv_secret@10.167.4.42:5672,openstack:opnfv_secret@10.167.4.43:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -612,6 +644,7 @@
 # is idle for this number of seconds it will be closed. A value of '0' means
 # wait forever. (integer value)
 #client_socket_timeout = 900
+nova_url = http://10.167.4.10:8774/v2
 
 
 [agent]
@@ -624,7 +657,7 @@
 # /etc/neutron/rootwrap.conf' to use the real root filter facility. Change to
 # 'sudo' to skip the filtering and just run the command directly. (string
 # value)
-#root_helper = sudo
+root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
 
 # Use the root helper when listing the namespaces on a system. This may not be
 # required depending on the security configuration. If the root helper is not
@@ -635,12 +668,13 @@
 # needs to execute commands in Dom0 in the hypervisor of XenServer, this item
 # should be set to 'xenapi_root_helper', so that it will keep a XenAPI session
 # to pass commands to Dom0. (string value)
-#root_helper_daemon = <None>
+root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
 
 # Seconds between nodes reporting state to server; should be less than
 # agent_down_time, best if it is half or less than agent_down_time. (floating
 # point value)
 #report_interval = 30
+report_interval = 10
 
 # Log agent heartbeats (boolean value)
 #log_agent_heartbeats = false
@@ -697,7 +731,6 @@
 # (list value)
 #allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
 
-
 [cors.subdomain]
 
 #
@@ -762,7 +795,8 @@
 # Deprecated group/name - [DEFAULT]/sql_connection
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
-#connection = <None>
+
+connection = mysql+pymysql://neutron:opnfv_secret@10.167.4.50/neutron?charset=utf8
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
@@ -779,6 +813,7 @@
 # Deprecated group/name - [DATABASE]/sql_idle_timeout
 # Deprecated group/name - [sql]/idle_timeout
 #idle_timeout = 3600
+idle_timeout = 3600
 
 # Minimum number of SQL connections to keep open in a pool. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
@@ -790,22 +825,26 @@
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
 #max_pool_size = 5
+max_pool_size = 20
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
 #max_retries = 10
+max_retries = -1
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
 # Deprecated group/name - [DATABASE]/reconnect_interval
 #retry_interval = 10
+retry_interval = 2
 
 # If set, use this value for max_overflow with SQLAlchemy. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
 #max_overflow = 50
+max_overflow = 20
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
@@ -844,6 +883,19 @@
 
 [keystone_authtoken]
 
+auth_region=RegionOne
+auth_protocol=http
+revocation_cache_time = 10
+auth_type = password
+auth_host = 10.167.4.10
+auth_port = 35357
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = neutron
+password = opnfv_secret
+auth_uri=http://10.167.4.10:5000
+auth_url=http://10.167.4.10:35357
 #
 # From keystonemiddleware.auth_token
 #
@@ -1008,6 +1060,43 @@
 # possible. (boolean value)
 #service_token_roles_required = false
 
+# Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
+# (string value)
+#auth_admin_prefix =
+
+# Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
+# (string value)
+#auth_host = 127.0.0.1
+
+# Port of the admin Identity API endpoint. Deprecated, use identity_uri.
+# (integer value)
+#auth_port = 35357
+
+# Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
+# (string value)
+# Allowed values: http, https
+#auth_protocol = https
+
+# Complete admin Identity API endpoint. This should specify the unversioned
+# root endpoint e.g. https://localhost:35357/ (string value)
+#identity_uri = <None>
+
+# This option is deprecated and may be removed in a future release. Single
+# shared secret with the Keystone configuration used for bootstrapping a
+# Keystone installation, or otherwise bypassing the normal authentication
+# process. This option should not be used, use `admin_user` and
+# `admin_password` instead. (string value)
+#admin_token = <None>
+
+# Service username. (string value)
+#admin_user = <None>
+
+# Service user password. (string value)
+#admin_password = <None>
+
+# Service tenant name. (string value)
+#admin_tenant_name = admin
+
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
 #auth_type = <None>
@@ -1071,12 +1160,14 @@
 # Name of nova region to use. Useful if keystone manages more than one region.
 # (string value)
 #region_name = <None>
+region_name = RegionOne
 
 # Type of the nova endpoint to use.  This endpoint will be looked up in the
 # keystone catalog and should be one of public, internal or admin. (string
 # value)
 # Allowed values: public, admin, internal
 #endpoint_type = public
+endpoint_type = internal
 
 #
 # From nova.auth
@@ -1084,6 +1175,13 @@
 
 # Authentication URL (string value)
 #auth_url = <None>
+user_domain_id = default
+project_domain_id = default
+project_name = service
+password = opnfv_secret
+username = nova
+auth_type = password
+auth_url = http://10.167.4.10:35357
 
 # Authentication type to load (string value)
 # Deprecated group/name - [nova]/auth_plugin
@@ -1173,10 +1271,13 @@
 
 # Directory to use for lock files.  For security, the specified directory
 # should only be writable by the user running the processes that need locking.
-# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
-# a lock path must be set. (string value)
+# Defaults to environment variable OSLO_LOCK_PATH. If OSLO_LOCK_PATH is not set
+# in the environment, use the Python tempfile.gettempdir function to find a
+# suitable location. If external locks are used, a lock path must be set.
+# (string value)
 # Deprecated group/name - [DEFAULT]/lock_path
-#lock_path = <None>
+#lock_path = /tmp
+lock_path = $state_path/lock
 
 
 [oslo_messaging_amqp]
@@ -1548,11 +1649,13 @@
 
 # How frequently to retry connecting with RabbitMQ. (integer value)
 #rabbit_retry_interval = 1
+rabbit_retry_interval = 1
 
 # How long to backoff for between retries when connecting to RabbitMQ. (integer
 # value)
 # Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
 #rabbit_retry_backoff = 2
+rabbit_retry_backoff = 2
 
 # Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
 # (integer value)
@@ -1582,16 +1685,18 @@
 
 # Specifies the number of messages to prefetch. Setting to zero allows
 # unlimited messages. (integer value)
-#rabbit_qos_prefetch_count = 64
+#rabbit_qos_prefetch_count = 0
 
 # Number of seconds after which the Rabbit broker is considered down if
 # heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
 # value)
 #heartbeat_timeout_threshold = 60
+heartbeat_timeout_threshold = 0
 
 # How often times during the heartbeat_timeout_threshold we check the
 # heartbeat. (integer value)
 #heartbeat_rate = 2
+heartbeat_rate = 2
 
 # Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
 # Deprecated group/name - [DEFAULT]/fake_rabbit
@@ -1990,3 +2095,8 @@
 # Sets the list of available ciphers. value should be a string in the OpenSSL
 # cipher list format. (string value)
 #ciphers = <None>
+[service_providers]
+
+
+[ovs]
+ovsdb_connection = tcp:127.0.0.1:6639

2017-09-27 10:23:59,582 [salt.state       ][INFO    ][4398] Completed state [/etc/neutron/neutron.conf] at time 10:23:59.581862 duration_in_ms=152.425
2017-09-27 10:23:59,582 [salt.state       ][INFO    ][4398] Running state [neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head] at time 10:23:59.582177
2017-09-27 10:23:59,582 [salt.state       ][INFO    ][4398] Executing state cmd.run for neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
2017-09-27 10:23:59,583 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command 'neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head' in directory '/root'
2017-09-27 10:24:00,494 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command saltutil.find_job with jid 20170927102400483807
2017-09-27 10:24:00,514 [salt.minion      ][INFO    ][5368] Starting a new job with PID 5368
2017-09-27 10:24:00,535 [salt.minion      ][INFO    ][5368] Returning information for job: 20170927102400483807
2017-09-27 10:24:02,112 [salt.state       ][INFO    ][4398] {'pid': 5311, 'retcode': 0, 'stderr': 'INFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  [alembic.runtime.migration] Will assume non-transactional DDL.', 'stdout': 'Running upgrade for neutron ...\nOK\nRunning upgrade for networking-odl ...\nOK\nRunning upgrade for neutron-fwaas ...\nOK\nRunning upgrade for neutron-lbaas ...\nOK'}
2017-09-27 10:24:02,112 [salt.state       ][INFO    ][4398] Completed state [neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head] at time 10:24:02.111929 duration_in_ms=2529.75
2017-09-27 10:24:02,112 [salt.state       ][INFO    ][4398] Running state [/etc/neutron/api-paste.ini] at time 10:24:02.112427
2017-09-27 10:24:02,113 [salt.state       ][INFO    ][4398] Executing state file.managed for /etc/neutron/api-paste.ini
2017-09-27 10:24:02,132 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/files/ocata/api-paste.ini.Debian'
2017-09-27 10:24:02,157 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2017-09-27 10:24:02,187 [salt.state       ][INFO    ][4398] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [composite:neutron]
 use = egg:Paste#urlmap
 /: neutronversions_composite

2017-09-27 10:24:02,188 [salt.state       ][INFO    ][4398] Completed state [/etc/neutron/api-paste.ini] at time 10:24:02.187624 duration_in_ms=75.196
2017-09-27 10:24:02,188 [salt.state       ][INFO    ][4398] Running state [/etc/default/neutron-server] at time 10:24:02.187941
2017-09-27 10:24:02,188 [salt.state       ][INFO    ][4398] Executing state file.managed for /etc/default/neutron-server
2017-09-27 10:24:02,205 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/files/ocata/neutron-server'
2017-09-27 10:24:02,225 [salt.fileclient  ][INFO    ][4398] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2017-09-27 10:24:02,252 [salt.state       ][INFO    ][4398] File changed:
--- 
+++ 
@@ -1,5 +1,8 @@
+# Generated by Salt.
+
 # defaults for neutron-server
 
 # path to config file corresponding to the core_plugin specified in
 # neutron.conf
-NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
+#NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
+NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
2017-09-27 10:24:02,252 [salt.state       ][INFO    ][4398] Completed state [/etc/default/neutron-server] at time 10:24:02.251834 duration_in_ms=63.893
2017-09-27 10:24:02,253 [salt.state       ][INFO    ][4398] Running state [neutron-server] at time 10:24:02.252766
2017-09-27 10:24:02,253 [salt.state       ][INFO    ][4398] Executing state service.running for neutron-server
2017-09-27 10:24:02,254 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemctl', 'status', 'neutron-server.service', '-n', '0'] in directory '/root'
2017-09-27 10:24:02,267 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemctl', 'is-active', 'neutron-server.service'] in directory '/root'
2017-09-27 10:24:02,281 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemctl', 'is-enabled', 'neutron-server.service'] in directory '/root'
2017-09-27 10:24:02,294 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemctl', 'is-enabled', 'neutron-server.service'] in directory '/root'
2017-09-27 10:24:02,306 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'neutron-server.service'] in directory '/root'
2017-09-27 10:24:02,349 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemctl', 'is-active', 'neutron-server.service'] in directory '/root'
2017-09-27 10:24:02,364 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemctl', 'is-enabled', 'neutron-server.service'] in directory '/root'
2017-09-27 10:24:02,380 [salt.loaded.int.module.cmdmod][INFO    ][4398] Executing command ['systemctl', 'is-enabled', 'neutron-server.service'] in directory '/root'
2017-09-27 10:24:02,395 [salt.state       ][INFO    ][4398] {'neutron-server': True}
2017-09-27 10:24:02,395 [salt.state       ][INFO    ][4398] Completed state [neutron-server] at time 10:24:02.394924 duration_in_ms=142.155
2017-09-27 10:24:02,397 [salt.minion      ][INFO    ][4398] Returning information for job: 20170927102340276298
2017-09-27 10:44:14,457 [salt.minion      ][INFO    ][25853] User sudo_ubuntu Executing command cp.push_dir with jid 20170927104414450669
2017-09-27 10:44:14,491 [salt.minion      ][INFO    ][6280] Starting a new job with PID 6280
