2018-03-30 05:37:46,587 [salt.loaded.int.module.cmdmod][ERROR   ][3387] Command 'while true; do salt-call saltutil.running|grep fun: && continue; salt-call --local service.restart salt-minion; break; done' failed with return code: None
2018-03-30 05:37:49,860 [salt.loaded.int.module.cmdmod][INFO    ][4258] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2018-03-30 05:37:49,878 [salt.loaded.int.module.cmdmod][INFO    ][4258] Executing command ['systemctl', 'is-enabled', 'salt-minion.service'] in directory '/root'
2018-03-30 05:37:49,903 [salt.loaded.int.module.cmdmod][INFO    ][4258] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'salt-minion.service'] in directory '/root'
2018-03-30 05:37:49,921 [salt.utils.parsers][WARNING ][1924] Minion received a SIGTERM. Exiting.
2018-03-30 05:37:50,368 [salt.cli.daemons ][INFO    ][4336] Setting up the Salt Minion "cmp001.mcp-pike-ovs-dpdk-ha.local"
2018-03-30 05:37:50,457 [salt.cli.daemons ][INFO    ][4336] Starting up the Salt Minion
2018-03-30 05:37:50,457 [salt.utils.event ][INFO    ][4336] Starting pull socket on /var/run/salt/minion/minion_event_8384fc4d7a_pull.ipc
2018-03-30 05:37:51,198 [salt.minion      ][INFO    ][4336] Creating minion process manager
2018-03-30 05:37:52,612 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][4336] Executing command ['date', '+%z'] in directory '/root'
2018-03-30 05:37:52,638 [salt.utils.schedule][INFO    ][4336] Updating job settings for scheduled job: __mine_interval
2018-03-30 05:37:52,641 [salt.minion      ][INFO    ][4336] Added mine.update to scheduler
2018-03-30 05:37:52,654 [salt.minion      ][INFO    ][4336] Minion is starting as user 'root'
2018-03-30 05:37:52,670 [salt.minion      ][INFO    ][4336] Minion is ready to receive requests!
2018-03-30 05:37:53,670 [salt.utils.schedule][INFO    ][4336] Running scheduled job: __mine_interval
2018-03-30 05:37:58,682 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330053758735073
2018-03-30 05:37:58,699 [salt.minion      ][INFO    ][4429] Starting a new job with PID 4429
2018-03-30 05:37:58,720 [salt.minion      ][INFO    ][4429] Returning information for job: 20180330053758735073
2018-03-30 05:38:08,718 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330053808767226
2018-03-30 05:38:08,735 [salt.minion      ][INFO    ][4434] Starting a new job with PID 4434
2018-03-30 05:38:08,755 [salt.minion      ][INFO    ][4434] Returning information for job: 20180330053808767226
2018-03-30 05:38:18,866 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330053818910300
2018-03-30 05:38:18,883 [salt.minion      ][INFO    ][4439] Starting a new job with PID 4439
2018-03-30 05:38:18,903 [salt.minion      ][INFO    ][4439] Returning information for job: 20180330053818910300
2018-03-30 05:38:28,957 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330053828997226
2018-03-30 05:38:28,974 [salt.minion      ][INFO    ][4444] Starting a new job with PID 4444
2018-03-30 05:38:28,994 [salt.minion      ][INFO    ][4444] Returning information for job: 20180330053828997226
2018-03-30 05:38:39,616 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command state.apply with jid 20180330053839657226
2018-03-30 05:38:39,633 [salt.minion      ][INFO    ][4449] Starting a new job with PID 4449
2018-03-30 05:38:42,099 [salt.state       ][INFO    ][4449] Loading fresh modules for state activity
2018-03-30 05:38:42,150 [salt.fileclient  ][INFO    ][4449] Fetching file from saltenv 'base', ** done ** 'opnfv/route_wrapper.sls'
2018-03-30 05:38:42,171 [salt.state       ][INFO    ][4449] Running state [/usr/local/sbin/route] at time 05:38:42.171561
2018-03-30 05:38:42,172 [salt.state       ][INFO    ][4449] Executing state file.managed for /usr/local/sbin/route
2018-03-30 05:38:42,191 [salt.state       ][INFO    ][4449] File changed:
New file
2018-03-30 05:38:42,191 [salt.state       ][INFO    ][4449] Completed state [/usr/local/sbin/route] at time 05:38:42.191416 duration_in_ms=19.856
2018-03-30 05:38:42,193 [salt.minion      ][INFO    ][4449] Returning information for job: 20180330053839657226
2018-03-30 05:38:43,021 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command state.apply with jid 20180330053843054518
2018-03-30 05:38:43,038 [salt.minion      ][INFO    ][4459] Starting a new job with PID 4459
2018-03-30 05:38:43,450 [salt.state       ][INFO    ][4459] Loading fresh modules for state activity
2018-03-30 05:38:43,503 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/network/init.sls'
2018-03-30 05:38:43,627 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/network/hostname.sls'
2018-03-30 05:38:43,717 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/network/host.sls'
2018-03-30 05:38:43,840 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/network/dpdk.sls'
2018-03-30 05:38:43,946 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/network/interface.sls'
2018-03-30 05:38:44,088 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/network/proxy.sls'
2018-03-30 05:38:44,142 [salt.state       ][INFO    ][4459] Running state [/etc/hostname] at time 05:38:44.142271
2018-03-30 05:38:44,142 [salt.state       ][INFO    ][4459] Executing state file.managed for /etc/hostname
2018-03-30 05:38:44,176 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/files/hostname'
2018-03-30 05:38:44,178 [salt.state       ][INFO    ][4459] File /etc/hostname is in the correct state
2018-03-30 05:38:44,179 [salt.state       ][INFO    ][4459] Completed state [/etc/hostname] at time 05:38:44.179038 duration_in_ms=36.767
2018-03-30 05:38:44,179 [salt.state       ][INFO    ][4459] Running state [hostname cmp001] at time 05:38:44.179956
2018-03-30 05:38:44,180 [salt.state       ][INFO    ][4459] Executing state cmd.wait for hostname cmp001
2018-03-30 05:38:44,180 [salt.state       ][INFO    ][4459] No changes made for hostname cmp001
2018-03-30 05:38:44,180 [salt.state       ][INFO    ][4459] Completed state [hostname cmp001] at time 05:38:44.180514 duration_in_ms=0.558
2018-03-30 05:38:44,192 [salt.state       ][INFO    ][4459] Running state [mdb02] at time 05:38:44.192396
2018-03-30 05:38:44,192 [salt.state       ][INFO    ][4459] Executing state host.present for mdb02
2018-03-30 05:38:44,193 [salt.state       ][INFO    ][4459] {'host': 'mdb02'}
2018-03-30 05:38:44,193 [salt.state       ][INFO    ][4459] Completed state [mdb02] at time 05:38:44.193420 duration_in_ms=1.023
2018-03-30 05:38:44,193 [salt.state       ][INFO    ][4459] Running state [mdb02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.193631
2018-03-30 05:38:44,193 [salt.state       ][INFO    ][4459] Executing state host.present for mdb02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,209 [salt.state       ][INFO    ][4459] {'host': 'mdb02.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,209 [salt.state       ][INFO    ][4459] Completed state [mdb02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.209543 duration_in_ms=15.911
2018-03-30 05:38:44,210 [salt.state       ][INFO    ][4459] Running state [mdb03] at time 05:38:44.209979
2018-03-30 05:38:44,210 [salt.state       ][INFO    ][4459] Executing state host.present for mdb03
2018-03-30 05:38:44,215 [salt.state       ][INFO    ][4459] {'host': 'mdb03'}
2018-03-30 05:38:44,215 [salt.state       ][INFO    ][4459] Completed state [mdb03] at time 05:38:44.215519 duration_in_ms=5.54
2018-03-30 05:38:44,216 [salt.state       ][INFO    ][4459] Running state [mdb03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.215951
2018-03-30 05:38:44,216 [salt.state       ][INFO    ][4459] Executing state host.present for mdb03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,221 [salt.state       ][INFO    ][4459] {'host': 'mdb03.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,221 [salt.state       ][INFO    ][4459] Completed state [mdb03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.221512 duration_in_ms=5.561
2018-03-30 05:38:44,221 [salt.state       ][INFO    ][4459] Running state [mdb01] at time 05:38:44.221924
2018-03-30 05:38:44,222 [salt.state       ][INFO    ][4459] Executing state host.present for mdb01
2018-03-30 05:38:44,227 [salt.state       ][INFO    ][4459] {'host': 'mdb01'}
2018-03-30 05:38:44,227 [salt.state       ][INFO    ][4459] Completed state [mdb01] at time 05:38:44.227495 duration_in_ms=5.572
2018-03-30 05:38:44,227 [salt.state       ][INFO    ][4459] Running state [mdb01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.227902
2018-03-30 05:38:44,228 [salt.state       ][INFO    ][4459] Executing state host.present for mdb01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,233 [salt.state       ][INFO    ][4459] {'host': 'mdb01.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,233 [salt.state       ][INFO    ][4459] Completed state [mdb01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.233541 duration_in_ms=5.639
2018-03-30 05:38:44,234 [salt.state       ][INFO    ][4459] Running state [mdb] at time 05:38:44.233951
2018-03-30 05:38:44,234 [salt.state       ][INFO    ][4459] Executing state host.present for mdb
2018-03-30 05:38:44,239 [salt.state       ][INFO    ][4459] {'host': 'mdb'}
2018-03-30 05:38:44,239 [salt.state       ][INFO    ][4459] Completed state [mdb] at time 05:38:44.239571 duration_in_ms=5.619
2018-03-30 05:38:44,240 [salt.state       ][INFO    ][4459] Running state [mdb.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.239997
2018-03-30 05:38:44,240 [salt.state       ][INFO    ][4459] Executing state host.present for mdb.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,245 [salt.state       ][INFO    ][4459] {'host': 'mdb.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,245 [salt.state       ][INFO    ][4459] Completed state [mdb.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.245620 duration_in_ms=5.623
2018-03-30 05:38:44,246 [salt.state       ][INFO    ][4459] Running state [cfg01] at time 05:38:44.246042
2018-03-30 05:38:44,246 [salt.state       ][INFO    ][4459] Executing state host.present for cfg01
2018-03-30 05:38:44,251 [salt.state       ][INFO    ][4459] {'host': 'cfg01'}
2018-03-30 05:38:44,251 [salt.state       ][INFO    ][4459] Completed state [cfg01] at time 05:38:44.251540 duration_in_ms=5.498
2018-03-30 05:38:44,252 [salt.state       ][INFO    ][4459] Running state [cfg01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.251955
2018-03-30 05:38:44,252 [salt.state       ][INFO    ][4459] Executing state host.present for cfg01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,253 [salt.state       ][INFO    ][4459] {'host': 'cfg01.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,254 [salt.state       ][INFO    ][4459] Completed state [cfg01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.254098 duration_in_ms=2.143
2018-03-30 05:38:44,254 [salt.state       ][INFO    ][4459] Running state [prx01] at time 05:38:44.254488
2018-03-30 05:38:44,254 [salt.state       ][INFO    ][4459] Executing state host.present for prx01
2018-03-30 05:38:44,259 [salt.state       ][INFO    ][4459] {'host': 'prx01'}
2018-03-30 05:38:44,260 [salt.state       ][INFO    ][4459] Completed state [prx01] at time 05:38:44.260105 duration_in_ms=5.616
2018-03-30 05:38:44,260 [salt.state       ][INFO    ][4459] Running state [prx01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.260499
2018-03-30 05:38:44,260 [salt.state       ][INFO    ][4459] Executing state host.present for prx01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,265 [salt.state       ][INFO    ][4459] {'host': 'prx01.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,266 [salt.state       ][INFO    ][4459] Completed state [prx01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.266071 duration_in_ms=5.571
2018-03-30 05:38:44,266 [salt.state       ][INFO    ][4459] Running state [kvm01] at time 05:38:44.266477
2018-03-30 05:38:44,266 [salt.state       ][INFO    ][4459] Executing state host.present for kvm01
2018-03-30 05:38:44,271 [salt.state       ][INFO    ][4459] {'host': 'kvm01'}
2018-03-30 05:38:44,272 [salt.state       ][INFO    ][4459] Completed state [kvm01] at time 05:38:44.272138 duration_in_ms=5.66
2018-03-30 05:38:44,272 [salt.state       ][INFO    ][4459] Running state [kvm01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.272522
2018-03-30 05:38:44,272 [salt.state       ][INFO    ][4459] Executing state host.present for kvm01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,277 [salt.state       ][INFO    ][4459] {'host': 'kvm01.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,278 [salt.state       ][INFO    ][4459] Completed state [kvm01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.278056 duration_in_ms=5.533
2018-03-30 05:38:44,278 [salt.state       ][INFO    ][4459] Running state [kvm03] at time 05:38:44.278443
2018-03-30 05:38:44,278 [salt.state       ][INFO    ][4459] Executing state host.present for kvm03
2018-03-30 05:38:44,283 [salt.state       ][INFO    ][4459] {'host': 'kvm03'}
2018-03-30 05:38:44,284 [salt.state       ][INFO    ][4459] Completed state [kvm03] at time 05:38:44.284094 duration_in_ms=5.652
2018-03-30 05:38:44,284 [salt.state       ][INFO    ][4459] Running state [kvm03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.284472
2018-03-30 05:38:44,284 [salt.state       ][INFO    ][4459] Executing state host.present for kvm03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,289 [salt.state       ][INFO    ][4459] {'host': 'kvm03.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,290 [salt.state       ][INFO    ][4459] Completed state [kvm03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.290162 duration_in_ms=5.688
2018-03-30 05:38:44,290 [salt.state       ][INFO    ][4459] Running state [kvm02] at time 05:38:44.290632
2018-03-30 05:38:44,291 [salt.state       ][INFO    ][4459] Executing state host.present for kvm02
2018-03-30 05:38:44,294 [salt.state       ][INFO    ][4459] {'host': 'kvm02'}
2018-03-30 05:38:44,295 [salt.state       ][INFO    ][4459] Completed state [kvm02] at time 05:38:44.295000 duration_in_ms=4.366
2018-03-30 05:38:44,295 [salt.state       ][INFO    ][4459] Running state [kvm02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.295473
2018-03-30 05:38:44,295 [salt.state       ][INFO    ][4459] Executing state host.present for kvm02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,300 [salt.state       ][INFO    ][4459] {'host': 'kvm02.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,301 [salt.state       ][INFO    ][4459] Completed state [kvm02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.301014 duration_in_ms=5.54
2018-03-30 05:38:44,301 [salt.state       ][INFO    ][4459] Running state [dbs] at time 05:38:44.301453
2018-03-30 05:38:44,301 [salt.state       ][INFO    ][4459] Executing state host.present for dbs
2018-03-30 05:38:44,306 [salt.state       ][INFO    ][4459] {'host': 'dbs'}
2018-03-30 05:38:44,307 [salt.state       ][INFO    ][4459] Completed state [dbs] at time 05:38:44.306959 duration_in_ms=5.505
2018-03-30 05:38:44,307 [salt.state       ][INFO    ][4459] Running state [dbs.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.307385
2018-03-30 05:38:44,307 [salt.state       ][INFO    ][4459] Executing state host.present for dbs.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,312 [salt.state       ][INFO    ][4459] {'host': 'dbs.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,313 [salt.state       ][INFO    ][4459] Completed state [dbs.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.312989 duration_in_ms=5.604
2018-03-30 05:38:44,313 [salt.state       ][INFO    ][4459] Running state [prx] at time 05:38:44.313419
2018-03-30 05:38:44,313 [salt.state       ][INFO    ][4459] Executing state host.present for prx
2018-03-30 05:38:44,318 [salt.state       ][INFO    ][4459] {'host': 'prx'}
2018-03-30 05:38:44,319 [salt.state       ][INFO    ][4459] Completed state [prx] at time 05:38:44.318974 duration_in_ms=5.555
2018-03-30 05:38:44,319 [salt.state       ][INFO    ][4459] Running state [prx.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.319397
2018-03-30 05:38:44,319 [salt.state       ][INFO    ][4459] Executing state host.present for prx.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,324 [salt.state       ][INFO    ][4459] {'host': 'prx.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,325 [salt.state       ][INFO    ][4459] Completed state [prx.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.325008 duration_in_ms=5.61
2018-03-30 05:38:44,325 [salt.state       ][INFO    ][4459] Running state [prx02] at time 05:38:44.325452
2018-03-30 05:38:44,325 [salt.state       ][INFO    ][4459] Executing state host.present for prx02
2018-03-30 05:38:44,327 [salt.state       ][INFO    ][4459] {'host': 'prx02'}
2018-03-30 05:38:44,327 [salt.state       ][INFO    ][4459] Completed state [prx02] at time 05:38:44.327585 duration_in_ms=2.134
2018-03-30 05:38:44,328 [salt.state       ][INFO    ][4459] Running state [prx02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.328003
2018-03-30 05:38:44,328 [salt.state       ][INFO    ][4459] Executing state host.present for prx02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,333 [salt.state       ][INFO    ][4459] {'host': 'prx02.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,333 [salt.state       ][INFO    ][4459] Completed state [prx02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.333298 duration_in_ms=5.294
2018-03-30 05:38:44,333 [salt.state       ][INFO    ][4459] Running state [msg02] at time 05:38:44.333749
2018-03-30 05:38:44,334 [salt.state       ][INFO    ][4459] Executing state host.present for msg02
2018-03-30 05:38:44,339 [salt.state       ][INFO    ][4459] {'host': 'msg02'}
2018-03-30 05:38:44,339 [salt.state       ][INFO    ][4459] Completed state [msg02] at time 05:38:44.339292 duration_in_ms=5.544
2018-03-30 05:38:44,339 [salt.state       ][INFO    ][4459] Running state [msg02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.339701
2018-03-30 05:38:44,340 [salt.state       ][INFO    ][4459] Executing state host.present for msg02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,345 [salt.state       ][INFO    ][4459] {'host': 'msg02.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,345 [salt.state       ][INFO    ][4459] Completed state [msg02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.345307 duration_in_ms=5.606
2018-03-30 05:38:44,345 [salt.state       ][INFO    ][4459] Running state [msg03] at time 05:38:44.345728
2018-03-30 05:38:44,346 [salt.state       ][INFO    ][4459] Executing state host.present for msg03
2018-03-30 05:38:44,351 [salt.state       ][INFO    ][4459] {'host': 'msg03'}
2018-03-30 05:38:44,351 [salt.state       ][INFO    ][4459] Completed state [msg03] at time 05:38:44.351315 duration_in_ms=5.588
2018-03-30 05:38:44,351 [salt.state       ][INFO    ][4459] Running state [msg03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.351717
2018-03-30 05:38:44,352 [salt.state       ][INFO    ][4459] Executing state host.present for msg03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,357 [salt.state       ][INFO    ][4459] {'host': 'msg03.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,357 [salt.state       ][INFO    ][4459] Completed state [msg03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.357303 duration_in_ms=5.584
2018-03-30 05:38:44,357 [salt.state       ][INFO    ][4459] Running state [msg01] at time 05:38:44.357734
2018-03-30 05:38:44,358 [salt.state       ][INFO    ][4459] Executing state host.present for msg01
2018-03-30 05:38:44,363 [salt.state       ][INFO    ][4459] {'host': 'msg01'}
2018-03-30 05:38:44,363 [salt.state       ][INFO    ][4459] Completed state [msg01] at time 05:38:44.363322 duration_in_ms=5.588
2018-03-30 05:38:44,363 [salt.state       ][INFO    ][4459] Running state [msg01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.363729
2018-03-30 05:38:44,364 [salt.state       ][INFO    ][4459] Executing state host.present for msg01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,369 [salt.state       ][INFO    ][4459] {'host': 'msg01.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,369 [salt.state       ][INFO    ][4459] Completed state [msg01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.369335 duration_in_ms=5.606
2018-03-30 05:38:44,369 [salt.state       ][INFO    ][4459] Running state [msg] at time 05:38:44.369761
2018-03-30 05:38:44,370 [salt.state       ][INFO    ][4459] Executing state host.present for msg
2018-03-30 05:38:44,375 [salt.state       ][INFO    ][4459] {'host': 'msg'}
2018-03-30 05:38:44,375 [salt.state       ][INFO    ][4459] Completed state [msg] at time 05:38:44.375322 duration_in_ms=5.561
2018-03-30 05:38:44,375 [salt.state       ][INFO    ][4459] Running state [msg.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.375702
2018-03-30 05:38:44,376 [salt.state       ][INFO    ][4459] Executing state host.present for msg.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,381 [salt.state       ][INFO    ][4459] {'host': 'msg.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,381 [salt.state       ][INFO    ][4459] Completed state [msg.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.381327 duration_in_ms=5.624
2018-03-30 05:38:44,381 [salt.state       ][INFO    ][4459] Running state [cfg01] at time 05:38:44.381737
2018-03-30 05:38:44,382 [salt.state       ][INFO    ][4459] Executing state host.present for cfg01
2018-03-30 05:38:44,382 [salt.state       ][INFO    ][4459] Host cfg01 (10.167.4.11) already present
2018-03-30 05:38:44,382 [salt.state       ][INFO    ][4459] Completed state [cfg01] at time 05:38:44.382921 duration_in_ms=1.185
2018-03-30 05:38:44,383 [salt.state       ][INFO    ][4459] Running state [cfg01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.383263
2018-03-30 05:38:44,383 [salt.state       ][INFO    ][4459] Executing state host.present for cfg01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,384 [salt.state       ][INFO    ][4459] Host cfg01.mcp-pike-ovs-dpdk-ha.local (10.167.4.11) already present
2018-03-30 05:38:44,384 [salt.state       ][INFO    ][4459] Completed state [cfg01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.384404 duration_in_ms=1.14
2018-03-30 05:38:44,384 [salt.state       ][INFO    ][4459] Running state [cmp002] at time 05:38:44.384739
2018-03-30 05:38:44,385 [salt.state       ][INFO    ][4459] Executing state host.present for cmp002
2018-03-30 05:38:44,387 [salt.state       ][INFO    ][4459] {'host': 'cmp002'}
2018-03-30 05:38:44,387 [salt.state       ][INFO    ][4459] Completed state [cmp002] at time 05:38:44.387334 duration_in_ms=2.595
2018-03-30 05:38:44,387 [salt.state       ][INFO    ][4459] Running state [cmp002.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.387690
2018-03-30 05:38:44,388 [salt.state       ][INFO    ][4459] Executing state host.present for cmp002.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,393 [salt.state       ][INFO    ][4459] {'host': 'cmp002.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,393 [salt.state       ][INFO    ][4459] Completed state [cmp002.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.393336 duration_in_ms=5.645
2018-03-30 05:38:44,393 [salt.state       ][INFO    ][4459] Running state [cmp001] at time 05:38:44.393699
2018-03-30 05:38:44,394 [salt.state       ][INFO    ][4459] Executing state host.present for cmp001
2018-03-30 05:38:44,399 [salt.state       ][INFO    ][4459] {'host': 'cmp001'}
2018-03-30 05:38:44,399 [salt.state       ][INFO    ][4459] Completed state [cmp001] at time 05:38:44.399300 duration_in_ms=5.602
2018-03-30 05:38:44,399 [salt.state       ][INFO    ][4459] Running state [cmp001.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.399644
2018-03-30 05:38:44,399 [salt.state       ][INFO    ][4459] Executing state host.present for cmp001.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,405 [salt.state       ][INFO    ][4459] {'host': 'cmp001.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,405 [salt.state       ][INFO    ][4459] Completed state [cmp001.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.405282 duration_in_ms=5.639
2018-03-30 05:38:44,405 [salt.state       ][INFO    ][4459] Running state [dbs01] at time 05:38:44.405641
2018-03-30 05:38:44,405 [salt.state       ][INFO    ][4459] Executing state host.present for dbs01
2018-03-30 05:38:44,411 [salt.state       ][INFO    ][4459] {'host': 'dbs01'}
2018-03-30 05:38:44,411 [salt.state       ][INFO    ][4459] Completed state [dbs01] at time 05:38:44.411281 duration_in_ms=5.641
2018-03-30 05:38:44,411 [salt.state       ][INFO    ][4459] Running state [dbs01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.411632
2018-03-30 05:38:44,411 [salt.state       ][INFO    ][4459] Executing state host.present for dbs01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,417 [salt.state       ][INFO    ][4459] {'host': 'dbs01.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,417 [salt.state       ][INFO    ][4459] Completed state [dbs01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.417316 duration_in_ms=5.684
2018-03-30 05:38:44,417 [salt.state       ][INFO    ][4459] Running state [dbs02] at time 05:38:44.417669
2018-03-30 05:38:44,417 [salt.state       ][INFO    ][4459] Executing state host.present for dbs02
2018-03-30 05:38:44,420 [salt.state       ][INFO    ][4459] {'host': 'dbs02'}
2018-03-30 05:38:44,420 [salt.state       ][INFO    ][4459] Completed state [dbs02] at time 05:38:44.420866 duration_in_ms=3.197
2018-03-30 05:38:44,421 [salt.state       ][INFO    ][4459] Running state [dbs02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.421190
2018-03-30 05:38:44,421 [salt.state       ][INFO    ][4459] Executing state host.present for dbs02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,426 [salt.state       ][INFO    ][4459] {'host': 'dbs02.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,426 [salt.state       ][INFO    ][4459] Completed state [dbs02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.426880 duration_in_ms=5.69
2018-03-30 05:38:44,427 [salt.state       ][INFO    ][4459] Running state [dbs03] at time 05:38:44.427211
2018-03-30 05:38:44,427 [salt.state       ][INFO    ][4459] Executing state host.present for dbs03
2018-03-30 05:38:44,432 [salt.state       ][INFO    ][4459] {'host': 'dbs03'}
2018-03-30 05:38:44,432 [salt.state       ][INFO    ][4459] Completed state [dbs03] at time 05:38:44.432676 duration_in_ms=5.464
2018-03-30 05:38:44,433 [salt.state       ][INFO    ][4459] Running state [dbs03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.433008
2018-03-30 05:38:44,433 [salt.state       ][INFO    ][4459] Executing state host.present for dbs03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,438 [salt.state       ][INFO    ][4459] {'host': 'dbs03.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,438 [salt.state       ][INFO    ][4459] Completed state [dbs03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.438913 duration_in_ms=5.904
2018-03-30 05:38:44,439 [salt.state       ][INFO    ][4459] Running state [mas01] at time 05:38:44.439249
2018-03-30 05:38:44,439 [salt.state       ][INFO    ][4459] Executing state host.present for mas01
2018-03-30 05:38:44,444 [salt.state       ][INFO    ][4459] {'host': 'mas01'}
2018-03-30 05:38:44,444 [salt.state       ][INFO    ][4459] Completed state [mas01] at time 05:38:44.444921 duration_in_ms=5.671
2018-03-30 05:38:44,445 [salt.state       ][INFO    ][4459] Running state [mas01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.445255
2018-03-30 05:38:44,445 [salt.state       ][INFO    ][4459] Executing state host.present for mas01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,450 [salt.state       ][INFO    ][4459] {'host': 'mas01.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,450 [salt.state       ][INFO    ][4459] Completed state [mas01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.450926 duration_in_ms=5.671
2018-03-30 05:38:44,451 [salt.state       ][INFO    ][4459] Running state [ctl02] at time 05:38:44.451264
2018-03-30 05:38:44,451 [salt.state       ][INFO    ][4459] Executing state host.present for ctl02
2018-03-30 05:38:44,456 [salt.state       ][INFO    ][4459] {'host': 'ctl02'}
2018-03-30 05:38:44,457 [salt.state       ][INFO    ][4459] Completed state [ctl02] at time 05:38:44.456957 duration_in_ms=5.693
2018-03-30 05:38:44,457 [salt.state       ][INFO    ][4459] Running state [ctl02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.457308
2018-03-30 05:38:44,457 [salt.state       ][INFO    ][4459] Executing state host.present for ctl02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,462 [salt.state       ][INFO    ][4459] {'host': 'ctl02.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,462 [salt.state       ][INFO    ][4459] Completed state [ctl02.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.462885 duration_in_ms=5.577
2018-03-30 05:38:44,463 [salt.state       ][INFO    ][4459] Running state [ctl03] at time 05:38:44.463233
2018-03-30 05:38:44,463 [salt.state       ][INFO    ][4459] Executing state host.present for ctl03
2018-03-30 05:38:44,468 [salt.state       ][INFO    ][4459] {'host': 'ctl03'}
2018-03-30 05:38:44,468 [salt.state       ][INFO    ][4459] Completed state [ctl03] at time 05:38:44.468919 duration_in_ms=5.686
2018-03-30 05:38:44,469 [salt.state       ][INFO    ][4459] Running state [ctl03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.469266
2018-03-30 05:38:44,469 [salt.state       ][INFO    ][4459] Executing state host.present for ctl03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,474 [salt.state       ][INFO    ][4459] {'host': 'ctl03.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,474 [salt.state       ][INFO    ][4459] Completed state [ctl03.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.474927 duration_in_ms=5.66
2018-03-30 05:38:44,475 [salt.state       ][INFO    ][4459] Running state [ctl01] at time 05:38:44.475286
2018-03-30 05:38:44,475 [salt.state       ][INFO    ][4459] Executing state host.present for ctl01
2018-03-30 05:38:44,480 [salt.state       ][INFO    ][4459] {'host': 'ctl01'}
2018-03-30 05:38:44,480 [salt.state       ][INFO    ][4459] Completed state [ctl01] at time 05:38:44.480797 duration_in_ms=5.511
2018-03-30 05:38:44,481 [salt.state       ][INFO    ][4459] Running state [ctl01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.481160
2018-03-30 05:38:44,481 [salt.state       ][INFO    ][4459] Executing state host.present for ctl01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,486 [salt.state       ][INFO    ][4459] {'host': 'ctl01.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,486 [salt.state       ][INFO    ][4459] Completed state [ctl01.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.486837 duration_in_ms=5.677
2018-03-30 05:38:44,487 [salt.state       ][INFO    ][4459] Running state [ctl] at time 05:38:44.487140
2018-03-30 05:38:44,487 [salt.state       ][INFO    ][4459] Executing state host.present for ctl
2018-03-30 05:38:44,492 [salt.state       ][INFO    ][4459] {'host': 'ctl'}
2018-03-30 05:38:44,492 [salt.state       ][INFO    ][4459] Completed state [ctl] at time 05:38:44.492925 duration_in_ms=5.785
2018-03-30 05:38:44,493 [salt.state       ][INFO    ][4459] Running state [ctl.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.493205
2018-03-30 05:38:44,493 [salt.state       ][INFO    ][4459] Executing state host.present for ctl.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:38:44,498 [salt.state       ][INFO    ][4459] {'host': 'ctl.mcp-pike-ovs-dpdk-ha.local'}
2018-03-30 05:38:44,499 [salt.state       ][INFO    ][4459] Completed state [ctl.mcp-pike-ovs-dpdk-ha.local] at time 05:38:44.498977 duration_in_ms=5.772
2018-03-30 05:38:45,001 [salt.state       ][INFO    ][4459] Running state [linux_dpdk_pkgs] at time 05:38:45.001092
2018-03-30 05:38:45,001 [salt.state       ][INFO    ][4459] Executing state pkg.installed for linux_dpdk_pkgs
2018-03-30 05:38:45,002 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:38:45,373 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['apt-cache', '-q', 'policy', 'dpdk-dev'] in directory '/root'
2018-03-30 05:38:45,446 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['apt-cache', '-q', 'policy', 'dpdk-igb-uio-dkms'] in directory '/root'
2018-03-30 05:38:45,518 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['apt-cache', '-q', 'policy', 'dpdk-rte-kni-dkms'] in directory '/root'
2018-03-30 05:38:45,597 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['apt-cache', '-q', 'policy', 'dpdk'] in directory '/root'
2018-03-30 05:38:45,699 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 05:38:47,498 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 05:38:47,531 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'dpdk-dev', 'dpdk-igb-uio-dkms', 'dpdk-rte-kni-dkms', 'dpdk'] in directory '/root'
2018-03-30 05:38:47,780 [salt.loaded.int.module.cmdmod][ERROR   ][4459] Command '['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'dpdk-dev', 'dpdk-igb-uio-dkms', 'dpdk-rte-kni-dkms', 'dpdk']' failed with return code: 100
2018-03-30 05:38:47,781 [salt.loaded.int.module.cmdmod][ERROR   ][4459] stdout: Reading package lists...
Building dependency tree...
Reading state information...
2018-03-30 05:38:47,782 [salt.loaded.int.module.cmdmod][ERROR   ][4459] stderr: Running scope as unit run-r74a080d9e3354abe9ebff1e682767d9b.scope.
E: Unable to locate package dpdk-igb-uio-dkms
E: Unable to locate package dpdk-rte-kni-dkms
2018-03-30 05:38:47,782 [salt.loaded.int.module.cmdmod][ERROR   ][4459] retcode: 100
2018-03-30 05:38:47,784 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:38:47,837 [salt.state       ][ERROR   ][4459] Problem encountered installing package(s). Additional info follows:

errors:
    - Running scope as unit run-r74a080d9e3354abe9ebff1e682767d9b.scope.
      E: Unable to locate package dpdk-igb-uio-dkms
      E: Unable to locate package dpdk-rte-kni-dkms
2018-03-30 05:38:47,838 [salt.state       ][INFO    ][4459] Completed state [linux_dpdk_pkgs] at time 05:38:47.837969 duration_in_ms=2836.876
2018-03-30 05:38:47,856 [salt.state       ][INFO    ][4459] Running state [openvswitch_dpdk_pkgs] at time 05:38:47.856404
2018-03-30 05:38:47,856 [salt.state       ][INFO    ][4459] Executing state pkg.installed for openvswitch_dpdk_pkgs
2018-03-30 05:38:47,880 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 05:38:47,912 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'openvswitch-switch-dpdk', 'openvswitch-switch', 'bridge-utils'] in directory '/root'
2018-03-30 05:38:53,119 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330053853150246
2018-03-30 05:38:53,137 [salt.minion      ][INFO    ][4860] Starting a new job with PID 4860
2018-03-30 05:38:53,157 [salt.minion      ][INFO    ][4860] Returning information for job: 20180330053853150246
2018-03-30 05:38:59,665 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:38:59,716 [salt.state       ][INFO    ][4459] Made the following changes:
'openvswitch-switch' changed from 'absent' to '2.5.4-0ubuntu0.16.04.1'
'libdpdk0' changed from 'absent' to '2.2.0-0ubuntu8'
'openvswitch-common' changed from 'absent' to '2.5.4-0ubuntu0.16.04.1'
'openvswitch-switch-dpdk' changed from 'absent' to '2.5.4-0ubuntu0.16.04.1'
'bridge-utils' changed from 'absent' to '1.5-9ubuntu1'
'libxenstore3.0' changed from 'absent' to '4.6.5-0ubuntu1.4'
'dpdk' changed from 'absent' to '2.2.0-0ubuntu8'

2018-03-30 05:38:59,737 [salt.state       ][INFO    ][4459] Loading fresh modules for state activity
2018-03-30 05:38:59,776 [salt.state       ][INFO    ][4459] Completed state [openvswitch_dpdk_pkgs] at time 05:38:59.776124 duration_in_ms=11919.719
2018-03-30 05:38:59,790 [salt.state       ][INFO    ][4459] Running state [ovs-vswitchd] at time 05:38:59.790053
2018-03-30 05:38:59,790 [salt.state       ][INFO    ][4459] Executing state alternatives.remove for ovs-vswitchd
2018-03-30 05:38:59,793 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['update-alternatives', '--display', 'ovs-vswitchd'] in directory '/root'
2018-03-30 05:38:59,809 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['update-alternatives', '--remove', 'ovs-vswitchd', '/usr/lib/openvswitch-switch/ovs-vswitchd'] in directory '/root'
2018-03-30 05:38:59,834 [salt.state       ][INFO    ][4459] {'path': '/usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk'}
2018-03-30 05:38:59,835 [salt.state       ][INFO    ][4459] Completed state [ovs-vswitchd] at time 05:38:59.834969 duration_in_ms=44.913
2018-03-30 05:39:00,049 [salt.state       ][INFO    ][4459] Running state [ovs-vsctl --no-wait add-br br-prv -- set bridge br-prv datapath_type=netdev] at time 05:39:00.049804
2018-03-30 05:39:00,050 [salt.state       ][INFO    ][4459] Executing state cmd.run for ovs-vsctl --no-wait add-br br-prv -- set bridge br-prv datapath_type=netdev
2018-03-30 05:39:00,050 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl show | grep br-prv' in directory '/root'
2018-03-30 05:39:00,069 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl --no-wait add-br br-prv -- set bridge br-prv datapath_type=netdev' in directory '/root'
2018-03-30 05:39:00,087 [salt.state       ][INFO    ][4459] {'pid': 5816, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:39:00,087 [salt.state       ][INFO    ][4459] Completed state [ovs-vsctl --no-wait add-br br-prv -- set bridge br-prv datapath_type=netdev] at time 05:39:00.087558 duration_in_ms=37.755
2018-03-30 05:39:00,088 [salt.state       ][INFO    ][4459] Running state [ovs-vsctl --no-wait add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:07:00.0] at time 05:39:00.088559
2018-03-30 05:39:00,088 [salt.state       ][INFO    ][4459] Executing state cmd.run for ovs-vsctl --no-wait add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:07:00.0
2018-03-30 05:39:00,089 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl show | grep dpdk0' in directory '/root'
2018-03-30 05:39:00,105 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl --no-wait add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:07:00.0' in directory '/root'
2018-03-30 05:39:00,118 [salt.state       ][INFO    ][4459] {'pid': 5851, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:39:00,119 [salt.state       ][INFO    ][4459] Completed state [ovs-vsctl --no-wait add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:07:00.0] at time 05:39:00.119132 duration_in_ms=30.572
2018-03-30 05:39:00,119 [salt.state       ][INFO    ][4459] Running state [ovs-vsctl --no-wait set Interface dpdk0 options:n_rxq=2 ] at time 05:39:00.119511
2018-03-30 05:39:00,119 [salt.state       ][INFO    ][4459] Executing state cmd.run for ovs-vsctl --no-wait set Interface dpdk0 options:n_rxq=2 
2018-03-30 05:39:00,120 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl get Interface dpdk0 options | grep 'n_rxq="2"'
' in directory '/root'
2018-03-30 05:39:00,137 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl --no-wait set Interface dpdk0 options:n_rxq=2 ' in directory '/root'
2018-03-30 05:39:00,152 [salt.state       ][INFO    ][4459] {'pid': 5919, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:39:00,152 [salt.state       ][INFO    ][4459] Completed state [ovs-vsctl --no-wait set Interface dpdk0 options:n_rxq=2 ] at time 05:39:00.152845 duration_in_ms=33.334
2018-03-30 05:39:00,154 [salt.state       ][INFO    ][4459] Running state [linux_network_bridge_pkgs] at time 05:39:00.154170
2018-03-30 05:39:00,154 [salt.state       ][INFO    ][4459] Executing state pkg.installed for linux_network_bridge_pkgs
2018-03-30 05:39:00,282 [salt.state       ][INFO    ][4459] All specified packages are already installed
2018-03-30 05:39:00,282 [salt.state       ][INFO    ][4459] Completed state [linux_network_bridge_pkgs] at time 05:39:00.282661 duration_in_ms=128.491
2018-03-30 05:39:00,286 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 05:39:00.286125
2018-03-30 05:39:00,286 [salt.state       ][INFO    ][4459] Executing state file.absent for /etc/network/interfaces.d/50-cloud-init.cfg
2018-03-30 05:39:00,286 [salt.state       ][INFO    ][4459] {'removed': '/etc/network/interfaces.d/50-cloud-init.cfg'}
2018-03-30 05:39:00,286 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 05:39:00.286740 duration_in_ms=0.615
2018-03-30 05:39:00,294 [salt.state       ][INFO    ][4459] Running state [enp6s0.300] at time 05:39:00.294477
2018-03-30 05:39:00,294 [salt.state       ][INFO    ][4459] Executing state network.managed for enp6s0.300
2018-03-30 05:39:00,396 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['ifup', 'enp6s0.300'] in directory '/root'
2018-03-30 05:39:01,469 [salt.state       ][INFO    ][4459] {'interface': 'Added network interface.', 'status': 'Interface enp6s0.300 is up'}
2018-03-30 05:39:01,469 [salt.state       ][INFO    ][4459] Completed state [enp6s0.300] at time 05:39:01.469872 duration_in_ms=1175.393
2018-03-30 05:39:01,470 [salt.state       ][INFO    ][4459] Running state [br-ctl] at time 05:39:01.470907
2018-03-30 05:39:01,471 [salt.state       ][INFO    ][4459] Executing state network.managed for br-ctl
2018-03-30 05:39:01,480 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 05:39:01,509 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'bridge-utils'] in directory '/root'
2018-03-30 05:39:01,806 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:39:01,874 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['ifup', 'br-ctl'] in directory '/root'
2018-03-30 05:39:03,092 [salt.state       ][INFO    ][4459] {'interface': 'Added network interface.', 'status': 'Interface br-ctl is up'}
2018-03-30 05:39:03,092 [salt.state       ][INFO    ][4459] Completed state [br-ctl] at time 05:39:03.092791 duration_in_ms=1621.884
2018-03-30 05:39:03,093 [salt.state       ][INFO    ][4459] Running state [enp6s0] at time 05:39:03.093032
2018-03-30 05:39:03,093 [salt.state       ][INFO    ][4459] Executing state network.managed for enp6s0
2018-03-30 05:39:03,125 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['ifdown', 'enp6s0'] in directory '/root'
2018-03-30 05:39:04,022 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330053903355405
2018-03-30 05:39:04,039 [salt.minion      ][INFO    ][6405] Starting a new job with PID 6405
2018-03-30 05:39:04,065 [salt.minion      ][INFO    ][6405] Returning information for job: 20180330053903355405
2018-03-30 05:39:04,393 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['ifup', 'enp6s0'] in directory '/root'
2018-03-30 05:39:05,922 [salt.state       ][INFO    ][4459] {'interface': 'Added network interface.', 'status': 'Interface enp6s0 restart to validate'}
2018-03-30 05:39:05,923 [salt.state       ][INFO    ][4459] Completed state [enp6s0] at time 05:39:05.923120 duration_in_ms=2830.084
2018-03-30 05:39:05,930 [salt.state       ][INFO    ][4459] Running state [br-floating] at time 05:39:05.930591
2018-03-30 05:39:05,931 [salt.state       ][INFO    ][4459] Executing state openvswitch_bridge.present for br-floating
2018-03-30 05:39:05,932 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl br-exists br-floating' in directory '/root'
2018-03-30 05:39:05,949 [salt.loaded.int.module.cmdmod][ERROR   ][4459] Command 'ovs-vsctl br-exists br-floating' failed with return code: 2
2018-03-30 05:39:05,949 [salt.loaded.int.module.cmdmod][ERROR   ][4459] retcode: 2
2018-03-30 05:39:05,950 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl --may-exist add-br br-floating' in directory '/root'
2018-03-30 05:39:06,065 [salt.state       ][INFO    ][4459] Made the following changes:
'br-floating' changed from 'Bridge br-floating does not exist.' to 'Bridge br-floating created'

2018-03-30 05:39:06,066 [salt.state       ][INFO    ][4459] Completed state [br-floating] at time 05:39:06.066091 duration_in_ms=135.5
2018-03-30 05:39:06,066 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces.u/ifcfg-br-floating] at time 05:39:06.066829
2018-03-30 05:39:06,067 [salt.state       ][INFO    ][4459] Executing state file.managed for /etc/network/interfaces.u/ifcfg-br-floating
2018-03-30 05:39:06,095 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/files/ovs_bridge'
2018-03-30 05:39:06,104 [salt.state       ][INFO    ][4459] File changed:
New file
2018-03-30 05:39:06,104 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces.u/ifcfg-br-floating] at time 05:39:06.104463 duration_in_ms=37.634
2018-03-30 05:39:06,104 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces] at time 05:39:06.104811
2018-03-30 05:39:06,105 [salt.state       ][INFO    ][4459] Executing state file.prepend for /etc/network/interfaces
2018-03-30 05:39:06,110 [salt.state       ][INFO    ][4459] File changed:
--- 
+++ 
@@ -1,3 +1,6 @@
+source /etc/network/interfaces.d/*
+# Workaround for Upstream-Bug: https://github.com/saltstack/salt/issues/40262
+source /etc/network/interfaces.u/*
 auto lo
 iface lo inet loopback
 auto enp6s0.300

2018-03-30 05:39:06,110 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces] at time 05:39:06.110285 duration_in_ms=5.475
2018-03-30 05:39:06,114 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces] at time 05:39:06.114385
2018-03-30 05:39:06,114 [salt.state       ][INFO    ][4459] Executing state file.prepend for /etc/network/interfaces
2018-03-30 05:39:06,116 [salt.state       ][INFO    ][4459] File /etc/network/interfaces is in correct state
2018-03-30 05:39:06,116 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces] at time 05:39:06.116694 duration_in_ms=2.31
2018-03-30 05:39:06,118 [salt.state       ][INFO    ][4459] Running state [ifup br-floating] at time 05:39:06.118731
2018-03-30 05:39:06,119 [salt.state       ][INFO    ][4459] Executing state cmd.run for ifup br-floating
2018-03-30 05:39:06,119 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ip link show br-floating | grep -q '\<UP\>'' in directory '/root'
2018-03-30 05:39:06,137 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ifup br-floating' in directory '/root'
2018-03-30 05:39:06,605 [salt.state       ][INFO    ][4459] {'pid': 6753, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:39:06,606 [salt.state       ][INFO    ][4459] Completed state [ifup br-floating] at time 05:39:06.606341 duration_in_ms=487.61
2018-03-30 05:39:06,606 [salt.state       ][INFO    ][4459] Running state [ovs-vsctl --no-wait add-port br-floating enp8s0] at time 05:39:06.606810
2018-03-30 05:39:06,607 [salt.state       ][INFO    ][4459] Executing state cmd.run for ovs-vsctl --no-wait add-port br-floating enp8s0
2018-03-30 05:39:06,608 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl show | grep enp8s0' in directory '/root'
2018-03-30 05:39:06,623 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ovs-vsctl --no-wait add-port br-floating enp8s0' in directory '/root'
2018-03-30 05:39:06,637 [salt.state       ][INFO    ][4459] {'pid': 6947, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:39:06,637 [salt.state       ][INFO    ][4459] Completed state [ovs-vsctl --no-wait add-port br-floating enp8s0] at time 05:39:06.637520 duration_in_ms=30.71
2018-03-30 05:39:06,637 [salt.state       ][INFO    ][4459] Running state [br-floating] at time 05:39:06.637884
2018-03-30 05:39:06,638 [salt.state       ][INFO    ][4459] Executing state network.routes for br-floating
2018-03-30 05:39:06,650 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['systemctl', 'status', 'networking.service', '-n', '0'] in directory '/root'
2018-03-30 05:39:06,663 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['systemd-run', '--scope', 'systemctl', 'stop', 'networking.service'] in directory '/root'
2018-03-30 05:39:10,587 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['systemctl', 'is-enabled', 'networking.service'] in directory '/root'
2018-03-30 05:39:10,608 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'networking.service'] in directory '/root'
2018-03-30 05:39:11,883 [salt.state       ][INFO    ][4459] {'network_routes': 'Added interface br-floating routes.'}
2018-03-30 05:39:11,883 [salt.state       ][INFO    ][4459] Completed state [br-floating] at time 05:39:11.883864 duration_in_ms=5245.978
2018-03-30 05:39:11,884 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces] at time 05:39:11.884499
2018-03-30 05:39:11,885 [salt.state       ][INFO    ][4459] Executing state file.prepend for /etc/network/interfaces
2018-03-30 05:39:11,887 [salt.state       ][INFO    ][4459] File /etc/network/interfaces is in correct state
2018-03-30 05:39:11,888 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces] at time 05:39:11.888041 duration_in_ms=3.541
2018-03-30 05:39:11,888 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 05:39:11.888414
2018-03-30 05:39:11,888 [salt.state       ][INFO    ][4459] Executing state file.managed for /etc/network/interfaces.u/ifcfg-enp8s0
2018-03-30 05:39:11,913 [salt.fileclient  ][INFO    ][4459] Fetching file from saltenv 'base', ** done ** 'linux/files/ovs_port'
2018-03-30 05:39:11,929 [salt.state       ][INFO    ][4459] File changed:
New file
2018-03-30 05:39:11,930 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 05:39:11.930037 duration_in_ms=41.623
2018-03-30 05:39:11,930 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces] at time 05:39:11.930390
2018-03-30 05:39:11,930 [salt.state       ][INFO    ][4459] Executing state file.replace for /etc/network/interfaces
2018-03-30 05:39:11,932 [salt.state       ][INFO    ][4459] No changes needed to be made
2018-03-30 05:39:11,932 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces] at time 05:39:11.932810 duration_in_ms=2.42
2018-03-30 05:39:11,933 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces] at time 05:39:11.933146
2018-03-30 05:39:11,933 [salt.state       ][INFO    ][4459] Executing state file.replace for /etc/network/interfaces
2018-03-30 05:39:11,935 [salt.state       ][INFO    ][4459] No changes needed to be made
2018-03-30 05:39:11,935 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces] at time 05:39:11.935397 duration_in_ms=2.251
2018-03-30 05:39:11,939 [salt.state       ][INFO    ][4459] Running state [ifup enp8s0] at time 05:39:11.939322
2018-03-30 05:39:11,939 [salt.state       ][INFO    ][4459] Executing state cmd.run for ifup enp8s0
2018-03-30 05:39:11,940 [salt.loaded.int.module.cmdmod][INFO    ][4459] Executing command 'ip link show enp8s0 | grep -q '\<UP\>'' in directory '/root'
2018-03-30 05:39:11,958 [salt.state       ][INFO    ][4459] unless execution succeeded
2018-03-30 05:39:11,959 [salt.state       ][INFO    ][4459] Completed state [ifup enp8s0] at time 05:39:11.959308 duration_in_ms=19.985
2018-03-30 05:39:11,959 [salt.state       ][INFO    ][4459] Running state [/etc/network/interfaces] at time 05:39:11.959902
2018-03-30 05:39:11,960 [salt.state       ][INFO    ][4459] Executing state file.prepend for /etc/network/interfaces
2018-03-30 05:39:11,963 [salt.state       ][INFO    ][4459] File /etc/network/interfaces is in correct state
2018-03-30 05:39:11,963 [salt.state       ][INFO    ][4459] Completed state [/etc/network/interfaces] at time 05:39:11.963564 duration_in_ms=3.662
2018-03-30 05:39:11,964 [salt.state       ][INFO    ][4459] Running state [/etc/profile.d/proxy.sh] at time 05:39:11.963997
2018-03-30 05:39:11,964 [salt.state       ][INFO    ][4459] Executing state file.absent for /etc/profile.d/proxy.sh
2018-03-30 05:39:11,969 [salt.state       ][INFO    ][4459] File /etc/profile.d/proxy.sh is not present
2018-03-30 05:39:11,970 [salt.state       ][INFO    ][4459] Completed state [/etc/profile.d/proxy.sh] at time 05:39:11.969951 duration_in_ms=5.954
2018-03-30 05:39:11,970 [salt.state       ][INFO    ][4459] Running state [/etc/apt/apt.conf.d/95proxies] at time 05:39:11.970371
2018-03-30 05:39:11,970 [salt.state       ][INFO    ][4459] Executing state file.absent for /etc/apt/apt.conf.d/95proxies
2018-03-30 05:39:11,971 [salt.state       ][INFO    ][4459] File /etc/apt/apt.conf.d/95proxies is not present
2018-03-30 05:39:11,971 [salt.state       ][INFO    ][4459] Completed state [/etc/apt/apt.conf.d/95proxies] at time 05:39:11.971546 duration_in_ms=1.174
2018-03-30 05:39:11,976 [salt.minion      ][INFO    ][4459] Returning information for job: 20180330053843054518
2018-03-30 05:40:51,912 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command test.ping with jid 20180330054051950545
2018-03-30 05:40:51,931 [salt.minion      ][INFO    ][8042] Starting a new job with PID 8042
2018-03-30 05:40:51,992 [salt.minion      ][INFO    ][8042] Returning information for job: 20180330054051950545
2018-03-30 05:40:52,623 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command file.write with jid 20180330054052655096
2018-03-30 05:40:52,642 [salt.minion      ][INFO    ][8047] Starting a new job with PID 8047
2018-03-30 05:40:52,665 [salt.minion      ][INFO    ][8047] Returning information for job: 20180330054052655096
2018-03-30 05:40:53,296 [salt.minion      ][INFO    ][4336] User sudo_ubuntu Executing command system.reboot with jid 20180330054053332086
2018-03-30 05:40:53,314 [salt.minion      ][INFO    ][8052] Starting a new job with PID 8052
2018-03-30 05:40:53,323 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][8052] Executing command ['shutdown', '-r', 'now'] in directory '/root'
2018-03-30 05:40:53,609 [salt.utils.parsers][WARNING ][4336] Minion received a SIGTERM. Exiting.
2018-03-30 05:40:53,610 [salt.cli.daemons ][INFO    ][4336] Shutting down the Salt Minion
2018-03-30 05:43:02,626 [salt.cli.daemons ][INFO    ][2771] Setting up the Salt Minion "cmp001.mcp-pike-ovs-dpdk-ha.local"
2018-03-30 05:43:03,215 [salt.cli.daemons ][INFO    ][2771] Starting up the Salt Minion
2018-03-30 05:43:03,215 [salt.utils.event ][INFO    ][2771] Starting pull socket on /var/run/salt/minion/minion_event_8384fc4d7a_pull.ipc
2018-03-30 05:43:03,913 [salt.minion      ][INFO    ][2771] Creating minion process manager
2018-03-30 05:43:04,895 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][2771] Executing command ['date', '+%z'] in directory '/root'
2018-03-30 05:43:04,931 [salt.utils.schedule][INFO    ][2771] Updating job settings for scheduled job: __mine_interval
2018-03-30 05:43:04,944 [salt.minion      ][INFO    ][2771] Added mine.update to scheduler
2018-03-30 05:43:04,957 [salt.minion      ][INFO    ][2771] Minion is starting as user 'root'
2018-03-30 05:43:04,976 [salt.minion      ][INFO    ][2771] Minion is ready to receive requests!
2018-03-30 05:43:05,978 [salt.utils.schedule][INFO    ][2771] Running scheduled job: __mine_interval
2018-03-30 05:43:17,170 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command test.ping with jid 20180330054316704031
2018-03-30 05:43:17,178 [salt.minion      ][INFO    ][3110] Starting a new job with PID 3110
2018-03-30 05:43:17,303 [salt.minion      ][INFO    ][3110] Returning information for job: 20180330054316704031
2018-03-30 05:43:17,926 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command state.apply with jid 20180330054317464553
2018-03-30 05:43:17,933 [salt.minion      ][INFO    ][3115] Starting a new job with PID 3115
2018-03-30 05:43:20,401 [salt.state       ][INFO    ][3115] Loading fresh modules for state activity
2018-03-30 05:43:20,459 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/init.sls'
2018-03-30 05:43:20,488 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/init.sls'
2018-03-30 05:43:20,572 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/env.sls'
2018-03-30 05:43:20,632 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/profile.sls'
2018-03-30 05:43:20,696 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/repo.sls'
2018-03-30 05:43:20,805 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/package.sls'
2018-03-30 05:43:20,873 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/timezone.sls'
2018-03-30 05:43:20,949 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/kernel.sls'
2018-03-30 05:43:21,029 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/grub.sls'
2018-03-30 05:43:21,045 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/hugepages.sls'
2018-03-30 05:43:21,108 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/dpdk.sls'
2018-03-30 05:43:21,169 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/cpu.sls'
2018-03-30 05:43:21,277 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/sysfs.sls'
2018-03-30 05:43:21,339 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/locale.sls'
2018-03-30 05:43:21,397 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/user.sls'
2018-03-30 05:43:21,470 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/group.sls'
2018-03-30 05:43:21,541 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/limit.sls'
2018-03-30 05:43:21,602 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/service.sls'
2018-03-30 05:43:21,666 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/systemd.sls'
2018-03-30 05:43:21,731 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/system/apt.sls'
2018-03-30 05:43:22,316 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/storage/init.sls'
2018-03-30 05:43:22,391 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/storage/lvm.sls'
2018-03-30 05:43:22,497 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'ntp/init.sls'
2018-03-30 05:43:22,512 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'ntp/client.sls'
2018-03-30 05:43:22,549 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'ntp/server.sls'
2018-03-30 05:43:22,580 [salt.state       ][INFO    ][3115] Running state [/etc/environment] at time 05:43:22.580880
2018-03-30 05:43:22,581 [salt.state       ][INFO    ][3115] Executing state file.blockreplace for /etc/environment
2018-03-30 05:43:22,582 [salt.state       ][INFO    ][3115] File changed:
--- 
+++ 
@@ -1 +1,4 @@
 PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
+# SALT MANAGED VARIABLES - DO NOT EDIT - START
+#
+# SALT MANAGED VARIABLES - END

2018-03-30 05:43:22,582 [salt.state       ][INFO    ][3115] Completed state [/etc/environment] at time 05:43:22.582778 duration_in_ms=1.898
2018-03-30 05:43:22,582 [salt.state       ][INFO    ][3115] Running state [/etc/profile.d] at time 05:43:22.582957
2018-03-30 05:43:22,583 [salt.state       ][INFO    ][3115] Executing state file.directory for /etc/profile.d
2018-03-30 05:43:22,591 [salt.state       ][INFO    ][3115] Directory /etc/profile.d is in the correct state
2018-03-30 05:43:22,591 [salt.state       ][INFO    ][3115] Completed state [/etc/profile.d] at time 05:43:22.591233 duration_in_ms=8.276
2018-03-30 05:43:23,490 [salt.state       ][INFO    ][3115] Running state [/etc/apt/apt.conf.d/99prefer_ipv4-salt] at time 05:43:23.490690
2018-03-30 05:43:23,491 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/apt/apt.conf.d/99prefer_ipv4-salt
2018-03-30 05:43:23,513 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/files/apt.conf'
2018-03-30 05:43:23,517 [salt.state       ][INFO    ][3115] File changed:
New file
2018-03-30 05:43:23,517 [salt.state       ][INFO    ][3115] Completed state [/etc/apt/apt.conf.d/99prefer_ipv4-salt] at time 05:43:23.517486 duration_in_ms=26.796
2018-03-30 05:43:23,518 [salt.state       ][INFO    ][3115] Running state [linux_repo_prereq_pkgs] at time 05:43:23.518205
2018-03-30 05:43:23,518 [salt.state       ][INFO    ][3115] Executing state pkg.installed for linux_repo_prereq_pkgs
2018-03-30 05:43:23,518 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:43:24,013 [salt.state       ][INFO    ][3115] All specified packages are already installed
2018-03-30 05:43:24,013 [salt.state       ][INFO    ][3115] Completed state [linux_repo_prereq_pkgs] at time 05:43:24.013896 duration_in_ms=495.691
2018-03-30 05:43:24,014 [salt.state       ][INFO    ][3115] Running state [/etc/apt/apt.conf.d/99proxies-salt-glusterfs-ppa] at time 05:43:24.014198
2018-03-30 05:43:24,014 [salt.state       ][INFO    ][3115] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-glusterfs-ppa
2018-03-30 05:43:24,014 [salt.state       ][INFO    ][3115] File /etc/apt/apt.conf.d/99proxies-salt-glusterfs-ppa is not present
2018-03-30 05:43:24,015 [salt.state       ][INFO    ][3115] Completed state [/etc/apt/apt.conf.d/99proxies-salt-glusterfs-ppa] at time 05:43:24.014981 duration_in_ms=0.783
2018-03-30 05:43:24,015 [salt.state       ][INFO    ][3115] Running state [/etc/apt/preferences.d/glusterfs-ppa] at time 05:43:24.015197
2018-03-30 05:43:24,015 [salt.state       ][INFO    ][3115] Executing state file.absent for /etc/apt/preferences.d/glusterfs-ppa
2018-03-30 05:43:24,015 [salt.state       ][INFO    ][3115] File /etc/apt/preferences.d/glusterfs-ppa is not present
2018-03-30 05:43:24,015 [salt.state       ][INFO    ][3115] Completed state [/etc/apt/preferences.d/glusterfs-ppa] at time 05:43:24.015836 duration_in_ms=0.638
2018-03-30 05:43:24,020 [salt.state       ][INFO    ][3115] Running state [apt-key adv --keyserver keyserver.ubuntu.com --recv 3FE869A9] at time 05:43:24.020760
2018-03-30 05:43:24,020 [salt.state       ][INFO    ][3115] Executing state cmd.run for apt-key adv --keyserver keyserver.ubuntu.com --recv 3FE869A9
2018-03-30 05:43:24,021 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'apt-key adv --keyserver keyserver.ubuntu.com --recv 3FE869A9' in directory '/root'
2018-03-30 05:43:24,385 [salt.state       ][INFO    ][3115] {'pid': 3186, 'retcode': 0, 'stderr': 'gpg: requesting key 3FE869A9 from hkp server keyserver.ubuntu.com\ngpg: key 3FE869A9: public key "Launchpad PPA for Gluster" imported\ngpg: Total number processed: 1\ngpg:               imported: 1  (RSA: 1)', 'stdout': 'Executing: /tmp/tmp.F0L0TgDG6x/gpg.1.sh --keyserver\nkeyserver.ubuntu.com\n--recv\n3FE869A9'}
2018-03-30 05:43:24,386 [salt.state       ][INFO    ][3115] Completed state [apt-key adv --keyserver keyserver.ubuntu.com --recv 3FE869A9] at time 05:43:24.386256 duration_in_ms=365.495
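Success of the `apt-key adv` state above is visible in the gpg summary on stderr ("Total number processed: 1 ... imported: 1"). A small sketch of extracting the imported-key count from that stderr text, for verifying key imports in scripts (illustrative only):

```python
import re

def gpg_imported_count(stderr):
    """Extract how many keys gpg reported as imported from apt-key stderr.
    Returns 0 if the 'imported:' summary line is absent (e.g. the key was
    already present, in which case gpg reports 'unchanged' instead)."""
    m = re.search(r"imported:\s*(\d+)", stderr)
    return int(m.group(1)) if m else 0

stderr = ('gpg: requesting key 3FE869A9 from hkp server keyserver.ubuntu.com\n'
          'gpg: key 3FE869A9: public key "Launchpad PPA for Gluster" imported\n'
          'gpg: Total number processed: 1\n'
          'gpg:               imported: 1  (RSA: 1)')
print(gpg_imported_count(stderr))  # → 1
```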
2018-03-30 05:43:24,393 [salt.state       ][INFO    ][3115] Running state [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main] at time 05:43:24.393548
2018-03-30 05:43:24,393 [salt.state       ][INFO    ][3115] Executing state pkgrepo.managed for deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main
2018-03-30 05:43:24,452 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 05:43:27,782 [salt.state       ][INFO    ][3115] {'repo': 'deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main'}
2018-03-30 05:43:27,782 [salt.state       ][INFO    ][3115] Completed state [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main] at time 05:43:27.782222 duration_in_ms=3388.674
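The `pkgrepo.managed` state above takes the full APT one-line source entry ("deb <uri> <dist> <components...>") as its name. A sketch of splitting such a line into its parts, as any tool consuming these entries must (hypothetical helper, not Salt code):

```python
def parse_deb_line(line):
    """Split an APT one-line source entry 'deb <uri> <dist> <comp...>'
    into its parts. No validation beyond the minimum field count."""
    parts = line.split()
    if len(parts) < 4:
        raise ValueError("not a complete deb line: %r" % line)
    return {"type": parts[0], "uri": parts[1],
            "dist": parts[2], "comps": parts[3:]}

repo = parse_deb_line(
    "deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main")
print(repo["dist"])   # → xenial
print(repo["comps"])  # → ['main']
```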
2018-03-30 05:43:27,782 [salt.state       ][INFO    ][3115] Running state [/etc/apt/apt.conf.d/99proxies-salt-uca] at time 05:43:27.782401
2018-03-30 05:43:27,782 [salt.state       ][INFO    ][3115] Executing state file.absent for /etc/apt/apt.conf.d/99proxies-salt-uca
2018-03-30 05:43:27,782 [salt.state       ][INFO    ][3115] File /etc/apt/apt.conf.d/99proxies-salt-uca is not present
2018-03-30 05:43:27,782 [salt.state       ][INFO    ][3115] Completed state [/etc/apt/apt.conf.d/99proxies-salt-uca] at time 05:43:27.782931 duration_in_ms=0.529
2018-03-30 05:43:27,783 [salt.state       ][INFO    ][3115] Running state [/etc/apt/preferences.d/uca] at time 05:43:27.783069
2018-03-30 05:43:27,783 [salt.state       ][INFO    ][3115] Executing state file.absent for /etc/apt/preferences.d/uca
2018-03-30 05:43:27,783 [salt.state       ][INFO    ][3115] File /etc/apt/preferences.d/uca is not present
2018-03-30 05:43:27,783 [salt.state       ][INFO    ][3115] Completed state [/etc/apt/preferences.d/uca] at time 05:43:27.783491 duration_in_ms=0.422
2018-03-30 05:43:27,783 [salt.state       ][INFO    ][3115] Running state [apt-key adv --keyserver keyserver.ubuntu.com --recv EC4926EA] at time 05:43:27.783626
2018-03-30 05:43:27,783 [salt.state       ][INFO    ][3115] Executing state cmd.run for apt-key adv --keyserver keyserver.ubuntu.com --recv EC4926EA
2018-03-30 05:43:27,784 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'apt-key adv --keyserver keyserver.ubuntu.com --recv EC4926EA' in directory '/root'
2018-03-30 05:43:28,031 [salt.state       ][INFO    ][3115] {'pid': 3613, 'retcode': 0, 'stderr': 'gpg: requesting key EC4926EA from hkp server keyserver.ubuntu.com\ngpg: key EC4926EA: public key "Canonical Cloud Archive Signing Key <ftpmaster@canonical.com>" imported\ngpg: Total number processed: 1\ngpg:               imported: 1  (RSA: 1)', 'stdout': 'Executing: /tmp/tmp.V9Y5GCV8O0/gpg.1.sh --keyserver\nkeyserver.ubuntu.com\n--recv\nEC4926EA'}
2018-03-30 05:43:28,031 [salt.state       ][INFO    ][3115] Completed state [apt-key adv --keyserver keyserver.ubuntu.com --recv EC4926EA] at time 05:43:28.031513 duration_in_ms=247.886
2018-03-30 05:43:28,032 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054327564461
2018-03-30 05:43:28,032 [salt.state       ][INFO    ][3115] Running state [deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/pike main] at time 05:43:28.032662
2018-03-30 05:43:28,032 [salt.state       ][INFO    ][3115] Executing state pkgrepo.managed for deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/pike main
2018-03-30 05:43:28,040 [salt.minion      ][INFO    ][3750] Starting a new job with PID 3750
2018-03-30 05:43:28,052 [salt.minion      ][INFO    ][3750] Returning information for job: 20180330054327564461
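Job IDs like `20180330054327564461` in the `saltutil.find_job` lines are timestamp-based: the submission time encoded as `YYYYMMDDHHMMSSffffff`. Decoding one back into a datetime is a one-liner, handy when correlating jids against log timestamps:

```python
from datetime import datetime

def jid_to_datetime(jid):
    """Decode a timestamp-style Salt jid (YYYYMMDDHHMMSSffffff, 20 digits)
    into the datetime at which the job was submitted."""
    return datetime.strptime(jid, "%Y%m%d%H%M%S%f")

print(jid_to_datetime("20180330054327564461"))  # → 2018-03-30 05:43:27.564461
```

Note how the decoded time (05:43:27.56) sits just before the find_job lines at 05:43:28 — the master polls for the job shortly after submitting it.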
2018-03-30 05:43:28,070 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 05:43:31,096 [salt.state       ][INFO    ][3115] {'repo': 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/pike main'}
2018-03-30 05:43:31,097 [salt.state       ][INFO    ][3115] Completed state [deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/pike main] at time 05:43:31.097046 duration_in_ms=3064.384
2018-03-30 05:43:31,097 [salt.state       ][INFO    ][3115] Running state [linux_extra_packages_latest] at time 05:43:31.097247
2018-03-30 05:43:31,097 [salt.state       ][INFO    ][3115] Executing state pkg.latest for linux_extra_packages_latest
2018-03-30 05:43:31,104 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['apt-cache', '-q', 'policy', 'python-pymysql'] in directory '/root'
2018-03-30 05:43:31,169 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 05:43:31,182 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-pymysql'] in directory '/root'
2018-03-30 05:43:33,922 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:43:33,943 [salt.state       ][INFO    ][3115] Made the following changes:
'python-pymysql' changed from '0.7.2-1ubuntu1' to '0.7.11-1~cloud0'

2018-03-30 05:43:33,964 [salt.state       ][INFO    ][3115] Loading fresh modules for state activity
2018-03-30 05:43:33,980 [salt.state       ][INFO    ][3115] Completed state [linux_extra_packages_latest] at time 05:43:33.980453 duration_in_ms=2883.206
2018-03-30 05:43:33,984 [salt.state       ][INFO    ][3115] Running state [UTC] at time 05:43:33.984124
2018-03-30 05:43:33,984 [salt.state       ][INFO    ][3115] Executing state timezone.system for UTC
2018-03-30 05:43:33,985 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['timedatectl'] in directory '/root'
2018-03-30 05:43:34,040 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['timedatectl'] in directory '/root'
2018-03-30 05:43:34,048 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'timedatectl set-timezone UTC' in directory '/root'
2018-03-30 05:43:34,058 [salt.state       ][INFO    ][3115] {'timezone': 'UTC'}
2018-03-30 05:43:34,059 [salt.state       ][INFO    ][3115] Completed state [UTC] at time 05:43:34.059114 duration_in_ms=74.99
2018-03-30 05:43:34,060 [salt.state       ][INFO    ][3115] Running state [/etc/default/grub.d] at time 05:43:34.060501
2018-03-30 05:43:34,060 [salt.state       ][INFO    ][3115] Executing state file.directory for /etc/default/grub.d
2018-03-30 05:43:34,062 [salt.state       ][INFO    ][3115] Directory /etc/default/grub.d is in the correct state
2018-03-30 05:43:34,063 [salt.state       ][INFO    ][3115] Completed state [/etc/default/grub.d] at time 05:43:34.063083 duration_in_ms=2.582
2018-03-30 05:43:34,066 [salt.state       ][INFO    ][3115] Running state [/etc/default/grub.d/99-custom-settings.cfg] at time 05:43:34.066293
2018-03-30 05:43:34,066 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/default/grub.d/99-custom-settings.cfg
2018-03-30 05:43:34,082 [salt.state       ][INFO    ][3115] File changed:
New file
2018-03-30 05:43:34,082 [salt.state       ][INFO    ][3115] Completed state [/etc/default/grub.d/99-custom-settings.cfg] at time 05:43:34.082883 duration_in_ms=16.589
2018-03-30 05:43:34,083 [salt.state       ][INFO    ][3115] Running state [/etc/default/grub.d/90-hugepages.cfg] at time 05:43:34.083564
2018-03-30 05:43:34,083 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/default/grub.d/90-hugepages.cfg
2018-03-30 05:43:34,106 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/files/grub_hugepages'
2018-03-30 05:43:34,157 [salt.state       ][INFO    ][3115] File changed:
New file
2018-03-30 05:43:34,157 [salt.state       ][INFO    ][3115] Completed state [/etc/default/grub.d/90-hugepages.cfg] at time 05:43:34.157614 duration_in_ms=74.049
2018-03-30 05:43:34,158 [salt.state       ][INFO    ][3115] Running state [update-grub] at time 05:43:34.158679
2018-03-30 05:43:34,158 [salt.state       ][INFO    ][3115] Executing state cmd.wait for update-grub
2018-03-30 05:43:34,159 [salt.state       ][INFO    ][3115] No changes made for update-grub
2018-03-30 05:43:34,159 [salt.state       ][INFO    ][3115] Completed state [update-grub] at time 05:43:34.159196 duration_in_ms=0.517
2018-03-30 05:43:34,159 [salt.state       ][INFO    ][3115] Running state [update-grub] at time 05:43:34.159330
2018-03-30 05:43:34,159 [salt.state       ][INFO    ][3115] Executing state cmd.mod_watch for update-grub
2018-03-30 05:43:34,160 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'update-grub' in directory '/root'
2018-03-30 05:43:34,878 [salt.state       ][INFO    ][3115] {'pid': 4234, 'retcode': 0, 'stderr': 'Generating grub configuration file ...\nFound linux image: /boot/vmlinuz-4.13.0-37-generic\nFound initrd image: /boot/initrd.img-4.13.0-37-generic\ndone', 'stdout': ''}
2018-03-30 05:43:34,878 [salt.state       ][INFO    ][3115] Completed state [update-grub] at time 05:43:34.878775 duration_in_ms=719.445
2018-03-30 05:43:34,884 [salt.state       ][INFO    ][3115] Running state [nf_conntrack] at time 05:43:34.884324
2018-03-30 05:43:34,884 [salt.state       ][INFO    ][3115] Executing state kmod.present for nf_conntrack
2018-03-30 05:43:34,884 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'lsmod' in directory '/root'
2018-03-30 05:43:35,158 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'lsmod' in directory '/root'
2018-03-30 05:43:35,166 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'modprobe nf_conntrack' in directory '/root'
2018-03-30 05:43:35,172 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'lsmod' in directory '/root'
2018-03-30 05:43:35,214 [salt.state       ][INFO    ][3115] {'nf_conntrack': 'loaded'}
2018-03-30 05:43:35,215 [salt.state       ][INFO    ][3115] Completed state [nf_conntrack] at time 05:43:35.215049 duration_in_ms=330.725
2018-03-30 05:43:35,220 [salt.state       ][INFO    ][3115] Running state [kernel.panic] at time 05:43:35.220302
2018-03-30 05:43:35,220 [salt.state       ][INFO    ][3115] Executing state sysctl.present for kernel.panic
2018-03-30 05:43:35,220 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,257 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w kernel.panic="60"' in directory '/root'
2018-03-30 05:43:35,265 [salt.state       ][INFO    ][3115] {'kernel.panic': 60}
2018-03-30 05:43:35,265 [salt.state       ][INFO    ][3115] Completed state [kernel.panic] at time 05:43:35.265241 duration_in_ms=44.939
2018-03-30 05:43:35,265 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_keepalive_probes] at time 05:43:35.265541
2018-03-30 05:43:35,265 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_keepalive_probes
2018-03-30 05:43:35,266 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,293 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_keepalive_probes="8"' in directory '/root'
2018-03-30 05:43:35,300 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_keepalive_probes': 8}
2018-03-30 05:43:35,300 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_keepalive_probes] at time 05:43:35.300756 duration_in_ms=35.215
2018-03-30 05:43:35,301 [salt.state       ][INFO    ][3115] Running state [fs.file-max] at time 05:43:35.301038
2018-03-30 05:43:35,301 [salt.state       ][INFO    ][3115] Executing state sysctl.present for fs.file-max
2018-03-30 05:43:35,301 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,327 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w fs.file-max="124165"' in directory '/root'
2018-03-30 05:43:35,334 [salt.state       ][INFO    ][3115] {'fs.file-max': 124165}
2018-03-30 05:43:35,334 [salt.state       ][INFO    ][3115] Completed state [fs.file-max] at time 05:43:35.334896 duration_in_ms=33.858
2018-03-30 05:43:35,335 [salt.state       ][INFO    ][3115] Running state [net.core.somaxconn] at time 05:43:35.335188
2018-03-30 05:43:35,335 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.core.somaxconn
2018-03-30 05:43:35,335 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,361 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.core.somaxconn="4096"' in directory '/root'
2018-03-30 05:43:35,368 [salt.state       ][INFO    ][3115] {'net.core.somaxconn': 4096}
2018-03-30 05:43:35,368 [salt.state       ][INFO    ][3115] Completed state [net.core.somaxconn] at time 05:43:35.368847 duration_in_ms=33.659
2018-03-30 05:43:35,369 [salt.state       ][INFO    ][3115] Running state [vm.dirty_ratio] at time 05:43:35.369060
2018-03-30 05:43:35,369 [salt.state       ][INFO    ][3115] Executing state sysctl.present for vm.dirty_ratio
2018-03-30 05:43:35,369 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,396 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w vm.dirty_ratio="10"' in directory '/root'
2018-03-30 05:43:35,403 [salt.state       ][INFO    ][3115] {'vm.dirty_ratio': 10}
2018-03-30 05:43:35,404 [salt.state       ][INFO    ][3115] Completed state [vm.dirty_ratio] at time 05:43:35.404098 duration_in_ms=35.038
2018-03-30 05:43:35,404 [salt.state       ][INFO    ][3115] Running state [vm.dirty_background_ratio] at time 05:43:35.404290
2018-03-30 05:43:35,404 [salt.state       ][INFO    ][3115] Executing state sysctl.present for vm.dirty_background_ratio
2018-03-30 05:43:35,404 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,429 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w vm.dirty_background_ratio="5"' in directory '/root'
2018-03-30 05:43:35,436 [salt.state       ][INFO    ][3115] {'vm.dirty_background_ratio': 5}
2018-03-30 05:43:35,436 [salt.state       ][INFO    ][3115] Completed state [vm.dirty_background_ratio] at time 05:43:35.436787 duration_in_ms=32.497
2018-03-30 05:43:35,437 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_congestion_control] at time 05:43:35.436981
2018-03-30 05:43:35,437 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_congestion_control
2018-03-30 05:43:35,437 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,463 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_congestion_control="yeah"' in directory '/root'
2018-03-30 05:43:35,473 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_congestion_control': 'yeah'}
2018-03-30 05:43:35,473 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_congestion_control] at time 05:43:35.473457 duration_in_ms=36.476
2018-03-30 05:43:35,473 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_max_syn_backlog] at time 05:43:35.473652
2018-03-30 05:43:35,473 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_max_syn_backlog
2018-03-30 05:43:35,474 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,499 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_max_syn_backlog="8192"' in directory '/root'
2018-03-30 05:43:35,506 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_max_syn_backlog': 8192}
2018-03-30 05:43:35,506 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_max_syn_backlog] at time 05:43:35.506340 duration_in_ms=32.687
2018-03-30 05:43:35,506 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_tw_reuse] at time 05:43:35.506547
2018-03-30 05:43:35,506 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_tw_reuse
2018-03-30 05:43:35,507 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,532 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_tw_reuse="1"' in directory '/root'
2018-03-30 05:43:35,538 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_tw_reuse': 1}
2018-03-30 05:43:35,539 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_tw_reuse] at time 05:43:35.539127 duration_in_ms=32.58
2018-03-30 05:43:35,539 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_retries2] at time 05:43:35.539313
2018-03-30 05:43:35,539 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_retries2
2018-03-30 05:43:35,539 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,564 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_retries2="5"' in directory '/root'
2018-03-30 05:43:35,571 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_retries2': 5}
2018-03-30 05:43:35,571 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_retries2] at time 05:43:35.571653 duration_in_ms=32.34
2018-03-30 05:43:35,571 [salt.state       ][INFO    ][3115] Running state [net.core.netdev_max_backlog] at time 05:43:35.571849
2018-03-30 05:43:35,572 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.core.netdev_max_backlog
2018-03-30 05:43:35,572 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,597 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.core.netdev_max_backlog="261144"' in directory '/root'
2018-03-30 05:43:35,603 [salt.state       ][INFO    ][3115] {'net.core.netdev_max_backlog': 261144}
2018-03-30 05:43:35,604 [salt.state       ][INFO    ][3115] Completed state [net.core.netdev_max_backlog] at time 05:43:35.604083 duration_in_ms=32.234
2018-03-30 05:43:35,604 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_slow_start_after_idle] at time 05:43:35.604289
2018-03-30 05:43:35,604 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_slow_start_after_idle
2018-03-30 05:43:35,604 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,630 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_slow_start_after_idle="0"' in directory '/root'
2018-03-30 05:43:35,637 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_slow_start_after_idle': 0}
2018-03-30 05:43:35,637 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_slow_start_after_idle] at time 05:43:35.637663 duration_in_ms=33.372
2018-03-30 05:43:35,637 [salt.state       ][INFO    ][3115] Running state [vm.swappiness] at time 05:43:35.637946
2018-03-30 05:43:35,638 [salt.state       ][INFO    ][3115] Executing state sysctl.present for vm.swappiness
2018-03-30 05:43:35,638 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,664 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w vm.swappiness="10"' in directory '/root'
2018-03-30 05:43:35,671 [salt.state       ][INFO    ][3115] {'vm.swappiness': 10}
2018-03-30 05:43:35,671 [salt.state       ][INFO    ][3115] Completed state [vm.swappiness] at time 05:43:35.671225 duration_in_ms=33.278
2018-03-30 05:43:35,671 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_keepalive_intvl] at time 05:43:35.671449
2018-03-30 05:43:35,671 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_keepalive_intvl
2018-03-30 05:43:35,672 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,699 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_keepalive_intvl="3"' in directory '/root'
2018-03-30 05:43:35,706 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_keepalive_intvl': 3}
2018-03-30 05:43:35,706 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_keepalive_intvl] at time 05:43:35.706473 duration_in_ms=35.024
2018-03-30 05:43:35,706 [salt.state       ][INFO    ][3115] Running state [net.ipv4.neigh.default.gc_thresh1] at time 05:43:35.706689
2018-03-30 05:43:35,706 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.neigh.default.gc_thresh1
2018-03-30 05:43:35,707 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,733 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.neigh.default.gc_thresh1="4096"' in directory '/root'
2018-03-30 05:43:35,739 [salt.state       ][INFO    ][3115] {'net.ipv4.neigh.default.gc_thresh1': 4096}
2018-03-30 05:43:35,739 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.neigh.default.gc_thresh1] at time 05:43:35.739864 duration_in_ms=33.174
2018-03-30 05:43:35,740 [salt.state       ][INFO    ][3115] Running state [net.ipv4.neigh.default.gc_thresh2] at time 05:43:35.740070
2018-03-30 05:43:35,740 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.neigh.default.gc_thresh2
2018-03-30 05:43:35,740 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,765 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.neigh.default.gc_thresh2="8192"' in directory '/root'
2018-03-30 05:43:35,773 [salt.state       ][INFO    ][3115] {'net.ipv4.neigh.default.gc_thresh2': 8192}
2018-03-30 05:43:35,773 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.neigh.default.gc_thresh2] at time 05:43:35.773225 duration_in_ms=33.155
2018-03-30 05:43:35,773 [salt.state       ][INFO    ][3115] Running state [net.ipv4.neigh.default.gc_thresh3] at time 05:43:35.773429
2018-03-30 05:43:35,773 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.neigh.default.gc_thresh3
2018-03-30 05:43:35,774 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,799 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.neigh.default.gc_thresh3="16384"' in directory '/root'
2018-03-30 05:43:35,806 [salt.state       ][INFO    ][3115] {'net.ipv4.neigh.default.gc_thresh3': 16384}
2018-03-30 05:43:35,806 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.neigh.default.gc_thresh3] at time 05:43:35.806700 duration_in_ms=33.27
2018-03-30 05:43:35,806 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_fin_timeout] at time 05:43:35.806939
2018-03-30 05:43:35,807 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_fin_timeout
2018-03-30 05:43:35,807 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,833 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_fin_timeout="30"' in directory '/root'
2018-03-30 05:43:35,840 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_fin_timeout': 30}
2018-03-30 05:43:35,840 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_fin_timeout] at time 05:43:35.840724 duration_in_ms=33.786
2018-03-30 05:43:35,840 [salt.state       ][INFO    ][3115] Running state [net.ipv4.tcp_keepalive_time] at time 05:43:35.840938
2018-03-30 05:43:35,841 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.ipv4.tcp_keepalive_time
2018-03-30 05:43:35,841 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,867 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.ipv4.tcp_keepalive_time="30"' in directory '/root'
2018-03-30 05:43:35,874 [salt.state       ][INFO    ][3115] {'net.ipv4.tcp_keepalive_time': 30}
2018-03-30 05:43:35,874 [salt.state       ][INFO    ][3115] Completed state [net.ipv4.tcp_keepalive_time] at time 05:43:35.874414 duration_in_ms=33.477
2018-03-30 05:43:35,874 [salt.state       ][INFO    ][3115] Running state [net.nf_conntrack_max] at time 05:43:35.874633
2018-03-30 05:43:35,874 [salt.state       ][INFO    ][3115] Executing state sysctl.present for net.nf_conntrack_max
2018-03-30 05:43:35,875 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,901 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w net.nf_conntrack_max="1048576"' in directory '/root'
2018-03-30 05:43:35,907 [salt.state       ][INFO    ][3115] {'net.nf_conntrack_max': 1048576}
2018-03-30 05:43:35,907 [salt.state       ][INFO    ][3115] Completed state [net.nf_conntrack_max] at time 05:43:35.907355 duration_in_ms=32.722
2018-03-30 05:43:35,907 [salt.state       ][INFO    ][3115] Running state [fs.inotify.max_user_instances] at time 05:43:35.907568
2018-03-30 05:43:35,907 [salt.state       ][INFO    ][3115] Executing state sysctl.present for fs.inotify.max_user_instances
2018-03-30 05:43:35,908 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -a' in directory '/root'
2018-03-30 05:43:35,933 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl -w fs.inotify.max_user_instances="4096"' in directory '/root'
2018-03-30 05:43:35,939 [salt.state       ][INFO    ][3115] {'fs.inotify.max_user_instances': 4096}
2018-03-30 05:43:35,939 [salt.state       ][INFO    ][3115] Completed state [fs.inotify.max_user_instances] at time 05:43:35.939812 duration_in_ms=32.243
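Each `sysctl.present` state in the long run above follows the same pattern: read the current value, and only issue `sysctl -w` when it differs from the target. A minimal sketch of that idempotent flow with the I/O injected so it can be exercised without root (hypothetical helpers, not Salt internals; note that keys containing literal dots, such as per-interface VLAN entries, would need escaping this sketch omits):

```python
def sysctl_path(name):
    """Map a sysctl key to its /proc/sys path: dots become slashes,
    e.g. net.ipv4.tcp_tw_reuse -> /proc/sys/net/ipv4/tcp_tw_reuse."""
    return "/proc/sys/" + name.replace(".", "/")

def assign_if_needed(name, value, read, write):
    """Idempotent sysctl assignment. `read(path)` returns the current value,
    `write(path, value)` applies a new one. Returns True when a write
    actually happened, False when the value was already correct."""
    if read(sysctl_path(name)).strip() == str(value):
        return False
    write(sysctl_path(name), str(value))
    return True

print(sysctl_path("net.ipv4.tcp_tw_reuse"))  # → /proc/sys/net/ipv4/tcp_tw_reuse
```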
2018-03-30 05:43:35,945 [salt.state       ][INFO    ][3115] Running state [/mnt/hugepages_2M] at time 05:43:35.944997
2018-03-30 05:43:35,945 [salt.state       ][INFO    ][3115] Executing state mount.mounted for /mnt/hugepages_2M
2018-03-30 05:43:35,949 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'mount -l' in directory '/root'
2018-03-30 05:43:35,979 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'blkid' in directory '/root'
2018-03-30 05:43:36,001 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'mount -o mode=775,pagesize=2M -t hugetlbfs Hugetlbfs-kvm /mnt/hugepages_2M ' in directory '/root'
2018-03-30 05:43:36,010 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'mount -l' in directory '/root'
2018-03-30 05:43:36,019 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'blkid' in directory '/root'
2018-03-30 05:43:36,034 [salt.state       ][INFO    ][3115] {'mount': True, 'persist': 'new'}
2018-03-30 05:43:36,034 [salt.state       ][INFO    ][3115] Completed state [/mnt/hugepages_2M] at time 05:43:36.034527 duration_in_ms=89.528
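The `'persist': 'new'` in the mount result above means the state also wrote a new `/etc/fstab` entry for the hugetlbfs mount. A sketch of composing such an entry from the same device, mountpoint, fstype, and options seen in the `mount` command (the exact field spacing Salt emits may differ; this just follows the fstab(5) six-field layout):

```python
def fstab_entry(device, mountpoint, fstype, opts, dump=0, passno=0):
    """Compose an /etc/fstab line: device, mountpoint, fstype,
    comma-joined options, then the dump and fsck-pass fields."""
    return "\t".join([device, mountpoint, fstype, ",".join(opts),
                      str(dump), str(passno)])

print(fstab_entry("Hugetlbfs-kvm", "/mnt/hugepages_2M",
                  "hugetlbfs", ["mode=775", "pagesize=2M"]))
```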
2018-03-30 05:43:36,034 [salt.state       ][INFO    ][3115] Running state [sysctl vm.nr_hugepages=8192] at time 05:43:36.034890
2018-03-30 05:43:36,035 [salt.state       ][INFO    ][3115] Executing state cmd.run for sysctl vm.nr_hugepages=8192
2018-03-30 05:43:36,035 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl vm.nr_hugepages | grep -qE '8192'' in directory '/root'
2018-03-30 05:43:36,044 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'sysctl vm.nr_hugepages=8192' in directory '/root'
2018-03-30 05:43:36,092 [salt.state       ][INFO    ][3115] {'pid': 4714, 'retcode': 0, 'stderr': '', 'stdout': 'vm.nr_hugepages = 8192'}
2018-03-30 05:43:36,092 [salt.state       ][INFO    ][3115] Completed state [sysctl vm.nr_hugepages=8192] at time 05:43:36.092671 duration_in_ms=57.781
2018-03-30 05:43:36,098 [salt.state       ][INFO    ][3115] Running state [linux_sysfs_package] at time 05:43:36.098580
2018-03-30 05:43:36,098 [salt.state       ][INFO    ][3115] Executing state pkg.installed for linux_sysfs_package
2018-03-30 05:43:36,319 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['apt-cache', '-q', 'policy', 'sysfsutils'] in directory '/root'
2018-03-30 05:43:36,356 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 05:43:38,124 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 05:43:38,138 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'sysfsutils'] in directory '/root'
2018-03-30 05:43:38,203 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054337737605
2018-03-30 05:43:38,210 [salt.minion      ][INFO    ][5066] Starting a new job with PID 5066
2018-03-30 05:43:38,221 [salt.minion      ][INFO    ][5066] Returning information for job: 20180330054337737605
2018-03-30 05:43:42,315 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:43:42,337 [salt.state       ][INFO    ][3115] Made the following changes:
'libsysfs2' changed from 'absent' to '2.1.0+repack-4'
'sysfsutils' changed from 'absent' to '2.1.0+repack-4'

2018-03-30 05:43:42,348 [salt.state       ][INFO    ][3115] Loading fresh modules for state activity
2018-03-30 05:43:42,365 [salt.state       ][INFO    ][3115] Completed state [linux_sysfs_package] at time 05:43:42.365423 duration_in_ms=6266.839
2018-03-30 05:43:42,368 [salt.state       ][INFO    ][3115] Running state [/etc/sysfs.d] at time 05:43:42.368488
2018-03-30 05:43:42,368 [salt.state       ][INFO    ][3115] Executing state file.directory for /etc/sysfs.d
2018-03-30 05:43:42,373 [salt.state       ][INFO    ][3115] Directory /etc/sysfs.d is in the correct state
2018-03-30 05:43:42,373 [salt.state       ][INFO    ][3115] Completed state [/etc/sysfs.d] at time 05:43:42.373325 duration_in_ms=4.837
2018-03-30 05:43:42,502 [salt.state       ][INFO    ][3115] Running state [ondemand] at time 05:43:42.502189
2018-03-30 05:43:42,502 [salt.state       ][INFO    ][3115] Executing state service.dead for ondemand
2018-03-30 05:43:42,503 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'status', 'ondemand.service', '-n', '0'] in directory '/root'
2018-03-30 05:43:42,512 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'ondemand.service'] in directory '/root'
2018-03-30 05:43:42,520 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'ondemand.service'] in directory '/root'
2018-03-30 05:43:42,528 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'stop', 'ondemand.service'] in directory '/root'
2018-03-30 05:43:42,556 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'ondemand.service'] in directory '/root'
2018-03-30 05:43:42,564 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'ondemand.service'] in directory '/root'
2018-03-30 05:43:42,573 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'ondemand.service'] in directory '/root'
2018-03-30 05:43:42,583 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', '/usr/sbin/update-rc.d', '-f', 'ondemand', 'remove'] in directory '/root'
2018-03-30 05:43:42,681 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'ondemand.service'] in directory '/root'
2018-03-30 05:43:42,692 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'runlevel' in directory '/root'
2018-03-30 05:43:42,697 [salt.state       ][INFO    ][3115] {'ondemand': True}
2018-03-30 05:43:42,697 [salt.state       ][INFO    ][3115] Completed state [ondemand] at time 05:43:42.697916 duration_in_ms=195.726
2018-03-30 05:43:42,698 [salt.state       ][INFO    ][3115] Running state [/etc/sysfs.d/governor.conf] at time 05:43:42.698969
2018-03-30 05:43:42,699 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/sysfs.d/governor.conf
2018-03-30 05:43:42,721 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/files/governor.conf.jinja'
2018-03-30 05:43:42,726 [salt.state       ][INFO    ][3115] File changed:
New file
2018-03-30 05:43:42,726 [salt.state       ][INFO    ][3115] Completed state [/etc/sysfs.d/governor.conf] at time 05:43:42.726228 duration_in_ms=27.258
2018-03-30 05:43:42,736 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.736249
2018-03-30 05:43:42,736 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,737 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,737 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.737561 duration_in_ms=1.312
2018-03-30 05:43:42,737 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.737720
2018-03-30 05:43:42,737 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,738 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,738 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.738327 duration_in_ms=0.608
2018-03-30 05:43:42,738 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.738482
2018-03-30 05:43:42,738 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,738 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,739 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.739099 duration_in_ms=0.617
2018-03-30 05:43:42,739 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.739245
2018-03-30 05:43:42,739 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,739 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,739 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.739860 duration_in_ms=0.614
2018-03-30 05:43:42,740 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.740007
2018-03-30 05:43:42,740 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,740 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,740 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.740597 duration_in_ms=0.59
2018-03-30 05:43:42,740 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.740743
2018-03-30 05:43:42,740 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,741 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,741 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.741340 duration_in_ms=0.597
2018-03-30 05:43:42,741 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.741486
2018-03-30 05:43:42,741 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,741 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,742 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.742064 duration_in_ms=0.578
2018-03-30 05:43:42,742 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.742210
2018-03-30 05:43:42,742 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,742 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,742 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.742802 duration_in_ms=0.592
2018-03-30 05:43:42,742 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.742963
2018-03-30 05:43:42,743 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,743 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,743 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.743564 duration_in_ms=0.6
2018-03-30 05:43:42,743 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.743711
2018-03-30 05:43:42,743 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,744 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,744 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.744322 duration_in_ms=0.611
2018-03-30 05:43:42,744 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.744470
2018-03-30 05:43:42,744 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,747 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,747 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.747455 duration_in_ms=2.985
2018-03-30 05:43:42,747 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.747610
2018-03-30 05:43:42,747 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,748 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,748 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.748236 duration_in_ms=0.626
2018-03-30 05:43:42,748 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.748386
2018-03-30 05:43:42,748 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,748 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,749 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.748995 duration_in_ms=0.61
2018-03-30 05:43:42,749 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.749140
2018-03-30 05:43:42,749 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,749 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,749 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.749732 duration_in_ms=0.592
2018-03-30 05:43:42,749 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.749877
2018-03-30 05:43:42,750 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,750 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,750 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.750453 duration_in_ms=0.576
2018-03-30 05:43:42,750 [salt.state       ][INFO    ][3115] Running state [sysfs.write] at time 05:43:42.750597
2018-03-30 05:43:42,750 [salt.state       ][INFO    ][3115] Executing state module.run for sysfs.write
2018-03-30 05:43:42,751 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:42,751 [salt.state       ][INFO    ][3115] Completed state [sysfs.write] at time 05:43:42.751183 duration_in_ms=0.587
2018-03-30 05:43:42,751 [salt.state       ][INFO    ][3115] Running state [cs_CZ.UTF-8] at time 05:43:42.751652
2018-03-30 05:43:42,751 [salt.state       ][INFO    ][3115] Executing state locale.present for cs_CZ.UTF-8
2018-03-30 05:43:42,752 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'locale -a' in directory '/root'
2018-03-30 05:43:42,776 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['locale-gen', 'cs_CZ.utf8'] in directory '/root'
2018-03-30 05:43:43,348 [salt.state       ][INFO    ][3115] {'locale': 'cs_CZ.UTF-8'}
2018-03-30 05:43:43,348 [salt.state       ][INFO    ][3115] Completed state [cs_CZ.UTF-8] at time 05:43:43.348737 duration_in_ms=597.084
2018-03-30 05:43:43,349 [salt.state       ][INFO    ][3115] Running state [en_US.UTF-8] at time 05:43:43.348997
2018-03-30 05:43:43,349 [salt.state       ][INFO    ][3115] Executing state locale.present for en_US.UTF-8
2018-03-30 05:43:43,349 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'locale -a' in directory '/root'
2018-03-30 05:43:43,357 [salt.state       ][INFO    ][3115] Locale en_US.UTF-8 is already present
2018-03-30 05:43:43,357 [salt.state       ][INFO    ][3115] Completed state [en_US.UTF-8] at time 05:43:43.357227 duration_in_ms=8.231
2018-03-30 05:43:43,358 [salt.state       ][INFO    ][3115] Running state [en_US.UTF-8] at time 05:43:43.358420
2018-03-30 05:43:43,358 [salt.state       ][INFO    ][3115] Executing state locale.system for en_US.UTF-8
2018-03-30 05:43:43,359 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'localectl' in directory '/root'
2018-03-30 05:43:43,420 [salt.state       ][INFO    ][3115] System locale en_US.UTF-8 already set
2018-03-30 05:43:43,420 [salt.state       ][INFO    ][3115] Completed state [en_US.UTF-8] at time 05:43:43.420504 duration_in_ms=62.084
2018-03-30 05:43:43,433 [salt.state       ][INFO    ][3115] Running state [root] at time 05:43:43.433566
2018-03-30 05:43:43,433 [salt.state       ][INFO    ][3115] Executing state user.present for root
2018-03-30 05:43:43,434 [salt.state       ][INFO    ][3115] User root is present and up to date
2018-03-30 05:43:43,434 [salt.state       ][INFO    ][3115] Completed state [root] at time 05:43:43.434670 duration_in_ms=1.104
2018-03-30 05:43:43,435 [salt.state       ][INFO    ][3115] Running state [/root] at time 05:43:43.435617
2018-03-30 05:43:43,435 [salt.state       ][INFO    ][3115] Executing state file.directory for /root
2018-03-30 05:43:43,436 [salt.state       ][INFO    ][3115] Directory /root is in the correct state
2018-03-30 05:43:43,436 [salt.state       ][INFO    ][3115] Completed state [/root] at time 05:43:43.436302 duration_in_ms=0.685
2018-03-30 05:43:43,436 [salt.state       ][INFO    ][3115] Running state [/etc/sudoers.d/90-salt-user-root] at time 05:43:43.436441
2018-03-30 05:43:43,436 [salt.state       ][INFO    ][3115] Executing state file.absent for /etc/sudoers.d/90-salt-user-root
2018-03-30 05:43:43,436 [salt.state       ][INFO    ][3115] File /etc/sudoers.d/90-salt-user-root is not present
2018-03-30 05:43:43,437 [salt.state       ][INFO    ][3115] Completed state [/etc/sudoers.d/90-salt-user-root] at time 05:43:43.437047 duration_in_ms=0.605
2018-03-30 05:43:43,437 [salt.state       ][INFO    ][3115] Running state [ubuntu] at time 05:43:43.437189
2018-03-30 05:43:43,437 [salt.state       ][INFO    ][3115] Executing state user.present for ubuntu
2018-03-30 05:43:43,441 [salt.state       ][INFO    ][3115] {'passwd': 'XXX-REDACTED-XXX'}
2018-03-30 05:43:43,441 [salt.state       ][INFO    ][3115] Completed state [ubuntu] at time 05:43:43.441317 duration_in_ms=4.128
2018-03-30 05:43:43,442 [salt.state       ][INFO    ][3115] Running state [/home/ubuntu] at time 05:43:43.442071
2018-03-30 05:43:43,442 [salt.state       ][INFO    ][3115] Executing state file.directory for /home/ubuntu
2018-03-30 05:43:43,442 [salt.state       ][INFO    ][3115] {'mode': '0700'}
2018-03-30 05:43:43,442 [salt.state       ][INFO    ][3115] Completed state [/home/ubuntu] at time 05:43:43.442851 duration_in_ms=0.779
2018-03-30 05:43:43,443 [salt.state       ][INFO    ][3115] Running state [/etc/sudoers.d/90-salt-user-ubuntu] at time 05:43:43.443506
2018-03-30 05:43:43,443 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/sudoers.d/90-salt-user-ubuntu
2018-03-30 05:43:43,460 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/files/sudoer'
2018-03-30 05:43:43,462 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command '/usr/sbin/visudo -c -f /tmp/tmpE5yegy' in directory '/root'
2018-03-30 05:43:43,499 [salt.state       ][INFO    ][3115] File changed:
New file
2018-03-30 05:43:43,500 [salt.state       ][INFO    ][3115] Completed state [/etc/sudoers.d/90-salt-user-ubuntu] at time 05:43:43.500120 duration_in_ms=56.614
2018-03-30 05:43:43,500 [salt.state       ][INFO    ][3115] Running state [/etc/security/limits.d/90-salt-default.conf] at time 05:43:43.500304
2018-03-30 05:43:43,500 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/security/limits.d/90-salt-default.conf
2018-03-30 05:43:43,521 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/files/limits.conf'
2018-03-30 05:43:43,568 [salt.state       ][INFO    ][3115] File changed:
New file
2018-03-30 05:43:43,569 [salt.state       ][INFO    ][3115] Completed state [/etc/security/limits.d/90-salt-default.conf] at time 05:43:43.569002 duration_in_ms=68.698
2018-03-30 05:43:43,569 [salt.state       ][INFO    ][3115] Running state [apt-daily.timer] at time 05:43:43.569147
2018-03-30 05:43:43,569 [salt.state       ][INFO    ][3115] Executing state service.dead for apt-daily.timer
2018-03-30 05:43:43,569 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'status', 'apt-daily.timer', '-n', '0'] in directory '/root'
2018-03-30 05:43:43,578 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'apt-daily.timer'] in directory '/root'
2018-03-30 05:43:43,584 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'apt-daily.timer'] in directory '/root'
2018-03-30 05:43:43,593 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'stop', 'apt-daily.timer'] in directory '/root'
2018-03-30 05:43:43,605 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'apt-daily.timer'] in directory '/root'
2018-03-30 05:43:43,614 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'apt-daily.timer'] in directory '/root'
2018-03-30 05:43:43,623 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'apt-daily.timer'] in directory '/root'
2018-03-30 05:43:43,633 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'disable', 'apt-daily.timer'] in directory '/root'
2018-03-30 05:43:43,700 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'apt-daily.timer'] in directory '/root'
2018-03-30 05:43:43,711 [salt.state       ][INFO    ][3115] {'apt-daily.timer': True}
2018-03-30 05:43:43,711 [salt.state       ][INFO    ][3115] Completed state [apt-daily.timer] at time 05:43:43.711881 duration_in_ms=142.734
2018-03-30 05:43:43,712 [salt.state       ][INFO    ][3115] Running state [/etc/systemd/system.conf.d/90-salt.conf] at time 05:43:43.712128
2018-03-30 05:43:43,712 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/systemd/system.conf.d/90-salt.conf
2018-03-30 05:43:43,738 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/files/systemd.conf'
2018-03-30 05:43:43,790 [salt.state       ][INFO    ][3115] File changed:
New file
2018-03-30 05:43:43,790 [salt.state       ][INFO    ][3115] Completed state [/etc/systemd/system.conf.d/90-salt.conf] at time 05:43:43.790915 duration_in_ms=78.787
2018-03-30 05:43:43,791 [salt.state       ][INFO    ][3115] Running state [service.systemctl_reload] at time 05:43:43.791897
2018-03-30 05:43:43,792 [salt.state       ][INFO    ][3115] Executing state module.wait for service.systemctl_reload
2018-03-30 05:43:43,792 [salt.state       ][INFO    ][3115] No changes made for service.systemctl_reload
2018-03-30 05:43:43,792 [salt.state       ][INFO    ][3115] Completed state [service.systemctl_reload] at time 05:43:43.792448 duration_in_ms=0.55
2018-03-30 05:43:43,792 [salt.state       ][INFO    ][3115] Running state [service.systemctl_reload] at time 05:43:43.792589
2018-03-30 05:43:43,792 [salt.state       ][INFO    ][3115] Executing state module.mod_watch for service.systemctl_reload
2018-03-30 05:43:43,793 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', '--system', 'daemon-reload'] in directory '/root'
2018-03-30 05:43:43,863 [salt.state       ][INFO    ][3115] {'ret': True}
2018-03-30 05:43:43,863 [salt.state       ][INFO    ][3115] Completed state [service.systemctl_reload] at time 05:43:43.863743 duration_in_ms=71.153
2018-03-30 05:43:43,864 [salt.state       ][INFO    ][3115] Running state [/etc/hostname] at time 05:43:43.864024
2018-03-30 05:43:43,864 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/hostname
2018-03-30 05:43:43,881 [salt.state       ][INFO    ][3115] File /etc/hostname is in the correct state
2018-03-30 05:43:43,881 [salt.state       ][INFO    ][3115] Completed state [/etc/hostname] at time 05:43:43.881706 duration_in_ms=17.681
2018-03-30 05:43:43,883 [salt.state       ][INFO    ][3115] Running state [hostname cmp001] at time 05:43:43.883289
2018-03-30 05:43:43,883 [salt.state       ][INFO    ][3115] Executing state cmd.wait for hostname cmp001
2018-03-30 05:43:43,883 [salt.state       ][INFO    ][3115] No changes made for hostname cmp001
2018-03-30 05:43:43,883 [salt.state       ][INFO    ][3115] Completed state [hostname cmp001] at time 05:43:43.883898 duration_in_ms=0.609
2018-03-30 05:43:43,884 [salt.state       ][INFO    ][3115] Running state [mdb02] at time 05:43:43.884506
2018-03-30 05:43:43,884 [salt.state       ][INFO    ][3115] Executing state host.present for mdb02
2018-03-30 05:43:43,885 [salt.state       ][INFO    ][3115] Host mdb02 (10.167.4.33) already present
2018-03-30 05:43:43,885 [salt.state       ][INFO    ][3115] Completed state [mdb02] at time 05:43:43.885313 duration_in_ms=0.808
2018-03-30 05:43:43,885 [salt.state       ][INFO    ][3115] Running state [mdb02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.885503
2018-03-30 05:43:43,885 [salt.state       ][INFO    ][3115] Executing state host.present for mdb02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,886 [salt.state       ][INFO    ][3115] Host mdb02.mcp-pike-ovs-dpdk-ha.local (10.167.4.33) already present
2018-03-30 05:43:43,886 [salt.state       ][INFO    ][3115] Completed state [mdb02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.886221 duration_in_ms=0.717
2018-03-30 05:43:43,886 [salt.state       ][INFO    ][3115] Running state [mdb03] at time 05:43:43.886411
2018-03-30 05:43:43,886 [salt.state       ][INFO    ][3115] Executing state host.present for mdb03
2018-03-30 05:43:43,886 [salt.state       ][INFO    ][3115] Host mdb03 (10.167.4.34) already present
2018-03-30 05:43:43,887 [salt.state       ][INFO    ][3115] Completed state [mdb03] at time 05:43:43.887102 duration_in_ms=0.691
2018-03-30 05:43:43,887 [salt.state       ][INFO    ][3115] Running state [mdb03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.887288
2018-03-30 05:43:43,887 [salt.state       ][INFO    ][3115] Executing state host.present for mdb03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,887 [salt.state       ][INFO    ][3115] Host mdb03.mcp-pike-ovs-dpdk-ha.local (10.167.4.34) already present
2018-03-30 05:43:43,888 [salt.state       ][INFO    ][3115] Completed state [mdb03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.888001 duration_in_ms=0.713
2018-03-30 05:43:43,888 [salt.state       ][INFO    ][3115] Running state [mdb01] at time 05:43:43.888189
2018-03-30 05:43:43,888 [salt.state       ][INFO    ][3115] Executing state host.present for mdb01
2018-03-30 05:43:43,888 [salt.state       ][INFO    ][3115] Host mdb01 (10.167.4.32) already present
2018-03-30 05:43:43,888 [salt.state       ][INFO    ][3115] Completed state [mdb01] at time 05:43:43.888867 duration_in_ms=0.678
2018-03-30 05:43:43,889 [salt.state       ][INFO    ][3115] Running state [mdb01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.889051
2018-03-30 05:43:43,889 [salt.state       ][INFO    ][3115] Executing state host.present for mdb01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,889 [salt.state       ][INFO    ][3115] Host mdb01.mcp-pike-ovs-dpdk-ha.local (10.167.4.32) already present
2018-03-30 05:43:43,889 [salt.state       ][INFO    ][3115] Completed state [mdb01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.889758 duration_in_ms=0.707
2018-03-30 05:43:43,889 [salt.state       ][INFO    ][3115] Running state [mdb] at time 05:43:43.889956
2018-03-30 05:43:43,890 [salt.state       ][INFO    ][3115] Executing state host.present for mdb
2018-03-30 05:43:43,890 [salt.state       ][INFO    ][3115] Host mdb (10.167.4.31) already present
2018-03-30 05:43:43,890 [salt.state       ][INFO    ][3115] Completed state [mdb] at time 05:43:43.890668 duration_in_ms=0.712
2018-03-30 05:43:43,890 [salt.state       ][INFO    ][3115] Running state [mdb.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.890865
2018-03-30 05:43:43,891 [salt.state       ][INFO    ][3115] Executing state host.present for mdb.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,891 [salt.state       ][INFO    ][3115] Host mdb.mcp-pike-ovs-dpdk-ha.local (10.167.4.31) already present
2018-03-30 05:43:43,891 [salt.state       ][INFO    ][3115] Completed state [mdb.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.891577 duration_in_ms=0.713
2018-03-30 05:43:43,891 [salt.state       ][INFO    ][3115] Running state [cfg01] at time 05:43:43.891767
2018-03-30 05:43:43,891 [salt.state       ][INFO    ][3115] Executing state host.present for cfg01
2018-03-30 05:43:43,892 [salt.state       ][INFO    ][3115] Host cfg01 (10.167.4.11) already present
2018-03-30 05:43:43,892 [salt.state       ][INFO    ][3115] Completed state [cfg01] at time 05:43:43.892484 duration_in_ms=0.717
2018-03-30 05:43:43,892 [salt.state       ][INFO    ][3115] Running state [cfg01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.892682
2018-03-30 05:43:43,892 [salt.state       ][INFO    ][3115] Executing state host.present for cfg01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,893 [salt.state       ][INFO    ][3115] Host cfg01.mcp-pike-ovs-dpdk-ha.local (10.167.4.11) already present
2018-03-30 05:43:43,893 [salt.state       ][INFO    ][3115] Completed state [cfg01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.893371 duration_in_ms=0.689
2018-03-30 05:43:43,893 [salt.state       ][INFO    ][3115] Running state [prx01] at time 05:43:43.893554
2018-03-30 05:43:43,893 [salt.state       ][INFO    ][3115] Executing state host.present for prx01
2018-03-30 05:43:43,894 [salt.state       ][INFO    ][3115] Host prx01 (10.167.4.14) already present
2018-03-30 05:43:43,894 [salt.state       ][INFO    ][3115] Completed state [prx01] at time 05:43:43.894239 duration_in_ms=0.685
2018-03-30 05:43:43,894 [salt.state       ][INFO    ][3115] Running state [prx01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.894416
2018-03-30 05:43:43,894 [salt.state       ][INFO    ][3115] Executing state host.present for prx01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,894 [salt.state       ][INFO    ][3115] Host prx01.mcp-pike-ovs-dpdk-ha.local (10.167.4.14) already present
2018-03-30 05:43:43,895 [salt.state       ][INFO    ][3115] Completed state [prx01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.895090 duration_in_ms=0.673
2018-03-30 05:43:43,895 [salt.state       ][INFO    ][3115] Running state [kvm01] at time 05:43:43.895269
2018-03-30 05:43:43,895 [salt.state       ][INFO    ][3115] Executing state host.present for kvm01
2018-03-30 05:43:43,895 [salt.state       ][INFO    ][3115] Host kvm01 (10.167.4.20) already present
2018-03-30 05:43:43,895 [salt.state       ][INFO    ][3115] Completed state [kvm01] at time 05:43:43.895929 duration_in_ms=0.66
2018-03-30 05:43:43,896 [salt.state       ][INFO    ][3115] Running state [kvm01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.896106
2018-03-30 05:43:43,896 [salt.state       ][INFO    ][3115] Executing state host.present for kvm01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,896 [salt.state       ][INFO    ][3115] Host kvm01.mcp-pike-ovs-dpdk-ha.local (10.167.4.20) already present
2018-03-30 05:43:43,896 [salt.state       ][INFO    ][3115] Completed state [kvm01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.896787 duration_in_ms=0.681
2018-03-30 05:43:43,896 [salt.state       ][INFO    ][3115] Running state [kvm03] at time 05:43:43.896967
2018-03-30 05:43:43,897 [salt.state       ][INFO    ][3115] Executing state host.present for kvm03
2018-03-30 05:43:43,897 [salt.state       ][INFO    ][3115] Host kvm03 (10.167.4.22) already present
2018-03-30 05:43:43,897 [salt.state       ][INFO    ][3115] Completed state [kvm03] at time 05:43:43.897663 duration_in_ms=0.697
2018-03-30 05:43:43,897 [salt.state       ][INFO    ][3115] Running state [kvm03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.897843
2018-03-30 05:43:43,898 [salt.state       ][INFO    ][3115] Executing state host.present for kvm03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,898 [salt.state       ][INFO    ][3115] Host kvm03.mcp-pike-ovs-dpdk-ha.local (10.167.4.22) already present
2018-03-30 05:43:43,898 [salt.state       ][INFO    ][3115] Completed state [kvm03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.898507 duration_in_ms=0.664
2018-03-30 05:43:43,898 [salt.state       ][INFO    ][3115] Running state [kvm02] at time 05:43:43.898686
2018-03-30 05:43:43,898 [salt.state       ][INFO    ][3115] Executing state host.present for kvm02
2018-03-30 05:43:43,899 [salt.state       ][INFO    ][3115] Host kvm02 (10.167.4.21) already present
2018-03-30 05:43:43,899 [salt.state       ][INFO    ][3115] Completed state [kvm02] at time 05:43:43.899383 duration_in_ms=0.697
2018-03-30 05:43:43,899 [salt.state       ][INFO    ][3115] Running state [kvm02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.899563
2018-03-30 05:43:43,899 [salt.state       ][INFO    ][3115] Executing state host.present for kvm02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,900 [salt.state       ][INFO    ][3115] Host kvm02.mcp-pike-ovs-dpdk-ha.local (10.167.4.21) already present
2018-03-30 05:43:43,900 [salt.state       ][INFO    ][3115] Completed state [kvm02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.900407 duration_in_ms=0.843
2018-03-30 05:43:43,900 [salt.state       ][INFO    ][3115] Running state [dbs] at time 05:43:43.900590
2018-03-30 05:43:43,900 [salt.state       ][INFO    ][3115] Executing state host.present for dbs
2018-03-30 05:43:43,901 [salt.state       ][INFO    ][3115] Host dbs (10.167.4.23) already present
2018-03-30 05:43:43,901 [salt.state       ][INFO    ][3115] Completed state [dbs] at time 05:43:43.901270 duration_in_ms=0.68
2018-03-30 05:43:43,901 [salt.state       ][INFO    ][3115] Running state [dbs.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.901452
2018-03-30 05:43:43,901 [salt.state       ][INFO    ][3115] Executing state host.present for dbs.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,901 [salt.state       ][INFO    ][3115] Host dbs.mcp-pike-ovs-dpdk-ha.local (10.167.4.23) already present
2018-03-30 05:43:43,902 [salt.state       ][INFO    ][3115] Completed state [dbs.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.902134 duration_in_ms=0.683
2018-03-30 05:43:43,902 [salt.state       ][INFO    ][3115] Running state [prx] at time 05:43:43.902317
2018-03-30 05:43:43,902 [salt.state       ][INFO    ][3115] Executing state host.present for prx
2018-03-30 05:43:43,902 [salt.state       ][INFO    ][3115] Host prx (10.167.4.13) already present
2018-03-30 05:43:43,903 [salt.state       ][INFO    ][3115] Completed state [prx] at time 05:43:43.903012 duration_in_ms=0.696
2018-03-30 05:43:43,903 [salt.state       ][INFO    ][3115] Running state [prx.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.903187
2018-03-30 05:43:43,903 [salt.state       ][INFO    ][3115] Executing state host.present for prx.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,903 [salt.state       ][INFO    ][3115] Host prx.mcp-pike-ovs-dpdk-ha.local (10.167.4.13) already present
2018-03-30 05:43:43,903 [salt.state       ][INFO    ][3115] Completed state [prx.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.903847 duration_in_ms=0.66
2018-03-30 05:43:43,904 [salt.state       ][INFO    ][3115] Running state [prx02] at time 05:43:43.904021
2018-03-30 05:43:43,904 [salt.state       ][INFO    ][3115] Executing state host.present for prx02
2018-03-30 05:43:43,904 [salt.state       ][INFO    ][3115] Host prx02 (10.167.4.15) already present
2018-03-30 05:43:43,904 [salt.state       ][INFO    ][3115] Completed state [prx02] at time 05:43:43.904683 duration_in_ms=0.663
2018-03-30 05:43:43,904 [salt.state       ][INFO    ][3115] Running state [prx02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.904859
2018-03-30 05:43:43,905 [salt.state       ][INFO    ][3115] Executing state host.present for prx02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,905 [salt.state       ][INFO    ][3115] Host prx02.mcp-pike-ovs-dpdk-ha.local (10.167.4.15) already present
2018-03-30 05:43:43,905 [salt.state       ][INFO    ][3115] Completed state [prx02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.905523 duration_in_ms=0.664
2018-03-30 05:43:43,905 [salt.state       ][INFO    ][3115] Running state [msg02] at time 05:43:43.905699
2018-03-30 05:43:43,905 [salt.state       ][INFO    ][3115] Executing state host.present for msg02
2018-03-30 05:43:43,906 [salt.state       ][INFO    ][3115] Host msg02 (10.167.4.29) already present
2018-03-30 05:43:43,906 [salt.state       ][INFO    ][3115] Completed state [msg02] at time 05:43:43.906359 duration_in_ms=0.661
2018-03-30 05:43:43,906 [salt.state       ][INFO    ][3115] Running state [msg02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.906534
2018-03-30 05:43:43,906 [salt.state       ][INFO    ][3115] Executing state host.present for msg02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,907 [salt.state       ][INFO    ][3115] Host msg02.mcp-pike-ovs-dpdk-ha.local (10.167.4.29) already present
2018-03-30 05:43:43,907 [salt.state       ][INFO    ][3115] Completed state [msg02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.907212 duration_in_ms=0.678
2018-03-30 05:43:43,907 [salt.state       ][INFO    ][3115] Running state [msg03] at time 05:43:43.907387
2018-03-30 05:43:43,907 [salt.state       ][INFO    ][3115] Executing state host.present for msg03
2018-03-30 05:43:43,907 [salt.state       ][INFO    ][3115] Host msg03 (10.167.4.30) already present
2018-03-30 05:43:43,908 [salt.state       ][INFO    ][3115] Completed state [msg03] at time 05:43:43.908048 duration_in_ms=0.66
2018-03-30 05:43:43,908 [salt.state       ][INFO    ][3115] Running state [msg03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.908221
2018-03-30 05:43:43,908 [salt.state       ][INFO    ][3115] Executing state host.present for msg03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,908 [salt.state       ][INFO    ][3115] Host msg03.mcp-pike-ovs-dpdk-ha.local (10.167.4.30) already present
2018-03-30 05:43:43,908 [salt.state       ][INFO    ][3115] Completed state [msg03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.908881 duration_in_ms=0.661
2018-03-30 05:43:43,909 [salt.state       ][INFO    ][3115] Running state [msg01] at time 05:43:43.909058
2018-03-30 05:43:43,909 [salt.state       ][INFO    ][3115] Executing state host.present for msg01
2018-03-30 05:43:43,909 [salt.state       ][INFO    ][3115] Host msg01 (10.167.4.28) already present
2018-03-30 05:43:43,909 [salt.state       ][INFO    ][3115] Completed state [msg01] at time 05:43:43.909729 duration_in_ms=0.67
2018-03-30 05:43:43,909 [salt.state       ][INFO    ][3115] Running state [msg01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.909906
2018-03-30 05:43:43,910 [salt.state       ][INFO    ][3115] Executing state host.present for msg01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,910 [salt.state       ][INFO    ][3115] Host msg01.mcp-pike-ovs-dpdk-ha.local (10.167.4.28) already present
2018-03-30 05:43:43,910 [salt.state       ][INFO    ][3115] Completed state [msg01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.910577 duration_in_ms=0.671
2018-03-30 05:43:43,910 [salt.state       ][INFO    ][3115] Running state [msg] at time 05:43:43.910754
2018-03-30 05:43:43,910 [salt.state       ][INFO    ][3115] Executing state host.present for msg
2018-03-30 05:43:43,911 [salt.state       ][INFO    ][3115] Host msg (10.167.4.27) already present
2018-03-30 05:43:43,911 [salt.state       ][INFO    ][3115] Completed state [msg] at time 05:43:43.911442 duration_in_ms=0.688
2018-03-30 05:43:43,911 [salt.state       ][INFO    ][3115] Running state [msg.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.911617
2018-03-30 05:43:43,911 [salt.state       ][INFO    ][3115] Executing state host.present for msg.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,912 [salt.state       ][INFO    ][3115] Host msg.mcp-pike-ovs-dpdk-ha.local (10.167.4.27) already present
2018-03-30 05:43:43,912 [salt.state       ][INFO    ][3115] Completed state [msg.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.912294 duration_in_ms=0.677
2018-03-30 05:43:43,912 [salt.state       ][INFO    ][3115] Running state [cfg01] at time 05:43:43.912469
2018-03-30 05:43:43,912 [salt.state       ][INFO    ][3115] Executing state host.present for cfg01
2018-03-30 05:43:43,912 [salt.state       ][INFO    ][3115] Host cfg01 (10.167.4.11) already present
2018-03-30 05:43:43,913 [salt.state       ][INFO    ][3115] Completed state [cfg01] at time 05:43:43.913139 duration_in_ms=0.67
2018-03-30 05:43:43,913 [salt.state       ][INFO    ][3115] Running state [cfg01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.913310
2018-03-30 05:43:43,913 [salt.state       ][INFO    ][3115] Executing state host.present for cfg01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,913 [salt.state       ][INFO    ][3115] Host cfg01.mcp-pike-ovs-dpdk-ha.local (10.167.4.11) already present
2018-03-30 05:43:43,913 [salt.state       ][INFO    ][3115] Completed state [cfg01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.913959 duration_in_ms=0.65
2018-03-30 05:43:43,914 [salt.state       ][INFO    ][3115] Running state [cmp002] at time 05:43:43.914133
2018-03-30 05:43:43,914 [salt.state       ][INFO    ][3115] Executing state host.present for cmp002
2018-03-30 05:43:43,914 [salt.state       ][INFO    ][3115] Host cmp002 (10.167.4.53) already present
2018-03-30 05:43:43,914 [salt.state       ][INFO    ][3115] Completed state [cmp002] at time 05:43:43.914789 duration_in_ms=0.656
2018-03-30 05:43:43,914 [salt.state       ][INFO    ][3115] Running state [cmp002.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.914971
2018-03-30 05:43:43,915 [salt.state       ][INFO    ][3115] Executing state host.present for cmp002.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,915 [salt.state       ][INFO    ][3115] Host cmp002.mcp-pike-ovs-dpdk-ha.local (10.167.4.53) already present
2018-03-30 05:43:43,915 [salt.state       ][INFO    ][3115] Completed state [cmp002.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.915628 duration_in_ms=0.657
2018-03-30 05:43:43,915 [salt.state       ][INFO    ][3115] Running state [cmp001] at time 05:43:43.915805
2018-03-30 05:43:43,915 [salt.state       ][INFO    ][3115] Executing state host.present for cmp001
2018-03-30 05:43:43,916 [salt.state       ][INFO    ][3115] Host cmp001 (10.167.4.52) already present
2018-03-30 05:43:43,916 [salt.state       ][INFO    ][3115] Completed state [cmp001] at time 05:43:43.916482 duration_in_ms=0.677
2018-03-30 05:43:43,916 [salt.state       ][INFO    ][3115] Running state [cmp001.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.916656
2018-03-30 05:43:43,916 [salt.state       ][INFO    ][3115] Executing state host.present for cmp001.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:43,917 [salt.state       ][INFO    ][3115] Host cmp001.mcp-pike-ovs-dpdk-ha.local (10.167.4.52) already present
2018-03-30 05:43:43,917 [salt.state       ][INFO    ][3115] Completed state [cmp001.mcp-pike-ovs-dpdk-ha.local] at time 05:43:43.917309 duration_in_ms=0.653
2018-03-30 05:43:43,918 [salt.state       ][INFO    ][3115] Running state [file.replace] at time 05:43:43.918235
2018-03-30 05:43:43,918 [salt.state       ][INFO    ][3115] Executing state module.run for file.replace
2018-03-30 05:43:44,206 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['git', '--version'] in directory '/root'
2018-03-30 05:43:44,285 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'grep -q "cmp001 cmp001.mcp-pike-ovs-dpdk-ha.local" /etc/hosts' in directory '/root'
2018-03-30 05:43:44,296 [salt.state       ][INFO    ][3115] {'ret': '--- \n+++ \n@@ -23,7 +23,7 @@\n 10.167.4.28\t\tmsg01 msg01.mcp-pike-ovs-dpdk-ha.local\n 10.167.4.27\t\tmsg msg.mcp-pike-ovs-dpdk-ha.local\n 10.167.4.53\t\tcmp002 cmp002.mcp-pike-ovs-dpdk-ha.local\n-10.167.4.52\t\tcmp001 cmp001.mcp-pike-ovs-dpdk-ha.local\n+10.167.4.52\t\tcmp001.mcp-pike-ovs-dpdk-ha.local cmp001\n 10.167.4.24\t\tdbs01 dbs01.mcp-pike-ovs-dpdk-ha.local\n 10.167.4.25\t\tdbs02 dbs02.mcp-pike-ovs-dpdk-ha.local\n 10.167.4.26\t\tdbs03 dbs03.mcp-pike-ovs-dpdk-ha.local\n'}
2018-03-30 05:43:44,296 [salt.state       ][INFO    ][3115] Completed state [file.replace] at time 05:43:44.296546 duration_in_ms=378.309
2018-03-30 05:43:44,296 [salt.state       ][INFO    ][3115] Running state [dbs01] at time 05:43:44.296818
2018-03-30 05:43:44,297 [salt.state       ][INFO    ][3115] Executing state host.present for dbs01
2018-03-30 05:43:44,297 [salt.state       ][INFO    ][3115] Host dbs01 (10.167.4.24) already present
2018-03-30 05:43:44,297 [salt.state       ][INFO    ][3115] Completed state [dbs01] at time 05:43:44.297718 duration_in_ms=0.899
2018-03-30 05:43:44,297 [salt.state       ][INFO    ][3115] Running state [dbs01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.297965
2018-03-30 05:43:44,298 [salt.state       ][INFO    ][3115] Executing state host.present for dbs01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:44,298 [salt.state       ][INFO    ][3115] Host dbs01.mcp-pike-ovs-dpdk-ha.local (10.167.4.24) already present
2018-03-30 05:43:44,298 [salt.state       ][INFO    ][3115] Completed state [dbs01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.298725 duration_in_ms=0.76
2018-03-30 05:43:44,298 [salt.state       ][INFO    ][3115] Running state [dbs02] at time 05:43:44.298947
2018-03-30 05:43:44,299 [salt.state       ][INFO    ][3115] Executing state host.present for dbs02
2018-03-30 05:43:44,299 [salt.state       ][INFO    ][3115] Host dbs02 (10.167.4.25) already present
2018-03-30 05:43:44,299 [salt.state       ][INFO    ][3115] Completed state [dbs02] at time 05:43:44.299703 duration_in_ms=0.756
2018-03-30 05:43:44,299 [salt.state       ][INFO    ][3115] Running state [dbs02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.299909
2018-03-30 05:43:44,300 [salt.state       ][INFO    ][3115] Executing state host.present for dbs02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:44,300 [salt.state       ][INFO    ][3115] Host dbs02.mcp-pike-ovs-dpdk-ha.local (10.167.4.25) already present
2018-03-30 05:43:44,301 [salt.state       ][INFO    ][3115] Completed state [dbs02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.300968 duration_in_ms=1.06
2018-03-30 05:43:44,301 [salt.state       ][INFO    ][3115] Running state [dbs03] at time 05:43:44.301168
2018-03-30 05:43:44,301 [salt.state       ][INFO    ][3115] Executing state host.present for dbs03
2018-03-30 05:43:44,301 [salt.state       ][INFO    ][3115] Host dbs03 (10.167.4.26) already present
2018-03-30 05:43:44,301 [salt.state       ][INFO    ][3115] Completed state [dbs03] at time 05:43:44.301893 duration_in_ms=0.725
2018-03-30 05:43:44,302 [salt.state       ][INFO    ][3115] Running state [dbs03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.302088
2018-03-30 05:43:44,302 [salt.state       ][INFO    ][3115] Executing state host.present for dbs03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:44,302 [salt.state       ][INFO    ][3115] Host dbs03.mcp-pike-ovs-dpdk-ha.local (10.167.4.26) already present
2018-03-30 05:43:44,302 [salt.state       ][INFO    ][3115] Completed state [dbs03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.302815 duration_in_ms=0.726
2018-03-30 05:43:44,303 [salt.state       ][INFO    ][3115] Running state [mas01] at time 05:43:44.303035
2018-03-30 05:43:44,303 [salt.state       ][INFO    ][3115] Executing state host.present for mas01
2018-03-30 05:43:44,303 [salt.state       ][INFO    ][3115] Host mas01 (10.167.4.12) already present
2018-03-30 05:43:44,303 [salt.state       ][INFO    ][3115] Completed state [mas01] at time 05:43:44.303760 duration_in_ms=0.725
2018-03-30 05:43:44,303 [salt.state       ][INFO    ][3115] Running state [mas01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.303958
2018-03-30 05:43:44,304 [salt.state       ][INFO    ][3115] Executing state host.present for mas01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:44,304 [salt.state       ][INFO    ][3115] Host mas01.mcp-pike-ovs-dpdk-ha.local (10.167.4.12) already present
2018-03-30 05:43:44,304 [salt.state       ][INFO    ][3115] Completed state [mas01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.304696 duration_in_ms=0.738
2018-03-30 05:43:44,304 [salt.state       ][INFO    ][3115] Running state [ctl02] at time 05:43:44.304891
2018-03-30 05:43:44,305 [salt.state       ][INFO    ][3115] Executing state host.present for ctl02
2018-03-30 05:43:44,305 [salt.state       ][INFO    ][3115] Host ctl02 (10.167.4.37) already present
2018-03-30 05:43:44,305 [salt.state       ][INFO    ][3115] Completed state [ctl02] at time 05:43:44.305594 duration_in_ms=0.702
2018-03-30 05:43:44,305 [salt.state       ][INFO    ][3115] Running state [ctl02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.305786
2018-03-30 05:43:44,305 [salt.state       ][INFO    ][3115] Executing state host.present for ctl02.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:44,306 [salt.state       ][INFO    ][3115] Host ctl02.mcp-pike-ovs-dpdk-ha.local (10.167.4.37) already present
2018-03-30 05:43:44,306 [salt.state       ][INFO    ][3115] Completed state [ctl02.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.306514 duration_in_ms=0.727
2018-03-30 05:43:44,306 [salt.state       ][INFO    ][3115] Running state [ctl03] at time 05:43:44.306709
2018-03-30 05:43:44,306 [salt.state       ][INFO    ][3115] Executing state host.present for ctl03
2018-03-30 05:43:44,307 [salt.state       ][INFO    ][3115] Host ctl03 (10.167.4.38) already present
2018-03-30 05:43:44,307 [salt.state       ][INFO    ][3115] Completed state [ctl03] at time 05:43:44.307454 duration_in_ms=0.744
2018-03-30 05:43:44,307 [salt.state       ][INFO    ][3115] Running state [ctl03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.307647
2018-03-30 05:43:44,307 [salt.state       ][INFO    ][3115] Executing state host.present for ctl03.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:44,308 [salt.state       ][INFO    ][3115] Host ctl03.mcp-pike-ovs-dpdk-ha.local (10.167.4.38) already present
2018-03-30 05:43:44,308 [salt.state       ][INFO    ][3115] Completed state [ctl03.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.308377 duration_in_ms=0.73
2018-03-30 05:43:44,308 [salt.state       ][INFO    ][3115] Running state [ctl01] at time 05:43:44.308577
2018-03-30 05:43:44,308 [salt.state       ][INFO    ][3115] Executing state host.present for ctl01
2018-03-30 05:43:44,309 [salt.state       ][INFO    ][3115] Host ctl01 (10.167.4.36) already present
2018-03-30 05:43:44,309 [salt.state       ][INFO    ][3115] Completed state [ctl01] at time 05:43:44.309304 duration_in_ms=0.727
2018-03-30 05:43:44,309 [salt.state       ][INFO    ][3115] Running state [ctl01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.309497
2018-03-30 05:43:44,309 [salt.state       ][INFO    ][3115] Executing state host.present for ctl01.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:44,310 [salt.state       ][INFO    ][3115] Host ctl01.mcp-pike-ovs-dpdk-ha.local (10.167.4.36) already present
2018-03-30 05:43:44,310 [salt.state       ][INFO    ][3115] Completed state [ctl01.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.310226 duration_in_ms=0.729
2018-03-30 05:43:44,310 [salt.state       ][INFO    ][3115] Running state [ctl] at time 05:43:44.310418
2018-03-30 05:43:44,310 [salt.state       ][INFO    ][3115] Executing state host.present for ctl
2018-03-30 05:43:44,311 [salt.state       ][INFO    ][3115] Host ctl (10.167.4.35) already present
2018-03-30 05:43:44,311 [salt.state       ][INFO    ][3115] Completed state [ctl] at time 05:43:44.311170 duration_in_ms=0.752
2018-03-30 05:43:44,311 [salt.state       ][INFO    ][3115] Running state [ctl.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.311356
2018-03-30 05:43:44,311 [salt.state       ][INFO    ][3115] Executing state host.present for ctl.mcp-pike-ovs-dpdk-ha.local
2018-03-30 05:43:44,311 [salt.state       ][INFO    ][3115] Host ctl.mcp-pike-ovs-dpdk-ha.local (10.167.4.35) already present
2018-03-30 05:43:44,312 [salt.state       ][INFO    ][3115] Completed state [ctl.mcp-pike-ovs-dpdk-ha.local] at time 05:43:44.312072 duration_in_ms=0.715
2018-03-30 05:43:44,312 [salt.state       ][INFO    ][3115] Running state [linux_dpdk_pkgs] at time 05:43:44.312259
2018-03-30 05:43:44,312 [salt.state       ][INFO    ][3115] Executing state pkg.installed for linux_dpdk_pkgs
2018-03-30 05:43:44,323 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 05:43:44,338 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'dpdk-dev', 'dpdk-igb-uio-dkms', 'dpdk-rte-kni-dkms'] in directory '/root'
2018-03-30 05:43:48,413 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054347947595
2018-03-30 05:43:48,424 [salt.minion      ][INFO    ][6321] Starting a new job with PID 6321
2018-03-30 05:43:48,435 [salt.minion      ][INFO    ][6321] Returning information for job: 20180330054347947595
2018-03-30 05:43:58,597 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054358133782
2018-03-30 05:43:58,608 [salt.minion      ][INFO    ][6722] Starting a new job with PID 6722
2018-03-30 05:43:58,619 [salt.minion      ][INFO    ][6722] Returning information for job: 20180330054358133782
2018-03-30 05:44:08,776 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054408312883
2018-03-30 05:44:08,794 [salt.minion      ][INFO    ][7005] Starting a new job with PID 7005
2018-03-30 05:44:08,823 [salt.minion      ][INFO    ][7005] Returning information for job: 20180330054408312883
2018-03-30 05:44:18,989 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054418521271
2018-03-30 05:44:19,007 [salt.minion      ][INFO    ][7298] Starting a new job with PID 7298
2018-03-30 05:44:19,029 [salt.minion      ][INFO    ][7298] Returning information for job: 20180330054418521271
2018-03-30 05:44:29,198 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054428731659
2018-03-30 05:44:29,216 [salt.minion      ][INFO    ][11773] Starting a new job with PID 11773
2018-03-30 05:44:29,243 [salt.minion      ][INFO    ][11773] Returning information for job: 20180330054428731659
2018-03-30 05:44:39,413 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054438945977
2018-03-30 05:44:39,430 [salt.minion      ][INFO    ][12169] Starting a new job with PID 12169
2018-03-30 05:44:39,458 [salt.minion      ][INFO    ][12169] Returning information for job: 20180330054438945977
2018-03-30 05:44:49,636 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054449165886
2018-03-30 05:44:49,654 [salt.minion      ][INFO    ][14034] Starting a new job with PID 14034
2018-03-30 05:44:49,674 [salt.minion      ][INFO    ][14034] Returning information for job: 20180330054449165886
2018-03-30 05:44:59,866 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054459396797
2018-03-30 05:44:59,883 [salt.minion      ][INFO    ][15179] Starting a new job with PID 15179
2018-03-30 05:44:59,901 [salt.minion      ][INFO    ][15179] Returning information for job: 20180330054459396797
2018-03-30 05:45:10,091 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054509623128
2018-03-30 05:45:10,110 [salt.minion      ][INFO    ][17103] Starting a new job with PID 17103
2018-03-30 05:45:10,132 [salt.minion      ][INFO    ][17103] Returning information for job: 20180330054509623128
2018-03-30 05:45:17,026 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:45:17,084 [salt.state       ][INFO    ][3115] Made the following changes:
'librte-pmd-bnxt17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-kvargs17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'dpdk-igb-uio-dkms' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-octeontx-ssovf17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-mempool-stack17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-sched17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-power17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-vmxnet3-uio17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-lpm17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-cfgfile17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-eventdev17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'binutils-gold' changed from 'absent' to '1'
'librte-mbuf17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-vhost17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-null17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libubsan0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-port17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-ark17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-meter17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-kni17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-efd17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-cmdline17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-reorder17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-virtio17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'c-compiler' changed from 'absent' to '1'
'libquadmath0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-pmd-qede17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libmpc3' changed from 'absent' to '1.0.3-1'
'librte-pmd-ena17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'dpdk' changed from '2.2.0-0ubuntu8' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-nfp17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'make:any' changed from 'absent' to '1'
'librte-pmd-fm10k17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-distributor17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-hash17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libmpx0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-ip-frag17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-net17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-sfc-efx17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-avp17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'openvswitch-switch' changed from '2.5.4-0ubuntu0.16.04.1' to '2.8.1-0ubuntu0.17.10.2~cloud0'
'librte-pmd-e1000-17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libpcap0.8-dev' changed from 'absent' to '1.7.4-2'
'librte-vhost17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-bitratestats17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-latencystats17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'liblsan0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-mempool-ring17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libisl15' changed from 'absent' to '0.16.1-1'
'librte-ethdev17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pdump17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-table17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-skeleton-event17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libgomp1' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-jobstats17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-thunderx-nicvf17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-pcap17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'linux-kernel-headers' changed from 'absent' to '1'
'libpcap-dev' changed from 'absent' to '1.7.4-2'
'librte-pmd-ixgbe17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-enic17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libfakeroot' changed from 'absent' to '1.20.2-1ubuntu1'
'linux-libc-dev' changed from 'absent' to '4.4.0-116.140'
'librte-pmd-xenvirt17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'gcc-5' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-pmd-tap17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'dkms' changed from 'absent' to '2.2.0.3-2ubuntu11.5'
'dpdk-dev' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'gcc' changed from 'absent' to '4:5.3.1-1ubuntu1'
'librte-eal17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libdpdk0' changed from '2.2.0-0ubuntu8' to 'absent'
'librte-pmd-cxgbe17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-lio17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'hwdata' changed from 'absent' to '0.267-1'
'librte-ring17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libitm1' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-mempool17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'dpdk-rte-kni-dkms' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-cryptodev17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'cpp:any' changed from 'absent' to '1'
'librte-pipeline17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-metrics17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-acl17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-ring17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libasan2' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'elf-binutils' changed from 'absent' to '1'
'libc-dev' changed from 'absent' to '1'
'librte-timer17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libatomic1' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'libtsan0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'libgcc-5-dev' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-pmd-bond17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'openvswitch-switch-dpdk' changed from '2.5.4-0ubuntu0.16.04.1' to '2.8.1-0ubuntu0.17.10.2~cloud0'
'libcilkrts5' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'librte-pmd-null-crypto17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-af-packet17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'fakeroot' changed from 'absent' to '1.20.2-1ubuntu1'
'manpages-dev' changed from 'absent' to '4.04-2'
'libcc1-0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'cpp-5' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'openvswitch-common' changed from '2.5.4-0ubuntu0.16.04.1' to '2.8.1-0ubuntu0.17.10.2~cloud0'
'libc6-dev' changed from 'absent' to '2.23-0ubuntu10'
'librte-pmd-crypto-scheduler17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-kni17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-i40e17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'librte-pmd-sw-event17.05' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'
'libc-dev-bin' changed from 'absent' to '2.23-0ubuntu10'
'cpp' changed from 'absent' to '4:5.3.1-1ubuntu1'
'make' changed from 'absent' to '4.1-6'
'binutils' changed from 'absent' to '2.26.1-1ubuntu1~16.04.6'
'libdpdk-dev' changed from 'absent' to '17.05.2-0ubuntu1~cloud0'

2018-03-30 05:45:17,104 [salt.state       ][INFO    ][3115] Loading fresh modules for state activity
2018-03-30 05:45:17,137 [salt.state       ][INFO    ][3115] Completed state [linux_dpdk_pkgs] at time 05:45:17.136996 duration_in_ms=92824.735
2018-03-30 05:45:17,143 [salt.state       ][INFO    ][3115] Running state [uio] at time 05:45:17.143497
2018-03-30 05:45:17,144 [salt.state       ][INFO    ][3115] Executing state kmod.present for uio
2018-03-30 05:45:17,147 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'lsmod' in directory '/root'
2018-03-30 05:45:17,235 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'lsmod' in directory '/root'
2018-03-30 05:45:17,255 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'modprobe uio' in directory '/root'
2018-03-30 05:45:17,274 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'lsmod' in directory '/root'
2018-03-30 05:45:17,373 [salt.state       ][INFO    ][3115] {'uio': 'loaded'}
2018-03-30 05:45:17,374 [salt.state       ][INFO    ][3115] Completed state [uio] at time 05:45:17.374296 duration_in_ms=230.792
2018-03-30 05:45:17,378 [salt.state       ][INFO    ][3115] Running state [/etc/dpdk/interfaces] at time 05:45:17.378555
2018-03-30 05:45:17,379 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/dpdk/interfaces
2018-03-30 05:45:17,418 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/files/dpdk_interfaces'
2018-03-30 05:45:17,592 [salt.state       ][INFO    ][3115] File changed:
--- 
+++ 
@@ -1,19 +1,2 @@
-#
-# <bus>		Currently only "pci" is supported
-# <id>		Device ID on the specified bus
-# <driver>	Driver to bind against (vfio-pci, uio_pci_generic, igb_uio or
-#               rte_kni)
-#
-# Be aware that the two dpdk compatible drivers uio_pci_generic and vfio-pci are
-# part of linux-image-extra-<VERSION> package.
-# This package is not always installed by default - for example in cloud-images.
-# So please install it in case you run into missing module issues.
-#
-# For the module igb_uio, please install the dpdk-igb-uio-dkms package.
-# For the module rte_kni, please install the dpdk-rte-kni-dkms package.
-#
-# <bus>	<id>		<driver>
-# pci	0000:04:00.0	vfio-pci
-# pci	0000:04:00.1	uio_pci_generic
-# pci	0000:05:00.0	igb_uio
-# pci	0000:06:00.0	rte_kni
+
+pci 0000:07:00.0 igb_uio

2018-03-30 05:45:17,592 [salt.state       ][INFO    ][3115] Completed state [/etc/dpdk/interfaces] at time 05:45:17.592271 duration_in_ms=213.717
2018-03-30 05:45:17,728 [salt.state       ][INFO    ][3115] Running state [dpdk] at time 05:45:17.728244
2018-03-30 05:45:17,728 [salt.state       ][INFO    ][3115] Executing state service.running for dpdk
2018-03-30 05:45:17,729 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'status', 'dpdk.service', '-n', '0'] in directory '/root'
2018-03-30 05:45:17,748 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'dpdk.service'] in directory '/root'
2018-03-30 05:45:17,766 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'dpdk.service'] in directory '/root'
2018-03-30 05:45:17,784 [salt.state       ][INFO    ][3115] The service dpdk is already running
2018-03-30 05:45:17,785 [salt.state       ][INFO    ][3115] Completed state [dpdk] at time 05:45:17.784956 duration_in_ms=56.713
2018-03-30 05:45:17,785 [salt.state       ][INFO    ][3115] Running state [dpdk] at time 05:45:17.785143
2018-03-30 05:45:17,785 [salt.state       ][INFO    ][3115] Executing state service.mod_watch for dpdk
2018-03-30 05:45:17,785 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'dpdk.service'] in directory '/root'
2018-03-30 05:45:17,804 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'dpdk.service'] in directory '/root'
2018-03-30 05:45:17,823 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'dpdk.service'] in directory '/root'
2018-03-30 05:45:18,247 [salt.state       ][INFO    ][3115] {'dpdk': True}
2018-03-30 05:45:18,247 [salt.state       ][INFO    ][3115] Completed state [dpdk] at time 05:45:18.247707 duration_in_ms=462.561
2018-03-30 05:45:18,250 [salt.state       ][INFO    ][3115] Running state [openvswitch_dpdk_pkgs] at time 05:45:18.250719
2018-03-30 05:45:18,251 [salt.state       ][INFO    ][3115] Executing state pkg.installed for openvswitch_dpdk_pkgs
2018-03-30 05:45:18,463 [salt.state       ][INFO    ][3115] All specified packages are already installed
2018-03-30 05:45:18,463 [salt.state       ][INFO    ][3115] Completed state [openvswitch_dpdk_pkgs] at time 05:45:18.463783 duration_in_ms=213.065
2018-03-30 05:45:18,465 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true] at time 05:45:18.465946
2018-03-30 05:45:18,466 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
2018-03-30 05:45:18,466 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl get Open_vSwitch . other_config | grep "dpdk-init=\"true\""' in directory '/root'
2018-03-30 05:45:18,488 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true' in directory '/root'
2018-03-30 05:45:18,508 [salt.state       ][INFO    ][3115] {'pid': 17605, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:45:18,509 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true] at time 05:45:18.509101 duration_in_ms=43.155
2018-03-30 05:45:18,510 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask="0xc04"] at time 05:45:18.510834
2018-03-30 05:45:18,511 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask="0xc04"
2018-03-30 05:45:18,511 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl get Open_vSwitch . other_config | grep 'pmd-cpu-mask="0xc04"'' in directory '/root'
2018-03-30 05:45:18,533 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask="0xc04"' in directory '/root'
2018-03-30 05:45:18,554 [salt.state       ][INFO    ][3115] {'pid': 17610, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:45:18,554 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask="0xc04"] at time 05:45:18.554754 duration_in_ms=43.921
2018-03-30 05:45:18,556 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"] at time 05:45:18.556295
2018-03-30 05:45:18,556 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"
2018-03-30 05:45:18,557 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl get Open_vSwitch . other_config | grep 'dpdk-socket-mem="2048,2048"'' in directory '/root'
2018-03-30 05:45:18,577 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"' in directory '/root'
2018-03-30 05:45:18,594 [salt.state       ][INFO    ][3115] {'pid': 17615, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:45:18,594 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"] at time 05:45:18.594884 duration_in_ms=38.59
2018-03-30 05:45:18,596 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask="0x8"] at time 05:45:18.596315
2018-03-30 05:45:18,596 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask="0x8"
2018-03-30 05:45:18,597 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl get Open_vSwitch . other_config | grep 'dpdk-lcore-mask="0x8"'' in directory '/root'
2018-03-30 05:45:18,615 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask="0x8"' in directory '/root'
2018-03-30 05:45:18,633 [salt.state       ][INFO    ][3115] {'pid': 17620, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:45:18,634 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask="0x8"] at time 05:45:18.634402 duration_in_ms=38.086
2018-03-30 05:45:18,636 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="-n 2 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664"] at time 05:45:18.636183
2018-03-30 05:45:18,636 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="-n 2 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664"
2018-03-30 05:45:18,637 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl get Open_vSwitch . other_config | grep 'dpdk-extra="-n 2 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664"'' in directory '/root'
2018-03-30 05:45:18,655 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="-n 2 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664"' in directory '/root'
2018-03-30 05:45:18,673 [salt.state       ][INFO    ][3115] {'pid': 17625, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:45:18,673 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="-n 2 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664"] at time 05:45:18.673816 duration_in_ms=37.632
2018-03-30 05:45:18,675 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-sock-dir="/run/openvswitch-vhost"] at time 05:45:18.675665
2018-03-30 05:45:18,676 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-sock-dir="/run/openvswitch-vhost"
2018-03-30 05:45:18,676 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl get Open_vSwitch . other_config | grep 'vhost-sock-dir="/run/openvswitch-vhost"'' in directory '/root'
2018-03-30 05:45:18,693 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-sock-dir="/run/openvswitch-vhost"' in directory '/root'
2018-03-30 05:45:18,709 [salt.state       ][INFO    ][3115] {'pid': 17630, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:45:18,709 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-sock-dir="/run/openvswitch-vhost"] at time 05:45:18.709589 duration_in_ms=33.924
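Editor's note: the six cmd.run states above amount to one ovs-vsctl sequence. A minimal sketch that distills them into a single function (the masks, socket memory, and vhost settings are the values from this log, not recommendations; `OVS=echo` gives a dry run instead of touching a real Open vSwitch database):

```shell
#!/bin/sh
# Sketch reconstructed from this log, not the actual MCP sls files.
# Set OVS=echo to print the commands; leave unset to run ovs-vsctl for real.
apply_dpdk_config() {
    ovs="${OVS:-ovs-vsctl}"
    # --no-wait: ovsdb-server accepts the change without waiting for
    # ovs-vswitchd, which is restarted right after this block in the log.
    $ovs --no-wait set Open_vSwitch . other_config:dpdk-init=true
    $ovs --no-wait set Open_vSwitch . other_config:pmd-cpu-mask="0xc04"
    $ovs --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"
    $ovs --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask="0x8"
    $ovs --no-wait set Open_vSwitch . other_config:dpdk-extra="-n 2 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664"
    $ovs --no-wait set Open_vSwitch . other_config:vhost-sock-dir="/run/openvswitch-vhost"
}
```

These settings only take effect once ovs-vswitchd is restarted, which is exactly what the openvswitch-switch service.mod_watch below does.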
2018-03-30 05:45:18,712 [salt.state       ][INFO    ][3115] Running state [ovs-vswitchd] at time 05:45:18.712623
2018-03-30 05:45:18,713 [salt.state       ][INFO    ][3115] Executing state alternatives.remove for ovs-vswitchd
2018-03-30 05:45:18,713 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['update-alternatives', '--display', 'ovs-vswitchd'] in directory '/root'
2018-03-30 05:45:18,727 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['update-alternatives', '--remove', 'ovs-vswitchd', '/usr/lib/openvswitch-switch/ovs-vswitchd'] in directory '/root'
2018-03-30 05:45:18,773 [salt.state       ][INFO    ][3115] {'path': '/usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk'}
2018-03-30 05:45:18,774 [salt.state       ][INFO    ][3115] Completed state [ovs-vswitchd] at time 05:45:18.773941 duration_in_ms=61.318
2018-03-30 05:45:18,787 [salt.state       ][INFO    ][3115] Running state [openvswitch-switch] at time 05:45:18.787494
2018-03-30 05:45:18,788 [salt.state       ][INFO    ][3115] Executing state service.running for openvswitch-switch
2018-03-30 05:45:18,789 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'status', 'openvswitch-switch.service', '-n', '0'] in directory '/root'
2018-03-30 05:45:18,812 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'openvswitch-switch.service'] in directory '/root'
2018-03-30 05:45:18,832 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'openvswitch-switch.service'] in directory '/root'
2018-03-30 05:45:18,851 [salt.state       ][INFO    ][3115] The service openvswitch-switch is already running
2018-03-30 05:45:18,851 [salt.state       ][INFO    ][3115] Completed state [openvswitch-switch] at time 05:45:18.851846 duration_in_ms=64.352
2018-03-30 05:45:18,852 [salt.state       ][INFO    ][3115] Running state [openvswitch-switch] at time 05:45:18.852293
2018-03-30 05:45:18,852 [salt.state       ][INFO    ][3115] Executing state service.mod_watch for openvswitch-switch
2018-03-30 05:45:18,854 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'openvswitch-switch.service'] in directory '/root'
2018-03-30 05:45:18,871 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'openvswitch-switch.service'] in directory '/root'
2018-03-30 05:45:18,890 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'openvswitch-switch.service'] in directory '/root'
2018-03-30 05:45:20,317 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054519848787
2018-03-30 05:45:20,334 [salt.minion      ][INFO    ][17774] Starting a new job with PID 17774
2018-03-30 05:45:20,356 [salt.minion      ][INFO    ][17774] Returning information for job: 20180330054519848787
2018-03-30 05:45:25,164 [salt.state       ][INFO    ][3115] {'openvswitch-switch': True}
2018-03-30 05:45:25,165 [salt.state       ][INFO    ][3115] Completed state [openvswitch-switch] at time 05:45:25.165161 duration_in_ms=6312.866
2018-03-30 05:45:25,165 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait add-br br-prv -- set bridge br-prv datapath_type=netdev] at time 05:45:25.165725
2018-03-30 05:45:25,166 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait add-br br-prv -- set bridge br-prv datapath_type=netdev
2018-03-30 05:45:25,167 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl show | grep br-prv' in directory '/root'
2018-03-30 05:45:25,188 [salt.state       ][INFO    ][3115] unless execution succeeded
2018-03-30 05:45:25,188 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait add-br br-prv -- set bridge br-prv datapath_type=netdev] at time 05:45:25.188543 duration_in_ms=22.818
2018-03-30 05:45:25,191 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:07:00.0] at time 05:45:25.191448
2018-03-30 05:45:25,192 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:07:00.0
2018-03-30 05:45:25,193 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl show | grep dpdk0' in directory '/root'
2018-03-30 05:45:25,210 [salt.state       ][INFO    ][3115] unless execution succeeded
2018-03-30 05:45:25,210 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:07:00.0] at time 05:45:25.210731 duration_in_ms=19.284
2018-03-30 05:45:25,211 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait set Interface dpdk0 options:n_rxq=2 ] at time 05:45:25.211224
2018-03-30 05:45:25,211 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait set Interface dpdk0 options:n_rxq=2 
2018-03-30 05:45:25,212 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl get Interface dpdk0 options | grep 'n_rxq="2"'' in directory '/root'
2018-03-30 05:45:25,228 [salt.state       ][INFO    ][3115] unless execution succeeded
2018-03-30 05:45:25,229 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait set Interface dpdk0 options:n_rxq=2 ] at time 05:45:25.228931 duration_in_ms=17.707
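Editor's note: the three br-prv states above were skipped on this run ("unless execution succeeded" means the bridge and port already existed), but the intended wiring is visible in the state names. A hedged sketch of that wiring (PCI address 0000:07:00.0 and n_rxq=2 are the values from this log; `OVS=echo` dry-runs):

```shell
#!/bin/sh
# Sketch of the br-prv wiring implied by the state names above.
# Set OVS=echo for a dry run; leave unset to run ovs-vsctl for real.
create_dpdk_bridge() {
    ovs="${OVS:-ovs-vsctl}"
    # A DPDK bridge must use the userspace datapath (datapath_type=netdev).
    $ovs --no-wait add-br br-prv -- set bridge br-prv datapath_type=netdev
    # Bind the physical NIC to the bridge by PCI address.
    $ovs --no-wait add-port br-prv dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:07:00.0
    # Two receive queues, matching the two PMD cores in pmd-cpu-mask=0xc04.
    $ovs --no-wait set Interface dpdk0 options:n_rxq=2
}
```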
2018-03-30 05:45:25,229 [salt.state       ][INFO    ][3115] Running state [linux_network_bridge_pkgs] at time 05:45:25.229476
2018-03-30 05:45:25,230 [salt.state       ][INFO    ][3115] Executing state pkg.installed for linux_network_bridge_pkgs
2018-03-30 05:45:25,240 [salt.state       ][INFO    ][3115] All specified packages are already installed
2018-03-30 05:45:25,240 [salt.state       ][INFO    ][3115] Completed state [linux_network_bridge_pkgs] at time 05:45:25.240396 duration_in_ms=10.919
2018-03-30 05:45:25,240 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 05:45:25.240747
2018-03-30 05:45:25,241 [salt.state       ][INFO    ][3115] Executing state file.absent for /etc/network/interfaces.d/50-cloud-init.cfg
2018-03-30 05:45:25,241 [salt.state       ][INFO    ][3115] File /etc/network/interfaces.d/50-cloud-init.cfg is not present
2018-03-30 05:45:25,241 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 05:45:25.241866 duration_in_ms=1.119
2018-03-30 05:45:25,247 [salt.state       ][INFO    ][3115] Running state [enp6s0.300] at time 05:45:25.247119
2018-03-30 05:45:25,247 [salt.state       ][INFO    ][3115] Executing state network.managed for enp6s0.300
2018-03-30 05:45:26,256 [salt.state       ][INFO    ][3115] Interface enp6s0.300 is up to date.
2018-03-30 05:45:26,256 [salt.state       ][INFO    ][3115] Completed state [enp6s0.300] at time 05:45:26.256509 duration_in_ms=1009.387
2018-03-30 05:45:26,257 [salt.state       ][INFO    ][3115] Running state [br-ctl] at time 05:45:26.257835
2018-03-30 05:45:26,258 [salt.state       ][INFO    ][3115] Executing state network.managed for br-ctl
2018-03-30 05:45:26,265 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 05:45:26,301 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'bridge-utils'] in directory '/root'
2018-03-30 05:45:26,634 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:45:27,486 [salt.state       ][INFO    ][3115] Interface br-ctl is up to date.
2018-03-30 05:45:27,487 [salt.state       ][INFO    ][3115] Completed state [br-ctl] at time 05:45:27.486985 duration_in_ms=1229.151
2018-03-30 05:45:27,487 [salt.state       ][INFO    ][3115] Running state [enp6s0] at time 05:45:27.487241
2018-03-30 05:45:27,487 [salt.state       ][INFO    ][3115] Executing state network.managed for enp6s0
2018-03-30 05:45:28,281 [salt.state       ][INFO    ][3115] Interface enp6s0 is up to date.
2018-03-30 05:45:28,281 [salt.state       ][INFO    ][3115] Completed state [enp6s0] at time 05:45:28.281843 duration_in_ms=794.601
2018-03-30 05:45:28,282 [salt.state       ][INFO    ][3115] Running state [br-floating] at time 05:45:28.282502
2018-03-30 05:45:28,282 [salt.state       ][INFO    ][3115] Executing state openvswitch_bridge.present for br-floating
2018-03-30 05:45:28,283 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl br-exists br-floating' in directory '/root'
2018-03-30 05:45:28,298 [salt.loaded.int.module.cmdmod][ERROR   ][3115] Command 'ovs-vsctl br-exists br-floating' failed with return code: 2
2018-03-30 05:45:28,298 [salt.loaded.int.module.cmdmod][ERROR   ][3115] retcode: 2
2018-03-30 05:45:28,298 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl --may-exist add-br br-floating' in directory '/root'
2018-03-30 05:45:28,337 [salt.state       ][INFO    ][3115] Made the following changes:
'br-floating' changed from 'Bridge br-floating does not exist.' to 'Bridge br-floating created'

2018-03-30 05:45:28,337 [salt.state       ][INFO    ][3115] Completed state [br-floating] at time 05:45:28.337544 duration_in_ms=55.043
2018-03-30 05:45:28,337 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces.u/ifcfg-br-floating] at time 05:45:28.337819
2018-03-30 05:45:28,338 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/network/interfaces.u/ifcfg-br-floating
2018-03-30 05:45:28,366 [salt.state       ][INFO    ][3115] File /etc/network/interfaces.u/ifcfg-br-floating is in the correct state
2018-03-30 05:45:28,366 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces.u/ifcfg-br-floating] at time 05:45:28.366433 duration_in_ms=28.614
2018-03-30 05:45:28,366 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces] at time 05:45:28.366637
2018-03-30 05:45:28,366 [salt.state       ][INFO    ][3115] Executing state file.prepend for /etc/network/interfaces
2018-03-30 05:45:28,369 [salt.state       ][INFO    ][3115] File changed:
--- 
+++ 
@@ -1,3 +1,6 @@
+source /etc/network/interfaces.d/*
+# Workaround for Upstream-Bug: https://github.com/saltstack/salt/issues/40262
+source /etc/network/interfaces.u/*
 auto lo
 iface lo inet loopback
 auto enp6s0.300

2018-03-30 05:45:28,369 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces] at time 05:45:28.369297 duration_in_ms=2.66
2018-03-30 05:45:28,373 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces] at time 05:45:28.373301
2018-03-30 05:45:28,373 [salt.state       ][INFO    ][3115] Executing state file.prepend for /etc/network/interfaces
2018-03-30 05:45:28,374 [salt.state       ][INFO    ][3115] File /etc/network/interfaces is in correct state
2018-03-30 05:45:28,374 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces] at time 05:45:28.374599 duration_in_ms=1.297
2018-03-30 05:45:28,376 [salt.state       ][INFO    ][3115] Running state [ifup br-floating] at time 05:45:28.376400
2018-03-30 05:45:28,376 [salt.state       ][INFO    ][3115] Executing state cmd.run for ifup br-floating
2018-03-30 05:45:28,377 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ip link show br-floating | grep -q '\<UP\>'' in directory '/root'
2018-03-30 05:45:28,389 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ifup br-floating' in directory '/root'
2018-03-30 05:45:28,844 [salt.state       ][INFO    ][3115] {'pid': 18125, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:45:28,845 [salt.state       ][INFO    ][3115] Completed state [ifup br-floating] at time 05:45:28.845144 duration_in_ms=468.743
2018-03-30 05:45:28,845 [salt.state       ][INFO    ][3115] Running state [ovs-vsctl --no-wait add-port br-floating enp8s0] at time 05:45:28.845697
2018-03-30 05:45:28,846 [salt.state       ][INFO    ][3115] Executing state cmd.run for ovs-vsctl --no-wait add-port br-floating enp8s0
2018-03-30 05:45:28,847 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl show | grep enp8s0' in directory '/root'
2018-03-30 05:45:28,863 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ovs-vsctl --no-wait add-port br-floating enp8s0' in directory '/root'
2018-03-30 05:45:28,881 [salt.state       ][INFO    ][3115] {'pid': 18347, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:45:28,882 [salt.state       ][INFO    ][3115] Completed state [ovs-vsctl --no-wait add-port br-floating enp8s0] at time 05:45:28.882199 duration_in_ms=36.503
2018-03-30 05:45:28,882 [salt.state       ][INFO    ][3115] Running state [br-floating] at time 05:45:28.882604
2018-03-30 05:45:28,883 [salt.state       ][INFO    ][3115] Executing state network.routes for br-floating
2018-03-30 05:45:28,904 [salt.state       ][INFO    ][3115] Interface br-floating routes are up to date.
2018-03-30 05:45:28,905 [salt.state       ][INFO    ][3115] Completed state [br-floating] at time 05:45:28.904939 duration_in_ms=22.335
2018-03-30 05:45:28,905 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces] at time 05:45:28.905308
2018-03-30 05:45:28,905 [salt.state       ][INFO    ][3115] Executing state file.prepend for /etc/network/interfaces
2018-03-30 05:45:28,907 [salt.state       ][INFO    ][3115] File /etc/network/interfaces is in correct state
2018-03-30 05:45:28,907 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces] at time 05:45:28.907572 duration_in_ms=2.265
2018-03-30 05:45:28,907 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 05:45:28.907871
2018-03-30 05:45:28,908 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/network/interfaces.u/ifcfg-enp8s0
2018-03-30 05:45:28,953 [salt.state       ][INFO    ][3115] File /etc/network/interfaces.u/ifcfg-enp8s0 is in the correct state
2018-03-30 05:45:28,953 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 05:45:28.953823 duration_in_ms=45.952
2018-03-30 05:45:28,954 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces] at time 05:45:28.954103
2018-03-30 05:45:28,954 [salt.state       ][INFO    ][3115] Executing state file.replace for /etc/network/interfaces
2018-03-30 05:45:28,955 [salt.state       ][INFO    ][3115] No changes needed to be made
2018-03-30 05:45:28,956 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces] at time 05:45:28.956056 duration_in_ms=1.953
2018-03-30 05:45:28,956 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces] at time 05:45:28.956322
2018-03-30 05:45:28,956 [salt.state       ][INFO    ][3115] Executing state file.replace for /etc/network/interfaces
2018-03-30 05:45:28,957 [salt.state       ][INFO    ][3115] No changes needed to be made
2018-03-30 05:45:28,958 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces] at time 05:45:28.958175 duration_in_ms=1.853
2018-03-30 05:45:28,963 [salt.state       ][INFO    ][3115] Running state [ifup enp8s0] at time 05:45:28.963532
2018-03-30 05:45:28,963 [salt.state       ][INFO    ][3115] Executing state cmd.run for ifup enp8s0
2018-03-30 05:45:28,964 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command 'ip link show enp8s0 | grep -q '\<UP\>'' in directory '/root'
2018-03-30 05:45:28,979 [salt.state       ][INFO    ][3115] unless execution succeeded
2018-03-30 05:45:28,979 [salt.state       ][INFO    ][3115] Completed state [ifup enp8s0] at time 05:45:28.979627 duration_in_ms=16.095
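Editor's note: every cmd.run state in this log is guarded by an `unless` check (the grep commands), so the real command runs only when the guard fails; "unless execution succeeded" is Salt reporting that the guard matched and the command was skipped. A sketch of that pattern, where `run_unless` is a hypothetical helper (not a Salt function):

```shell
#!/bin/sh
# Hypothetical helper mirroring Salt's cmd.run `unless` behaviour:
# run the guard command; only if it fails, run the real command.
run_unless() {
    guard=$1; shift
    if sh -c "$guard" >/dev/null 2>&1; then
        # Guard succeeded: Salt logs this and skips the command.
        echo "unless execution succeeded"
    else
        "$@"
    fi
}
```

Usage mirrors the log, e.g. `run_unless 'ip link show enp8s0 | grep -q UP' ifup enp8s0`.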
2018-03-30 05:45:28,980 [salt.state       ][INFO    ][3115] Running state [/etc/network/interfaces] at time 05:45:28.979987
2018-03-30 05:45:28,980 [salt.state       ][INFO    ][3115] Executing state file.prepend for /etc/network/interfaces
2018-03-30 05:45:28,982 [salt.state       ][INFO    ][3115] File /etc/network/interfaces is in correct state
2018-03-30 05:45:28,982 [salt.state       ][INFO    ][3115] Completed state [/etc/network/interfaces] at time 05:45:28.982249 duration_in_ms=2.261
2018-03-30 05:45:28,982 [salt.state       ][INFO    ][3115] Running state [/etc/profile.d/proxy.sh] at time 05:45:28.982524
2018-03-30 05:45:28,982 [salt.state       ][INFO    ][3115] Executing state file.absent for /etc/profile.d/proxy.sh
2018-03-30 05:45:28,983 [salt.state       ][INFO    ][3115] File /etc/profile.d/proxy.sh is not present
2018-03-30 05:45:28,983 [salt.state       ][INFO    ][3115] Completed state [/etc/profile.d/proxy.sh] at time 05:45:28.983431 duration_in_ms=0.907
2018-03-30 05:45:28,983 [salt.state       ][INFO    ][3115] Running state [/etc/apt/apt.conf.d/95proxies] at time 05:45:28.983695
2018-03-30 05:45:28,983 [salt.state       ][INFO    ][3115] Executing state file.absent for /etc/apt/apt.conf.d/95proxies
2018-03-30 05:45:28,984 [salt.state       ][INFO    ][3115] File /etc/apt/apt.conf.d/95proxies is not present
2018-03-30 05:45:28,984 [salt.state       ][INFO    ][3115] Completed state [/etc/apt/apt.conf.d/95proxies] at time 05:45:28.984484 duration_in_ms=0.789
2018-03-30 05:45:28,984 [salt.state       ][INFO    ][3115] Running state [linux_lvm_pkgs] at time 05:45:28.984743
2018-03-30 05:45:28,985 [salt.state       ][INFO    ][3115] Executing state pkg.installed for linux_lvm_pkgs
2018-03-30 05:45:28,992 [salt.state       ][INFO    ][3115] All specified packages are already installed
2018-03-30 05:45:28,993 [salt.state       ][INFO    ][3115] Completed state [linux_lvm_pkgs] at time 05:45:28.993128 duration_in_ms=8.385
2018-03-30 05:45:28,994 [salt.state       ][INFO    ][3115] Running state [/etc/lvm/lvm.conf] at time 05:45:28.994483
2018-03-30 05:45:28,994 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/lvm/lvm.conf
2018-03-30 05:45:29,022 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'linux/files/lvm.conf'
2018-03-30 05:45:29,137 [salt.state       ][INFO    ][3115] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 # This is an example configuration file for the LVM2 system.
 # It contains the default settings that would be used if there was no
 # /etc/lvm/lvm.conf file.
@@ -26,506 +27,509 @@
 # How LVM configuration settings are handled.
 config {
 
-	# Configuration option config/checks.
-	# If enabled, any LVM configuration mismatch is reported.
-	# This implies checking that the configuration key is understood by
-	# LVM and that the value of the key is the proper type. If disabled,
-	# any configuration mismatch is ignored and the default value is used
-	# without any warning (a message about the configuration key not being
-	# found is issued in verbose mode only).
-	checks = 1
-
-	# Configuration option config/abort_on_errors.
-	# Abort the LVM process if a configuration mismatch is found.
-	abort_on_errors = 0
-
-	# Configuration option config/profile_dir.
-	# Directory where LVM looks for configuration profiles.
-	profile_dir = "/etc/lvm/profile"
+        # Configuration option config/checks.
+        # If enabled, any LVM configuration mismatch is reported.
+        # This implies checking that the configuration key is understood by
+        # LVM and that the value of the key is the proper type. If disabled,
+        # any configuration mismatch is ignored and the default value is used
+        # without any warning (a message about the configuration key not being
+        # found is issued in verbose mode only).
+        checks = 1
+
+        # Configuration option config/abort_on_errors.
+        # Abort the LVM process if a configuration mismatch is found.
+        abort_on_errors = 0
+
+        # Configuration option config/profile_dir.
+        # Directory where LVM looks for configuration profiles.
+        profile_dir = "/etc/lvm/profile"
 }
 
 # Configuration section devices.
 # How LVM uses block devices.
 devices {
 
-	# Configuration option devices/dir.
-	# Directory in which to create volume group device nodes.
-	# Commands also accept this as a prefix on volume group names.
-	# This configuration option is advanced.
-	dir = "/dev"
-
-	# Configuration option devices/scan.
-	# Directories containing device nodes to use with LVM.
-	# This configuration option is advanced.
-	scan = [ "/dev" ]
-
-	# Configuration option devices/obtain_device_list_from_udev.
-	# Obtain the list of available devices from udev.
-	# This avoids opening or using any inapplicable non-block devices or
-	# subdirectories found in the udev directory. Any device node or
-	# symlink not managed by udev in the udev directory is ignored. This
-	# setting applies only to the udev-managed device directory; other
-	# directories will be scanned fully. LVM needs to be compiled with
-	# udev support for this setting to apply.
-	obtain_device_list_from_udev = 1
-
-	# Configuration option devices/external_device_info_source.
-	# Select an external device information source.
-	# Some information may already be available in the system and LVM can
-	# use this information to determine the exact type or use of devices it
-	# processes. Using an existing external device information source can
-	# speed up device processing as LVM does not need to run its own native
-	# routines to acquire this information. For example, this information
-	# is used to drive LVM filtering like MD component detection, multipath
-	# component detection, partition detection and others.
-	# 
-	# Accepted values:
-	#   none
-	#     No external device information source is used.
-	#   udev
-	#     Reuse existing udev database records. Applicable only if LVM is
-	#     compiled with udev support.
-	# 
-	external_device_info_source = "none"
-
-	# Configuration option devices/preferred_names.
-	# Select which path name to display for a block device.
-	# If multiple path names exist for a block device, and LVM needs to
-	# display a name for the device, the path names are matched against
-	# each item in this list of regular expressions. The first match is
-	# used. Try to avoid using undescriptive /dev/dm-N names, if present.
-	# If no preferred name matches, or if preferred_names are not defined,
-	# the following built-in preferences are applied in order until one
-	# produces a preferred name:
-	# Prefer names with path prefixes in the order of:
-	# /dev/mapper, /dev/disk, /dev/dm-*, /dev/block.
-	# Prefer the name with the least number of slashes.
-	# Prefer a name that is a symlink.
-	# Prefer the path with least value in lexicographical order.
-	# 
-	# Example
-	# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option devices/filter.
-	# Limit the block devices that are used by LVM commands.
-	# This is a list of regular expressions used to accept or reject block
-	# device path names. Each regex is delimited by a vertical bar '|'
-	# (or any character) and is preceded by 'a' to accept the path, or
-	# by 'r' to reject the path. The first regex in the list to match the
-	# path is used, producing the 'a' or 'r' result for the device.
-	# When multiple path names exist for a block device, if any path name
-	# matches an 'a' pattern before an 'r' pattern, then the device is
-	# accepted. If all the path names match an 'r' pattern first, then the
-	# device is rejected. Unmatching path names do not affect the accept
-	# or reject decision. If no path names for a device match a pattern,
-	# then the device is accepted. Be careful mixing 'a' and 'r' patterns,
-	# as the combination might produce unexpected results (test changes.)
-	# Run vgscan after changing the filter to regenerate the cache.
-	# See the use_lvmetad comment for a special case regarding filters.
-	# 
-	# Example
-	# Accept every block device:
-	# filter = [ "a|.*/|" ]
-	# Reject the cdrom drive:
-	# filter = [ "r|/dev/cdrom|" ]
-	# Work with just loopback devices, e.g. for testing:
-	# filter = [ "a|loop|", "r|.*|" ]
-	# Accept all loop devices and ide drives except hdc:
-	# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
-	# Use anchors to be very specific:
-	# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
-	# 
-	# This configuration option has an automatic default value.
-	# filter = [ "a|.*/|" ]
-
-	# Configuration option devices/global_filter.
-	# Limit the block devices that are used by LVM system components.
-	# Because devices/filter may be overridden from the command line, it is
-	# not suitable for system-wide device filtering, e.g. udev and lvmetad.
-	# Use global_filter to hide devices from these LVM system components.
-	# The syntax is the same as devices/filter. Devices rejected by
-	# global_filter are not opened by LVM.
-	# This configuration option has an automatic default value.
-	# global_filter = [ "a|.*/|" ]
-
-	# Configuration option devices/cache_dir.
-	# Directory in which to store the device cache file.
-	# The results of filtering are cached on disk to avoid rescanning dud
-	# devices (which can take a very long time). By default this cache is
-	# stored in a file named .cache. It is safe to delete this file; the
-	# tools regenerate it. If obtain_device_list_from_udev is enabled, the
-	# list of devices is obtained from udev and any existing .cache file
-	# is removed.
-	cache_dir = "/run/lvm"
-
-	# Configuration option devices/cache_file_prefix.
-	# A prefix used before the .cache file name. See devices/cache_dir.
-	cache_file_prefix = ""
-
-	# Configuration option devices/write_cache_state.
-	# Enable/disable writing the cache file. See devices/cache_dir.
-	write_cache_state = 1
-
-	# Configuration option devices/types.
-	# List of additional acceptable block device types.
-	# These are device type names from /proc/devices, each followed by the
-	# maximum number of partitions.
-	# 
-	# Example
-	# types = [ "fd", 16 ]
-	# 
-	# This configuration option is advanced.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option devices/sysfs_scan.
-	# Restrict device scanning to block devices appearing in sysfs.
-	# This is a quick way of filtering out block devices that are not
-	# present on the system. sysfs must be part of the kernel and mounted.
-	sysfs_scan = 1
-
-	# Configuration option devices/multipath_component_detection.
-	# Ignore devices that are components of DM multipath devices.
-	multipath_component_detection = 1
-
-	# Configuration option devices/md_component_detection.
-	# Ignore devices that are components of software RAID (md) devices.
-	md_component_detection = 1
-
-	# Configuration option devices/fw_raid_component_detection.
-	# Ignore devices that are components of firmware RAID devices.
-	# LVM must use an external_device_info_source other than none for this
-	# detection to execute.
-	fw_raid_component_detection = 0
-
-	# Configuration option devices/md_chunk_alignment.
-	# Align PV data blocks with md device's stripe-width.
-	# This applies if a PV is placed directly on an md device.
-	md_chunk_alignment = 1
-
-	# Configuration option devices/default_data_alignment.
-	# Default alignment of the start of a PV data area in MB.
-	# If set to 0, a value of 64KiB will be used.
-	# Set to 1 for 1MiB, 2 for 2MiB, etc.
-	# This configuration option has an automatic default value.
-	# default_data_alignment = 1
-
-	# Configuration option devices/data_alignment_detection.
-	# Detect PV data alignment based on sysfs device information.
-	# The start of a PV data area will be a multiple of minimum_io_size or
-	# optimal_io_size exposed in sysfs. minimum_io_size is the smallest
-	# request the device can perform without incurring a read-modify-write
-	# penalty, e.g. MD chunk size. optimal_io_size is the device's
-	# preferred unit of receiving I/O, e.g. MD stripe width.
-	# minimum_io_size is used if optimal_io_size is undefined (0).
-	# If md_chunk_alignment is enabled, that detects the optimal_io_size.
-	# This setting takes precedence over md_chunk_alignment.
-	data_alignment_detection = 1
-
-	# Configuration option devices/data_alignment.
-	# Alignment of the start of a PV data area in KiB.
-	# If a PV is placed directly on an md device and md_chunk_alignment or
-	# data_alignment_detection are enabled, then this setting is ignored.
-	# Otherwise, md_chunk_alignment and data_alignment_detection are
-	# disabled if this is set. Set to 0 to use the default alignment or the
-	# page size, if larger.
-	data_alignment = 0
-
-	# Configuration option devices/data_alignment_offset_detection.
-	# Detect PV data alignment offset based on sysfs device information.
-	# The start of a PV aligned data area will be shifted by the
-	# alignment_offset exposed in sysfs. This offset is often 0, but may
-	# be non-zero. Certain 4KiB sector drives that compensate for Windows
-	# partitioning will have an alignment_offset of 3584 bytes (sector 7
-	# is the lowest aligned logical block, the 4KiB sectors start at
-	# LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).
-	# pvcreate --dataalignmentoffset will skip this detection.
-	data_alignment_offset_detection = 1
-
-	# Configuration option devices/ignore_suspended_devices.
-	# Ignore DM devices that have I/O suspended while scanning devices.
-	# Otherwise, LVM waits for a suspended device to become accessible.
-	# This should only be needed in recovery situations.
-	ignore_suspended_devices = 0
-
-	# Configuration option devices/ignore_lvm_mirrors.
-	# Do not scan 'mirror' LVs to avoid possible deadlocks.
-	# This avoids possible deadlocks when using the 'mirror' segment type.
-	# This setting determines whether LVs using the 'mirror' segment type
-	# are scanned for LVM labels. This affects the ability of mirrors to
-	# be used as physical volumes. If this setting is enabled, it is
-	# impossible to create VGs on top of mirror LVs, i.e. to stack VGs on
-	# mirror LVs. If this setting is disabled, allowing mirror LVs to be
-	# scanned, it may cause LVM processes and I/O to the mirror to become
-	# blocked. This is due to the way that the mirror segment type handles
-	# failures. In order for the hang to occur, an LVM command must be run
-	# just after a failure and before the automatic LVM repair process
-	# takes place, or there must be failures in multiple mirrors in the
-	# same VG at the same time with write failures occurring moments before
-	# a scan of the mirror's labels. The 'mirror' scanning problems do not
-	# apply to LVM RAID types like 'raid1' which handle failures in a
-	# different way, making them a better choice for VG stacking.
-	ignore_lvm_mirrors = 1
-
-	# Configuration option devices/disable_after_error_count.
-	# Number of I/O errors after which a device is skipped.
-	# During each LVM operation, errors received from each device are
-	# counted. If the counter of a device exceeds the limit set here,
-	# no further I/O is sent to that device for the remainder of the
-	# operation. Setting this to 0 disables the counters altogether.
-	disable_after_error_count = 0
-
-	# Configuration option devices/require_restorefile_with_uuid.
-	# Allow use of pvcreate --uuid without requiring --restorefile.
-	require_restorefile_with_uuid = 1
-
-	# Configuration option devices/pv_min_size.
-	# Minimum size in KiB of block devices which can be used as PVs.
-	# In a clustered environment all nodes must use the same value.
-	# Any value smaller than 512KiB is ignored. The previous built-in
-	# value was 512.
-	pv_min_size = 2048
-
-	# Configuration option devices/issue_discards.
-	# Issue discards to PVs that are no longer used by an LV.
-	# Discards are sent to an LV's underlying physical volumes when the LV
-	# is no longer using the physical volumes' space, e.g. lvremove,
-	# lvreduce. Discards inform the storage that a region is no longer
-	# used. Storage that supports discards advertises the protocol-specific
-	# way discards should be issued by the kernel (TRIM, UNMAP, or
-	# WRITE SAME with UNMAP bit set). Not all storage will support or
-	# benefit from discards, but SSDs and thinly provisioned LUNs
-	# generally do. If enabled, discards will only be issued if both the
-	# storage and kernel provide support.
-	issue_discards = 1
+        # Configuration option devices/dir.
+        # Directory in which to create volume group device nodes.
+        # Commands also accept this as a prefix on volume group names.
+        # This configuration option is advanced.
+        dir = "/dev"
+
+        # Configuration option devices/scan.
+        # Directories containing device nodes to use with LVM.
+        # This configuration option is advanced.
+        scan = [ "/dev" ]
+
+        # Configuration option devices/obtain_device_list_from_udev.
+        # Obtain the list of available devices from udev.
+        # This avoids opening or using any inapplicable non-block devices or
+        # subdirectories found in the udev directory. Any device node or
+        # symlink not managed by udev in the udev directory is ignored. This
+        # setting applies only to the udev-managed device directory; other
+        # directories will be scanned fully. LVM needs to be compiled with
+        # udev support for this setting to apply.
+        obtain_device_list_from_udev = 1
+
+        # Configuration option devices/external_device_info_source.
+        # Select an external device information source.
+        # Some information may already be available in the system and LVM can
+        # use this information to determine the exact type or use of devices it
+        # processes. Using an existing external device information source can
+        # speed up device processing as LVM does not need to run its own native
+        # routines to acquire this information. For example, this information
+        # is used to drive LVM filtering like MD component detection, multipath
+        # component detection, partition detection and others.
+        # 
+        # Accepted values:
+        #   none
+        #     No external device information source is used.
+        #   udev
+        #     Reuse existing udev database records. Applicable only if LVM is
+        #     compiled with udev support.
+        # 
+        external_device_info_source = "none"
+
+        # Configuration option devices/preferred_names.
+        # Select which path name to display for a block device.
+        # If multiple path names exist for a block device, and LVM needs to
+        # display a name for the device, the path names are matched against
+        # each item in this list of regular expressions. The first match is
+        # used. Try to avoid using undescriptive /dev/dm-N names, if present.
+        # If no preferred name matches, or if preferred_names are not defined,
+        # the following built-in preferences are applied in order until one
+        # produces a preferred name:
+        # Prefer names with path prefixes in the order of:
+        # /dev/mapper, /dev/disk, /dev/dm-*, /dev/block.
+        # Prefer the name with the least number of slashes.
+        # Prefer a name that is a symlink.
+        # Prefer the path with least value in lexicographical order.
+        # 
+        # Example
+        # preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
+        # 
+        # This configuration option does not have a default value defined.
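The built-in naming preferences listed above can be approximated with a short sketch (plain Python, not LVM code; the prefix table is taken from the order in the comment, the "prefer a symlink" rule is omitted because it needs filesystem access, and `display_name` is a hypothetical helper name):

```python
import re

# Assumed built-in prefix ranking, in the order given in the comment above.
PREFIX_ORDER = ["/dev/mapper", "/dev/disk", "/dev/dm-", "/dev/block"]

def display_name(paths, preferred_names=()):
    # First preferred_names regex that matches any path wins.
    for pattern in preferred_names:
        for path in paths:
            if re.search(pattern, path):
                return path
    # Fall back to the built-in ordering: prefix rank, then fewest
    # slashes, then lexicographically smallest.
    def key(path):
        rank = next((i for i, p in enumerate(PREFIX_ORDER)
                     if path.startswith(p)), len(PREFIX_ORDER))
        return (rank, path.count("/"), path)
    return min(paths, key=key)

names = ["/dev/dm-3", "/dev/mapper/vg0-root", "/dev/disk/by-id/dm-name-vg0-root"]
print(display_name(names))                         # /dev/mapper/vg0-root
print(display_name(names, ["^/dev/disk/by-id/"]))  # /dev/disk/by-id/dm-name-vg0-root
```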
+
+        # Configuration option devices/filter.
+        # Limit the block devices that are used by LVM commands.
+        # This is a list of regular expressions used to accept or reject block
+        # device path names. Each regex is delimited by a vertical bar '|'
+        # (or any character) and is preceded by 'a' to accept the path, or
+        # by 'r' to reject the path. The first regex in the list to match the
+        # path is used, producing the 'a' or 'r' result for the device.
+        # When multiple path names exist for a block device, if any path name
+        # matches an 'a' pattern before an 'r' pattern, then the device is
+        # accepted. If all the path names match an 'r' pattern first, then the
+        # device is rejected. Unmatching path names do not affect the accept
+        # or reject decision. If no path names for a device match a pattern,
+        # then the device is accepted. Be careful mixing 'a' and 'r' patterns,
+        # as the combination might produce unexpected results (test changes).
+        # Run vgscan after changing the filter to regenerate the cache.
+        # See the use_lvmetad comment for a special case regarding filters.
+        # 
+        # Example
+        # Accept every block device:
+
+        filter = [ "a|/dev/sda2*|", "r|.*|" ]
+
+        # filter = [ "a|.*/|" ]
+        # Reject the cdrom drive:
+        # filter = [ "r|/dev/cdrom|" ]
+        # Work with just loopback devices, e.g. for testing:
+        # filter = [ "a|loop|", "r|.*|" ]
+        # Accept all loop devices and ide drives except hdc:
+        # filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
+        # Use anchors to be very specific:
+        # filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
+        # 
+        # This configuration option has an automatic default value.
+        # filter = [ "a|.*/|" ]
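The first-match accept/reject mechanics above, including why the unanchored pattern in the filter set here also matches /dev/sda, can be sketched with ordinary regular expressions (a simplification of LVM's own matcher, assuming each pattern ends with its delimiter):

```python
import re

def lvm_filter(path, patterns):
    """First-match sketch of devices/filter: 'a' accepts, 'r' rejects.

    Each pattern is action + delimiter + regex + delimiter, e.g. "a|loop|".
    Patterns match anywhere in the path unless anchored with ^ and $;
    a path matching no pattern is accepted.
    """
    for pattern in patterns:
        action, regex = pattern[0], pattern[2:-1]  # strip action and delimiters
        if re.search(regex, path):
            return action == "a"
    return True  # unmatched paths are accepted

flt = ["a|/dev/sda2*|", "r|.*|"]
print(lvm_filter("/dev/sda2", flt))  # True
print(lvm_filter("/dev/sda", flt))   # True: '2*' means zero or more '2's
print(lvm_filter("/dev/sdb", flt))   # False: rejected by "r|.*|"

anchored = ["a|^/dev/sda2$|", "r|.*|"]
print(lvm_filter("/dev/sda", anchored))  # False
```

As the comments advise, anchoring (`^...$`) avoids the surprise that `/dev/sda` slips through the unanchored `a|/dev/sda2*|` pattern.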
+
+        # Configuration option devices/global_filter.
+        # Limit the block devices that are used by LVM system components.
+        # Because devices/filter may be overridden from the command line, it is
+        # not suitable for system-wide device filtering, e.g. udev and lvmetad.
+        # Use global_filter to hide devices from these LVM system components.
+        # The syntax is the same as devices/filter. Devices rejected by
+        # global_filter are not opened by LVM.
+        # This configuration option has an automatic default value.
+        # global_filter = [ "a|.*/|" ]
+
+        # Configuration option devices/cache_dir.
+        # Directory in which to store the device cache file.
+        # The results of filtering are cached on disk to avoid rescanning dud
+        # devices (which can take a very long time). By default this cache is
+        # stored in a file named .cache. It is safe to delete this file; the
+        # tools regenerate it. If obtain_device_list_from_udev is enabled, the
+        # list of devices is obtained from udev and any existing .cache file
+        # is removed.
+        cache_dir = "/run/lvm"
+
+        # Configuration option devices/cache_file_prefix.
+        # A prefix used before the .cache file name. See devices/cache_dir.
+        cache_file_prefix = ""
+
+        # Configuration option devices/write_cache_state.
+        # Enable/disable writing the cache file. See devices/cache_dir.
+        write_cache_state = 1
+
+        # Configuration option devices/types.
+        # List of additional acceptable block device types.
+        # These are device type names from /proc/devices, each followed by the
+        # maximum number of partitions.
+        # 
+        # Example
+        # types = [ "fd", 16 ]
+        # 
+        # This configuration option is advanced.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option devices/sysfs_scan.
+        # Restrict device scanning to block devices appearing in sysfs.
+        # This is a quick way of filtering out block devices that are not
+        # present on the system. sysfs must be part of the kernel and mounted.
+        sysfs_scan = 1
+
+        # Configuration option devices/multipath_component_detection.
+        # Ignore devices that are components of DM multipath devices.
+        multipath_component_detection = 1
+
+        # Configuration option devices/md_component_detection.
+        # Ignore devices that are components of software RAID (md) devices.
+        md_component_detection = 1
+
+        # Configuration option devices/fw_raid_component_detection.
+        # Ignore devices that are components of firmware RAID devices.
+        # LVM must use an external_device_info_source other than none for this
+        # detection to execute.
+        fw_raid_component_detection = 0
+
+        # Configuration option devices/md_chunk_alignment.
+        # Align PV data blocks with md device's stripe-width.
+        # This applies if a PV is placed directly on an md device.
+        md_chunk_alignment = 1
+
+        # Configuration option devices/default_data_alignment.
+        # Default alignment of the start of a PV data area in MB.
+        # If set to 0, a value of 64KiB will be used.
+        # Set to 1 for 1MiB, 2 for 2MiB, etc.
+        # This configuration option has an automatic default value.
+        # default_data_alignment = 1
+
+        # Configuration option devices/data_alignment_detection.
+        # Detect PV data alignment based on sysfs device information.
+        # The start of a PV data area will be a multiple of minimum_io_size or
+        # optimal_io_size exposed in sysfs. minimum_io_size is the smallest
+        # request the device can perform without incurring a read-modify-write
+        # penalty, e.g. MD chunk size. optimal_io_size is the device's
+        # preferred unit of receiving I/O, e.g. MD stripe width.
+        # minimum_io_size is used if optimal_io_size is undefined (0).
+        # If md_chunk_alignment is enabled, that detects the optimal_io_size.
+        # This setting takes precedence over md_chunk_alignment.
+        data_alignment_detection = 1
+
+        # Configuration option devices/data_alignment.
+        # Alignment of the start of a PV data area in KiB.
+        # If a PV is placed directly on an md device and md_chunk_alignment or
+        # data_alignment_detection are enabled, then this setting is ignored.
+        # Otherwise, md_chunk_alignment and data_alignment_detection are
+        # disabled if this is set. Set to 0 to use the default alignment or the
+        # page size, if larger.
+        data_alignment = 0
+
+        # Configuration option devices/data_alignment_offset_detection.
+        # Detect PV data alignment offset based on sysfs device information.
+        # The start of a PV aligned data area will be shifted by the
+        # alignment_offset exposed in sysfs. This offset is often 0, but may
+        # be non-zero. Certain 4KiB sector drives that compensate for Windows
+        # partitioning will have an alignment_offset of 3584 bytes (sector 7
+        # is the lowest aligned logical block, the 4KiB sectors start at
+        # LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).
+        # pvcreate --dataalignmentoffset will skip this detection.
+        data_alignment_offset_detection = 1
+
+        # Configuration option devices/ignore_suspended_devices.
+        # Ignore DM devices that have I/O suspended while scanning devices.
+        # Otherwise, LVM waits for a suspended device to become accessible.
+        # This should only be needed in recovery situations.
+        ignore_suspended_devices = 0
+
+        # Configuration option devices/ignore_lvm_mirrors.
+        # Do not scan 'mirror' LVs to avoid possible deadlocks.
+        # This avoids possible deadlocks when using the 'mirror' segment type.
+        # This setting determines whether LVs using the 'mirror' segment type
+        # are scanned for LVM labels. This affects the ability of mirrors to
+        # be used as physical volumes. If this setting is enabled, it is
+        # impossible to create VGs on top of mirror LVs, i.e. to stack VGs on
+        # mirror LVs. If this setting is disabled, allowing mirror LVs to be
+        # scanned, it may cause LVM processes and I/O to the mirror to become
+        # blocked. This is due to the way that the mirror segment type handles
+        # failures. In order for the hang to occur, an LVM command must be run
+        # just after a failure and before the automatic LVM repair process
+        # takes place, or there must be failures in multiple mirrors in the
+        # same VG at the same time with write failures occurring moments before
+        # a scan of the mirror's labels. The 'mirror' scanning problems do not
+        # apply to LVM RAID types like 'raid1' which handle failures in a
+        # different way, making them a better choice for VG stacking.
+        ignore_lvm_mirrors = 1
+
+        # Configuration option devices/disable_after_error_count.
+        # Number of I/O errors after which a device is skipped.
+        # During each LVM operation, errors received from each device are
+        # counted. If the counter of a device exceeds the limit set here,
+        # no further I/O is sent to that device for the remainder of the
+        # operation. Setting this to 0 disables the counters altogether.
+        disable_after_error_count = 0
+
+        # Configuration option devices/require_restorefile_with_uuid.
+        # Allow use of pvcreate --uuid without requiring --restorefile.
+        require_restorefile_with_uuid = 1
+
+        # Configuration option devices/pv_min_size.
+        # Minimum size in KiB of block devices which can be used as PVs.
+        # In a clustered environment all nodes must use the same value.
+        # Any value smaller than 512KiB is ignored. The previous built-in
+        # value was 512.
+        pv_min_size = 2048
+
+        # Configuration option devices/issue_discards.
+        # Issue discards to PVs that are no longer used by an LV.
+        # Discards are sent to an LV's underlying physical volumes when the LV
+        # is no longer using the physical volumes' space, e.g. lvremove,
+        # lvreduce. Discards inform the storage that a region is no longer
+        # used. Storage that supports discards advertises the protocol-specific
+        # way discards should be issued by the kernel (TRIM, UNMAP, or
+        # WRITE SAME with UNMAP bit set). Not all storage will support or
+        # benefit from discards, but SSDs and thinly provisioned LUNs
+        # generally do. If enabled, discards will only be issued if both the
+        # storage and kernel provide support.
+        issue_discards = 1
 }
 
 # Configuration section allocation.
 # How LVM selects space and applies properties to LVs.
 allocation {
 
-	# Configuration option allocation/cling_tag_list.
-	# Advise LVM which PVs to use when searching for new space.
-	# When searching for free space to extend an LV, the 'cling' allocation
-	# policy will choose space on the same PVs as the last segment of the
-	# existing LV. If there is insufficient space and a list of tags is
-	# defined here, it will check whether any of them are attached to the
-	# PVs concerned and then seek to match those PV tags between existing
-	# extents and new extents.
-	# 
-	# Example
-	# Use the special tag "@*" as a wildcard to match any PV tag:
-	# cling_tag_list = [ "@*" ]
-	# LVs are mirrored between two sites within a single VG, and
-	# PVs are tagged with either @site1 or @site2 to indicate where
-	# they are situated:
-	# cling_tag_list = [ "@site1", "@site2" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option allocation/maximise_cling.
-	# Use a previous allocation algorithm.
-	# Changes made in version 2.02.85 extended the reach of the 'cling'
-	# policies to detect more situations where data can be grouped onto
-	# the same disks. This setting can be used to disable the changes
-	# and revert to the previous algorithm.
-	maximise_cling = 1
-
-	# Configuration option allocation/use_blkid_wiping.
-	# Use blkid to detect existing signatures on new PVs and LVs.
-	# The blkid library can detect more signatures than the native LVM
-	# detection code, but may take longer. LVM needs to be compiled with
-	# blkid wiping support for this setting to apply. LVM native detection
-	# code is currently able to recognize: MD device signatures,
-	# swap signature, and LUKS signatures. To see the list of signatures
-	# recognized by blkid, check the output of the 'blkid -k' command.
-	use_blkid_wiping = 1
-
-	# Configuration option allocation/wipe_signatures_when_zeroing_new_lvs.
-	# Look for and erase any signatures while zeroing a new LV.
-	# The --wipesignatures option overrides this setting.
-	# Zeroing is controlled by the -Z/--zero option, and if not specified,
-	# zeroing is used by default if possible. Zeroing simply overwrites the
-	# first 4KiB of a new LV with zeroes and does no signature detection or
-	# wiping. Signature wiping goes beyond zeroing and detects exact types
-	# and positions of signatures within the whole LV. It provides a
-	# cleaner LV after creation as all known signatures are wiped. The LV
-	# is not claimed incorrectly by other tools because of old signatures
-	# from previous use. The number of signatures that LVM can detect
-	# depends on the detection code that is selected (see
-	# use_blkid_wiping.) Wiping each detected signature must be confirmed.
-	# When this setting is disabled, signatures on new LVs are not detected
-	# or erased unless the --wipesignatures option is used directly.
-	wipe_signatures_when_zeroing_new_lvs = 1
-
-	# Configuration option allocation/mirror_logs_require_separate_pvs.
-	# Mirror logs and images will always use different PVs.
-	# The default setting changed in version 2.02.85.
-	mirror_logs_require_separate_pvs = 0
-
-	# Configuration option allocation/cache_pool_metadata_require_separate_pvs.
-	# Cache pool metadata and data will always use different PVs.
-	cache_pool_metadata_require_separate_pvs = 0
-
-	# Configuration option allocation/cache_mode.
-	# The default cache mode used for new cache.
-	# 
-	# Accepted values:
-	#   writethrough
-	#     Data blocks are immediately written from the cache to disk.
-	#   writeback
-	#     Data blocks are written from the cache back to disk after some
-	#     delay to improve performance.
-	# 
-	# This setting replaces allocation/cache_pool_cachemode.
-	# This configuration option has an automatic default value.
-	# cache_mode = "writethrough"
-
-	# Configuration option allocation/cache_policy.
-	# The default cache policy used for new cache volume.
-	# Since kernel 4.2 the default policy is smq (Stochastic multiqueue),
-	# otherwise the older mq (Multiqueue) policy is selected.
-	# This configuration option does not have a default value defined.
-
-	# Configuration section allocation/cache_settings.
-	# Settings for the cache policy.
-	# See documentation for individual cache policies for more info.
-	# This configuration section has an automatic default value.
-	# cache_settings {
-	# }
-
-	# Configuration option allocation/cache_pool_chunk_size.
-	# The minimal chunk size in KiB for cache pool volumes.
-	# Using a chunk_size that is too large can result in wasteful use of
-	# the cache, where small reads and writes can cause large sections of
-	# an LV to be mapped into the cache. However, choosing a chunk_size
-	# that is too small can result in more overhead trying to manage the
-	# numerous chunks that become mapped into the cache. The former is
-	# more of a problem than the latter in most cases, so the default is
-	# on the smaller end of the spectrum. Supported values range from
-	# 32KiB to 1GiB in multiples of 32.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option allocation/thin_pool_metadata_require_separate_pvs.
-	# Thin pool metadata and data will always use different PVs.
-	thin_pool_metadata_require_separate_pvs = 0
-
-	# Configuration option allocation/thin_pool_zero.
-	# Thin pool data chunks are zeroed before they are first used.
-	# Zeroing with a larger thin pool chunk size reduces performance.
-	# This configuration option has an automatic default value.
-	# thin_pool_zero = 1
-
-	# Configuration option allocation/thin_pool_discards.
-	# The discards behaviour of thin pool volumes.
-	# 
-	# Accepted values:
-	#   ignore
-	#   nopassdown
-	#   passdown
-	# 
-	# This configuration option has an automatic default value.
-	# thin_pool_discards = "passdown"
-
-	# Configuration option allocation/thin_pool_chunk_size_policy.
-	# The chunk size calculation policy for thin pool volumes.
-	# 
-	# Accepted values:
-	#   generic
-	#     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
-	#     the chunk size based on estimation and device hints exposed in
-	#     sysfs - the minimum_io_size. The chunk size is always at least
-	#     64KiB.
-	#   performance
-	#     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
-	#     the chunk size for performance based on device hints exposed in
-	#     sysfs - the optimal_io_size. The chunk size is always at least
-	#     512KiB.
-	# 
-	# This configuration option has an automatic default value.
-	# thin_pool_chunk_size_policy = "generic"
-
-	# Configuration option allocation/thin_pool_chunk_size.
-	# The minimal chunk size in KiB for thin pool volumes.
-	# Larger chunk sizes may improve performance for plain thin volumes,
-	# however using them for snapshot volumes is less efficient, as it
-	# consumes more space and takes extra time for copying. When unset,
-	# lvm tries to estimate chunk size starting from 64KiB. Supported
-	# values are in the range 64KiB to 1GiB.
-	# This configuration option does not have a default value defined.
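Reading the two options above together, the chosen chunk size can be sketched as follows (sizes in KiB; a simplification that keeps only the policy floors stated above and assumes the sysfs hints are already converted to KiB, so LVM's actual estimation is more involved):

```python
def thin_pool_chunk_size_kib(policy, minimum_io_size, optimal_io_size,
                             explicit=None):
    # 'explicit' stands in for thin_pool_chunk_size; when unset, estimate
    # from the sysfs hint named for each policy, clamped to the floor
    # given above (64 KiB for generic, 512 KiB for performance).
    if explicit:
        return explicit
    if policy == "generic":
        return max(64, minimum_io_size)
    if policy == "performance":
        return max(512, optimal_io_size)
    raise ValueError(f"unknown policy: {policy}")

print(thin_pool_chunk_size_kib("generic", minimum_io_size=4,
                               optimal_io_size=0))        # 64
print(thin_pool_chunk_size_kib("performance", minimum_io_size=512,
                               optimal_io_size=1024))     # 1024
print(thin_pool_chunk_size_kib("generic", 4, 0, explicit=256))  # 256
```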
-
-	# Configuration option allocation/physical_extent_size.
-	# Default physical extent size in KiB to use for new VGs.
-	# This configuration option has an automatic default value.
-	# physical_extent_size = 4096
+        # Configuration option allocation/cling_tag_list.
+        # Advise LVM which PVs to use when searching for new space.
+        # When searching for free space to extend an LV, the 'cling' allocation
+        # policy will choose space on the same PVs as the last segment of the
+        # existing LV. If there is insufficient space and a list of tags is
+        # defined here, it will check whether any of them are attached to the
+        # PVs concerned and then seek to match those PV tags between existing
+        # extents and new extents.
+        # 
+        # Example
+        # Use the special tag "@*" as a wildcard to match any PV tag:
+        # cling_tag_list = [ "@*" ]
+        # LVs are mirrored between two sites within a single VG, and
+        # PVs are tagged with either @site1 or @site2 to indicate where
+        # they are situated:
+        # cling_tag_list = [ "@site1", "@site2" ]
+        # 
+        # This configuration option does not have a default value defined.
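The tag-matching step described above can be sketched roughly as follows (plain Python, not LVM code; names and the candidate representation are illustrative assumptions):

```python
def cling_candidates(existing_pv_tags, candidates, cling_tag_list):
    # candidates: {pv_name: set_of_tags}. Keep PVs that share a listed tag
    # with the PVs holding the LV's existing extents; "@*" is the wildcard
    # meaning any shared tag qualifies.
    watched = {t.lstrip("@") for t in cling_tag_list}
    def shares(tags):
        common = existing_pv_tags & tags
        return bool(common) and ("*" in watched or bool(common & watched))
    return [pv for pv, tags in candidates.items() if shares(tags)]

pvs = {"/dev/sdb1": {"site1"}, "/dev/sdc1": {"site2"}, "/dev/sdd1": set()}
print(cling_candidates({"site1"}, pvs, ["@site1", "@site2"]))  # ['/dev/sdb1']
print(cling_candidates({"site2"}, pvs, ["@*"]))                # ['/dev/sdc1']
```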
+
+        # Configuration option allocation/maximise_cling.
+        # Use a previous allocation algorithm.
+        # Changes made in version 2.02.85 extended the reach of the 'cling'
+        # policies to detect more situations where data can be grouped onto
+        # the same disks. This setting can be used to disable the changes
+        # and revert to the previous algorithm.
+        maximise_cling = 1
+
+        # Configuration option allocation/use_blkid_wiping.
+        # Use blkid to detect existing signatures on new PVs and LVs.
+        # The blkid library can detect more signatures than the native LVM
+        # detection code, but may take longer. LVM needs to be compiled with
+        # blkid wiping support for this setting to apply. LVM native detection
+        # code is currently able to recognize: MD device signatures,
+        # swap signature, and LUKS signatures. To see the list of signatures
+        # recognized by blkid, check the output of the 'blkid -k' command.
+        use_blkid_wiping = 1
+
+        # Configuration option allocation/wipe_signatures_when_zeroing_new_lvs.
+        # Look for and erase any signatures while zeroing a new LV.
+        # The --wipesignatures option overrides this setting.
+        # Zeroing is controlled by the -Z/--zero option, and if not specified,
+        # zeroing is used by default if possible. Zeroing simply overwrites the
+        # first 4KiB of a new LV with zeroes and does no signature detection or
+        # wiping. Signature wiping goes beyond zeroing and detects exact types
+        # and positions of signatures within the whole LV. It provides a
+        # cleaner LV after creation as all known signatures are wiped. The LV
+        # is not claimed incorrectly by other tools because of old signatures
+        # from previous use. The number of signatures that LVM can detect
+        # depends on the detection code that is selected (see
+        # use_blkid_wiping.) Wiping each detected signature must be confirmed.
+        # When this setting is disabled, signatures on new LVs are not detected
+        # or erased unless the --wipesignatures option is used directly.
+        wipe_signatures_when_zeroing_new_lvs = 1
+
+        # Configuration option allocation/mirror_logs_require_separate_pvs.
+        # Mirror logs and images will always use different PVs.
+        # The default setting changed in version 2.02.85.
+        mirror_logs_require_separate_pvs = 0
+
+        # Configuration option allocation/cache_pool_metadata_require_separate_pvs.
+        # Cache pool metadata and data will always use different PVs.
+        cache_pool_metadata_require_separate_pvs = 0
+
+        # Configuration option allocation/cache_mode.
+        # The default cache mode used for new cache.
+        # 
+        # Accepted values:
+        #   writethrough
+        #     Data blocks are immediately written from the cache to disk.
+        #   writeback
+        #     Data blocks are written from the cache back to disk after some
+        #     delay to improve performance.
+        # 
+        # This setting replaces allocation/cache_pool_cachemode.
+        # This configuration option has an automatic default value.
+        # cache_mode = "writethrough"
+
+        # Configuration option allocation/cache_policy.
+        # The default cache policy used for new cache volume.
+        # Since kernel 4.2 the default policy is smq (Stochastic multiqueue),
+        # otherwise the older mq (Multiqueue) policy is selected.
+        # This configuration option does not have a default value defined.
+
+        # Configuration section allocation/cache_settings.
+        # Settings for the cache policy.
+        # See documentation for individual cache policies for more info.
+        # This configuration section has an automatic default value.
+        # cache_settings {
+        # }
+
+        # Configuration option allocation/cache_pool_chunk_size.
+        # The minimal chunk size in KiB for cache pool volumes.
+        # Using a chunk_size that is too large can result in wasteful use of
+        # the cache, where small reads and writes can cause large sections of
+        # an LV to be mapped into the cache. However, choosing a chunk_size
+        # that is too small can result in more overhead trying to manage the
+        # numerous chunks that become mapped into the cache. The former is
+        # more of a problem than the latter in most cases, so the default is
+        # on the smaller end of the spectrum. Supported values range from
+        # 32KiB to 1GiB in multiples of 32.
+        # This configuration option does not have a default value defined.
+
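The cache_pool_chunk_size constraint described above (32 KiB to 1 GiB, in multiples of 32) can be sketched as a small validator. This is an illustrative check of the documented range, not lvm2's actual parsing code:

```python
def valid_cache_pool_chunk_size(kib: int) -> bool:
    """Check a cache pool chunk size in KiB against the documented
    constraint: 32 KiB .. 1 GiB, in multiples of 32."""
    return 32 <= kib <= 1024 * 1024 and kib % 32 == 0

# valid_cache_pool_chunk_size(64)              -> True
# valid_cache_pool_chunk_size(48)              -> False (not a multiple of 32)
# valid_cache_pool_chunk_size(2 * 1024 * 1024) -> False (above 1 GiB)
```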
+        # Configuration option allocation/thin_pool_metadata_require_separate_pvs.
+        # Thin pool metadata and data will always use different PVs.
+        thin_pool_metadata_require_separate_pvs = 0
+
+        # Configuration option allocation/thin_pool_zero.
+        # Thin pool data chunks are zeroed before they are first used.
+        # Zeroing with a larger thin pool chunk size reduces performance.
+        # This configuration option has an automatic default value.
+        # thin_pool_zero = 1
+
+        # Configuration option allocation/thin_pool_discards.
+        # The discards behaviour of thin pool volumes.
+        # 
+        # Accepted values:
+        #   ignore
+        #   nopassdown
+        #   passdown
+        # 
+        # This configuration option has an automatic default value.
+        # thin_pool_discards = "passdown"
+
+        # Configuration option allocation/thin_pool_chunk_size_policy.
+        # The chunk size calculation policy for thin pool volumes.
+        # 
+        # Accepted values:
+        #   generic
+        #     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
+        #     the chunk size based on estimation and device hints exposed in
+        #     sysfs - the minimum_io_size. The chunk size is always at least
+        #     64KiB.
+        #   performance
+        #     If thin_pool_chunk_size is defined, use it. Otherwise, calculate
+        #     the chunk size for performance based on device hints exposed in
+        #     sysfs - the optimal_io_size. The chunk size is always at least
+        #     512KiB.
+        # 
+        # This configuration option has an automatic default value.
+        # thin_pool_chunk_size_policy = "generic"
+
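The two chunk-size policies described above can be read roughly as the following selection logic. This is an illustrative sketch of the comments, assuming hypothetical hint values taken from sysfs; lvm2's real implementation also rounds and clamps the result to the supported range:

```python
def thin_pool_chunk_size_kib(policy, configured=None,
                             minimum_io_size_kib=0, optimal_io_size_kib=0):
    # An explicit thin_pool_chunk_size always wins, under either policy.
    if configured is not None:
        return configured
    if policy == "generic":
        # Estimate from minimum_io_size; never below 64 KiB.
        return max(64, minimum_io_size_kib)
    if policy == "performance":
        # Estimate from optimal_io_size; never below 512 KiB.
        return max(512, optimal_io_size_kib)
    raise ValueError("unknown policy: %s" % policy)
```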
+        # Configuration option allocation/thin_pool_chunk_size.
+        # The minimal chunk size in KiB for thin pool volumes.
+        # Larger chunk sizes may improve performance for plain thin volumes,
+        # however using them for snapshot volumes is less efficient, as it
+        # consumes more space and takes extra time for copying. When unset,
+        # lvm tries to estimate chunk size starting from 64KiB. Supported
+        # values are in the range 64KiB to 1GiB.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option allocation/physical_extent_size.
+        # Default physical extent size in KiB to use for new VGs.
+        # This configuration option has an automatic default value.
+        # physical_extent_size = 4096
 }
 
 # Configuration section log.
 # How LVM log information is reported.
 log {
 
-	# Configuration option log/verbose.
-	# Controls the messages sent to stdout or stderr.
-	verbose = 0
-
-	# Configuration option log/silent.
-	# Suppress all non-essential messages from stdout.
-	# This has the same effect as -qq. When enabled, the following commands
-	# still produce output: dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck,
-	# pvdisplay, pvs, version, vgcfgrestore -l, vgdisplay, vgs.
-	# Non-essential messages are shifted from log level 4 to log level 5
-	# for syslog and lvm2_log_fn purposes.
-	# Any 'yes' or 'no' questions not overridden by other arguments are
-	# suppressed and default to 'no'.
-	silent = 0
-
-	# Configuration option log/syslog.
-	# Send log messages through syslog.
-	syslog = 1
-
-	# Configuration option log/file.
-	# Write error and debug log messages to a file specified here.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option log/overwrite.
-	# Overwrite the log file each time the program is run.
-	overwrite = 0
-
-	# Configuration option log/level.
-	# The level of log messages that are sent to the log file or syslog.
-	# There are 6 syslog-like log levels currently in use: 2 to 7 inclusive.
-	# 7 is the most verbose (LOG_DEBUG).
-	level = 0
-
-	# Configuration option log/indent.
-	# Indent messages according to their severity.
-	indent = 1
-
-	# Configuration option log/command_names.
-	# Display the command name on each line of output.
-	command_names = 0
-
-	# Configuration option log/prefix.
-	# A prefix to use before the log message text.
-	# (After the command name, if selected).
-	# Two spaces allows you to see/grep the severity of each message.
-	# To make the messages look similar to the original LVM tools use:
-	# indent = 0, command_names = 1, prefix = " -- "
-	prefix = "  "
-
-	# Configuration option log/activation.
-	# Log messages during activation.
-	# Don't use this in low memory situations (can deadlock).
-	activation = 0
-
-	# Configuration option log/debug_classes.
-	# Select log messages by class.
-	# Some debugging messages are assigned to a class and only appear in
-	# debug output if the class is listed here. Classes currently
-	# available: memory, devices, activation, allocation, lvmetad,
-	# metadata, cache, locking, lvmpolld. Use "all" to see everything.
-	debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld" ]
+        # Configuration option log/verbose.
+        # Controls the messages sent to stdout or stderr.
+        verbose = 0
+
+        # Configuration option log/silent.
+        # Suppress all non-essential messages from stdout.
+        # This has the same effect as -qq. When enabled, the following commands
+        # still produce output: dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck,
+        # pvdisplay, pvs, version, vgcfgrestore -l, vgdisplay, vgs.
+        # Non-essential messages are shifted from log level 4 to log level 5
+        # for syslog and lvm2_log_fn purposes.
+        # Any 'yes' or 'no' questions not overridden by other arguments are
+        # suppressed and default to 'no'.
+        silent = 0
+
+        # Configuration option log/syslog.
+        # Send log messages through syslog.
+        syslog = 1
+
+        # Configuration option log/file.
+        # Write error and debug log messages to a file specified here.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option log/overwrite.
+        # Overwrite the log file each time the program is run.
+        overwrite = 0
+
+        # Configuration option log/level.
+        # The level of log messages that are sent to the log file or syslog.
+        # There are 6 syslog-like log levels currently in use: 2 to 7 inclusive.
+        # 7 is the most verbose (LOG_DEBUG).
+        level = 0
+
+        # Configuration option log/indent.
+        # Indent messages according to their severity.
+        indent = 1
+
+        # Configuration option log/command_names.
+        # Display the command name on each line of output.
+        command_names = 0
+
+        # Configuration option log/prefix.
+        # A prefix to use before the log message text.
+        # (After the command name, if selected).
+        # Two spaces allows you to see/grep the severity of each message.
+        # To make the messages look similar to the original LVM tools use:
+        # indent = 0, command_names = 1, prefix = " -- "
+        prefix = "  "
+
+        # Configuration option log/activation.
+        # Log messages during activation.
+        # Don't use this in low memory situations (can deadlock).
+        activation = 0
+
+        # Configuration option log/debug_classes.
+        # Select log messages by class.
+        # Some debugging messages are assigned to a class and only appear in
+        # debug output if the class is listed here. Classes currently
+        # available: memory, devices, activation, allocation, lvmetad,
+        # metadata, cache, locking, lvmpolld. Use "all" to see everything.
+        debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld" ]
 }
 
 # Configuration section backup.
@@ -535,957 +539,957 @@
 # stored in a human readable text format.
 backup {
 
-	# Configuration option backup/backup.
-	# Maintain a backup of the current metadata configuration.
-	# Think very hard before turning this off!
-	backup = 1
-
-	# Configuration option backup/backup_dir.
-	# Location of the metadata backup files.
-	# Remember to back up this directory regularly!
-	backup_dir = "/etc/lvm/backup"
-
-	# Configuration option backup/archive.
-	# Maintain an archive of old metadata configurations.
-	# Think very hard before turning this off.
-	archive = 1
-
-	# Configuration option backup/archive_dir.
-	# Location of the metdata archive files.
-	# Remember to back up this directory regularly!
-	archive_dir = "/etc/lvm/archive"
-
-	# Configuration option backup/retain_min.
-	# Minimum number of archives to keep.
-	retain_min = 10
-
-	# Configuration option backup/retain_days.
-	# Minimum number of days to keep archive files.
-	retain_days = 30
+        # Configuration option backup/backup.
+        # Maintain a backup of the current metadata configuration.
+        # Think very hard before turning this off!
+        backup = 1
+
+        # Configuration option backup/backup_dir.
+        # Location of the metadata backup files.
+        # Remember to back up this directory regularly!
+        backup_dir = "/etc/lvm/backup"
+
+        # Configuration option backup/archive.
+        # Maintain an archive of old metadata configurations.
+        # Think very hard before turning this off.
+        archive = 1
+
+        # Configuration option backup/archive_dir.
+        # Location of the metadata archive files.
+        # Remember to back up this directory regularly!
+        archive_dir = "/etc/lvm/archive"
+
+        # Configuration option backup/retain_min.
+        # Minimum number of archives to keep.
+        retain_min = 10
+
+        # Configuration option backup/retain_days.
+        # Minimum number of days to keep archive files.
+        retain_days = 30
 }
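The retain_min / retain_days settings above combine as two minimums: an archive only becomes a removal candidate once more than retain_min newer archives exist and it is older than retain_days. A sketch of that pruning rule (illustrative only, not the actual lvm2 archiving code):

```python
from datetime import datetime, timedelta

def prune_candidates(archives, retain_min=10, retain_days=30, now=None):
    """archives: list of (name, timestamp) tuples.
    Returns names that are safe to remove under both retention minimums."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=retain_days)
    newest_first = sorted(archives, key=lambda a: a[1], reverse=True)
    return [name for i, (name, ts) in enumerate(newest_first)
            if i >= retain_min and ts < cutoff]
```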
 
 # Configuration section shell.
 # Settings for running LVM in shell (readline) mode.
 shell {
 
-	# Configuration option shell/history_size.
-	# Number of lines of history to store in ~/.lvm_history.
-	history_size = 100
+        # Configuration option shell/history_size.
+        # Number of lines of history to store in ~/.lvm_history.
+        history_size = 100
 }
 
 # Configuration section global.
 # Miscellaneous global LVM settings.
 global {
 
-	# Configuration option global/umask.
-	# The file creation mask for any files and directories created.
-	# Interpreted as octal if the first digit is zero.
-	umask = 077
-
-	# Configuration option global/test.
-	# No on-disk metadata changes will be made in test mode.
-	# Equivalent to having the -t option on every command.
-	test = 0
-
-	# Configuration option global/units.
-	# Default value for --units argument.
-	units = "h"
-
-	# Configuration option global/si_unit_consistency.
-	# Distinguish between powers of 1024 and 1000 bytes.
-	# The LVM commands distinguish between powers of 1024 bytes,
-	# e.g. KiB, MiB, GiB, and powers of 1000 bytes, e.g. KB, MB, GB.
-	# If scripts depend on the old behaviour, disable this setting
-	# temporarily until they are updated.
-	si_unit_consistency = 1
-
-	# Configuration option global/suffix.
-	# Display unit suffix for sizes.
-	# This setting has no effect if the units are in human-readable form
-	# (global/units = "h") in which case the suffix is always displayed.
-	suffix = 1
-
-	# Configuration option global/activation.
-	# Enable/disable communication with the kernel device-mapper.
-	# Disable to use the tools to manipulate LVM metadata without
-	# activating any logical volumes. If the device-mapper driver
-	# is not present in the kernel, disabling this should suppress
-	# the error messages.
-	activation = 1
-
-	# Configuration option global/fallback_to_lvm1.
-	# Try running LVM1 tools if LVM cannot communicate with DM.
-	# This option only applies to 2.4 kernels and is provided to help
-	# switch between device-mapper kernels and LVM1 kernels. The LVM1
-	# tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1.
-	# They will stop working once the lvm2 on-disk metadata format is used.
-	# This configuration option has an automatic default value.
-	# fallback_to_lvm1 = 0
-
-	# Configuration option global/format.
-	# The default metadata format that commands should use.
-	# The -M 1|2 option overrides this setting.
-	# 
-	# Accepted values:
-	#   lvm1
-	#   lvm2
-	# 
-	# This configuration option has an automatic default value.
-	# format = "lvm2"
-
-	# Configuration option global/format_libraries.
-	# Shared libraries that process different metadata formats.
-	# If support for LVM1 metadata was compiled as a shared library use
-	# format_libraries = "liblvm2format1.so"
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/segment_libraries.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/proc.
-	# Location of proc filesystem.
-	# This configuration option is advanced.
-	proc = "/proc"
-
-	# Configuration option global/etc.
-	# Location of /etc system configuration directory.
-	etc = "/etc"
-
-	# Configuration option global/locking_type.
-	# Type of locking to use.
-	# 
-	# Accepted values:
-	#   0
-	#     Turns off locking. Warning: this risks metadata corruption if
-	#     commands run concurrently.
-	#   1
-	#     LVM uses local file-based locking, the standard mode.
-	#   2
-	#     LVM uses the external shared library locking_library.
-	#   3
-	#     LVM uses built-in clustered locking with clvmd.
-	#     This is incompatible with lvmetad. If use_lvmetad is enabled,
-	#     LVM prints a warning and disables lvmetad use.
-	#   4
-	#     LVM uses read-only locking which forbids any operations that
-	#     might change metadata.
-	#   5
-	#     Offers dummy locking for tools that do not need any locks.
-	#     You should not need to set this directly; the tools will select
-	#     when to use it instead of the configured locking_type.
-	#     Do not use lvmetad or the kernel device-mapper driver with this
-	#     locking type. It is used by the --readonly option that offers
-	#     read-only access to Volume Group metadata that cannot be locked
-	#     safely because it belongs to an inaccessible domain and might be
-	#     in use, for example a virtual machine image or a disk that is
-	#     shared by a clustered machine.
-	# 
-	locking_type = 1
-
-	# Configuration option global/wait_for_locks.
-	# When disabled, fail if a lock request would block.
-	wait_for_locks = 1
-
-	# Configuration option global/fallback_to_clustered_locking.
-	# Attempt to use built-in cluster locking if locking_type 2 fails.
-	# If using external locking (type 2) and initialisation fails, with
-	# this enabled, an attempt will be made to use the built-in clustered
-	# locking. Disable this if using a customised locking_library.
-	fallback_to_clustered_locking = 1
-
-	# Configuration option global/fallback_to_local_locking.
-	# Use locking_type 1 (local) if locking_type 2 or 3 fail.
-	# If an attempt to initialise type 2 or type 3 locking failed, perhaps
-	# because cluster components such as clvmd are not running, with this
-	# enabled, an attempt will be made to use local file-based locking
-	# (type 1). If this succeeds, only commands against local VGs will
-	# proceed. VGs marked as clustered will be ignored.
-	fallback_to_local_locking = 1
-
-	# Configuration option global/locking_dir.
-	# Directory to use for LVM command file locks.
-	# Local non-LV directory that holds file-based locks while commands are
-	# in progress. A directory like /tmp that may get wiped on reboot is OK.
-	locking_dir = "/run/lock/lvm"
-
-	# Configuration option global/prioritise_write_locks.
-	# Allow quicker VG write access during high volume read access.
-	# When there are competing read-only and read-write access requests for
-	# a volume group's metadata, instead of always granting the read-only
-	# requests immediately, delay them to allow the read-write requests to
-	# be serviced. Without this setting, write access may be stalled by a
-	# high volume of read-only requests. This option only affects
-	# locking_type 1 viz. local file-based locking.
-	prioritise_write_locks = 1
-
-	# Configuration option global/library_dir.
-	# Search this directory first for shared libraries.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/locking_library.
-	# The external locking library to use for locking_type 2.
-	# This configuration option has an automatic default value.
-	# locking_library = "liblvm2clusterlock.so"
-
-	# Configuration option global/abort_on_internal_errors.
-	# Abort a command that encounters an internal error.
-	# Treat any internal errors as fatal errors, aborting the process that
-	# encountered the internal error. Please only enable for debugging.
-	abort_on_internal_errors = 0
-
-	# Configuration option global/detect_internal_vg_cache_corruption.
-	# Internal verification of VG structures.
-	# Check if CRC matches when a parsed VG is used multiple times. This
-	# is useful to catch unexpected changes to cached VG structures.
-	# Please only enable for debugging.
-	detect_internal_vg_cache_corruption = 0
-
-	# Configuration option global/metadata_read_only.
-	# No operations that change on-disk metadata are permitted.
-	# Additionally, read-only commands that encounter metadata in need of
-	# repair will still be allowed to proceed exactly as if the repair had
-	# been performed (except for the unchanged vg_seqno). Inappropriate
-	# use could mess up your system, so seek advice first!
-	metadata_read_only = 0
-
-	# Configuration option global/mirror_segtype_default.
-	# The segment type used by the short mirroring option -m.
-	# The --type mirror|raid1 option overrides this setting.
-	# 
-	# Accepted values:
-	#   mirror
-	#     The original RAID1 implementation from LVM/DM. It is
-	#     characterized by a flexible log solution (core, disk, mirrored),
-	#     and by the necessity to block I/O while handling a failure.
-	#     There is an inherent race in the dmeventd failure handling logic
-	#     with snapshots of devices using this type of RAID1 that in the
-	#     worst case could cause a deadlock. (Also see
-	#     devices/ignore_lvm_mirrors.)
-	#   raid1
-	#     This is a newer RAID1 implementation using the MD RAID1
-	#     personality through device-mapper. It is characterized by a
-	#     lack of log options. (A log is always allocated for every
-	#     device and they are placed on the same device as the image,
-	#     so no separate devices are required.) This mirror
-	#     implementation does not require I/O to be blocked while
-	#     handling a failure. This mirror implementation is not
-	#     cluster-aware and cannot be used in a shared (active/active)
-	#     fashion in a cluster.
-	# 
-	mirror_segtype_default = "raid1"
-
-	# Configuration option global/raid10_segtype_default.
-	# The segment type used by the -i -m combination.
-	# The --type raid10|mirror option overrides this setting.
-	# The --stripes/-i and --mirrors/-m options can both be specified
-	# during the creation of a logical volume to use both striping and
-	# mirroring for the LV. There are two different implementations.
-	# 
-	# Accepted values:
-	#   raid10
-	#     LVM uses MD's RAID10 personality through DM. This is the
-	#     preferred option.
-	#   mirror
-	#     LVM layers the 'mirror' and 'stripe' segment types. The layering
-	#     is done by creating a mirror LV on top of striped sub-LVs,
-	#     effectively creating a RAID 0+1 array. The layering is suboptimal
-	#     in terms of providing redundancy and performance.
-	# 
-	raid10_segtype_default = "raid10"
-
-	# Configuration option global/sparse_segtype_default.
-	# The segment type used by the -V -L combination.
-	# The --type snapshot|thin option overrides this setting.
-	# The combination of -V and -L options creates a sparse LV. There are
-	# two different implementations.
-	# 
-	# Accepted values:
-	#   snapshot
-	#     The original snapshot implementation from LVM/DM. It uses an old
-	#     snapshot that mixes data and metadata within a single COW
-	#     storage volume and performs poorly when the size of stored data
-	#     passes hundreds of MB.
-	#   thin
-	#     A newer implementation that uses thin provisioning. It has a
-	#     bigger minimal chunk size (64KiB) and uses a separate volume for
-	#     metadata. It has better performance, especially when more data
-	#     is used. It also supports full snapshots.
-	# 
-	sparse_segtype_default = "thin"
-
-	# Configuration option global/lvdisplay_shows_full_device_path.
-	# Enable this to reinstate the previous lvdisplay name format.
-	# The default format for displaying LV names in lvdisplay was changed
-	# in version 2.02.89 to show the LV name and path separately.
-	# Previously this was always shown as /dev/vgname/lvname even when that
-	# was never a valid path in the /dev filesystem.
-	# This configuration option has an automatic default value.
-	# lvdisplay_shows_full_device_path = 0
-
-	# Configuration option global/use_lvmetad.
-	# Use lvmetad to cache metadata and reduce disk scanning.
-	# When enabled (and running), lvmetad provides LVM commands with VG
-	# metadata and PV state. LVM commands then avoid reading this
-	# information from disks which can be slow. When disabled (or not
-	# running), LVM commands fall back to scanning disks to obtain VG
-	# metadata. lvmetad is kept updated via udev rules which must be set
-	# up for LVM to work correctly. (The udev rules should be installed
-	# by default.) Without a proper udev setup, changes in the system's
-	# block device configuration will be unknown to LVM, and ignored
-	# until a manual 'pvscan --cache' is run. If lvmetad was running
-	# while use_lvmetad was disabled, it must be stopped, use_lvmetad
-	# enabled, and then started. When using lvmetad, LV activation is
-	# switched to an automatic, event-based mode. In this mode, LVs are
-	# activated based on incoming udev events that inform lvmetad when
-	# PVs appear on the system. When a VG is complete (all PVs present),
-	# it is auto-activated. The auto_activation_volume_list setting
-	# controls which LVs are auto-activated (all by default.)
-	# When lvmetad is updated (automatically by udev events, or directly
-	# by pvscan --cache), devices/filter is ignored and all devices are
-	# scanned by default. lvmetad always keeps unfiltered information
-	# which is provided to LVM commands. Each LVM command then filters
-	# based on devices/filter. This does not apply to other, non-regexp,
-	# filtering settings: component filters such as multipath and MD
-	# are checked during pvscan --cache. To filter a device and prevent
-	# scanning from the LVM system entirely, including lvmetad, use
-	# devices/global_filter.
-	use_lvmetad = 1
-
-	# Configuration option global/use_lvmlockd.
-	# Use lvmlockd for locking among hosts using LVM on shared storage.
-	# See lvmlockd(8) for more information.
-	use_lvmlockd = 0
-
-	# Configuration option global/lvmlockd_lock_retries.
-	# Retry lvmlockd lock requests this many times.
-	# This configuration option has an automatic default value.
-	# lvmlockd_lock_retries = 3
-
-	# Configuration option global/sanlock_lv_extend.
-	# Size in MiB to extend the internal LV holding sanlock locks.
-	# The internal LV holds locks for each LV in the VG, and after enough
-	# LVs have been created, the internal LV needs to be extended. lvcreate
-	# will automatically extend the internal LV when needed by the amount
-	# specified here. Setting this to 0 disables the automatic extension
-	# and can cause lvcreate to fail.
-	# This configuration option has an automatic default value.
-	# sanlock_lv_extend = 256
-
-	# Configuration option global/thin_check_executable.
-	# The full path to the thin_check command.
-	# LVM uses this command to check that a thin metadata device is in a
-	# usable state. When a thin pool is activated and after it is
-	# deactivated, this command is run. Activation will only proceed if
-	# the command has an exit status of 0. Set to "" to skip this check.
-	# (Not recommended.) Also see thin_check_options.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# thin_check_executable = "/usr/sbin/thin_check"
-
-	# Configuration option global/thin_dump_executable.
-	# The full path to the thin_dump command.
-	# LVM uses this command to dump thin pool metadata.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# thin_dump_executable = "/usr/sbin/thin_dump"
-
-	# Configuration option global/thin_repair_executable.
-	# The full path to the thin_repair command.
-	# LVM uses this command to repair a thin metadata device if it is in
-	# an unusable state. Also see thin_repair_options.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# thin_repair_executable = "/usr/sbin/thin_repair"
-
-	# Configuration option global/thin_check_options.
-	# List of options passed to the thin_check command.
-	# With thin_check version 2.1 or newer you can add the option
-	# --ignore-non-fatal-errors to let it pass through ignorable errors
-	# and fix them later. With thin_check version 3.2 or newer you should
-	# include the option --clear-needs-check-flag.
-	# This configuration option has an automatic default value.
-	# thin_check_options = [ "-q", "--clear-needs-check-flag" ]
-
-	# Configuration option global/thin_repair_options.
-	# List of options passed to the thin_repair command.
-	# This configuration option has an automatic default value.
-	# thin_repair_options = [ "" ]
-
-	# Configuration option global/thin_disabled_features.
-	# Features to not use in the thin driver.
-	# This can be helpful for testing, or to avoid using a feature that is
-	# causing problems. Features include: block_size, discards,
-	# discards_non_power_2, external_origin, metadata_resize,
-	# external_origin_extend, error_if_no_space.
-	# 
-	# Example
-	# thin_disabled_features = [ "discards", "block_size" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/cache_disabled_features.
-	# Features to not use in the cache driver.
-	# This can be helpful for testing, or to avoid using a feature that is
-	# causing problems. Features include: policy_mq, policy_smq.
-	# 
-	# Example
-	# cache_disabled_features = [ "policy_smq" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/cache_check_executable.
-	# The full path to the cache_check command.
-	# LVM uses this command to check that a cache metadata device is in a
-	# usable state. When a cached LV is activated and after it is
-	# deactivated, this command is run. Activation will only proceed if the
-	# command has an exit status of 0. Set to "" to skip this check.
-	# (Not recommended.) Also see cache_check_options.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# cache_check_executable = "/usr/sbin/cache_check"
-
-	# Configuration option global/cache_dump_executable.
-	# The full path to the cache_dump command.
-	# LVM uses this command to dump cache pool metadata.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# cache_dump_executable = "/usr/sbin/cache_dump"
-
-	# Configuration option global/cache_repair_executable.
-	# The full path to the cache_repair command.
-	# LVM uses this command to repair a cache metadata device if it is in
-	# an unusable state. Also see cache_repair_options.
-	# (See package device-mapper-persistent-data or thin-provisioning-tools)
-	# This configuration option has an automatic default value.
-	# cache_repair_executable = "/usr/sbin/cache_repair"
-
-	# Configuration option global/cache_check_options.
-	# List of options passed to the cache_check command.
-	# With cache_check version 5.0 or newer you should include the option
-	# --clear-needs-check-flag.
-	# This configuration option has an automatic default value.
-	# cache_check_options = [ "-q", "--clear-needs-check-flag" ]
-
-	# Configuration option global/cache_repair_options.
-	# List of options passed to the cache_repair command.
-	# This configuration option has an automatic default value.
-	# cache_repair_options = [ "" ]
-
-	# Configuration option global/system_id_source.
-	# The method LVM uses to set the local system ID.
-	# Volume Groups can also be given a system ID (by vgcreate, vgchange,
-	# or vgimport.) A VG on shared storage devices is accessible only to
-	# the host with a matching system ID. See 'man lvmsystemid' for
-	# information on limitations and correct usage.
-	# 
-	# Accepted values:
-	#   none
-	#     The host has no system ID.
-	#   lvmlocal
-	#     Obtain the system ID from the system_id setting in the 'local'
-	#     section of an lvm configuration file, e.g. lvmlocal.conf.
-	#   uname
-	#     Set the system ID from the hostname (uname) of the system.
-	#     System IDs beginning localhost are not permitted.
-	#   machineid
-	#     Use the contents of the machine-id file to set the system ID.
-	#     Some systems create this file at installation time.
-	#     See 'man machine-id' and global/etc.
-	#   file
-	#     Use the contents of another file (system_id_file) to set the
-	#     system ID.
-	# 
-	system_id_source = "none"
-
-	# Configuration option global/system_id_file.
-	# The full path to the file containing a system ID.
-	# This is used when system_id_source is set to 'file'.
-	# Comments starting with the character # are ignored.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option global/use_lvmpolld.
-	# Use lvmpolld to supervise long running LVM commands.
-	# When enabled, control of long running LVM commands is transferred
-	# from the original LVM command to the lvmpolld daemon. This allows
-	# the operation to continue independent of the original LVM command.
-	# After lvmpolld takes over, the LVM command displays the progress
-	# of the ongoing operation. lvmpolld itself runs LVM commands to
-	# manage the progress of ongoing operations. lvmpolld can be used as
-	# a native systemd service, which allows it to be started on demand,
-	# and to use its own control group. When this option is disabled, LVM
-	# commands will supervise long running operations by forking themselves.
-	use_lvmpolld = 1
+        # Configuration option global/umask.
+        # The file creation mask for any files and directories created.
+        # Interpreted as octal if the first digit is zero.
+        umask = 077
+
+        # Configuration option global/test.
+        # No on-disk metadata changes will be made in test mode.
+        # Equivalent to having the -t option on every command.
+        test = 0
+
+        # Configuration option global/units.
+        # Default value for --units argument.
+        units = "h"
+
+        # Configuration option global/si_unit_consistency.
+        # Distinguish between powers of 1024 and 1000 bytes.
+        # The LVM commands distinguish between powers of 1024 bytes,
+        # e.g. KiB, MiB, GiB, and powers of 1000 bytes, e.g. KB, MB, GB.
+        # If scripts depend on the old behaviour, disable this setting
+        # temporarily until they are updated.
+        si_unit_consistency = 1
+
+        # Configuration option global/suffix.
+        # Display unit suffix for sizes.
+        # This setting has no effect if the units are in human-readable form
+        # (global/units = "h") in which case the suffix is always displayed.
+        suffix = 1
+
+        # Configuration option global/activation.
+        # Enable/disable communication with the kernel device-mapper.
+        # Disable to use the tools to manipulate LVM metadata without
+        # activating any logical volumes. If the device-mapper driver
+        # is not present in the kernel, disabling this should suppress
+        # the error messages.
+        activation = 1
+
+        # Configuration option global/fallback_to_lvm1.
+        # Try running LVM1 tools if LVM cannot communicate with DM.
+        # This option only applies to 2.4 kernels and is provided to help
+        # switch between device-mapper kernels and LVM1 kernels. The LVM1
+        # tools need to be installed with .lvm1 suffixes, e.g. vgscan.lvm1.
+        # They will stop working once the lvm2 on-disk metadata format is used.
+        # This configuration option has an automatic default value.
+        # fallback_to_lvm1 = 0
+
+        # Configuration option global/format.
+        # The default metadata format that commands should use.
+        # The -M 1|2 option overrides this setting.
+        # 
+        # Accepted values:
+        #   lvm1
+        #   lvm2
+        # 
+        # This configuration option has an automatic default value.
+        # format = "lvm2"
+
+        # Configuration option global/format_libraries.
+        # Shared libraries that process different metadata formats.
+        # If support for LVM1 metadata was compiled as a shared library use
+        # format_libraries = "liblvm2format1.so"
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/segment_libraries.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/proc.
+        # Location of proc filesystem.
+        # This configuration option is advanced.
+        proc = "/proc"
+
+        # Configuration option global/etc.
+        # Location of /etc system configuration directory.
+        etc = "/etc"
+
+        # Configuration option global/locking_type.
+        # Type of locking to use.
+        # 
+        # Accepted values:
+        #   0
+        #     Turns off locking. Warning: this risks metadata corruption if
+        #     commands run concurrently.
+        #   1
+        #     LVM uses local file-based locking, the standard mode.
+        #   2
+        #     LVM uses the external shared library locking_library.
+        #   3
+        #     LVM uses built-in clustered locking with clvmd.
+        #     This is incompatible with lvmetad. If use_lvmetad is enabled,
+        #     LVM prints a warning and disables lvmetad use.
+        #   4
+        #     LVM uses read-only locking which forbids any operations that
+        #     might change metadata.
+        #   5
+        #     Offers dummy locking for tools that do not need any locks.
+        #     You should not need to set this directly; the tools will select
+        #     when to use it instead of the configured locking_type.
+        #     Do not use lvmetad or the kernel device-mapper driver with this
+        #     locking type. It is used by the --readonly option that offers
+        #     read-only access to Volume Group metadata that cannot be locked
+        #     safely because it belongs to an inaccessible domain and might be
+        #     in use, for example a virtual machine image or a disk that is
+        #     shared by a clustered machine.
+        # 
+        locking_type = 1
+
+        # Configuration option global/wait_for_locks.
+        # When disabled, fail if a lock request would block.
+        wait_for_locks = 1
+
+        # Configuration option global/fallback_to_clustered_locking.
+        # Attempt to use built-in cluster locking if locking_type 2 fails.
+        # If using external locking (type 2) and initialisation fails, with
+        # this enabled, an attempt will be made to use the built-in clustered
+        # locking. Disable this if using a customised locking_library.
+        fallback_to_clustered_locking = 1
+
+        # Configuration option global/fallback_to_local_locking.
+        # Use locking_type 1 (local) if locking_type 2 or 3 fail.
+        # If an attempt to initialise type 2 or type 3 locking failed, perhaps
+        # because cluster components such as clvmd are not running, with this
+        # enabled, an attempt will be made to use local file-based locking
+        # (type 1). If this succeeds, only commands against local VGs will
+        # proceed. VGs marked as clustered will be ignored.
+        fallback_to_local_locking = 1
+
+        # Configuration option global/locking_dir.
+        # Directory to use for LVM command file locks.
+        # Local non-LV directory that holds file-based locks while commands are
+        # in progress. A directory like /tmp that may get wiped on reboot is OK.
+        locking_dir = "/run/lock/lvm"
+
+        # Configuration option global/prioritise_write_locks.
+        # Allow quicker VG write access during high volume read access.
+        # When there are competing read-only and read-write access requests for
+        # a volume group's metadata, instead of always granting the read-only
+        # requests immediately, delay them to allow the read-write requests to
+        # be serviced. Without this setting, write access may be stalled by a
+        # high volume of read-only requests. This option only affects
+        # locking_type 1 viz. local file-based locking.
+        prioritise_write_locks = 1
+
+        # Configuration option global/library_dir.
+        # Search this directory first for shared libraries.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/locking_library.
+        # The external locking library to use for locking_type 2.
+        # This configuration option has an automatic default value.
+        # locking_library = "liblvm2clusterlock.so"
+
+        # Configuration option global/abort_on_internal_errors.
+        # Abort a command that encounters an internal error.
+        # Treat any internal errors as fatal errors, aborting the process that
+        # encountered the internal error. Please only enable for debugging.
+        abort_on_internal_errors = 0
+
+        # Configuration option global/detect_internal_vg_cache_corruption.
+        # Internal verification of VG structures.
+        # Check if CRC matches when a parsed VG is used multiple times. This
+        # is useful to catch unexpected changes to cached VG structures.
+        # Please only enable for debugging.
+        detect_internal_vg_cache_corruption = 0
+
+        # Configuration option global/metadata_read_only.
+        # No operations that change on-disk metadata are permitted.
+        # Additionally, read-only commands that encounter metadata in need of
+        # repair will still be allowed to proceed exactly as if the repair had
+        # been performed (except for the unchanged vg_seqno). Inappropriate
+        # use could mess up your system, so seek advice first!
+        metadata_read_only = 0
+
+        # Configuration option global/mirror_segtype_default.
+        # The segment type used by the short mirroring option -m.
+        # The --type mirror|raid1 option overrides this setting.
+        # 
+        # Accepted values:
+        #   mirror
+        #     The original RAID1 implementation from LVM/DM. It is
+        #     characterized by a flexible log solution (core, disk, mirrored),
+        #     and by the necessity to block I/O while handling a failure.
+        #     There is an inherent race in the dmeventd failure handling logic
+        #     with snapshots of devices using this type of RAID1 that in the
+        #     worst case could cause a deadlock. (Also see
+        #     devices/ignore_lvm_mirrors.)
+        #   raid1
+        #     This is a newer RAID1 implementation using the MD RAID1
+        #     personality through device-mapper. It is characterized by a
+        #     lack of log options. (A log is always allocated for every
+        #     device and they are placed on the same device as the image,
+        #     so no separate devices are required.) This mirror
+        #     implementation does not require I/O to be blocked while
+        #     handling a failure. This mirror implementation is not
+        #     cluster-aware and cannot be used in a shared (active/active)
+        #     fashion in a cluster.
+        # 
+        mirror_segtype_default = "raid1"
+
+        # Configuration option global/raid10_segtype_default.
+        # The segment type used by the -i -m combination.
+        # The --type raid10|mirror option overrides this setting.
+        # The --stripes/-i and --mirrors/-m options can both be specified
+        # during the creation of a logical volume to use both striping and
+        # mirroring for the LV. There are two different implementations.
+        # 
+        # Accepted values:
+        #   raid10
+        #     LVM uses MD's RAID10 personality through DM. This is the
+        #     preferred option.
+        #   mirror
+        #     LVM layers the 'mirror' and 'stripe' segment types. The layering
+        #     is done by creating a mirror LV on top of striped sub-LVs,
+        #     effectively creating a RAID 0+1 array. The layering is suboptimal
+        #     in terms of providing redundancy and performance.
+        # 
+        raid10_segtype_default = "raid10"
+
+        # Configuration option global/sparse_segtype_default.
+        # The segment type used by the -V -L combination.
+        # The --type snapshot|thin option overrides this setting.
+        # The combination of -V and -L options creates a sparse LV. There are
+        # two different implementations.
+        # 
+        # Accepted values:
+        #   snapshot
+        #     The original snapshot implementation from LVM/DM. It uses an old
+        #     snapshot that mixes data and metadata within a single COW
+        #     storage volume and performs poorly when the size of stored data
+        #     passes hundreds of MB.
+        #   thin
+        #     A newer implementation that uses thin provisioning. It has a
+        #     bigger minimal chunk size (64KiB) and uses a separate volume for
+        #     metadata. It has better performance, especially when more data
+        #     is used. It also supports full snapshots.
+        # 
+        sparse_segtype_default = "thin"
+
+        # Configuration option global/lvdisplay_shows_full_device_path.
+        # Enable this to reinstate the previous lvdisplay name format.
+        # The default format for displaying LV names in lvdisplay was changed
+        # in version 2.02.89 to show the LV name and path separately.
+        # Previously this was always shown as /dev/vgname/lvname even when that
+        # was never a valid path in the /dev filesystem.
+        # This configuration option has an automatic default value.
+        # lvdisplay_shows_full_device_path = 0
+
+        # Configuration option global/use_lvmetad.
+        # Use lvmetad to cache metadata and reduce disk scanning.
+        # When enabled (and running), lvmetad provides LVM commands with VG
+        # metadata and PV state. LVM commands then avoid reading this
+        # information from disks which can be slow. When disabled (or not
+        # running), LVM commands fall back to scanning disks to obtain VG
+        # metadata. lvmetad is kept updated via udev rules which must be set
+        # up for LVM to work correctly. (The udev rules should be installed
+        # by default.) Without a proper udev setup, changes in the system's
+        # block device configuration will be unknown to LVM, and ignored
+        # until a manual 'pvscan --cache' is run. If lvmetad was running
+        # while use_lvmetad was disabled, it must be stopped, use_lvmetad
+        # enabled, and then started. When using lvmetad, LV activation is
+        # switched to an automatic, event-based mode. In this mode, LVs are
+        # activated based on incoming udev events that inform lvmetad when
+        # PVs appear on the system. When a VG is complete (all PVs present),
+        # it is auto-activated. The auto_activation_volume_list setting
+        # controls which LVs are auto-activated (all by default.)
+        # When lvmetad is updated (automatically by udev events, or directly
+        # by pvscan --cache), devices/filter is ignored and all devices are
+        # scanned by default. lvmetad always keeps unfiltered information
+        # which is provided to LVM commands. Each LVM command then filters
+        # based on devices/filter. This does not apply to other, non-regexp,
+        # filtering settings: component filters such as multipath and MD
+        # are checked during pvscan --cache. To filter a device and prevent
+        # scanning from the LVM system entirely, including lvmetad, use
+        # devices/global_filter.
+        use_lvmetad = 1
+
+        # Configuration option global/use_lvmlockd.
+        # Use lvmlockd for locking among hosts using LVM on shared storage.
+        # See lvmlockd(8) for more information.
+        use_lvmlockd = 0
+
+        # Configuration option global/lvmlockd_lock_retries.
+        # Retry lvmlockd lock requests this many times.
+        # This configuration option has an automatic default value.
+        # lvmlockd_lock_retries = 3
+
+        # Configuration option global/sanlock_lv_extend.
+        # Size in MiB to extend the internal LV holding sanlock locks.
+        # The internal LV holds locks for each LV in the VG, and after enough
+        # LVs have been created, the internal LV needs to be extended. lvcreate
+        # will automatically extend the internal LV when needed by the amount
+        # specified here. Setting this to 0 disables the automatic extension
+        # and can cause lvcreate to fail.
+        # This configuration option has an automatic default value.
+        # sanlock_lv_extend = 256
+
+        # Configuration option global/thin_check_executable.
+        # The full path to the thin_check command.
+        # LVM uses this command to check that a thin metadata device is in a
+        # usable state. When a thin pool is activated and after it is
+        # deactivated, this command is run. Activation will only proceed if
+        # the command has an exit status of 0. Set to "" to skip this check.
+        # (Not recommended.) Also see thin_check_options.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # thin_check_executable = "/usr/sbin/thin_check"
+
+        # Configuration option global/thin_dump_executable.
+        # The full path to the thin_dump command.
+        # LVM uses this command to dump thin pool metadata.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # thin_dump_executable = "/usr/sbin/thin_dump"
+
+        # Configuration option global/thin_repair_executable.
+        # The full path to the thin_repair command.
+        # LVM uses this command to repair a thin metadata device if it is in
+        # an unusable state. Also see thin_repair_options.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # thin_repair_executable = "/usr/sbin/thin_repair"
+
+        # Configuration option global/thin_check_options.
+        # List of options passed to the thin_check command.
+        # With thin_check version 2.1 or newer you can add the option
+        # --ignore-non-fatal-errors to let it pass through ignorable errors
+        # and fix them later. With thin_check version 3.2 or newer you should
+        # include the option --clear-needs-check-flag.
+        # This configuration option has an automatic default value.
+        # thin_check_options = [ "-q", "--clear-needs-check-flag" ]
+
+        # Configuration option global/thin_repair_options.
+        # List of options passed to the thin_repair command.
+        # This configuration option has an automatic default value.
+        # thin_repair_options = [ "" ]
+
+        # Configuration option global/thin_disabled_features.
+        # Features to not use in the thin driver.
+        # This can be helpful for testing, or to avoid using a feature that is
+        # causing problems. Features include: block_size, discards,
+        # discards_non_power_2, external_origin, metadata_resize,
+        # external_origin_extend, error_if_no_space.
+        # 
+        # Example
+        # thin_disabled_features = [ "discards", "block_size" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/cache_disabled_features.
+        # Features to not use in the cache driver.
+        # This can be helpful for testing, or to avoid using a feature that is
+        # causing problems. Features include: policy_mq, policy_smq.
+        # 
+        # Example
+        # cache_disabled_features = [ "policy_smq" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/cache_check_executable.
+        # The full path to the cache_check command.
+        # LVM uses this command to check that a cache metadata device is in a
+        # usable state. When a cached LV is activated and after it is
+        # deactivated, this command is run. Activation will only proceed if the
+        # command has an exit status of 0. Set to "" to skip this check.
+        # (Not recommended.) Also see cache_check_options.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # cache_check_executable = "/usr/sbin/cache_check"
+
+        # Configuration option global/cache_dump_executable.
+        # The full path to the cache_dump command.
+        # LVM uses this command to dump cache pool metadata.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # cache_dump_executable = "/usr/sbin/cache_dump"
+
+        # Configuration option global/cache_repair_executable.
+        # The full path to the cache_repair command.
+        # LVM uses this command to repair a cache metadata device if it is in
+        # an unusable state. Also see cache_repair_options.
+        # (See package device-mapper-persistent-data or thin-provisioning-tools)
+        # This configuration option has an automatic default value.
+        # cache_repair_executable = "/usr/sbin/cache_repair"
+
+        # Configuration option global/cache_check_options.
+        # List of options passed to the cache_check command.
+        # With cache_check version 5.0 or newer you should include the option
+        # --clear-needs-check-flag.
+        # This configuration option has an automatic default value.
+        # cache_check_options = [ "-q", "--clear-needs-check-flag" ]
+
+        # Configuration option global/cache_repair_options.
+        # List of options passed to the cache_repair command.
+        # This configuration option has an automatic default value.
+        # cache_repair_options = [ "" ]
+
+        # Configuration option global/system_id_source.
+        # The method LVM uses to set the local system ID.
+        # Volume Groups can also be given a system ID (by vgcreate, vgchange,
+        # or vgimport.) A VG on shared storage devices is accessible only to
+        # the host with a matching system ID. See 'man lvmsystemid' for
+        # information on limitations and correct usage.
+        # 
+        # Accepted values:
+        #   none
+        #     The host has no system ID.
+        #   lvmlocal
+        #     Obtain the system ID from the system_id setting in the 'local'
+        #     section of an lvm configuration file, e.g. lvmlocal.conf.
+        #   uname
+        #     Set the system ID from the hostname (uname) of the system.
+        #     System IDs beginning with 'localhost' are not permitted.
+        #   machineid
+        #     Use the contents of the machine-id file to set the system ID.
+        #     Some systems create this file at installation time.
+        #     See 'man machine-id' and global/etc.
+        #   file
+        #     Use the contents of another file (system_id_file) to set the
+        #     system ID.
+        # 
+        system_id_source = "none"
+
+        # Configuration option global/system_id_file.
+        # The full path to the file containing a system ID.
+        # This is used when system_id_source is set to 'file'.
+        # Comments starting with the character # are ignored.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option global/use_lvmpolld.
+        # Use lvmpolld to supervise long running LVM commands.
+        # When enabled, control of long running LVM commands is transferred
+        # from the original LVM command to the lvmpolld daemon. This allows
+        # the operation to continue independent of the original LVM command.
+        # After lvmpolld takes over, the LVM command displays the progress
+        # of the ongoing operation. lvmpolld itself runs LVM commands to
+        # manage the progress of ongoing operations. lvmpolld can be used as
+        # a native systemd service, which allows it to be started on demand,
+        # and to use its own control group. When this option is disabled, LVM
+        # commands will supervise long running operations by forking themselves.
+        use_lvmpolld = 1
 }
 
 # Configuration section activation.
 activation {
 
-	# Configuration option activation/checks.
-	# Perform internal checks of libdevmapper operations.
-	# Useful for debugging problems with activation. Some of the checks may
-	# be expensive, so it's best to use this only when there seems to be a
-	# problem.
-	checks = 0
-
-	# Configuration option activation/udev_sync.
-	# Use udev notifications to synchronize udev and LVM.
-	# The --nodevsync option overrides this setting.
-	# When disabled, LVM commands will not wait for notifications from
-	# udev, but continue irrespective of any possible udev processing in
-	# the background. Only use this if udev is not running or has rules
-	# that ignore the devices LVM creates. If enabled when udev is not
-	# running, and LVM processes are waiting for udev, run the command
-	# 'dmsetup udevcomplete_all' to wake them up.
-	udev_sync = 1
-
-	# Configuration option activation/udev_rules.
-	# Use udev rules to manage LV device nodes and symlinks.
-	# When disabled, LVM will manage the device nodes and symlinks for
-	# active LVs itself. Manual intervention may be required if this
-	# setting is changed while LVs are active.
-	udev_rules = 1
-
-	# Configuration option activation/verify_udev_operations.
-	# Use extra checks in LVM to verify udev operations.
-	# This enables additional checks (and if necessary, repairs) on entries
-	# in the device directory after udev has completed processing its
-	# events. Useful for diagnosing problems with LVM/udev interactions.
-	verify_udev_operations = 0
-
-	# Configuration option activation/retry_deactivation.
-	# Retry failed LV deactivation.
-	# If LV deactivation fails, LVM will retry for a few seconds before
-	# failing. This may happen because a process run from a quick udev rule
-	# temporarily opened the device.
-	retry_deactivation = 1
-
-	# Configuration option activation/missing_stripe_filler.
-	# Method to fill missing stripes when activating an incomplete LV.
-	# Using 'error' will make inaccessible parts of the device return I/O
-	# errors on access. You can instead use a device path, in which case,
-	# that device will be used in place of missing stripes. Using anything
-	# other than 'error' with mirrored or snapshotted volumes is likely to
-	# result in data corruption.
-	# This configuration option is advanced.
-	missing_stripe_filler = "error"
-
-	# Configuration option activation/use_linear_target.
-	# Use the linear target to optimize single stripe LVs.
-	# When disabled, the striped target is used. The linear target is an
-	# optimised version of the striped target that only handles a single
-	# stripe.
-	use_linear_target = 1
-
-	# Configuration option activation/reserved_stack.
-	# Stack size in KiB to reserve for use while devices are suspended.
-	# Insufficient reserve risks I/O deadlock during device suspension.
-	reserved_stack = 64
-
-	# Configuration option activation/reserved_memory.
-	# Memory size in KiB to reserve for use while devices are suspended.
-	# Insufficient reserve risks I/O deadlock during device suspension.
-	reserved_memory = 8192
-
-	# Configuration option activation/process_priority.
-	# Nice value used while devices are suspended.
-	# Use a high priority so that LVs are suspended
-	# for the shortest possible time.
-	process_priority = -18
-
-	# Configuration option activation/volume_list.
-	# Only LVs selected by this list are activated.
-	# If this list is defined, an LV is only activated if it matches an
-	# entry in this list. If this list is undefined, it imposes no limits
-	# on LV activation (all are allowed).
-	# 
-	# Accepted values:
-	#   vgname
-	#     The VG name is matched exactly and selects all LVs in the VG.
-	#   vgname/lvname
-	#     The VG name and LV name are matched exactly and selects the LV.
-	#   @tag
-	#     Selects an LV if the specified tag matches a tag set on the LV
-	#     or VG.
-	#   @*
-	#     Selects an LV if a tag defined on the host is also set on the LV
-	#     or VG. See tags/hosttags. If any host tags exist but volume_list
-	#     is not defined, a default single-entry list containing '@*' is
-	#     assumed.
-	# 
-	# Example
-	# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/auto_activation_volume_list.
-	# Only LVs selected by this list are auto-activated.
-	# This list works like volume_list, but it is used only by
-	# auto-activation commands. It does not apply to direct activation
-	# commands. If this list is defined, an LV is only auto-activated
-	# if it matches an entry in this list. If this list is undefined, it
-	# imposes no limits on LV auto-activation (all are allowed.) If this
-	# list is defined and empty, i.e. "[]", then no LVs are selected for
-	# auto-activation. An LV that is selected by this list for
-	# auto-activation, must also be selected by volume_list (if defined)
-	# before it is activated. Auto-activation is an activation command that
-	# includes the 'a' argument: --activate ay or -a ay. The 'a' (auto)
-	# argument for auto-activation is meant to be used by activation
-	# commands that are run automatically by the system, as opposed to LVM
-	# commands run directly by a user. A user may also use the 'a' flag
-	# directly to perform auto-activation. Also see pvscan(8) for more
-	# information about auto-activation.
-	# 
-	# Accepted values:
-	#   vgname
-	#     The VG name is matched exactly and selects all LVs in the VG.
-	#   vgname/lvname
-	#     The VG name and LV name are matched exactly and selects the LV.
-	#   @tag
-	#     Selects an LV if the specified tag matches a tag set on the LV
-	#     or VG.
-	#   @*
-	#     Selects an LV if a tag defined on the host is also set on the LV
-	#     or VG. See tags/hosttags. If any host tags exist but volume_list
-	#     is not defined, a default single-entry list containing '@*' is
-	#     assumed.
-	# 
-	# Example
-	# auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/read_only_volume_list.
-	# LVs in this list are activated in read-only mode.
-	# If this list is defined, each LV that is to be activated is checked
-	# against this list, and if it matches, it is activated in read-only
-	# mode. This overrides the permission setting stored in the metadata,
-	# e.g. from --permission rw.
-	# 
-	# Accepted values:
-	#   vgname
-	#     The VG name is matched exactly and selects all LVs in the VG.
-	#   vgname/lvname
-	#     The VG name and LV name are matched exactly and selects the LV.
-	#   @tag
-	#     Selects an LV if the specified tag matches a tag set on the LV
-	#     or VG.
-	#   @*
-	#     Selects an LV if a tag defined on the host is also set on the LV
-	#     or VG. See tags/hosttags. If any host tags exist but volume_list
-	#     is not defined, a default single-entry list containing '@*' is
-	#     assumed.
-	# 
-	# Example
-	# read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
-	# 
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/raid_region_size.
-	# Size in KiB of each raid or mirror synchronization region.
-	# For raid or mirror segment types, this is the amount of data that is
-	# copied at once when initializing, or moved at once by pvmove.
-	raid_region_size = 512
-
-	# Configuration option activation/error_when_full.
-	# Return errors if a thin pool runs out of space.
-	# The --errorwhenfull option overrides this setting.
-	# When enabled, writes to thin LVs immediately return an error if the
-	# thin pool is out of data space. When disabled, writes to thin LVs
-	# are queued if the thin pool is out of space, and processed when the
-	# thin pool data space is extended. New thin pools are assigned the
-	# behavior defined here.
-	# This configuration option has an automatic default value.
-	# error_when_full = 0
-
-	# Configuration option activation/readahead.
-	# Setting to use when there is no readahead setting in metadata.
-	# 
-	# Accepted values:
-	#   none
-	#     Disable readahead.
-	#   auto
-	#     Use default value chosen by kernel.
-	# 
-	readahead = "auto"
-
-	# Configuration option activation/raid_fault_policy.
-	# Defines how a device failure in a RAID LV is handled.
-	# This includes LVs that have the following segment types:
-	# raid1, raid4, raid5*, and raid6*.
-	# If a device in the LV fails, the policy determines the steps
-	# performed by dmeventd automatically, and the steps performed by the
-	# manual command lvconvert --repair --use-policies.
-	# Automatic handling requires dmeventd to be monitoring the LV.
-	# 
-	# Accepted values:
-	#   warn
-	#     Use the system log to warn the user that a device in the RAID LV
-	#     has failed. It is left to the user to run lvconvert --repair
-	#     manually to remove or replace the failed device. As long as the
-	#     number of failed devices does not exceed the redundancy of the LV
-	#     (1 device for raid4/5, 2 for raid6), the LV will remain usable.
-	#   allocate
-	#     Attempt to use any extra physical volumes in the VG as spares and
-	#     replace faulty devices.
-	# 
-	raid_fault_policy = "warn"
-
-	# Configuration option activation/mirror_image_fault_policy.
-	# Defines how a device failure in a 'mirror' LV is handled.
-	# An LV with the 'mirror' segment type is composed of mirror images
-	# (copies) and a mirror log. A disk log ensures that a mirror LV does
-	# not need to be re-synced (all copies made the same) every time a
-	# machine reboots or crashes. If a device in the LV fails, this policy
-	# determines the steps performed by dmeventd automatically, and the steps
-	# performed by the manual command lvconvert --repair --use-policies.
-	# Automatic handling requires dmeventd to be monitoring the LV.
-	# 
-	# Accepted values:
-	#   remove
-	#     Simply remove the faulty device and run without it. If the log
-	#     device fails, the mirror would convert to using an in-memory log.
-	#     This means the mirror will not remember its sync status across
-	#     crashes/reboots and the entire mirror will be re-synced. If a
-	#     mirror image fails, the mirror will convert to a non-mirrored
-	#     device if there is only one remaining good copy.
-	#   allocate
-	#     Remove the faulty device and try to allocate space on a new
-	#     device to be a replacement for the failed device. Using this
-	#     policy for the log is fast and maintains the ability to remember
-	#     sync state through crashes/reboots. Using this policy for a
-	#     mirror device is slow, as it requires the mirror to resynchronize
-	#     the devices, but it will preserve the mirror characteristic of
-	#     the device. This policy acts like 'remove' if no suitable device
-	#     and space can be allocated for the replacement.
-	#   allocate_anywhere
-	#     Not yet implemented. Useful to place the log device temporarily
-	#     on the same physical volume as one of the mirror images. This
-	#     policy is not recommended for mirror devices since it would break
-	#     the redundant nature of the mirror. This policy acts like
-	#     'remove' if no suitable device and space can be allocated for the
-	#     replacement.
-	# 
-	mirror_image_fault_policy = "remove"
-
-	# Configuration option activation/mirror_log_fault_policy.
-	# Defines how a device failure in a 'mirror' log LV is handled.
-	# The mirror_image_fault_policy description for mirrored LVs also
-	# applies to mirrored log LVs.
-	mirror_log_fault_policy = "allocate"
-
-	# Configuration option activation/snapshot_autoextend_threshold.
-	# Auto-extend a snapshot when its usage exceeds this percent.
-	# Setting this to 100 disables automatic extension.
-	# The minimum value is 50 (a smaller value is treated as 50.)
-	# Also see snapshot_autoextend_percent.
-	# Automatic extension requires dmeventd to be monitoring the LV.
-	# 
-	# Example
-	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
-	# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
-	# 840M, it is extended to 1.44G:
-	# snapshot_autoextend_threshold = 70
-	# 
-	snapshot_autoextend_threshold = 100
-
-	# Configuration option activation/snapshot_autoextend_percent.
-	# Auto-extending a snapshot adds this percent extra space.
-	# The amount of additional space added to a snapshot is this
-	# percent of its current size.
-	# 
-	# Example
-	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
-	# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
-	# 840M, it is extended to 1.44G:
-	# snapshot_autoextend_percent = 20
-	# 
-	snapshot_autoextend_percent = 20
-
-	# Configuration option activation/thin_pool_autoextend_threshold.
-	# Auto-extend a thin pool when its usage exceeds this percent.
-	# Setting this to 100 disables automatic extension.
-	# The minimum value is 50 (a smaller value is treated as 50.)
-	# Also see thin_pool_autoextend_percent.
-	# Automatic extension requires dmeventd to be monitoring the LV.
-	# 
-	# Example
-	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
-	# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
-	# 840M, it is extended to 1.44G:
-	# thin_pool_autoextend_threshold = 70
-	# 
-	thin_pool_autoextend_threshold = 100
-
-	# Configuration option activation/thin_pool_autoextend_percent.
-	# Auto-extending a thin pool adds this percent extra space.
-	# The amount of additional space added to a thin pool is this
-	# percent of its current size.
-	# 
-	# Example
-	# Using 70% autoextend threshold and 20% autoextend size, when a 1G
-	# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
-	# 840M, it is extended to 1.44G:
-	# thin_pool_autoextend_percent = 20
-	# 
-	thin_pool_autoextend_percent = 20
-
-	# Configuration option activation/mlock_filter.
-	# Do not mlock these memory areas.
-	# While activating devices, I/O to devices being (re)configured is
-	# suspended. As a precaution against deadlocks, LVM pins memory it is
-	# using so it is not paged out, and will not require I/O to reread.
-	# Groups of pages that are known not to be accessed during activation
-	# do not need to be pinned into memory. Each string listed in this
-	# setting is compared against each line in /proc/self/maps, and the
-	# pages corresponding to lines that match are not pinned. On some
-	# systems, locale-archive was found to make up over 80% of the memory
-	# used by the process.
-	# 
-	# Example
-	# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
-	# 
-	# This configuration option is advanced.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/use_mlockall.
-	# Use the old behavior of mlockall to pin all memory.
-	# Prior to version 2.02.62, LVM used mlockall() to pin the whole
-	# process's memory while activating devices.
-	use_mlockall = 0
-
-	# Configuration option activation/monitoring.
-	# Monitor LVs that are activated.
-	# The --ignoremonitoring option overrides this setting.
-	# When enabled, LVM will ask dmeventd to monitor activated LVs.
-	monitoring = 1
-
-	# Configuration option activation/polling_interval.
-	# Check pvmove or lvconvert progress at this interval (seconds).
-	# When pvmove or lvconvert must wait for the kernel to finish
-	# synchronising or merging data, they check and report progress at
-	# intervals of this number of seconds. If this is set to 0 and there
-	# is only one thing to wait for, there are no progress reports, but
-	# the process is awoken immediately once the operation is complete.
-	polling_interval = 15
-
-	# Configuration option activation/auto_set_activation_skip.
-	# Set the activation skip flag on new thin snapshot LVs.
-	# The --setactivationskip option overrides this setting.
-	# An LV can have a persistent 'activation skip' flag. The flag causes
-	# the LV to be skipped during normal activation. The lvchange/vgchange
-	# -K option is required to activate LVs that have the activation skip
-	# flag set. When this setting is enabled, the activation skip flag is
-	# set on new thin snapshot LVs.
-	# This configuration option has an automatic default value.
-	# auto_set_activation_skip = 1
-
-	# Configuration option activation/activation_mode.
-	# How LVs with missing devices are activated.
-	# The --activationmode option overrides this setting.
-	# 
-	# Accepted values:
-	#   complete
-	#     Only allow activation of an LV if all of the Physical Volumes it
-	#     uses are present. Other PVs in the Volume Group may be missing.
-	#   degraded
-	#     Like complete, but additionally RAID LVs of segment type raid1,
-	#     raid4, raid5, raid6 and raid10 will be activated if there is no
-	#     data loss, i.e. they have sufficient redundancy to present the
-	#     entire addressable range of the Logical Volume.
-	#   partial
-	#     Allows the activation of any LV even if a missing or failed PV
-	#     could cause data loss with a portion of the LV inaccessible.
-	#     This setting should not normally be used, but may sometimes
-	#     assist with data recovery.
-	# 
-	activation_mode = "degraded"
-
-	# Configuration option activation/lock_start_list.
-	# Locking is started only for VGs selected by this list.
-	# The rules are the same as those for volume_list.
-	# This configuration option does not have a default value defined.
-
-	# Configuration option activation/auto_lock_start_list.
-	# Locking is auto-started only for VGs selected by this list.
-	# The rules are the same as those for auto_activation_volume_list.
-	# This configuration option does not have a default value defined.
+        # Configuration option activation/checks.
+        # Perform internal checks of libdevmapper operations.
+        # Useful for debugging problems with activation. Some of the checks may
+        # be expensive, so it's best to use this only when there seems to be a
+        # problem.
+        checks = 0
+
+        # Configuration option activation/udev_sync.
+        # Use udev notifications to synchronize udev and LVM.
+        # The --noudevsync option overrides this setting.
+        # When disabled, LVM commands will not wait for notifications from
+        # udev, but continue irrespective of any possible udev processing in
+        # the background. Only use this if udev is not running or has rules
+        # that ignore the devices LVM creates. If enabled when udev is not
+        # running, and LVM processes are waiting for udev, run the command
+        # 'dmsetup udevcomplete_all' to wake them up.
+        udev_sync = 1
+
+        # Configuration option activation/udev_rules.
+        # Use udev rules to manage LV device nodes and symlinks.
+        # When disabled, LVM will manage the device nodes and symlinks for
+        # active LVs itself. Manual intervention may be required if this
+        # setting is changed while LVs are active.
+        udev_rules = 1
+
+        # Configuration option activation/verify_udev_operations.
+        # Use extra checks in LVM to verify udev operations.
+        # This enables additional checks (and if necessary, repairs) on entries
+        # in the device directory after udev has completed processing its
+        # events. Useful for diagnosing problems with LVM/udev interactions.
+        verify_udev_operations = 0
+
+        # Configuration option activation/retry_deactivation.
+        # Retry failed LV deactivation.
+        # If LV deactivation fails, LVM will retry for a few seconds before
+        # failing. This may happen because a process run from a quick udev rule
+        # temporarily opened the device.
+        retry_deactivation = 1
+
+        # Configuration option activation/missing_stripe_filler.
+        # Method to fill missing stripes when activating an incomplete LV.
+        # Using 'error' will make inaccessible parts of the device return I/O
+        # errors on access. You can instead use a device path, in which case,
+        # that device will be used in place of missing stripes. Using anything
+        # other than 'error' with mirrored or snapshotted volumes is likely to
+        # result in data corruption.
+        # This configuration option is advanced.
+        missing_stripe_filler = "error"
+
+        # Configuration option activation/use_linear_target.
+        # Use the linear target to optimize single stripe LVs.
+        # When disabled, the striped target is used. The linear target is an
+        # optimised version of the striped target that only handles a single
+        # stripe.
+        use_linear_target = 1
+
+        # Configuration option activation/reserved_stack.
+        # Stack size in KiB to reserve for use while devices are suspended.
+        # Insufficient reserve risks I/O deadlock during device suspension.
+        reserved_stack = 64
+
+        # Configuration option activation/reserved_memory.
+        # Memory size in KiB to reserve for use while devices are suspended.
+        # Insufficient reserve risks I/O deadlock during device suspension.
+        reserved_memory = 8192
+
+        # Configuration option activation/process_priority.
+        # Nice value used while devices are suspended.
+        # Use a high priority so that LVs are suspended
+        # for the shortest possible time.
+        process_priority = -18
+
+        # Configuration option activation/volume_list.
+        # Only LVs selected by this list are activated.
+        # If this list is defined, an LV is only activated if it matches an
+        # entry in this list. If this list is undefined, it imposes no limits
+        # on LV activation (all are allowed).
+        # 
+        # Accepted values:
+        #   vgname
+        #     The VG name is matched exactly and selects all LVs in the VG.
+        #   vgname/lvname
+        #     The VG name and LV name are matched exactly and selects the LV.
+        #   @tag
+        #     Selects an LV if the specified tag matches a tag set on the LV
+        #     or VG.
+        #   @*
+        #     Selects an LV if a tag defined on the host is also set on the LV
+        #     or VG. See tags/hosttags. If any host tags exist but volume_list
+        #     is not defined, a default single-entry list containing '@*' is
+        #     assumed.
+        # 
+        # Example
+        # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
+        # 
+        # This configuration option does not have a default value defined.
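The four entry forms accepted by volume_list can be sketched as a matcher. This is a hypothetical helper for illustration only, not LVM code; tags on the VG are folded into the LV's tags here for simplicity:

```python
def lv_selected(volume_list, vg, lv, lv_tags=(), host_tags=()):
    """Sketch of volume_list matching: True if the LV may be activated."""
    if volume_list is None:
        # Undefined list: no limits, unless host tags exist, in which
        # case a default single-entry list of ["@*"] is assumed.
        if not host_tags:
            return True
        volume_list = ["@*"]
    for entry in volume_list:
        if entry == "@*":
            # Selected if any tag defined on the host is also set on the LV.
            if set(host_tags) & set(lv_tags):
                return True
        elif entry.startswith("@"):
            if entry[1:] in lv_tags:
                return True
        elif "/" in entry:
            if entry == f"{vg}/{lv}":
                return True
        elif entry == vg:
            return True
    return False
```

With the example list [ "vg1", "vg2/lvol1", "@tag1", "@*" ], every LV in vg1 matches the first entry, and vg2/lvol1 matches the second.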
+
+        # Configuration option activation/auto_activation_volume_list.
+        # Only LVs selected by this list are auto-activated.
+        # This list works like volume_list, but it is used only by
+        # auto-activation commands. It does not apply to direct activation
+        # commands. If this list is defined, an LV is only auto-activated
+        # if it matches an entry in this list. If this list is undefined, it
+        # imposes no limits on LV auto-activation (all are allowed.) If this
+        # list is defined and empty, i.e. "[]", then no LVs are selected for
+        # auto-activation. An LV that is selected by this list for
+        # auto-activation, must also be selected by volume_list (if defined)
+        # before it is activated. Auto-activation is an activation command that
+        # includes the 'a' argument: --activate ay or -a ay. The 'a' (auto)
+        # argument for auto-activation is meant to be used by activation
+        # commands that are run automatically by the system, as opposed to LVM
+        # commands run directly by a user. A user may also use the 'a' flag
+        # directly to perform auto-activation. Also see pvscan(8) for more
+        # information about auto-activation.
+        # 
+        # Accepted values:
+        #   vgname
+        #     The VG name is matched exactly and selects all LVs in the VG.
+        #   vgname/lvname
+        #     The VG name and LV name are matched exactly and selects the LV.
+        #   @tag
+        #     Selects an LV if the specified tag matches a tag set on the LV
+        #     or VG.
+        #   @*
+        #     Selects an LV if a tag defined on the host is also set on the LV
+        #     or VG. See tags/hosttags. If any host tags exist but volume_list
+        #     is not defined, a default single-entry list containing '@*' is
+        #     assumed.
+        # 
+        # Example
+        # auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option activation/read_only_volume_list.
+        # LVs in this list are activated in read-only mode.
+        # If this list is defined, each LV that is to be activated is checked
+        # against this list, and if it matches, it is activated in read-only
+        # mode. This overrides the permission setting stored in the metadata,
+        # e.g. from --permission rw.
+        # 
+        # Accepted values:
+        #   vgname
+        #     The VG name is matched exactly and selects all LVs in the VG.
+        #   vgname/lvname
+        #     The VG name and LV name are matched exactly and selects the LV.
+        #   @tag
+        #     Selects an LV if the specified tag matches a tag set on the LV
+        #     or VG.
+        #   @*
+        #     Selects an LV if a tag defined on the host is also set on the LV
+        #     or VG. See tags/hosttags. If any host tags exist but volume_list
+        #     is not defined, a default single-entry list containing '@*' is
+        #     assumed.
+        # 
+        # Example
+        # read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
+        # 
+        # This configuration option does not have a default value defined.
+
+        # Configuration option activation/raid_region_size.
+        # Size in KiB of each raid or mirror synchronization region.
+        # For raid or mirror segment types, this is the amount of data that is
+        # copied at once when initializing, or moved at once by pvmove.
+        raid_region_size = 512
+
+        # Configuration option activation/error_when_full.
+        # Return errors if a thin pool runs out of space.
+        # The --errorwhenfull option overrides this setting.
+        # When enabled, writes to thin LVs immediately return an error if the
+        # thin pool is out of data space. When disabled, writes to thin LVs
+        # are queued if the thin pool is out of space, and processed when the
+        # thin pool data space is extended. New thin pools are assigned the
+        # behavior defined here.
+        # This configuration option has an automatic default value.
+        # error_when_full = 0
+
+        # Configuration option activation/readahead.
+        # Setting to use when there is no readahead setting in metadata.
+        # 
+        # Accepted values:
+        #   none
+        #     Disable readahead.
+        #   auto
+        #     Use default value chosen by kernel.
+        # 
+        readahead = "auto"
+
+        # Configuration option activation/raid_fault_policy.
+        # Defines how a device failure in a RAID LV is handled.
+        # This includes LVs that have the following segment types:
+        # raid1, raid4, raid5*, and raid6*.
+        # If a device in the LV fails, the policy determines the steps
+        # performed by dmeventd automatically, and the steps performed by the
+        # manual command lvconvert --repair --use-policies.
+        # Automatic handling requires dmeventd to be monitoring the LV.
+        # 
+        # Accepted values:
+        #   warn
+        #     Use the system log to warn the user that a device in the RAID LV
+        #     has failed. It is left to the user to run lvconvert --repair
+        #     manually to remove or replace the failed device. As long as the
+        #     number of failed devices does not exceed the redundancy of the LV
+        #     (1 device for raid4/5, 2 for raid6), the LV will remain usable.
+        #   allocate
+        #     Attempt to use any extra physical volumes in the VG as spares and
+        #     replace faulty devices.
+        # 
+        raid_fault_policy = "warn"
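The redundancy figures quoted for the 'warn' policy (1 failed device for raid4/5, 2 for raid6, any number short of all images for raid1) can be illustrated with a small helper. This is a hypothetical sketch, not part of LVM:

```python
# Maximum device failures each segment type tolerates while the LV
# remains usable, per the raid_fault_policy = "warn" description.
RAID_REDUNDANCY = {"raid4": 1, "raid5": 1, "raid6": 2}

def lv_still_usable(segtype, n_images, n_failed):
    """True if the RAID LV can still present all its data."""
    if segtype == "raid1":
        # raid1 survives as long as at least one intact image remains.
        return n_failed < n_images
    return n_failed <= RAID_REDUNDANCY[segtype]
```

So a raid6 LV stays usable through two concurrent device failures, while a raid5 LV does not.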
+
+        # Configuration option activation/mirror_image_fault_policy.
+        # Defines how a device failure in a 'mirror' LV is handled.
+        # An LV with the 'mirror' segment type is composed of mirror images
+        # (copies) and a mirror log. A disk log ensures that a mirror LV does
+        # not need to be re-synced (all copies made the same) every time a
+        # machine reboots or crashes. If a device in the LV fails, this policy
+        # determines the steps performed by dmeventd automatically, and the steps
+        # performed by the manual command lvconvert --repair --use-policies.
+        # Automatic handling requires dmeventd to be monitoring the LV.
+        # 
+        # Accepted values:
+        #   remove
+        #     Simply remove the faulty device and run without it. If the log
+        #     device fails, the mirror would convert to using an in-memory log.
+        #     This means the mirror will not remember its sync status across
+        #     crashes/reboots and the entire mirror will be re-synced. If a
+        #     mirror image fails, the mirror will convert to a non-mirrored
+        #     device if there is only one remaining good copy.
+        #   allocate
+        #     Remove the faulty device and try to allocate space on a new
+        #     device to be a replacement for the failed device. Using this
+        #     policy for the log is fast and maintains the ability to remember
+        #     sync state through crashes/reboots. Using this policy for a
+        #     mirror device is slow, as it requires the mirror to resynchronize
+        #     the devices, but it will preserve the mirror characteristic of
+        #     the device. This policy acts like 'remove' if no suitable device
+        #     and space can be allocated for the replacement.
+        #   allocate_anywhere
+        #     Not yet implemented. Useful to place the log device temporarily
+        #     on the same physical volume as one of the mirror images. This
+        #     policy is not recommended for mirror devices since it would break
+        #     the redundant nature of the mirror. This policy acts like
+        #     'remove' if no suitable device and space can be allocated for the
+        #     replacement.
+        # 
+        mirror_image_fault_policy = "remove"
+
+        # Configuration option activation/mirror_log_fault_policy.
+        # Defines how a device failure in a 'mirror' log LV is handled.
+        # The mirror_image_fault_policy description for mirrored LVs also
+        # applies to mirrored log LVs.
+        mirror_log_fault_policy = "allocate"
+
+        # Configuration option activation/snapshot_autoextend_threshold.
+        # Auto-extend a snapshot when its usage exceeds this percent.
+        # Setting this to 100 disables automatic extension.
+        # The minimum value is 50 (a smaller value is treated as 50.)
+        # Also see snapshot_autoextend_percent.
+        # Automatic extension requires dmeventd to be monitoring the LV.
+        # 
+        # Example
+        # Using 70% autoextend threshold and 20% autoextend size, when a 1G
+        # snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
+        # 840M, it is extended to 1.44G:
+        # snapshot_autoextend_threshold = 70
+        # 
+        snapshot_autoextend_threshold = 100
+
+        # Configuration option activation/snapshot_autoextend_percent.
+        # Auto-extending a snapshot adds this percent extra space.
+        # The amount of additional space added to a snapshot is this
+        # percent of its current size.
+        # 
+        # Example
+        # Using 70% autoextend threshold and 20% autoextend size, when a 1G
+        # snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
+        # 840M, it is extended to 1.44G:
+        # snapshot_autoextend_percent = 20
+        # 
+        snapshot_autoextend_percent = 20
+
+        # Configuration option activation/thin_pool_autoextend_threshold.
+        # Auto-extend a thin pool when its usage exceeds this percent.
+        # Setting this to 100 disables automatic extension.
+        # The minimum value is 50 (a smaller value is treated as 50.)
+        # Also see thin_pool_autoextend_percent.
+        # Automatic extension requires dmeventd to be monitoring the LV.
+        # 
+        # Example
+        # Using 70% autoextend threshold and 20% autoextend size, when a 1G
+        # thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
+        # 840M, it is extended to 1.44G:
+        # thin_pool_autoextend_threshold = 70
+        # 
+        thin_pool_autoextend_threshold = 100
+
+        # Configuration option activation/thin_pool_autoextend_percent.
+        # Auto-extending a thin pool adds this percent extra space.
+        # The amount of additional space added to a thin pool is this
+        # percent of its current size.
+        # 
+        # Example
+        # Using 70% autoextend threshold and 20% autoextend size, when a 1G
+        # thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
+        # 840M, it is extended to 1.44G:
+        # thin_pool_autoextend_percent = 20
+        # 
+        thin_pool_autoextend_percent = 20
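The worked example repeated in the autoextend comments (70% threshold, 20% step, 1G volume growing to 1.2G and then 1.44G) follows from a simple rule: whenever usage exceeds the threshold, the size grows by the autoextend percent of its current value. A sketch of that arithmetic, using hypothetical megabyte-sized integers:

```python
def autoextend(size_m, used_m, threshold=70, percent=20):
    """Return the size after autoextension settles; threshold >= 100
    disables automatic extension entirely."""
    if threshold >= 100:
        return size_m
    while used_m * 100 > size_m * threshold:
        # Each extension adds `percent` of the current size.
        size_m += size_m * percent // 100
    return size_m

# 1000M volume: exceeding 700M extends it to 1200M; once 1200M, exceeding
# 840M extends it to 1440M, matching the 1G -> 1.2G -> 1.44G example.
```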
+
+        # Configuration option activation/mlock_filter.
+        # Do not mlock these memory areas.
+        # While activating devices, I/O to devices being (re)configured is
+        # suspended. As a precaution against deadlocks, LVM pins memory it is
+        # using so it is not paged out, and will not require I/O to reread.
+        # Groups of pages that are known not to be accessed during activation
+        # do not need to be pinned into memory. Each string listed in this
+        # setting is compared against each line in /proc/self/maps, and the
+        # pages corresponding to lines that match are not pinned. On some
+        # systems, locale-archive was found to make up over 80% of the memory
+        # used by the process.
+        # 
+        # Example
+        # mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
+        # 
+        # This configuration option is advanced.
+        # This configuration option does not have a default value defined.
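The mlock_filter comparison described above is a plain substring match against lines of /proc/self/maps; any line containing a filter string is left unpinned. A simplified, hypothetical sketch of that selection:

```python
def pages_to_pin(maps_lines, mlock_filter):
    """Return the /proc/self/maps lines that would still be pinned:
    lines matching any filter string are excluded from mlock()."""
    return [line for line in maps_lines
            if not any(pattern in line for pattern in mlock_filter)]

# Illustrative maps lines (addresses shortened for readability).
maps = [
    "7f0a-7f0b r-xp /usr/lib/locale/locale-archive",
    "7f0c-7f0d r-xp /usr/lib/libdevmapper.so",
]
```

With the example filter [ "locale/locale-archive" ], only the libdevmapper mapping above remains pinned.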
+
+        # Configuration option activation/use_mlockall.
+        # Use the old behavior of mlockall to pin all memory.
+        # Prior to version 2.02.62, LVM used mlockall() to pin the whole
+        # process's memory while activating devices.
+        use_mlockall = 0
+
+        # Configuration option activation/monitoring.
+        # Monitor LVs that are activated.
+        # The --ignoremonitoring option overrides this setting.
+        # When enabled, LVM will ask dmeventd to monitor activated LVs.
+        monitoring = 1
+
+        # Configuration option activation/polling_interval.
+        # Check pvmove or lvconvert progress at this interval (seconds).
+        # When pvmove or lvconvert must wait for the kernel to finish
+        # synchronising or merging data, they check and report progress at
+        # intervals of this number of seconds. If this is set to 0 and there
+        # is only one thing to wait for, there are no progress reports, but
+        # the process is awoken immediately once the operation is complete.
+        polling_interval = 15
+
+        # Configuration option activation/auto_set_activation_skip.
+        # Set the activation skip flag on new thin snapshot LVs.
+        # The --setactivationskip option overrides this setting.
+        # An LV can have a persistent 'activation skip' flag. The flag causes
+        # the LV to be skipped during normal activation. The lvchange/vgchange
+        # -K option is required to activate LVs that have the activation skip
+        # flag set. When this setting is enabled, the activation skip flag is
+        # set on new thin snapshot LVs.
+        # This configuration option has an automatic default value.
+        # auto_set_activation_skip = 1
+
+        # Configuration option activation/activation_mode.
+        # How LVs with missing devices are activated.
+        # The --activationmode option overrides this setting.
+        # 
+        # Accepted values:
+        #   complete
+        #     Only allow activation of an LV if all of the Physical Volumes it
+        #     uses are present. Other PVs in the Volume Group may be missing.
+        #   degraded
+        #     Like complete, but additionally RAID LVs of segment type raid1,
+        #     raid4, raid5, raid6 and raid10 will be activated if there is no
+        #     data loss, i.e. they have sufficient redundancy to present the
+        #     entire addressable range of the Logical Volume.
+        #   partial
+        #     Allows the activation of any LV even if a missing or failed PV
+        #     could cause data loss with a portion of the LV inaccessible.
+        #     This setting should not normally be used, but may sometimes
+        #     assist with data recovery.
+        # 
+        activation_mode = "degraded"
+
+        # Configuration option activation/lock_start_list.
+        # Locking is started only for VGs selected by this list.
+        # The rules are the same as those for volume_list.
+        # This configuration option does not have a default value defined.
+
+        # Configuration option activation/auto_lock_start_list.
+        # Locking is auto-started only for VGs selected by this list.
+        # The rules are the same as those for auto_activation_volume_list.
+        # This configuration option does not have a default value defined.
 }
 
 # Configuration section metadata.
 # This configuration section has an automatic default value.
 # metadata {
 
-	# Configuration option metadata/pvmetadatacopies.
-	# Number of copies of metadata to store on each PV.
-	# The --pvmetadatacopies option overrides this setting.
-	# 
-	# Accepted values:
-	#   2
-	#     Two copies of the VG metadata are stored on the PV, one at the
-	#     front of the PV, and one at the end.
-	#   1
-	#     One copy of VG metadata is stored at the front of the PV.
-	#   0
-	#     No copies of VG metadata are stored on the PV. This may be
-	#     useful for VGs containing large numbers of PVs.
-	# 
-	# This configuration option is advanced.
-	# This configuration option has an automatic default value.
-	# pvmetadatacopies = 1
-
-	# Configuration option metadata/vgmetadatacopies.
-	# Number of copies of metadata to maintain for each VG.
-	# The --vgmetadatacopies option overrides this setting.
-	# If set to a non-zero value, LVM automatically chooses which of the
-	# available metadata areas to use to achieve the requested number of
-	# copies of the VG metadata. If you set a value larger than the
-	# total number of metadata areas available, then metadata is stored in
-	# them all. The value 0 (unmanaged) disables this automatic management
-	# and allows you to control which metadata areas are used at the
-	# individual PV level using pvchange --metadataignore y|n.
-	# This configuration option has an automatic default value.
-	# vgmetadatacopies = 0
-
-	# Configuration option metadata/pvmetadatasize.
-	# Approximate number of sectors to use for each metadata copy.
-	# VGs with large numbers of PVs or LVs, or VGs containing complex LV
-	# structures, may need additional space for VG metadata. The metadata
-	# areas are treated as circular buffers, so unused space becomes filled
-	# with an archive of the most recent previous versions of the metadata.
-	# This configuration option has an automatic default value.
-	# pvmetadatasize = 255
-
-	# Configuration option metadata/pvmetadataignore.
-	# Ignore metadata areas on a new PV.
-	# The --metadataignore option overrides this setting.
-	# If metadata areas on a PV are ignored, LVM will not store metadata
-	# in them.
-	# This configuration option is advanced.
-	# This configuration option has an automatic default value.
-	# pvmetadataignore = 0
-
-	# Configuration option metadata/stripesize.
-	# This configuration option is advanced.
-	# This configuration option has an automatic default value.
-	# stripesize = 64
-
-	# Configuration option metadata/dirs.
-	# Directories holding live copies of text format metadata.
-	# These directories must not be on logical volumes!
-	# It's possible to use LVM with a couple of directories here,
-	# preferably on different (non-LV) filesystems, and with no other
-	# on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
-	# to on-disk metadata areas. The feature was originally added to
-	# simplify testing and is not supported under low memory situations -
-	# the machine could lock up. Never edit any files in these directories
-	# by hand unless you are absolutely sure you know what you are doing!
-	# Use the supplied toolset to make changes (e.g. vgcfgrestore).
-	# 
-	# Example
-	# dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
-	# 
-	# This configuration option is advanced.
-	# This configuration option does not have a default value defined.
+        # Configuration option metadata/pvmetadatacopies.
+        # Number of copies of metadata to store on each PV.
+        # The --pvmetadatacopies option overrides this setting.
+        # 
+        # Accepted values:
+        #   2
+        #     Two copies of the VG metadata are stored on the PV, one at the
+        #     front of the PV, and one at the end.
+        #   1
+        #     One copy of VG metadata is stored at the front of the PV.
+        #   0
+        #     No copies of VG metadata are stored on the PV. This may be
+        #     useful for VGs containing large numbers of PVs.
+        # 
+        # This configuration option is advanced.
+        # This configuration option has an automatic default value.
+        # pvmetadatacopies = 1
+
+        # Configuration option metadata/vgmetadatacopies.
+        # Number of copies of metadata to maintain for each VG.
+        # The --vgmetadatacopies option overrides this setting.
+        # If set to a non-zero value, LVM automatically chooses which of the
+        # available metadata areas to use to achieve the requested number of
+        # copies of the VG metadata. If you set a value larger than the
+        # total number of metadata areas available, then metadata is stored in
+        # them all. The value 0 (unmanaged) disables this automatic management
+        # and allows you to control which metadata areas are used at the
+        # individual PV level using pvchange --metadataignore y|n.
+        # This configuration option has an automatic default value.
+        # vgmetadatacopies = 0
+
+        # Configuration option metadata/pvmetadatasize.
+        # Approximate number of sectors to use for each metadata copy.
+        # VGs with large numbers of PVs or LVs, or VGs containing complex LV
+        # structures, may need additional space for VG metadata. The metadata
+        # areas are treated as circular buffers, so unused space becomes filled
+        # with an archive of the most recent previous versions of the metadata.
+        # This configuration option has an automatic default value.
+        # pvmetadatasize = 255
+
+        # Configuration option metadata/pvmetadataignore.
+        # Ignore metadata areas on a new PV.
+        # The --metadataignore option overrides this setting.
+        # If metadata areas on a PV are ignored, LVM will not store metadata
+        # in them.
+        # This configuration option is advanced.
+        # This configuration option has an automatic default value.
+        # pvmetadataignore = 0
+
+        # Configuration option metadata/stripesize.
+        # This configuration option is advanced.
+        # This configuration option has an automatic default value.
+        # stripesize = 64
+
+        # Configuration option metadata/dirs.
+        # Directories holding live copies of text format metadata.
+        # These directories must not be on logical volumes!
+        # It's possible to use LVM with a couple of directories here,
+        # preferably on different (non-LV) filesystems, and with no other
+        # on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
+        # to on-disk metadata areas. The feature was originally added to
+        # simplify testing and is not supported under low memory situations -
+        # the machine could lock up. Never edit any files in these directories
+        # by hand unless you are absolutely sure you know what you are doing!
+        # Use the supplied toolset to make changes (e.g. vgcfgrestore).
+        # 
+        # Example
+        # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
+        # 
+        # This configuration option is advanced.
+        # This configuration option does not have a default value defined.
 # }
 
 # Configuration section report.
@@ -1493,357 +1497,357 @@
 # This configuration section has an automatic default value.
 # report {
 
-	# Configuration option report/compact_output.
-	# Do not print empty values for all report fields.
-	# If enabled, all fields that don't have a value set for any of the
-	# rows reported are skipped and not printed. Compact output is
-	# applicable only if report/buffered is enabled. If you need to
-	# compact only specified fields, use compact_output=0 and define
-	# report/compact_output_cols configuration setting instead.
-	# This configuration option has an automatic default value.
-	# compact_output = 0
-
-	# Configuration option report/compact_output_cols.
-	# Do not print empty values for specified report fields.
-	# If defined, specified fields that don't have a value set for any
-	# of the rows reported are skipped and not printed. Compact output
-	# is applicable only if report/buffered is enabled. If you need to
-	# compact all fields, use compact_output=1 instead in which case
-	# the compact_output_cols setting is then ignored.
-	# This configuration option has an automatic default value.
-	# compact_output_cols = ""
-
-	# Configuration option report/aligned.
-	# Align columns in report output.
-	# This configuration option has an automatic default value.
-	# aligned = 1
-
-	# Configuration option report/buffered.
-	# Buffer report output.
-	# When buffered reporting is used, the report's content is appended
-	# incrementally to include each object being reported until the report
-	# is flushed to output which normally happens at the end of command
-	# execution. Otherwise, if buffering is not used, each object is
-	# reported as soon as its processing is finished.
-	# This configuration option has an automatic default value.
-	# buffered = 1
-
-	# Configuration option report/headings.
-	# Show headings for columns on report.
-	# This configuration option has an automatic default value.
-	# headings = 1
-
-	# Configuration option report/separator.
-	# A separator to use on report after each field.
-	# This configuration option has an automatic default value.
-	# separator = " "
-
-	# Configuration option report/list_item_separator.
-	# A separator to use for list items when reported.
-	# This configuration option has an automatic default value.
-	# list_item_separator = ","
-
-	# Configuration option report/prefixes.
-	# Use a field name prefix for each field reported.
-	# This configuration option has an automatic default value.
-	# prefixes = 0
-
-	# Configuration option report/quoted.
-	# Quote field values when using field name prefixes.
-	# This configuration option has an automatic default value.
-	# quoted = 1
-
-	# Configuration option report/colums_as_rows.
-	# Output each column as a row.
-	# If set, this also implies report/prefixes=1.
-	# This configuration option has an automatic default value.
-	# colums_as_rows = 0
-
-	# Configuration option report/binary_values_as_numeric.
-	# Use binary values 0 or 1 instead of descriptive literal values.
-	# For columns that have exactly two valid values to report
-	# (not counting the 'unknown' value which denotes that the
-	# value could not be determined).
-	# This configuration option has an automatic default value.
-	# binary_values_as_numeric = 0
-
-	# Configuration option report/time_format.
-	# Set time format for fields reporting time values.
-	# Format specification is a string which may contain special character
-	# sequences and ordinary character sequences. Ordinary character
-	# sequences are copied verbatim. Each special character sequence is
-	# introduced by the '%' character and such sequence is then
-	# substituted with a value as described below.
-	# 
-	# Accepted values:
-	#   %a
-	#     The abbreviated name of the day of the week according to the
-	#     current locale.
-	#   %A
-	#     The full name of the day of the week according to the current
-	#     locale.
-	#   %b
-	#     The abbreviated month name according to the current locale.
-	#   %B
-	#     The full month name according to the current locale.
-	#   %c
-	#     The preferred date and time representation for the current
-	#     locale (alt E)
-	#   %C
-	#     The century number (year/100) as a 2-digit integer. (alt E)
-	#   %d
-	#     The day of the month as a decimal number (range 01 to 31).
-	#     (alt O)
-	#   %D
-	#     Equivalent to %m/%d/%y. (For Americans only. Americans should
-	#     note that in other countries%d/%m/%y is rather common. This
-	#     means that in international context this format is ambiguous and
-	#     should not be used.
-	#   %e
-	#     Like %d, the day of the month as a decimal number, but a leading
-	#     zero is replaced by a space. (alt O)
-	#   %E
-	#     Modifier: use alternative local-dependent representation if
-	#     available.
-	#   %F
-	#     Equivalent to %Y-%m-%d (the ISO 8601 date format).
-	#   %G
-	#     The ISO 8601 week-based year with century as adecimal number.
-	#     The 4-digit year corresponding to the ISO week number (see %V).
-	#     This has the same format and value as %Y, except that if the
-	#     ISO week number belongs to the previous or next year, that year
-	#     is used instead.
-	#   %g
-	#     Like %G, but without century, that is, with a 2-digit year
-	#     (00-99).
-	#   %h
-	#     Equivalent to %b.
-	#   %H
-	#     The hour as a decimal number using a 24-hour clock
-	#     (range 00 to 23). (alt O)
-	#   %I
-	#     The hour as a decimal number using a 12-hour clock
-	#     (range 01 to 12). (alt O)
-	#   %j
-	#     The day of the year as a decimal number (range 001 to 366).
-	#   %k
-	#     The hour (24-hour clock) as a decimal number (range 0 to 23);
-	#     single digits are preceded by a blank. (See also %H.)
-	#   %l
-	#     The hour (12-hour clock) as a decimal number (range 1 to 12);
-	#     single digits are preceded by a blank. (See also %I.)
-	#   %m
-	#     The month as a decimal number (range 01 to 12). (alt O)
-	#   %M
-	#     The minute as a decimal number (range 00 to 59). (alt O)
-	#   %O
-	#     Modifier: use alternative numeric symbols.
-	#   %p
-	#     Either "AM" or "PM" according to the given time value,
-	#     or the corresponding strings for the current locale. Noon is
-	#     treated as "PM" and midnight as "AM".
-	#   %P
-	#     Like %p but in lowercase: "am" or "pm" or a corresponding
-	#     string for the current locale.
-	#   %r
-	#     The time in a.m. or p.m. notation. In the POSIX locale this is
-	#     equivalent to %I:%M:%S %p.
-	#   %R
-	#     The time in 24-hour notation (%H:%M). For a version including
-	#     the seconds, see %T below.
-	#   %s
-	#     The number of seconds since the Epoch,
-	#     1970-01-01 00:00:00 +0000 (UTC)
-	#   %S
-	#     The second as a decimal number (range 00 to 60). (The range is
-	#     up to 60 to allow for occasional leap seconds.) (alt O)
-	#   %t
-	#     A tab character.
-	#   %T
-	#     The time in 24-hour notation (%H:%M:%S).
-	#   %u
-	#     The day of the week as a decimal, range 1 to 7, Monday being 1.
-	#     See also %w. (alt O)
-	#   %U
-	#     The week number of the current year as a decimal number,
-	#     range 00 to 53, starting with the first Sunday as the first
-	#     day of week 01. See also %V and %W. (alt O)
-	#   %V
-	#     The ISO 8601 week number of the current year as a decimal number,
-	#     range 01 to 53, where week 1 is the first week that has at least
-	#     4 days in the new year. See also %U and %W. (alt O)
-	#   %w
-	#     The day of the week as a decimal, range 0 to 6, Sunday being 0.
-	#     See also %u. (alt O)
-	#   %W
-	#     The week number of the current year as a decimal number,
-	#     range 00 to 53, starting with the first Monday as the first day
-	#     of week 01. (alt O)
-	#   %x
-	#     The preferred date representation for the current locale without
-	#     the time. (alt E)
-	#   %X
-	#     The preferred time representation for the current locale without
-	#     the date. (alt E)
-	#   %y
-	#     The year as a decimal number without a century (range 00 to 99).
-	#     (alt E, alt O)
-	#   %Y
-	#     The year as a decimal number including the century. (alt E)
-	#   %z
-	#     The +hhmm or -hhmm numeric timezone (that is, the hour and minute
-	#     offset from UTC).
-	#   %Z
-	#     The timezone name or abbreviation.
-	#   %%
-	#     A literal '%' character.
-	# 
-	# This configuration option has an automatic default value.
-	# time_format = "%Y-%m-%d %T %z"
-
-	# Configuration option report/devtypes_sort.
-	# List of columns to sort by when reporting 'lvm devtypes' command.
-	# See 'lvm devtypes -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# devtypes_sort = "devtype_name"
-
-	# Configuration option report/devtypes_cols.
-	# List of columns to report for 'lvm devtypes' command.
-	# See 'lvm devtypes -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# devtypes_cols = "devtype_name,devtype_max_partitions,devtype_description"
-
-	# Configuration option report/devtypes_cols_verbose.
-	# List of columns to report for 'lvm devtypes' command in verbose mode.
-	# See 'lvm devtypes -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# devtypes_cols_verbose = "devtype_name,devtype_max_partitions,devtype_description"
-
-	# Configuration option report/lvs_sort.
-	# List of columns to sort by when reporting 'lvs' command.
-	# See 'lvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# lvs_sort = "vg_name,lv_name"
-
-	# Configuration option report/lvs_cols.
-	# List of columns to report for 'lvs' command.
-	# See 'lvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# lvs_cols = "lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"
-
-	# Configuration option report/lvs_cols_verbose.
-	# List of columns to report for 'lvs' command in verbose mode.
-	# See 'lvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# lvs_cols_verbose = "lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile"
-
-	# Configuration option report/vgs_sort.
-	# List of columns to sort by when reporting 'vgs' command.
-	# See 'vgs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# vgs_sort = "vg_name"
-
-	# Configuration option report/vgs_cols.
-	# List of columns to report for 'vgs' command.
-	# See 'vgs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# vgs_cols = "vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
-
-	# Configuration option report/vgs_cols_verbose.
-	# List of columns to report for 'vgs' command in verbose mode.
-	# See 'vgs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# vgs_cols_verbose = "vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"
-
-	# Configuration option report/pvs_sort.
-	# List of columns to sort by when reporting 'pvs' command.
-	# See 'pvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvs_sort = "pv_name"
-
-	# Configuration option report/pvs_cols.
-	# List of columns to report for 'pvs' command.
-	# See 'pvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
-
-	# Configuration option report/pvs_cols_verbose.
-	# List of columns to report for 'pvs' command in verbose mode.
-	# See 'pvs -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
-
-	# Configuration option report/segs_sort.
-	# List of columns to sort by when reporting 'lvs --segments' command.
-	# See 'lvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# segs_sort = "vg_name,lv_name,seg_start"
-
-	# Configuration option report/segs_cols.
-	# List of columns to report for 'lvs --segments' command.
-	# See 'lvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# segs_cols = "lv_name,vg_name,lv_attr,stripes,segtype,seg_size"
-
-	# Configuration option report/segs_cols_verbose.
-	# List of columns to report for 'lvs --segments' command in verbose mode.
-	# See 'lvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# segs_cols_verbose = "lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"
-
-	# Configuration option report/pvsegs_sort.
-	# List of columns to sort by when reporting 'pvs --segments' command.
-	# See 'pvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvsegs_sort = "pv_name,pvseg_start"
-
-	# Configuration option report/pvsegs_cols.
-	# List of columns to sort by when reporting 'pvs --segments' command.
-	# See 'pvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvsegs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"
-
-	# Configuration option report/pvsegs_cols_verbose.
-	# List of columns to sort by when reporting 'pvs --segments' command in verbose mode.
-	# See 'pvs --segments -o help' for the list of possible fields.
-	# This configuration option has an automatic default value.
-	# pvsegs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"
+        # Configuration option report/compact_output.
+        # Do not print empty values for all report fields.
+        # If enabled, all fields that don't have a value set for any of the
+        # rows reported are skipped and not printed. Compact output is
+        # applicable only if report/buffered is enabled. If you need to
+        # compact only specified fields, use compact_output=0 and define
+        # report/compact_output_cols configuration setting instead.
+        # This configuration option has an automatic default value.
+        # compact_output = 0
+
+        # Configuration option report/compact_output_cols.
+        # Do not print empty values for specified report fields.
+        # If defined, specified fields that don't have a value set for any
+        # of the rows reported are skipped and not printed. Compact output
+        # is applicable only if report/buffered is enabled. If you need to
+        # compact all fields, use compact_output=1 instead in which case
+        # the compact_output_cols setting is then ignored.
+        # This configuration option has an automatic default value.
+        # compact_output_cols = ""
+
+        # Configuration option report/aligned.
+        # Align columns in report output.
+        # This configuration option has an automatic default value.
+        # aligned = 1
+
+        # Configuration option report/buffered.
+        # Buffer report output.
+        # When buffered reporting is used, the report's content is appended
+        # incrementally to include each object being reported until the report
+        # is flushed to output which normally happens at the end of command
+        # execution. Otherwise, if buffering is not used, each object is
+        # reported as soon as its processing is finished.
+        # This configuration option has an automatic default value.
+        # buffered = 1
+
+        # Configuration option report/headings.
+        # Show headings for columns on report.
+        # This configuration option has an automatic default value.
+        # headings = 1
+
+        # Configuration option report/separator.
+        # A separator to use on report after each field.
+        # This configuration option has an automatic default value.
+        # separator = " "
+
+        # Configuration option report/list_item_separator.
+        # A separator to use for list items when reported.
+        # This configuration option has an automatic default value.
+        # list_item_separator = ","
+
+        # Configuration option report/prefixes.
+        # Use a field name prefix for each field reported.
+        # This configuration option has an automatic default value.
+        # prefixes = 0
+
+        # Configuration option report/quoted.
+        # Quote field values when using field name prefixes.
+        # This configuration option has an automatic default value.
+        # quoted = 1
+
+        # Configuration option report/colums_as_rows.
+        # Output each column as a row.
+        # If set, this also implies report/prefixes=1.
+        # This configuration option has an automatic default value.
+        # colums_as_rows = 0
+
+        # Configuration option report/binary_values_as_numeric.
+        # Use binary values 0 or 1 instead of descriptive literal values.
+        # For columns that have exactly two valid values to report
+        # (not counting the 'unknown' value which denotes that the
+        # value could not be determined).
+        # This configuration option has an automatic default value.
+        # binary_values_as_numeric = 0
+
+        # Configuration option report/time_format.
+        # Set time format for fields reporting time values.
+        # Format specification is a string which may contain special character
+        # sequences and ordinary character sequences. Ordinary character
+        # sequences are copied verbatim. Each special character sequence is
+        # introduced by the '%' character and such sequence is then
+        # substituted with a value as described below.
+        # 
+        # Accepted values:
+        #   %a
+        #     The abbreviated name of the day of the week according to the
+        #     current locale.
+        #   %A
+        #     The full name of the day of the week according to the current
+        #     locale.
+        #   %b
+        #     The abbreviated month name according to the current locale.
+        #   %B
+        #     The full month name according to the current locale.
+        #   %c
+        #     The preferred date and time representation for the current
+        #     locale. (alt E)
+        #   %C
+        #     The century number (year/100) as a 2-digit integer. (alt E)
+        #   %d
+        #     The day of the month as a decimal number (range 01 to 31).
+        #     (alt O)
+        #   %D
+        #     Equivalent to %m/%d/%y. (For Americans only. Americans should
+        #     note that in other countries %d/%m/%y is rather common. This
+        #     means that in international context this format is ambiguous and
+        #     should not be used.)
+        #   %e
+        #     Like %d, the day of the month as a decimal number, but a leading
+        #     zero is replaced by a space. (alt O)
+        #   %E
+        #     Modifier: use alternative locale-dependent representation if
+        #     available.
+        #   %F
+        #     Equivalent to %Y-%m-%d (the ISO 8601 date format).
+        #   %G
+        #     The ISO 8601 week-based year with century as a decimal number.
+        #     The 4-digit year corresponding to the ISO week number (see %V).
+        #     This has the same format and value as %Y, except that if the
+        #     ISO week number belongs to the previous or next year, that year
+        #     is used instead.
+        #   %g
+        #     Like %G, but without century, that is, with a 2-digit year
+        #     (00-99).
+        #   %h
+        #     Equivalent to %b.
+        #   %H
+        #     The hour as a decimal number using a 24-hour clock
+        #     (range 00 to 23). (alt O)
+        #   %I
+        #     The hour as a decimal number using a 12-hour clock
+        #     (range 01 to 12). (alt O)
+        #   %j
+        #     The day of the year as a decimal number (range 001 to 366).
+        #   %k
+        #     The hour (24-hour clock) as a decimal number (range 0 to 23);
+        #     single digits are preceded by a blank. (See also %H.)
+        #   %l
+        #     The hour (12-hour clock) as a decimal number (range 1 to 12);
+        #     single digits are preceded by a blank. (See also %I.)
+        #   %m
+        #     The month as a decimal number (range 01 to 12). (alt O)
+        #   %M
+        #     The minute as a decimal number (range 00 to 59). (alt O)
+        #   %O
+        #     Modifier: use alternative numeric symbols.
+        #   %p
+        #     Either "AM" or "PM" according to the given time value,
+        #     or the corresponding strings for the current locale. Noon is
+        #     treated as "PM" and midnight as "AM".
+        #   %P
+        #     Like %p but in lowercase: "am" or "pm" or a corresponding
+        #     string for the current locale.
+        #   %r
+        #     The time in a.m. or p.m. notation. In the POSIX locale this is
+        #     equivalent to %I:%M:%S %p.
+        #   %R
+        #     The time in 24-hour notation (%H:%M). For a version including
+        #     the seconds, see %T below.
+        #   %s
+        #     The number of seconds since the Epoch,
+        #     1970-01-01 00:00:00 +0000 (UTC).
+        #   %S
+        #     The second as a decimal number (range 00 to 60). (The range is
+        #     up to 60 to allow for occasional leap seconds.) (alt O)
+        #   %t
+        #     A tab character.
+        #   %T
+        #     The time in 24-hour notation (%H:%M:%S).
+        #   %u
+        #     The day of the week as a decimal, range 1 to 7, Monday being 1.
+        #     See also %w. (alt O)
+        #   %U
+        #     The week number of the current year as a decimal number,
+        #     range 00 to 53, starting with the first Sunday as the first
+        #     day of week 01. See also %V and %W. (alt O)
+        #   %V
+        #     The ISO 8601 week number of the current year as a decimal number,
+        #     range 01 to 53, where week 1 is the first week that has at least
+        #     4 days in the new year. See also %U and %W. (alt O)
+        #   %w
+        #     The day of the week as a decimal, range 0 to 6, Sunday being 0.
+        #     See also %u. (alt O)
+        #   %W
+        #     The week number of the current year as a decimal number,
+        #     range 00 to 53, starting with the first Monday as the first day
+        #     of week 01. (alt O)
+        #   %x
+        #     The preferred date representation for the current locale without
+        #     the time. (alt E)
+        #   %X
+        #     The preferred time representation for the current locale without
+        #     the date. (alt E)
+        #   %y
+        #     The year as a decimal number without a century (range 00 to 99).
+        #     (alt E, alt O)
+        #   %Y
+        #     The year as a decimal number including the century. (alt E)
+        #   %z
+        #     The +hhmm or -hhmm numeric timezone (that is, the hour and minute
+        #     offset from UTC).
+        #   %Z
+        #     The timezone name or abbreviation.
+        #   %%
+        #     A literal '%' character.
+        # 
+        # This configuration option has an automatic default value.
+        # time_format = "%Y-%m-%d %T %z"
+
+        # Configuration option report/devtypes_sort.
+        # List of columns to sort by when reporting 'lvm devtypes' command.
+        # See 'lvm devtypes -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # devtypes_sort = "devtype_name"
+
+        # Configuration option report/devtypes_cols.
+        # List of columns to report for 'lvm devtypes' command.
+        # See 'lvm devtypes -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # devtypes_cols = "devtype_name,devtype_max_partitions,devtype_description"
+
+        # Configuration option report/devtypes_cols_verbose.
+        # List of columns to report for 'lvm devtypes' command in verbose mode.
+        # See 'lvm devtypes -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # devtypes_cols_verbose = "devtype_name,devtype_max_partitions,devtype_description"
+
+        # Configuration option report/lvs_sort.
+        # List of columns to sort by when reporting 'lvs' command.
+        # See 'lvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # lvs_sort = "vg_name,lv_name"
+
+        # Configuration option report/lvs_cols.
+        # List of columns to report for 'lvs' command.
+        # See 'lvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # lvs_cols = "lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"
+
+        # Configuration option report/lvs_cols_verbose.
+        # List of columns to report for 'lvs' command in verbose mode.
+        # See 'lvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # lvs_cols_verbose = "lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile"
+
+        # Configuration option report/vgs_sort.
+        # List of columns to sort by when reporting 'vgs' command.
+        # See 'vgs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # vgs_sort = "vg_name"
+
+        # Configuration option report/vgs_cols.
+        # List of columns to report for 'vgs' command.
+        # See 'vgs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # vgs_cols = "vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
+
+        # Configuration option report/vgs_cols_verbose.
+        # List of columns to report for 'vgs' command in verbose mode.
+        # See 'vgs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # vgs_cols_verbose = "vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"
+
+        # Configuration option report/pvs_sort.
+        # List of columns to sort by when reporting 'pvs' command.
+        # See 'pvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvs_sort = "pv_name"
+
+        # Configuration option report/pvs_cols.
+        # List of columns to report for 'pvs' command.
+        # See 'pvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
+
+        # Configuration option report/pvs_cols_verbose.
+        # List of columns to report for 'pvs' command in verbose mode.
+        # See 'pvs -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
+
+        # Configuration option report/segs_sort.
+        # List of columns to sort by when reporting 'lvs --segments' command.
+        # See 'lvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # segs_sort = "vg_name,lv_name,seg_start"
+
+        # Configuration option report/segs_cols.
+        # List of columns to report for 'lvs --segments' command.
+        # See 'lvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # segs_cols = "lv_name,vg_name,lv_attr,stripes,segtype,seg_size"
+
+        # Configuration option report/segs_cols_verbose.
+        # List of columns to report for 'lvs --segments' command in verbose mode.
+        # See 'lvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # segs_cols_verbose = "lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"
+
+        # Configuration option report/pvsegs_sort.
+        # List of columns to sort by when reporting 'pvs --segments' command.
+        # See 'pvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvsegs_sort = "pv_name,pvseg_start"
+
+        # Configuration option report/pvsegs_cols.
+        # List of columns to report for 'pvs --segments' command.
+        # See 'pvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvsegs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"
+
+        # Configuration option report/pvsegs_cols_verbose.
+        # List of columns to report for 'pvs --segments' command in verbose mode.
+        # See 'pvs --segments -o help' for the list of possible fields.
+        # This configuration option has an automatic default value.
+        # pvsegs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"
 # }
 
 # Configuration section dmeventd.
 # Settings for the LVM event daemon.
 dmeventd {
 
-	# Configuration option dmeventd/mirror_library.
-	# The library dmeventd uses when monitoring a mirror device.
-	# libdevmapper-event-lvm2mirror.so attempts to recover from
-	# failures. It removes failed devices from a volume group and
-	# reconfigures a mirror as necessary. If no mirror library is
-	# provided, mirrors are not monitored through dmeventd.
-	mirror_library = "libdevmapper-event-lvm2mirror.so"
-
-	# Configuration option dmeventd/raid_library.
-	# This configuration option has an automatic default value.
-	# raid_library = "libdevmapper-event-lvm2raid.so"
-
-	# Configuration option dmeventd/snapshot_library.
-	# The library dmeventd uses when monitoring a snapshot device.
-	# libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots
-	# and emits a warning through syslog when the usage exceeds 80%. The
-	# warning is repeated when 85%, 90% and 95% of the snapshot is filled.
-	snapshot_library = "libdevmapper-event-lvm2snapshot.so"
-
-	# Configuration option dmeventd/thin_library.
-	# The library dmeventd uses when monitoring a thin device.
-	# libdevmapper-event-lvm2thin.so monitors the filling of a pool
-	# and emits a warning through syslog when the usage exceeds 80%. The
-	# warning is repeated when 85%, 90% and 95% of the pool is filled.
-	thin_library = "libdevmapper-event-lvm2thin.so"
-
-	# Configuration option dmeventd/executable.
-	# The full path to the dmeventd binary.
-	# This configuration option has an automatic default value.
-	# executable = "/sbin/dmeventd"
+        # Configuration option dmeventd/mirror_library.
+        # The library dmeventd uses when monitoring a mirror device.
+        # libdevmapper-event-lvm2mirror.so attempts to recover from
+        # failures. It removes failed devices from a volume group and
+        # reconfigures a mirror as necessary. If no mirror library is
+        # provided, mirrors are not monitored through dmeventd.
+        mirror_library = "libdevmapper-event-lvm2mirror.so"
+
+        # Configuration option dmeventd/raid_library.
+        # This configuration option has an automatic default value.
+        # raid_library = "libdevmapper-event-lvm2raid.so"
+
+        # Configuration option dmeventd/snapshot_library.
+        # The library dmeventd uses when monitoring a snapshot device.
+        # libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots
+        # and emits a warning through syslog when the usage exceeds 80%. The
+        # warning is repeated when 85%, 90% and 95% of the snapshot is filled.
+        snapshot_library = "libdevmapper-event-lvm2snapshot.so"
+
+        # Configuration option dmeventd/thin_library.
+        # The library dmeventd uses when monitoring a thin device.
+        # libdevmapper-event-lvm2thin.so monitors the filling of a pool
+        # and emits a warning through syslog when the usage exceeds 80%. The
+        # warning is repeated when 85%, 90% and 95% of the pool is filled.
+        thin_library = "libdevmapper-event-lvm2thin.so"
+
+        # Configuration option dmeventd/executable.
+        # The full path to the dmeventd binary.
+        # This configuration option has an automatic default value.
+        # executable = "/sbin/dmeventd"
 }
 
 # Configuration section tags.
@@ -1851,37 +1855,37 @@
 # This configuration section has an automatic default value.
 # tags {
 
-	# Configuration option tags/hosttags.
-	# Create a host tag using the machine name.
-	# The machine name is nodename returned by uname(2).
-	# This configuration option has an automatic default value.
-	# hosttags = 0
-
-	# Configuration section tags/<tag>.
-	# Replace this subsection name with a custom tag name.
-	# Multiple subsections like this can be created. The '@' prefix for
-	# tags is optional. This subsection can contain host_list, which is a
-	# list of machine names. If the name of the local machine is found in
-	# host_list, then the name of this subsection is used as a tag and is
-	# applied to the local machine as a 'host tag'. If this subsection is
-	# empty (has no host_list), then the subsection name is always applied
-	# as a 'host tag'.
-	# 
-	# Example
-	# The host tag foo is given to all hosts, and the host tag
-	# bar is given to the hosts named machine1 and machine2.
-	# tags { foo { } bar { host_list = [ "machine1", "machine2" ] } }
-	# 
-	# This configuration section has variable name.
-	# This configuration section has an automatic default value.
-	# tag {
-
-		# Configuration option tags/<tag>/host_list.
-		# A list of machine names.
-		# These machine names are compared to the nodename returned
-		# by uname(2). If the local machine name matches an entry in
-		# this list, the name of the subsection is applied to the
-		# machine as a 'host tag'.
-		# This configuration option does not have a default value defined.
-	# }
+        # Configuration option tags/hosttags.
+        # Create a host tag using the machine name.
+        # The machine name is nodename returned by uname(2).
+        # This configuration option has an automatic default value.
+        # hosttags = 0
+
+        # Configuration section tags/<tag>.
+        # Replace this subsection name with a custom tag name.
+        # Multiple subsections like this can be created. The '@' prefix for
+        # tags is optional. This subsection can contain host_list, which is a
+        # list of machine names. If the name of the local machine is found in
+        # host_list, then the name of this subsection is used as a tag and is
+        # applied to the local machine as a 'host tag'. If this subsection is
+        # empty (has no host_list), then the subsection name is always applied
+        # as a 'host tag'.
+        # 
+        # Example
+        # The host tag foo is given to all hosts, and the host tag
+        # bar is given to the hosts named machine1 and machine2.
+        # tags { foo { } bar { host_list = [ "machine1", "machine2" ] } }
+        # 
+        # This configuration section has variable name.
+        # This configuration section has an automatic default value.
+        # tag {
+
+                # Configuration option tags/<tag>/host_list.
+                # A list of machine names.
+                # These machine names are compared to the nodename returned
+                # by uname(2). If the local machine name matches an entry in
+                # this list, the name of the subsection is applied to the
+                # machine as a 'host tag'.
+                # This configuration option does not have a default value defined.
+        # }
 # }

2018-03-30 05:45:29,142 [salt.state       ][INFO    ][3115] Completed state [/etc/lvm/lvm.conf] at time 05:45:29.142372 duration_in_ms=147.889
2018-03-30 05:45:29,144 [salt.state       ][INFO    ][3115] Running state [lvm2-lvmetad] at time 05:45:29.144105
2018-03-30 05:45:29,144 [salt.state       ][INFO    ][3115] Executing state service.running for lvm2-lvmetad
2018-03-30 05:45:29,144 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'status', 'lvm2-lvmetad.service', '-n', '0'] in directory '/root'
2018-03-30 05:45:29,166 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'lvm2-lvmetad.service'] in directory '/root'
2018-03-30 05:45:29,181 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2018-03-30 05:45:29,203 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2018-03-30 05:45:29,228 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2018-03-30 05:45:29,252 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'lvm2-lvmetad.service'] in directory '/root'
2018-03-30 05:45:29,633 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2018-03-30 05:45:29,651 [salt.state       ][INFO    ][3115] {'lvm2-lvmetad': True}
2018-03-30 05:45:29,652 [salt.state       ][INFO    ][3115] Completed state [lvm2-lvmetad] at time 05:45:29.652100 duration_in_ms=507.995
2018-03-30 05:45:29,653 [salt.state       ][INFO    ][3115] Running state [lvm2-lvmpolld] at time 05:45:29.653689
2018-03-30 05:45:29,653 [salt.state       ][INFO    ][3115] Executing state service.running for lvm2-lvmpolld
2018-03-30 05:45:29,654 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'status', 'lvm2-lvmpolld.service', '-n', '0'] in directory '/root'
2018-03-30 05:45:29,672 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:29,688 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:29,706 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:29,720 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:29,755 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:29,772 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:29,792 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:29,810 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:29,828 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:30,241 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2018-03-30 05:45:30,258 [salt.state       ][INFO    ][3115] {'lvm2-lvmpolld': True}
2018-03-30 05:45:30,258 [salt.state       ][INFO    ][3115] Completed state [lvm2-lvmpolld] at time 05:45:30.258707 duration_in_ms=605.016
2018-03-30 05:45:30,262 [salt.state       ][INFO    ][3115] Running state [lvm2-monitor] at time 05:45:30.262205
2018-03-30 05:45:30,262 [salt.state       ][INFO    ][3115] Executing state service.running for lvm2-monitor
2018-03-30 05:45:30,263 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'status', 'lvm2-monitor.service', '-n', '0'] in directory '/root'
2018-03-30 05:45:30,279 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'lvm2-monitor.service'] in directory '/root'
2018-03-30 05:45:30,293 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-monitor.service'] in directory '/root'
2018-03-30 05:45:30,308 [salt.state       ][INFO    ][3115] The service lvm2-monitor is already running
2018-03-30 05:45:30,308 [salt.state       ][INFO    ][3115] Completed state [lvm2-monitor] at time 05:45:30.308383 duration_in_ms=46.176
2018-03-30 05:45:30,308 [salt.state       ][INFO    ][3115] Running state [lvm2-monitor] at time 05:45:30.308700
2018-03-30 05:45:30,309 [salt.state       ][INFO    ][3115] Executing state service.mod_watch for lvm2-monitor
2018-03-30 05:45:30,309 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'lvm2-monitor.service'] in directory '/root'
2018-03-30 05:45:30,323 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'lvm2-monitor.service'] in directory '/root'
2018-03-30 05:45:30,336 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'lvm2-monitor.service'] in directory '/root'
2018-03-30 05:45:30,394 [salt.state       ][INFO    ][3115] {'lvm2-monitor': True}
2018-03-30 05:45:30,395 [salt.state       ][INFO    ][3115] Completed state [lvm2-monitor] at time 05:45:30.395216 duration_in_ms=86.515
2018-03-30 05:45:30,407 [salt.state       ][INFO    ][3115] Running state [/dev/sda2] at time 05:45:30.406571
2018-03-30 05:45:30,409 [salt.state       ][INFO    ][3115] Executing state lvm.pv_present for /dev/sda2
2018-03-30 05:45:30,411 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['pvdisplay', '-c', '/dev/sda2'] in directory '/root'
2018-03-30 05:45:30,465 [salt.state       ][INFO    ][3115] Physical Volume /dev/sda2 already present
2018-03-30 05:45:30,466 [salt.state       ][INFO    ][3115] Completed state [/dev/sda2] at time 05:45:30.466259 duration_in_ms=59.694
2018-03-30 05:45:30,469 [salt.state       ][INFO    ][3115] Running state [vgroot] at time 05:45:30.469345
2018-03-30 05:45:30,469 [salt.state       ][INFO    ][3115] Executing state lvm.vg_present for vgroot
2018-03-30 05:45:30,470 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['vgdisplay', '-c', 'vgroot'] in directory '/root'
2018-03-30 05:45:30,494 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['pvdisplay', '-c', '/dev/sda2'] in directory '/root'
2018-03-30 05:45:30,519 [salt.state       ][INFO    ][3115] Volume Group vgroot already present
/dev/sda2 is part of Volume Group
2018-03-30 05:45:30,519 [salt.state       ][INFO    ][3115] Completed state [vgroot] at time 05:45:30.519571 duration_in_ms=50.226
2018-03-30 05:45:30,520 [salt.state       ][INFO    ][3115] Running state [ntp] at time 05:45:30.520082
2018-03-30 05:45:30,520 [salt.state       ][INFO    ][3115] Executing state pkg.installed for ntp
2018-03-30 05:45:30,531 [salt.state       ][INFO    ][3115] All specified packages are already installed
2018-03-30 05:45:30,532 [salt.state       ][INFO    ][3115] Completed state [ntp] at time 05:45:30.532125 duration_in_ms=12.044
2018-03-30 05:45:30,534 [salt.state       ][INFO    ][3115] Running state [/etc/ntp.conf] at time 05:45:30.534100
2018-03-30 05:45:30,534 [salt.state       ][INFO    ][3115] Executing state file.managed for /etc/ntp.conf
2018-03-30 05:45:30,534 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054530065853
2018-03-30 05:45:30,551 [salt.minion      ][INFO    ][18618] Starting a new job with PID 18618
2018-03-30 05:45:30,562 [salt.fileclient  ][INFO    ][3115] Fetching file from saltenv 'base', ** done ** 'ntp/files/ntp.conf'
2018-03-30 05:45:30,572 [salt.minion      ][INFO    ][18618] Returning information for job: 20180330054530065853
2018-03-30 05:45:30,607 [salt.state       ][INFO    ][3115] File changed:
--- 
+++ 
@@ -1,66 +1,24 @@
 
-# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
 
-driftfile /var/lib/ntp/ntp.drift
+# ntpd will only synchronize your clock.
 
-# Enable this if you want statistics to be logged.
-#statsdir /var/log/ntpstats/
+# For details, see:
+# - the ntp.conf man page
+# - http://support.ntp.org/bin/view/Support/GettingStarted
+# - https://wiki.archlinux.org/index.php/Network_Time_Protocol_daemon
 
-statistics loopstats peerstats clockstats
-filegen loopstats file loopstats type day enable
-filegen peerstats file peerstats type day enable
-filegen clockstats file clockstats type day enable
+# Associate to cloud NTP pool servers
+server 1.pool.ntp.org iburst
+server 0.pool.ntp.org
 
-# Specify one or more NTP servers.
-
-# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
-# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
-# more information.
-# pools
-pool ntp.ubuntu.com iburst
-
-# Use Ubuntu's ntp server as a fallback.
-# pool ntp.ubuntu.com
-
-# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
-# details.  The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
-# might also be helpful.
-#
-# Note that "restrict" applies to both servers and clients, so a configuration
-# that might be intended to block requests from certain clients could also end
-# up blocking replies from your own upstream servers.
-
-# By default, exchange time with everybody, but don't allow configuration.
-restrict -4 default kod notrap nomodify nopeer noquery limited
-restrict -6 default kod notrap nomodify nopeer noquery limited
-
-# Local users may interrogate the ntp server more closely.
+# Only allow read-only access from localhost
+restrict default noquery nopeer
 restrict 127.0.0.1
 restrict ::1
 
-# Needed for adding pool entries
-restrict source notrap nomodify noquery
-
-# Clients from this (example!) subnet have unlimited access, but only if
-# cryptographically authenticated.
-#restrict 192.168.123.0 mask 255.255.255.0 notrust
+# mode7 is required for collectd monitoring
 
 
-# If you want to provide time to your local subnet, change the next line.
-# (Again, the address is an example only.)
-#broadcast 192.168.123.255
-
-# If you want to listen to time broadcasts on your local subnet, de-comment the
-# next lines.  Please do this only if you trust everybody on the network!
-#disable auth
-#broadcastclient
-
-#Changes required to use pps synchronisation as explained in documentation:
-#http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#AEN3918
-
-#server 127.127.8.1 mode 135 prefer    # Meinberg GPS167 with PPS
-#fudge 127.127.8.1 time1 0.0042        # relative to PPS for my hardware
-
-#server 127.127.22.1                   # ATOM(PPS)
-#fudge 127.127.22.1 flag3 1            # enable PPS API
-
+# Location of drift file
+driftfile /var/lib/ntp/ntp.drift
+logfile /var/log/ntp.log

2018-03-30 05:45:30,607 [salt.state       ][INFO    ][3115] Completed state [/etc/ntp.conf] at time 05:45:30.607831 duration_in_ms=73.732
2018-03-30 05:45:30,609 [salt.state       ][INFO    ][3115] Running state [ntp] at time 05:45:30.609359
2018-03-30 05:45:30,609 [salt.state       ][INFO    ][3115] Executing state service.running for ntp
2018-03-30 05:45:30,610 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'status', 'ntp.service', '-n', '0'] in directory '/root'
2018-03-30 05:45:30,630 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'ntp.service'] in directory '/root'
2018-03-30 05:45:30,646 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'ntp.service'] in directory '/root'
2018-03-30 05:45:30,667 [salt.state       ][INFO    ][3115] The service ntp is already running
2018-03-30 05:45:30,667 [salt.state       ][INFO    ][3115] Completed state [ntp] at time 05:45:30.667538 duration_in_ms=58.179
2018-03-30 05:45:30,667 [salt.state       ][INFO    ][3115] Running state [ntp] at time 05:45:30.667904
2018-03-30 05:45:30,668 [salt.state       ][INFO    ][3115] Executing state service.mod_watch for ntp
2018-03-30 05:45:30,669 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-active', 'ntp.service'] in directory '/root'
2018-03-30 05:45:30,685 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemctl', 'is-enabled', 'ntp.service'] in directory '/root'
2018-03-30 05:45:30,706 [salt.loaded.int.module.cmdmod][INFO    ][3115] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'ntp.service'] in directory '/root'
2018-03-30 05:45:30,770 [salt.state       ][INFO    ][3115] {'ntp': True}
2018-03-30 05:45:30,770 [salt.state       ][INFO    ][3115] Completed state [ntp] at time 05:45:30.770681 duration_in_ms=102.776
2018-03-30 05:45:30,779 [salt.minion      ][INFO    ][3115] Returning information for job: 20180330054317464553
2018-03-30 05:45:31,524 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command pkg.upgrade with jid 20180330054531056342
2018-03-30 05:45:31,540 [salt.minion      ][INFO    ][18694] Starting a new job with PID 18694
2018-03-30 05:45:31,577 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][18694] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:45:31,933 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][18694] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'upgrade'] in directory '/root'
2018-03-30 05:45:41,619 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054541151387
2018-03-30 05:45:41,636 [salt.minion      ][INFO    ][18808] Starting a new job with PID 18808
2018-03-30 05:45:41,662 [salt.minion      ][INFO    ][18808] Returning information for job: 20180330054541151387
2018-03-30 05:45:51,796 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054551328035
2018-03-30 05:45:51,814 [salt.minion      ][INFO    ][19550] Starting a new job with PID 19550
2018-03-30 05:45:51,836 [salt.minion      ][INFO    ][19550] Returning information for job: 20180330054551328035
2018-03-30 05:45:52,888 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][18694] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:45:52,939 [salt.minion      ][INFO    ][18694] Returning information for job: 20180330054531056342
2018-03-30 05:47:10,296 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command state.apply with jid 20180330054709828411
2018-03-30 05:47:10,313 [salt.minion      ][INFO    ][19623] Starting a new job with PID 19623
2018-03-30 05:47:12,754 [salt.state       ][INFO    ][19623] Loading fresh modules for state activity
2018-03-30 05:47:12,810 [salt.fileclient  ][INFO    ][19623] Fetching file from saltenv 'base', ** done ** 'salt/init.sls'
2018-03-30 05:47:13,190 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,231 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,475 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,507 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,541 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,573 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,600 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,628 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,651 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l libvirt-bin | grep libvirt-bin | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,675 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,698 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l qemu-kvm | grep qemu-kvm | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,725 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,751 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,910 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,944 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:13,977 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,010 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,040 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,068 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,096 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l libvirt-bin | grep libvirt-bin | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,125 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,145 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l qemu-kvm | grep qemu-kvm | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,164 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,183 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,303 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,336 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,367 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,397 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,426 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,457 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,486 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,515 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,543 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,567 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,605 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,632 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,657 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,678 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,702 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,724 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,744 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,763 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,791 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,817 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:14,902 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'salt-minion --version' in directory '/root'
2018-03-30 05:47:15,190 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'salt-minion --version' in directory '/root'
2018-03-30 05:47:16,110 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,151 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,387 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,420 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,451 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,484 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,512 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,536 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,561 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l libvirt-bin | grep libvirt-bin | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,592 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,620 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l qemu-kvm | grep qemu-kvm | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,648 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,673 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,767 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,799 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,833 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,864 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,896 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,925 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,955 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l libvirt-bin | grep libvirt-bin | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:16,983 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,009 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l qemu-kvm | grep qemu-kvm | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,041 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,070 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,213 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,246 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,278 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,308 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,339 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,369 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,398 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,427 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,452 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,484 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,524 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,556 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,585 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,611 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,640 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,670 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,698 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,724 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,756 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,787 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2018-03-30 05:47:17,867 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'salt-minion --version' in directory '/root'
2018-03-30 05:47:18,164 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'salt-minion --version' in directory '/root'
2018-03-30 05:47:19,474 [salt.state       ][INFO    ][19623] Running state [salt-minion] at time 05:47:19.473960
2018-03-30 05:47:19,474 [salt.state       ][INFO    ][19623] Executing state pkg.installed for salt-minion
2018-03-30 05:47:19,474 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 05:47:19,809 [salt.state       ][INFO    ][19623] All specified packages are already installed
2018-03-30 05:47:19,810 [salt.state       ][INFO    ][19623] Completed state [salt-minion] at time 05:47:19.810196 duration_in_ms=336.237
2018-03-30 05:47:19,810 [salt.state       ][INFO    ][19623] Running state [salt_minion_dependency_packages] at time 05:47:19.810427
2018-03-30 05:47:19,810 [salt.state       ][INFO    ][19623] Executing state pkg.installed for salt_minion_dependency_packages
2018-03-30 05:47:19,814 [salt.state       ][INFO    ][19623] All specified packages are already installed
2018-03-30 05:47:19,814 [salt.state       ][INFO    ][19623] Completed state [salt_minion_dependency_packages] at time 05:47:19.814524 duration_in_ms=4.096
2018-03-30 05:47:19,816 [salt.state       ][INFO    ][19623] Running state [/etc/salt/minion.d/minion.conf] at time 05:47:19.816471
2018-03-30 05:47:19,816 [salt.state       ][INFO    ][19623] Executing state file.managed for /etc/salt/minion.d/minion.conf
2018-03-30 05:47:19,975 [salt.state       ][INFO    ][19623] File /etc/salt/minion.d/minion.conf is in the correct state
2018-03-30 05:47:19,975 [salt.state       ][INFO    ][19623] Completed state [/etc/salt/minion.d/minion.conf] at time 05:47:19.975785 duration_in_ms=159.314
2018-03-30 05:47:19,976 [salt.state       ][INFO    ][19623] Running state [salt-minion] at time 05:47:19.976476
2018-03-30 05:47:19,976 [salt.state       ][INFO    ][19623] Executing state service.running for salt-minion
2018-03-30 05:47:19,977 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2018-03-30 05:47:20,008 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command ['systemctl', 'is-active', 'salt-minion.service'] in directory '/root'
2018-03-30 05:47:20,025 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command ['systemctl', 'is-enabled', 'salt-minion.service'] in directory '/root'
2018-03-30 05:47:20,041 [salt.state       ][INFO    ][19623] The service salt-minion is already running
2018-03-30 05:47:20,041 [salt.state       ][INFO    ][19623] Completed state [salt-minion] at time 05:47:20.041517 duration_in_ms=65.039
2018-03-30 05:47:20,043 [salt.state       ][INFO    ][19623] Running state [/etc/salt/grains.d] at time 05:47:20.043834
2018-03-30 05:47:20,044 [salt.state       ][INFO    ][19623] Executing state file.directory for /etc/salt/grains.d
2018-03-30 05:47:20,045 [salt.state       ][INFO    ][19623] Directory /etc/salt/grains.d is in the correct state
2018-03-30 05:47:20,045 [salt.state       ][INFO    ][19623] Completed state [/etc/salt/grains.d] at time 05:47:20.045789 duration_in_ms=1.954
2018-03-30 05:47:20,046 [salt.state       ][INFO    ][19623] Running state [/etc/salt/grains] at time 05:47:20.046712
2018-03-30 05:47:20,047 [salt.state       ][INFO    ][19623] Executing state file.managed for /etc/salt/grains
2018-03-30 05:47:20,047 [salt.state       ][INFO    ][19623] File /etc/salt/grains exists with proper permissions. No changes made.
2018-03-30 05:47:20,048 [salt.state       ][INFO    ][19623] Completed state [/etc/salt/grains] at time 05:47:20.048275 duration_in_ms=1.563
2018-03-30 05:47:20,048 [salt.state       ][INFO    ][19623] Running state [/etc/salt/grains.d/placeholder] at time 05:47:20.048934
2018-03-30 05:47:20,049 [salt.state       ][INFO    ][19623] Executing state file.managed for /etc/salt/grains.d/placeholder
2018-03-30 05:47:20,055 [salt.state       ][INFO    ][19623] File /etc/salt/grains.d/placeholder exists with proper permissions. No changes made.
2018-03-30 05:47:20,055 [salt.state       ][INFO    ][19623] Completed state [/etc/salt/grains.d/placeholder] at time 05:47:20.055604 duration_in_ms=6.669
2018-03-30 05:47:20,056 [salt.state       ][INFO    ][19623] Running state [/etc/salt/grains.d/sphinx] at time 05:47:20.056242
2018-03-30 05:47:20,056 [salt.state       ][INFO    ][19623] Executing state file.managed for /etc/salt/grains.d/sphinx
2018-03-30 05:47:20,069 [salt.state       ][INFO    ][19623] File changed:
--- 
+++ 
@@ -41,7 +41,7 @@
 
                 * lvm2: 2.02.133-1ubuntu10
 
-                * sysfsutils: dpkg-query: no packages found matching sysfsutils
+                * sysfsutils: 2.1.0+repack-4
 
                 * sg3-utils: dpkg-query: no packages found matching sg3-utils
 
@@ -86,6 +86,7 @@
             ip:
               name: IP Addresses
               value:
+              - 10.167.4.52
               - 127.0.0.1
               - 192.168.11.18
         system:
@@ -135,7 +136,7 @@
 
                 * pm-utils: dpkg-query: no packages found matching pm-utils
 
-                * sysfsutils: dpkg-query: no packages found matching sysfsutils
+                * sysfsutils: 2.1.0+repack-4
 
                 * sg3-utils: dpkg-query: no packages found matching sg3-utils
 

2018-03-30 05:47:20,069 [salt.state       ][INFO    ][19623] Completed state [/etc/salt/grains.d/sphinx] at time 05:47:20.069489 duration_in_ms=13.245
2018-03-30 05:47:20,072 [salt.state       ][INFO    ][19623] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 05:47:20.072735
2018-03-30 05:47:20,073 [salt.state       ][INFO    ][19623] Executing state cmd.wait for python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"
2018-03-30 05:47:20,073 [salt.state       ][INFO    ][19623] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"
2018-03-30 05:47:20,074 [salt.state       ][INFO    ][19623] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 05:47:20.074189 duration_in_ms=1.453
2018-03-30 05:47:20,074 [salt.state       ][INFO    ][19623] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 05:47:20.074590
2018-03-30 05:47:20,075 [salt.state       ][INFO    ][19623] Executing state cmd.mod_watch for python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"
2018-03-30 05:47:20,076 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"' in directory '/root'
2018-03-30 05:47:20,267 [salt.state       ][INFO    ][19623] {'pid': 20029, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:47:20,268 [salt.state       ][INFO    ][19623] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 05:47:20.268715 duration_in_ms=194.123
2018-03-30 05:47:20,269 [salt.state       ][INFO    ][19623] Running state [/etc/salt/grains.d/dns_records] at time 05:47:20.269884
2018-03-30 05:47:20,270 [salt.state       ][INFO    ][19623] Executing state file.managed for /etc/salt/grains.d/dns_records
2018-03-30 05:47:20,278 [salt.state       ][INFO    ][19623] File /etc/salt/grains.d/dns_records is in the correct state
2018-03-30 05:47:20,279 [salt.state       ][INFO    ][19623] Completed state [/etc/salt/grains.d/dns_records] at time 05:47:20.279254 duration_in_ms=9.371
2018-03-30 05:47:20,280 [salt.state       ][INFO    ][19623] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"] at time 05:47:20.280544
2018-03-30 05:47:20,281 [salt.state       ][INFO    ][19623] Executing state cmd.wait for python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"
2018-03-30 05:47:20,281 [salt.state       ][INFO    ][19623] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"
2018-03-30 05:47:20,281 [salt.state       ][INFO    ][19623] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"] at time 05:47:20.281793 duration_in_ms=1.248
2018-03-30 05:47:20,282 [salt.state       ][INFO    ][19623] Running state [/etc/salt/grains.d/salt] at time 05:47:20.282373
2018-03-30 05:47:20,282 [salt.state       ][INFO    ][19623] Executing state file.managed for /etc/salt/grains.d/salt
2018-03-30 05:47:20,284 [salt.state       ][INFO    ][19623] File /etc/salt/grains.d/salt is in the correct state
2018-03-30 05:47:20,285 [salt.state       ][INFO    ][19623] Completed state [/etc/salt/grains.d/salt] at time 05:47:20.285114 duration_in_ms=2.74
2018-03-30 05:47:20,286 [salt.state       ][INFO    ][19623] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"] at time 05:47:20.286163
2018-03-30 05:47:20,286 [salt.state       ][INFO    ][19623] Executing state cmd.wait for python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"
2018-03-30 05:47:20,287 [salt.state       ][INFO    ][19623] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"
2018-03-30 05:47:20,287 [salt.state       ][INFO    ][19623] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"] at time 05:47:20.287352 duration_in_ms=1.189
2018-03-30 05:47:20,289 [salt.state       ][INFO    ][19623] Running state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 05:47:20.289777
2018-03-30 05:47:20,290 [salt.state       ][INFO    ][19623] Executing state cmd.wait for cat /etc/salt/grains.d/* > /etc/salt/grains
2018-03-30 05:47:20,290 [salt.state       ][INFO    ][19623] No changes made for cat /etc/salt/grains.d/* > /etc/salt/grains
2018-03-30 05:47:20,291 [salt.state       ][INFO    ][19623] Completed state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 05:47:20.290956 duration_in_ms=1.179
2018-03-30 05:47:20,291 [salt.state       ][INFO    ][19623] Running state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 05:47:20.291258
2018-03-30 05:47:20,291 [salt.state       ][INFO    ][19623] Executing state cmd.mod_watch for cat /etc/salt/grains.d/* > /etc/salt/grains
2018-03-30 05:47:20,294 [salt.loaded.int.module.cmdmod][INFO    ][19623] Executing command 'cat /etc/salt/grains.d/* > /etc/salt/grains' in directory '/root'
2018-03-30 05:47:20,310 [salt.state       ][INFO    ][19623] {'pid': 20031, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 05:47:20,310 [salt.state       ][INFO    ][19623] Completed state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 05:47:20.310658 duration_in_ms=19.4
2018-03-30 05:47:20,311 [salt.state       ][INFO    ][19623] Running state [mine.update] at time 05:47:20.311699
2018-03-30 05:47:20,312 [salt.state       ][INFO    ][19623] Executing state module.wait for mine.update
2018-03-30 05:47:20,312 [salt.state       ][INFO    ][19623] No changes made for mine.update
2018-03-30 05:47:20,312 [salt.state       ][INFO    ][19623] Completed state [mine.update] at time 05:47:20.312813 duration_in_ms=1.114
2018-03-30 05:47:20,313 [salt.state       ][INFO    ][19623] Running state [mine.update] at time 05:47:20.313094
2018-03-30 05:47:20,313 [salt.state       ][INFO    ][19623] Executing state module.mod_watch for mine.update
2018-03-30 05:47:20,406 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330054719938493
2018-03-30 05:47:20,423 [salt.minion      ][INFO    ][20041] Starting a new job with PID 20041
2018-03-30 05:47:20,446 [salt.minion      ][INFO    ][20041] Returning information for job: 20180330054719938493
2018-03-30 05:47:20,849 [salt.state       ][INFO    ][19623] {'ret': True}
2018-03-30 05:47:20,850 [salt.state       ][INFO    ][19623] Completed state [mine.update] at time 05:47:20.850196 duration_in_ms=537.101
2018-03-30 05:47:20,850 [salt.state       ][INFO    ][19623] Running state [ca-certificates] at time 05:47:20.850746
2018-03-30 05:47:20,851 [salt.state       ][INFO    ][19623] Executing state pkg.installed for ca-certificates
2018-03-30 05:47:20,863 [salt.state       ][INFO    ][19623] All specified packages are already installed
2018-03-30 05:47:20,863 [salt.state       ][INFO    ][19623] Completed state [ca-certificates] at time 05:47:20.863475 duration_in_ms=12.728
2018-03-30 05:47:20,864 [salt.state       ][INFO    ][19623] Running state [update-ca-certificates] at time 05:47:20.864535
2018-03-30 05:47:20,864 [salt.state       ][INFO    ][19623] Executing state cmd.wait for update-ca-certificates
2018-03-30 05:47:20,865 [salt.state       ][INFO    ][19623] No changes made for update-ca-certificates
2018-03-30 05:47:20,865 [salt.state       ][INFO    ][19623] Completed state [update-ca-certificates] at time 05:47:20.865728 duration_in_ms=1.193
2018-03-30 05:47:20,868 [salt.minion      ][INFO    ][19623] Returning information for job: 20180330054709828411
2018-03-30 05:50:24,017 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.sync_all with jid 20180330055023557036
2018-03-30 05:50:24,034 [salt.minion      ][INFO    ][20048] Starting a new job with PID 20048
2018-03-30 05:50:26,179 [salt.state       ][INFO    ][20048] Loading fresh modules for state activity
2018-03-30 05:50:26,544 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/beacons'
2018-03-30 05:50:26,550 [salt.utils.extmods][INFO    ][20048] Syncing beacons for environment 'base'
2018-03-30 05:50:26,551 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_beacons, for base)
2018-03-30 05:50:26,551 [salt.fileclient  ][INFO    ][20048] Caching directory '_beacons/' for environment 'base'
2018-03-30 05:50:26,619 [salt.utils.extmods][INFO    ][20048] Syncing modules for environment 'base'
2018-03-30 05:50:26,619 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_modules, for base)
2018-03-30 05:50:26,620 [salt.fileclient  ][INFO    ][20048] Caching directory '_modules/' for environment 'base'
2018-03-30 05:50:27,681 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/__init__.py' to '/var/cache/salt/minion/extmods/modules/__init__.py'
2018-03-30 05:50:27,681 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/architect.py' to '/var/cache/salt/minion/extmods/modules/architect.py'
2018-03-30 05:50:27,682 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/avinetworks.py' to '/var/cache/salt/minion/extmods/modules/avinetworks.py'
2018-03-30 05:50:27,683 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/ceph_ng.py' to '/var/cache/salt/minion/extmods/modules/ceph_ng.py'
2018-03-30 05:50:27,684 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/cinderng.py' to '/var/cache/salt/minion/extmods/modules/cinderng.py'
2018-03-30 05:50:27,685 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/contrail.py' to '/var/cache/salt/minion/extmods/modules/contrail.py'
2018-03-30 05:50:27,687 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/creds.py' to '/var/cache/salt/minion/extmods/modules/creds.py'
2018-03-30 05:50:27,688 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/devops_utils.py' to '/var/cache/salt/minion/extmods/modules/devops_utils.py'
2018-03-30 05:50:27,689 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/dockerng_service.py' to '/var/cache/salt/minion/extmods/modules/dockerng_service.py'
2018-03-30 05:50:27,689 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/encode_json.py' to '/var/cache/salt/minion/extmods/modules/encode_json.py'
2018-03-30 05:50:27,690 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/gerrit.py' to '/var/cache/salt/minion/extmods/modules/gerrit.py'
2018-03-30 05:50:27,691 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/glanceng.py' to '/var/cache/salt/minion/extmods/modules/glanceng.py'
2018-03-30 05:50:27,692 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/heat.py' to '/var/cache/salt/minion/extmods/modules/heat.py'
2018-03-30 05:50:27,693 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/heka_alarming.py' to '/var/cache/salt/minion/extmods/modules/heka_alarming.py'
2018-03-30 05:50:27,694 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/helm.py' to '/var/cache/salt/minion/extmods/modules/helm.py'
2018-03-30 05:50:27,695 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/jenkins_common.py' to '/var/cache/salt/minion/extmods/modules/jenkins_common.py'
2018-03-30 05:50:27,696 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/keystone_policy.py' to '/var/cache/salt/minion/extmods/modules/keystone_policy.py'
2018-03-30 05:50:27,697 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/keystoneng.py' to '/var/cache/salt/minion/extmods/modules/keystoneng.py'
2018-03-30 05:50:27,698 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/linux_hosts.py' to '/var/cache/salt/minion/extmods/modules/linux_hosts.py'
2018-03-30 05:50:27,699 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/linux_netlink.py' to '/var/cache/salt/minion/extmods/modules/linux_netlink.py'
2018-03-30 05:50:27,700 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/maas.py' to '/var/cache/salt/minion/extmods/modules/maas.py'
2018-03-30 05:50:27,701 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/maas_client.py' to '/var/cache/salt/minion/extmods/modules/maas_client.py'
2018-03-30 05:50:27,702 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/maasng.py' to '/var/cache/salt/minion/extmods/modules/maasng.py'
2018-03-30 05:50:27,703 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/modelschema.py' to '/var/cache/salt/minion/extmods/modules/modelschema.py'
2018-03-30 05:50:27,704 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/modelutils.py' to '/var/cache/salt/minion/extmods/modules/modelutils.py'
2018-03-30 05:50:27,704 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/multipart.py' to '/var/cache/salt/minion/extmods/modules/multipart.py'
2018-03-30 05:50:27,705 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/nagios_alarming.py' to '/var/cache/salt/minion/extmods/modules/nagios_alarming.py'
2018-03-30 05:50:27,706 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/neutronng.py' to '/var/cache/salt/minion/extmods/modules/neutronng.py'
2018-03-30 05:50:27,707 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/novang.py' to '/var/cache/salt/minion/extmods/modules/novang.py'
2018-03-30 05:50:27,708 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/reclass.py' to '/var/cache/salt/minion/extmods/modules/reclass.py'
2018-03-30 05:50:27,709 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/rsyslog_util.py' to '/var/cache/salt/minion/extmods/modules/rsyslog_util.py'
2018-03-30 05:50:27,710 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/rundeck.py' to '/var/cache/salt/minion/extmods/modules/rundeck.py'
2018-03-30 05:50:27,711 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/saltkey.py' to '/var/cache/salt/minion/extmods/modules/saltkey.py'
2018-03-30 05:50:27,712 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/saltresource.py' to '/var/cache/salt/minion/extmods/modules/saltresource.py'
2018-03-30 05:50:27,712 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/seedng.py' to '/var/cache/salt/minion/extmods/modules/seedng.py'
2018-03-30 05:50:27,713 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/testing/__init__.py' to '/var/cache/salt/minion/extmods/modules/testing/__init__.py'
2018-03-30 05:50:27,714 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/testing/credentials.py' to '/var/cache/salt/minion/extmods/modules/testing/credentials.py'
2018-03-30 05:50:27,715 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/testing/django.py' to '/var/cache/salt/minion/extmods/modules/testing/django.py'
2018-03-30 05:50:27,715 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/testing/django_client_proxy.py' to '/var/cache/salt/minion/extmods/modules/testing/django_client_proxy.py'
2018-03-30 05:50:27,716 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/utils.py' to '/var/cache/salt/minion/extmods/modules/utils.py'
2018-03-30 05:50:27,717 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_modules/virtng.py' to '/var/cache/salt/minion/extmods/modules/virtng.py'
2018-03-30 05:50:27,726 [salt.utils.extmods][INFO    ][20048] Syncing states for environment 'base'
2018-03-30 05:50:27,727 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_states, for base)
2018-03-30 05:50:27,727 [salt.fileclient  ][INFO    ][20048] Caching directory '_states/' for environment 'base'
2018-03-30 05:50:28,450 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/avinetworks.py' to '/var/cache/salt/minion/extmods/states/avinetworks.py'
2018-03-30 05:50:28,451 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/cinderng.py' to '/var/cache/salt/minion/extmods/states/cinderng.py'
2018-03-30 05:50:28,452 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/contrail.py' to '/var/cache/salt/minion/extmods/states/contrail.py'
2018-03-30 05:50:28,453 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/dockerng_service.py' to '/var/cache/salt/minion/extmods/states/dockerng_service.py'
2018-03-30 05:50:28,454 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/gerrit.py' to '/var/cache/salt/minion/extmods/states/gerrit.py'
2018-03-30 05:50:28,455 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/glanceng.py' to '/var/cache/salt/minion/extmods/states/glanceng.py'
2018-03-30 05:50:28,456 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/grafana3_dashboard.py' to '/var/cache/salt/minion/extmods/states/grafana3_dashboard.py'
2018-03-30 05:50:28,457 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/grafana3_datasource.py' to '/var/cache/salt/minion/extmods/states/grafana3_datasource.py'
2018-03-30 05:50:28,457 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/heat.py' to '/var/cache/salt/minion/extmods/states/heat.py'
2018-03-30 05:50:28,458 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/helm_release.py' to '/var/cache/salt/minion/extmods/states/helm_release.py'
2018-03-30 05:50:28,459 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/helm_repos.py' to '/var/cache/salt/minion/extmods/states/helm_repos.py'
2018-03-30 05:50:28,460 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/httpng.py' to '/var/cache/salt/minion/extmods/states/httpng.py'
2018-03-30 05:50:28,460 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_approval.py' to '/var/cache/salt/minion/extmods/states/jenkins_approval.py'
2018-03-30 05:50:28,461 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_artifactory.py' to '/var/cache/salt/minion/extmods/states/jenkins_artifactory.py'
2018-03-30 05:50:28,462 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_credential.py' to '/var/cache/salt/minion/extmods/states/jenkins_credential.py'
2018-03-30 05:50:28,463 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_globalenvprop.py' to '/var/cache/salt/minion/extmods/states/jenkins_globalenvprop.py'
2018-03-30 05:50:28,463 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_job.py' to '/var/cache/salt/minion/extmods/states/jenkins_job.py'
2018-03-30 05:50:28,464 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_lib.py' to '/var/cache/salt/minion/extmods/states/jenkins_lib.py'
2018-03-30 05:50:28,465 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_node.py' to '/var/cache/salt/minion/extmods/states/jenkins_node.py'
2018-03-30 05:50:28,466 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_plugin.py' to '/var/cache/salt/minion/extmods/states/jenkins_plugin.py'
2018-03-30 05:50:28,466 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_security.py' to '/var/cache/salt/minion/extmods/states/jenkins_security.py'
2018-03-30 05:50:28,467 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_slack.py' to '/var/cache/salt/minion/extmods/states/jenkins_slack.py'
2018-03-30 05:50:28,468 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_smtp.py' to '/var/cache/salt/minion/extmods/states/jenkins_smtp.py'
2018-03-30 05:50:28,468 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_theme.py' to '/var/cache/salt/minion/extmods/states/jenkins_theme.py'
2018-03-30 05:50:28,469 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_user.py' to '/var/cache/salt/minion/extmods/states/jenkins_user.py'
2018-03-30 05:50:28,470 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/jenkins_view.py' to '/var/cache/salt/minion/extmods/states/jenkins_view.py'
2018-03-30 05:50:28,470 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/keystone_policy.py' to '/var/cache/salt/minion/extmods/states/keystone_policy.py'
2018-03-30 05:50:28,471 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/keystoneng.py' to '/var/cache/salt/minion/extmods/states/keystoneng.py'
2018-03-30 05:50:28,472 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/kibana_object.py' to '/var/cache/salt/minion/extmods/states/kibana_object.py'
2018-03-30 05:50:28,473 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/maas_cluster.py' to '/var/cache/salt/minion/extmods/states/maas_cluster.py'
2018-03-30 05:50:28,473 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/maasng.py' to '/var/cache/salt/minion/extmods/states/maasng.py'
2018-03-30 05:50:28,474 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/neutronng.py' to '/var/cache/salt/minion/extmods/states/neutronng.py'
2018-03-30 05:50:28,475 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/novang.py' to '/var/cache/salt/minion/extmods/states/novang.py'
2018-03-30 05:50:28,476 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/reclass.py' to '/var/cache/salt/minion/extmods/states/reclass.py'
2018-03-30 05:50:28,477 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/rundeck_project.py' to '/var/cache/salt/minion/extmods/states/rundeck_project.py'
2018-03-30 05:50:28,477 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/rundeck_scm.py' to '/var/cache/salt/minion/extmods/states/rundeck_scm.py'
2018-03-30 05:50:28,478 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_states/rundeck_secret.py' to '/var/cache/salt/minion/extmods/states/rundeck_secret.py'
2018-03-30 05:50:28,482 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/sdb'
2018-03-30 05:50:28,488 [salt.utils.extmods][INFO    ][20048] Syncing sdb for environment 'base'
2018-03-30 05:50:28,488 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_sdb, for base
2018-03-30 05:50:28,489 [salt.fileclient  ][INFO    ][20048] Caching directory '_sdb/' for environment 'base'
2018-03-30 05:50:28,531 [salt.utils.extmods][INFO    ][20048] Syncing grains for environment 'base'
2018-03-30 05:50:28,531 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_grains, for base
2018-03-30 05:50:28,531 [salt.fileclient  ][INFO    ][20048] Caching directory '_grains/' for environment 'base'
2018-03-30 05:50:28,676 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/ceilometer_policy.py' to '/var/cache/salt/minion/extmods/grains/ceilometer_policy.py'
2018-03-30 05:50:28,676 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/ceph.py' to '/var/cache/salt/minion/extmods/grains/ceph.py'
2018-03-30 05:50:28,677 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/cinder_policy.py' to '/var/cache/salt/minion/extmods/grains/cinder_policy.py'
2018-03-30 05:50:28,678 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/docker_swarm.py' to '/var/cache/salt/minion/extmods/grains/docker_swarm.py'
2018-03-30 05:50:28,678 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/glance_policy.py' to '/var/cache/salt/minion/extmods/grains/glance_policy.py'
2018-03-30 05:50:28,679 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/heat_policy.py' to '/var/cache/salt/minion/extmods/grains/heat_policy.py'
2018-03-30 05:50:28,679 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/jenkins_plugins.py' to '/var/cache/salt/minion/extmods/grains/jenkins_plugins.py'
2018-03-30 05:50:28,680 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/keystone_policy.py' to '/var/cache/salt/minion/extmods/grains/keystone_policy.py'
2018-03-30 05:50:28,681 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/kubernetes.py' to '/var/cache/salt/minion/extmods/grains/kubernetes.py'
2018-03-30 05:50:28,687 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/neutron_policy.py' to '/var/cache/salt/minion/extmods/grains/neutron_policy.py'
2018-03-30 05:50:28,687 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/nova_policy.py' to '/var/cache/salt/minion/extmods/grains/nova_policy.py'
2018-03-30 05:50:28,688 [salt.utils.extmods][INFO    ][20048] Copying '/var/cache/salt/minion/files/base/_grains/ssh_fingerprints.py' to '/var/cache/salt/minion/extmods/grains/ssh_fingerprints.py'
2018-03-30 05:50:28,690 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/renderers'
2018-03-30 05:50:28,694 [salt.utils.extmods][INFO    ][20048] Syncing renderers for environment 'base'
2018-03-30 05:50:28,694 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_renderers, for base
2018-03-30 05:50:28,694 [salt.fileclient  ][INFO    ][20048] Caching directory '_renderers/' for environment 'base'
2018-03-30 05:50:28,747 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/returners'
2018-03-30 05:50:28,753 [salt.utils.extmods][INFO    ][20048] Syncing returners for environment 'base'
2018-03-30 05:50:28,753 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_returners, for base
2018-03-30 05:50:28,753 [salt.fileclient  ][INFO    ][20048] Caching directory '_returners/' for environment 'base'
2018-03-30 05:50:28,804 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/output'
2018-03-30 05:50:28,811 [salt.utils.extmods][INFO    ][20048] Syncing output for environment 'base'
2018-03-30 05:50:28,811 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_output, for base
2018-03-30 05:50:28,811 [salt.fileclient  ][INFO    ][20048] Caching directory '_output/' for environment 'base'
2018-03-30 05:50:28,856 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/utils'
2018-03-30 05:50:28,862 [salt.utils.extmods][INFO    ][20048] Syncing utils for environment 'base'
2018-03-30 05:50:28,863 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_utils, for base
2018-03-30 05:50:28,863 [salt.fileclient  ][INFO    ][20048] Caching directory '_utils/' for environment 'base'
2018-03-30 05:50:28,904 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/log_handlers'
2018-03-30 05:50:28,910 [salt.utils.extmods][INFO    ][20048] Syncing log_handlers for environment 'base'
2018-03-30 05:50:28,910 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_log_handlers, for base
2018-03-30 05:50:28,910 [salt.fileclient  ][INFO    ][20048] Caching directory '_log_handlers/' for environment 'base'
2018-03-30 05:50:28,954 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/proxy'
2018-03-30 05:50:28,960 [salt.utils.extmods][INFO    ][20048] Syncing proxy for environment 'base'
2018-03-30 05:50:28,960 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_proxy, for base
2018-03-30 05:50:28,961 [salt.fileclient  ][INFO    ][20048] Caching directory '_proxy/' for environment 'base'
2018-03-30 05:50:29,002 [salt.utils.extmods][INFO    ][20048] Creating module dir '/var/cache/salt/minion/extmods/engines'
2018-03-30 05:50:29,008 [salt.utils.extmods][INFO    ][20048] Syncing engines for environment 'base'
2018-03-30 05:50:29,008 [salt.utils.extmods][INFO    ][20048] Loading cache from salt://_engines, for base
2018-03-30 05:50:29,009 [salt.fileclient  ][INFO    ][20048] Caching directory '_engines/' for environment 'base'
2018-03-30 05:50:29,074 [salt.minion      ][INFO    ][20048] Returning information for job: 20180330055023557036
2018-03-30 05:56:47,026 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command alternatives.set with jid 20180330055646572755
2018-03-30 05:56:47,043 [salt.minion      ][INFO    ][20162] Starting a new job with PID 20162
2018-03-30 05:56:47,052 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][20162] Executing command ['update-alternatives', '--set', 'ovs-vswitchd', '/usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk'] in directory '/root'
2018-03-30 05:56:47,093 [salt.minion      ][INFO    ][20162] Returning information for job: 20180330055646572755
2018-03-30 05:56:47,735 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command service.restart with jid 20180330055647275918
2018-03-30 05:56:47,751 [salt.minion      ][INFO    ][20168] Starting a new job with PID 20168
2018-03-30 05:56:48,057 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][20168] Executing command ['systemctl', 'status', 'openvswitch-switch.service', '-n', '0'] in directory '/root'
2018-03-30 05:56:48,074 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][20168] Executing command ['systemctl', 'is-enabled', 'openvswitch-switch.service'] in directory '/root'
2018-03-30 05:56:48,101 [salt.loader.192.168.11.2.int.module.cmdmod][INFO    ][20168] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'openvswitch-switch.service'] in directory '/root'
2018-03-30 05:56:54,358 [salt.minion      ][INFO    ][20168] Returning information for job: 20180330055647275918
2018-03-30 06:10:00,860 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command state.sls with jid 20180330061000406103
2018-03-30 06:10:00,877 [salt.minion      ][INFO    ][20377] Starting a new job with PID 20377
2018-03-30 06:10:01,313 [salt.state       ][INFO    ][20377] Loading fresh modules for state activity
2018-03-30 06:10:01,358 [salt.fileclient  ][INFO    ][20377] Fetching file from saltenv 'base', ** done ** 'glusterfs/client.sls'
2018-03-30 06:10:01,400 [salt.fileclient  ][INFO    ][20377] Fetching file from saltenv 'base', ** done ** 'glusterfs/map.jinja'
2018-03-30 06:10:01,414 [py.warnings      ][WARNING ][20377] /usr/lib/python2.7/dist-packages/salt/utils/templates.py:73: DeprecationWarning: Starting in 2015.5, cmd.run uses python_shell=False by default, which doesn't support shellisms (pipes, env variables, etc). cmd.run is currently aliased to cmd.shell to prevent breakage. Please switch to cmd.shell or set python_shell=True to avoid breakage in the future, when this aliasing is removed.

2018-03-30 06:10:01,416 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command 'systemd-escape -p --suffix=mount /var/lib/nova/instances' in directory '/root'
2018-03-30 06:10:01,785 [salt.state       ][INFO    ][20377] Running state [glusterfs-client] at time 06:10:01.785375
2018-03-30 06:10:01,785 [salt.state       ][INFO    ][20377] Executing state pkg.installed for glusterfs-client
2018-03-30 06:10:01,786 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:10:02,181 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['apt-cache', '-q', 'policy', 'glusterfs-client'] in directory '/root'
2018-03-30 06:10:02,277 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 06:10:04,119 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:10:04,136 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'glusterfs-client'] in directory '/root'
2018-03-30 06:10:10,892 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330061010438141
2018-03-30 06:10:10,909 [salt.minion      ][INFO    ][21435] Starting a new job with PID 21435
2018-03-30 06:10:10,931 [salt.minion      ][INFO    ][21435] Returning information for job: 20180330061010438141
2018-03-30 06:10:11,570 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:10:11,624 [salt.state       ][INFO    ][20377] Made the following changes:
'python-jwt' changed from 'absent' to '1.3.0-1ubuntu0.1'
'glusterfs-client' changed from 'absent' to '3.13.2-ubuntu1~xenial2'
'libaio1' changed from 'absent' to '0.3.110-2'
'attr' changed from 'absent' to '1:2.4.47-2'
'libpython2.7' changed from 'absent' to '2.7.12-1ubuntu0~16.04.3'
'glusterfs-common' changed from 'absent' to '3.13.2-ubuntu1~xenial2'
'librdmacm1' changed from 'absent' to '1.0.21-1'
'liburcu4' changed from 'absent' to '0.9.1-3'
'libibverbs1' changed from 'absent' to '1.1.8-1.1ubuntu2'
'python-prettytable' changed from 'absent' to '0.7.2-3'

2018-03-30 06:10:11,644 [salt.state       ][INFO    ][20377] Loading fresh modules for state activity
2018-03-30 06:10:11,677 [salt.state       ][INFO    ][20377] Completed state [glusterfs-client] at time 06:10:11.677272 duration_in_ms=9891.897
2018-03-30 06:10:11,687 [salt.state       ][INFO    ][20377] Running state [attr] at time 06:10:11.687109
2018-03-30 06:10:11,687 [salt.state       ][INFO    ][20377] Executing state pkg.installed for attr
2018-03-30 06:10:12,031 [salt.state       ][INFO    ][20377] All specified packages are already installed
2018-03-30 06:10:12,031 [salt.state       ][INFO    ][20377] Completed state [attr] at time 06:10:12.031258 duration_in_ms=344.147
2018-03-30 06:10:12,032 [salt.state       ][INFO    ][20377] Running state [/etc/systemd/system/var-lib-nova-instances.mount] at time 06:10:12.032484
2018-03-30 06:10:12,032 [salt.state       ][INFO    ][20377] Executing state file.managed for /etc/systemd/system/var-lib-nova-instances.mount
2018-03-30 06:10:12,070 [salt.fileclient  ][INFO    ][20377] Fetching file from saltenv 'base', ** done ** 'glusterfs/files/glusterfs-client.mount'
2018-03-30 06:10:12,076 [salt.state       ][INFO    ][20377] File changed:
New file
2018-03-30 06:10:12,076 [salt.state       ][INFO    ][20377] Completed state [/etc/systemd/system/var-lib-nova-instances.mount] at time 06:10:12.076182 duration_in_ms=43.698
2018-03-30 06:10:12,076 [salt.state       ][INFO    ][20377] Running state [var-lib-nova-instances.mount] at time 06:10:12.076881
2018-03-30 06:10:12,077 [salt.state       ][INFO    ][20377] Executing state service.running for var-lib-nova-instances.mount
2018-03-30 06:10:12,077 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'status', 'var-lib-nova-instances.mount', '-n', '0'] in directory '/root'
2018-03-30 06:10:12,094 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'is-active', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,106 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,119 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,130 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,200 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'is-active', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,216 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,236 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,255 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,273 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,377 [salt.loaded.int.module.cmdmod][INFO    ][20377] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2018-03-30 06:10:12,395 [salt.state       ][INFO    ][20377] {'var-lib-nova-instances.mount': True}
2018-03-30 06:10:12,395 [salt.state       ][INFO    ][20377] Completed state [var-lib-nova-instances.mount] at time 06:10:12.395626 duration_in_ms=318.743
2018-03-30 06:10:12,397 [salt.minion      ][INFO    ][20377] Returning information for job: 20180330061000406103
2018-03-30 06:40:24,767 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command state.sls with jid 20180330064024347758
2018-03-30 06:40:24,786 [salt.minion      ][INFO    ][21927] Starting a new job with PID 21927
2018-03-30 06:40:27,261 [salt.state       ][INFO    ][21927] Loading fresh modules for state activity
2018-03-30 06:40:27,312 [salt.fileclient  ][INFO    ][21927] Fetching file from saltenv 'base', ** done ** 'cinder/init.sls'
2018-03-30 06:40:27,337 [salt.fileclient  ][INFO    ][21927] Fetching file from saltenv 'base', ** done ** 'cinder/volume.sls'
2018-03-30 06:40:27,437 [salt.fileclient  ][INFO    ][21927] Fetching file from saltenv 'base', ** done ** 'cinder/user.sls'
2018-03-30 06:40:27,475 [salt.state       ][INFO    ][21927] Running state [cinder] at time 06:40:27.475185
2018-03-30 06:40:27,475 [salt.state       ][INFO    ][21927] Executing state group.present for cinder
2018-03-30 06:40:27,477 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['groupadd', '-g 304', '-r', 'cinder'] in directory '/root'
2018-03-30 06:40:27,666 [salt.state       ][INFO    ][21927] {'passwd': 'x', 'gid': 304, 'name': 'cinder', 'members': []}
2018-03-30 06:40:27,666 [salt.state       ][INFO    ][21927] Completed state [cinder] at time 06:40:27.666744 duration_in_ms=191.561
2018-03-30 06:40:27,667 [salt.state       ][INFO    ][21927] Running state [cinder] at time 06:40:27.667316
2018-03-30 06:40:27,667 [salt.state       ][INFO    ][21927] Executing state user.present for cinder
2018-03-30 06:40:27,670 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['useradd', '-s', '/bin/false', '-u', '304', '-g', '304', '-m', '-d', '/var/lib/cinder', '-r', 'cinder'] in directory '/root'
2018-03-30 06:40:27,854 [salt.state       ][INFO    ][21927] {'shell': '/bin/false', 'workphone': '', 'uid': 304, 'passwd': 'x', 'roomnumber': '', 'groups': ['cinder'], 'home': '/var/lib/cinder', 'name': 'cinder', 'gid': 304, 'fullname': '', 'homephone': ''}
2018-03-30 06:40:27,855 [salt.state       ][INFO    ][21927] Completed state [cinder] at time 06:40:27.855365 duration_in_ms=188.048
2018-03-30 06:40:28,238 [salt.state       ][INFO    ][21927] Running state [cinder-volume] at time 06:40:28.238481
2018-03-30 06:40:28,238 [salt.state       ][INFO    ][21927] Executing state pkg.installed for cinder-volume
2018-03-30 06:40:28,239 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:40:28,584 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['apt-cache', '-q', 'policy', 'cinder-volume'] in directory '/root'
2018-03-30 06:40:28,681 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 06:40:30,496 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:40:30,530 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cinder-volume'] in directory '/root'
2018-03-30 06:40:34,861 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064034439373
2018-03-30 06:40:34,879 [salt.minion      ][INFO    ][22419] Starting a new job with PID 22419
2018-03-30 06:40:34,912 [salt.minion      ][INFO    ][22419] Returning information for job: 20180330064034439373
2018-03-30 06:40:45,078 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064044655998
2018-03-30 06:40:45,096 [salt.minion      ][INFO    ][22727] Starting a new job with PID 22727
2018-03-30 06:40:45,125 [salt.minion      ][INFO    ][22727] Returning information for job: 20180330064044655998
2018-03-30 06:40:55,291 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064054869117
2018-03-30 06:40:55,307 [salt.minion      ][INFO    ][23001] Starting a new job with PID 23001
2018-03-30 06:40:55,348 [salt.minion      ][INFO    ][23001] Returning information for job: 20180330064054869117
2018-03-30 06:41:05,515 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064105091570
2018-03-30 06:41:05,532 [salt.minion      ][INFO    ][23292] Starting a new job with PID 23292
2018-03-30 06:41:05,564 [salt.minion      ][INFO    ][23292] Returning information for job: 20180330064105091570
2018-03-30 06:41:15,730 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064115308839
2018-03-30 06:41:15,747 [salt.minion      ][INFO    ][23619] Starting a new job with PID 23619
2018-03-30 06:41:15,772 [salt.minion      ][INFO    ][23619] Returning information for job: 20180330064115308839
2018-03-30 06:41:25,948 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064125527178
2018-03-30 06:41:25,966 [salt.minion      ][INFO    ][23913] Starting a new job with PID 23913
2018-03-30 06:41:25,991 [salt.minion      ][INFO    ][23913] Returning information for job: 20180330064125527178
2018-03-30 06:41:36,161 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064135741686
2018-03-30 06:41:36,179 [salt.minion      ][INFO    ][26410] Starting a new job with PID 26410
2018-03-30 06:41:36,203 [salt.minion      ][INFO    ][26410] Returning information for job: 20180330064135741686
2018-03-30 06:41:46,363 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064145949130
2018-03-30 06:41:46,380 [salt.minion      ][INFO    ][26415] Starting a new job with PID 26415
2018-03-30 06:41:46,406 [salt.minion      ][INFO    ][26415] Returning information for job: 20180330064145949130
2018-03-30 06:41:56,576 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064156156647
2018-03-30 06:41:56,594 [salt.minion      ][INFO    ][26485] Starting a new job with PID 26485
2018-03-30 06:41:56,620 [salt.minion      ][INFO    ][26485] Returning information for job: 20180330064156156647
2018-03-30 06:42:06,799 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064206378413
2018-03-30 06:42:06,847 [salt.minion      ][INFO    ][26706] Starting a new job with PID 26706
2018-03-30 06:42:06,872 [salt.minion      ][INFO    ][26706] Returning information for job: 20180330064206378413
2018-03-30 06:42:16,832 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064216413056
2018-03-30 06:42:16,850 [salt.minion      ][INFO    ][27197] Starting a new job with PID 27197
2018-03-30 06:42:16,875 [salt.minion      ][INFO    ][27197] Returning information for job: 20180330064216413056
2018-03-30 06:42:27,049 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064226628781
2018-03-30 06:42:27,069 [salt.minion      ][INFO    ][27504] Starting a new job with PID 27504
2018-03-30 06:42:27,093 [salt.minion      ][INFO    ][27504] Returning information for job: 20180330064226628781
2018-03-30 06:42:37,269 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064236848978
2018-03-30 06:42:37,287 [salt.minion      ][INFO    ][27808] Starting a new job with PID 27808
2018-03-30 06:42:37,315 [salt.minion      ][INFO    ][27808] Returning information for job: 20180330064236848978
2018-03-30 06:42:47,324 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064246906682
2018-03-30 06:42:47,343 [salt.minion      ][INFO    ][28130] Starting a new job with PID 28130
2018-03-30 06:42:47,368 [salt.minion      ][INFO    ][28130] Returning information for job: 20180330064246906682
2018-03-30 06:42:57,551 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064257131319
2018-03-30 06:42:57,565 [salt.minion      ][INFO    ][28480] Starting a new job with PID 28480
2018-03-30 06:42:57,591 [salt.minion      ][INFO    ][28480] Returning information for job: 20180330064257131319
2018-03-30 06:42:59,412 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:42:59,454 [salt.state       ][INFO    ][21927] Made the following changes:
'python-routes' changed from 'absent' to '2.4.1-1~cloud0'
'python-retrying' changed from 'absent' to '1.3.3-1'
'python-osprofiler' changed from 'absent' to '1.11.0-0ubuntu1~cloud0'
'python-kombu' changed from 'absent' to '4.0.2+really4.0.2+dfsg-2ubuntu1~cloud0'
'python-oslo.concurrency' changed from 'absent' to '3.21.0-0ubuntu2~cloud0'
'python-sqlparse' changed from 'absent' to '0.1.18-1'
'python-pycadf' changed from 'absent' to '2.6.0-0ubuntu1~cloud0'
'liblapack3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-openssl' changed from '0.15.1-2build1' to '16.2.0-1~cloud0'
'python-psycopg2' changed from 'absent' to '2.6.1-1build2'
'python-secretstorage' changed from 'absent' to '2.1.3-1'
'python2.7-numpy' changed from 'absent' to '1'
'python-glanceclient' changed from 'absent' to '1:2.8.0-0ubuntu1~cloud0'
'python-formencode' changed from 'absent' to '1.3.0-0ubuntu5'
'python-functools32' changed from 'absent' to '3.2.3.2-2'
'libaprutil1-ldap' changed from 'absent' to '1.5.4-1build1'
'python-migrate' changed from 'absent' to '0.11.0-0ubuntu1~cloud0'
'python-cachetools' changed from 'absent' to '1.1.6-1~cloud0'
'python-oslo.serialization' changed from 'absent' to '2.20.0-0ubuntu1~cloud0'
'apache2' changed from 'absent' to '2.4.18-2ubuntu3.5'
'python-asn1crypto' changed from 'absent' to '0.22.0-1~cloud0'
'python-ply-yacc-3.5' changed from 'absent' to '1'
'python-egenix-mxtools' changed from 'absent' to '3.2.9-1'
'python-blinker' changed from 'absent' to '1.3.dfsg2-1build1'
'python-roman' changed from 'absent' to '2.0.0-2'
'python-pastescript' changed from 'absent' to '1.7.5-3build1'
'sg3-utils' changed from 'absent' to '1.40-0ubuntu1'
'python-tenacity' changed from 'absent' to '3.3.0-0ubuntu1~cloud0'
'python-oslo.versionedobjects' changed from 'absent' to '1.26.0-0ubuntu1~cloud0'
'python-setuptools' changed from 'absent' to '36.2.7-2~cloud0'
'python-warlock' changed from 'absent' to '1.1.0-1'
'python-dbus' changed from 'absent' to '1.2.0-3'
'python-traceback2' changed from 'absent' to '1.4.0-3'
'python-monotonic' changed from 'absent' to '0.6-2'
'python-httplib2' changed from 'absent' to '0.9.1+dfsg-1'
'python-testtools' changed from 'absent' to '1.8.1-0ubuntu1'
'python-egenix-mxdatetime' changed from 'absent' to '3.2.9-1'
'python-anyjson' changed from 'absent' to '0.3.3-1build1'
'python-jsonschema' changed from 'absent' to '2.5.1-4'
'libnss3-nssdb' changed from 'absent' to '2:3.28.4-0ubuntu0.16.04.3'
'apache2-bin' changed from 'absent' to '2.4.18-2ubuntu3.5'
'python-pymemcache' changed from 'absent' to '1.3.2-2ubuntu1'
'libblas3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-netaddr' changed from 'absent' to '0.7.18-1'
'python-dnspython' changed from 'absent' to '1.15.0-1~cloud0'
'python-babel' changed from 'absent' to '2.4.0+dfsg.1-2ubuntu1~cloud0'
'python2.7-paramiko' changed from 'absent' to '1'
'python-certifi' changed from 'absent' to '2015.11.20.1-2'
'python-pil' changed from 'absent' to '3.1.2-0ubuntu1.1'
'python-oslo.privsep' changed from 'absent' to '1.22.0-0ubuntu1~cloud0'
'python2.7-lxml' changed from 'absent' to '1'
'python-suds' changed from 'absent' to '0.7~git20150727.94664dd-3'
'python-oslo.db' changed from 'absent' to '4.25.0-0ubuntu1~cloud0'
'apache2-utils' changed from 'absent' to '2.4.18-2ubuntu3.5'
'libnspr4' changed from 'absent' to '2:4.13.1-0ubuntu0.16.04.1'
'python-os-win' changed from 'absent' to '2.2.0-0ubuntu1~cloud0'
'python-tz' changed from 'absent' to '2014.10~dfsg1-0ubuntu2'
'python-requests' changed from '2.9.1-3' to '2.18.1-1~cloud0'
'libtiff5' changed from 'absent' to '4.0.6-1ubuntu0.4'
'python-funcsigs' changed from 'absent' to '1.0.2-3~cloud0'
'python-ply-lex-3.5' changed from 'absent' to '1'
'python-scgi' changed from 'absent' to '1.13-1.1build1'
'python2.7-pil' changed from 'absent' to '1'
'os-brick-common' changed from 'absent' to '1.15.2-0ubuntu1~cloud0'
'python-repoze.lru' changed from 'absent' to '0.6-6'
'python-posix-ipc' changed from 'absent' to '0.9.8-2build2'
'formencode-i18n' changed from 'absent' to '1.3.0-0ubuntu5'
'python-oslo-context' changed from 'absent' to '1'
'python2.7-testtools' changed from 'absent' to '1'
'python-alembic' changed from 'absent' to '0.8.10-0ubuntu2~cloud0'
'docutils' changed from 'absent' to '1'
'ieee-data' changed from 'absent' to '20150531.1'
'python2.7-dbus' changed from 'absent' to '1'
'python-oslo.middleware' changed from 'absent' to '3.30.0-0ubuntu1.1~cloud0'
'python-pygments' changed from 'absent' to '2.2.0+dfsg-1~cloud0'
'python-pillow' changed from 'absent' to '1'
'libpaperg' changed from 'absent' to '1'
'liblapack.so.3' changed from 'absent' to '1'
'python2.7-netifaces' changed from 'absent' to '1'
'python-numpy-dev' changed from 'absent' to '1'
'liblcms2-2' changed from 'absent' to '2.6-3ubuntu2'
'docutils-common' changed from 'absent' to '0.12+dfsg-1'
'python-oslo.context' changed from 'absent' to '1:2.17.0-0ubuntu1~cloud0'
'python-f2py' changed from 'absent' to '1'
'qemu-block-extra' changed from 'absent' to '1:2.10+dfsg-0ubuntu3.5~cloud0'
'sharutils' changed from 'absent' to '1:4.15.2-1ubuntu0.1'
'python-urllib3' changed from '1.13.1-2ubuntu0.16.04.1' to '1.21.1-1~cloud0'
'qemu-utils' changed from 'absent' to '1:2.10+dfsg-0ubuntu3.5~cloud0'
'libaprutil1-dbd-sqlite3' changed from 'absent' to '1.5.4-1build1'
'python2.7-pyinotify' changed from 'absent' to '1'
'python-webob' changed from 'absent' to '1:1.7.2-0ubuntu1~cloud0'
'python-pyparsing' changed from 'absent' to '2.1.10+dfsg1-1~cloud0'
'python-babel-localedata' changed from 'absent' to '2.4.0+dfsg.1-2ubuntu1~cloud0'
'python-cffi' changed from 'absent' to '1.9.1-2build2~cloud0'
'python-barbicanclient' changed from 'absent' to '4.0.1-2'
'python-castellan' changed from 'absent' to '0.12.0-0ubuntu1~cloud0'
'python-cmd2' changed from 'absent' to '0.6.8-1'
'python-oslo.vmware' changed from 'absent' to '2.23.0-0ubuntu1~cloud0'
'python-distribute' changed from 'absent' to '1'
'python-oslo-log' changed from 'absent' to '1'
'httpd-cgi' changed from 'absent' to '1'
'libsgutils2-2' changed from 'absent' to '1.40-0ubuntu1'
'python-iso8601' changed from 'absent' to '0.1.11-1'
'python-jsonpatch' changed from 'absent' to '1.19-3'
'alembic' changed from 'absent' to '0.8.10-0ubuntu2~cloud0'
'libwebpmux1' changed from 'absent' to '0.4.4-1'
'libapr1' changed from 'absent' to '1.5.2-3'
'tgt' changed from 'absent' to '1:1.0.63-1ubuntu1.1'
'python-oslo.policy' changed from 'absent' to '1.25.1-0ubuntu1~cloud0'
'python-stevedore' changed from 'absent' to '1:1.25.0-0ubuntu1~cloud0'
'python-paste' changed from 'absent' to '1.7.5.1-6ubuntu3'
'python-lxml' changed from 'absent' to '3.5.0-1build1'
'python-oslo.config' changed from 'absent' to '1:4.11.0-0ubuntu1~cloud0'
'libnss3' changed from 'absent' to '2:3.28.4-0ubuntu0.16.04.3'
'python-paramiko' changed from 'absent' to '2.0.0-1~cloud0'
'python2.7-sqlalchemy-ext' changed from 'absent' to '1'
'python-futurist' changed from 'absent' to '0.13.0-2'
'httpd' changed from 'absent' to '1'
'libpaper1' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python-fasteners' changed from 'absent' to '0.12.0-2ubuntu1'
'python2.7-gi' changed from 'absent' to '1'
'python-linecache2' changed from 'absent' to '1.0.0-2'
'python-positional' changed from 'absent' to '1.1.1-3~cloud0'
'python-pastedeploy-tpl' changed from 'absent' to '1.5.2-1'
'python-oauthlib' changed from 'absent' to '1.0.3-1'
'python-oslo-db' changed from 'absent' to '1'
'libblas-common' changed from 'absent' to '3.6.0-2ubuntu2'
'python-mimeparse' changed from 'absent' to '0.1.4-1build1'
'libgfortran3' changed from 'absent' to '5.4.0-6ubuntu1~16.04.9'
'python-gi' changed from 'absent' to '3.20.0-0ubuntu1'
'libpq5' changed from 'absent' to '9.5.12-0ubuntu0.16.04'
'python-ply' changed from 'absent' to '3.7-1'
'python-contextlib2' changed from 'absent' to '0.5.1-1'
'libjpeg8' changed from 'absent' to '8c-2ubuntu8'
'python-novaclient' changed from 'absent' to '2:9.1.0-0ubuntu1~cloud0'
'python-oslo.utils' changed from 'absent' to '3.28.0-0ubuntu1~cloud0'
'python-taskflow' changed from 'absent' to '2.14.0-0ubuntu1~cloud0'
'python-pika-pool' changed from 'absent' to '0.1.3-1ubuntu1'
'python-automaton' changed from 'absent' to '1.2.0-1'
'python-oslo.rootwrap' changed from 'absent' to '5.9.0-0ubuntu1~cloud0'
'python2.7-iso8601' changed from 'absent' to '1'
'python-numpy' changed from 'absent' to '1:1.11.0-1ubuntu1'
'python-simplejson' changed from 'absent' to '3.8.1-1ubuntu2'
'python-wrapt' changed from 'absent' to '1.8.0-5build2'
'python-tooz' changed from 'absent' to '1.58.0-0ubuntu1~cloud0'
'python-docutils' changed from 'absent' to '0.12+dfsg-1'
'python-openid' changed from 'absent' to '2.2.5-6'
'python-pastedeploy' changed from 'absent' to '1.5.2-1'
'pycadf-common' changed from 'absent' to '2.6.0-0ubuntu1~cloud0'
'libpaper-utils' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python2.7-zope.interface' changed from 'absent' to '1'
'python-cliff' changed from 'absent' to '2.8.0-0ubuntu1.1~cloud0'
'python-oslo.i18n' changed from 'absent' to '3.17.0-0ubuntu1~cloud0'
'python-bs4' changed from 'absent' to '4.4.1-1'
'cinder-volume' changed from 'absent' to '2:11.0.2-0ubuntu1~cloud0'
'python-oslo.reports' changed from 'absent' to '1.22.0-0ubuntu1~cloud0'
'python-networkx' changed from 'absent' to '1.11-1ubuntu1'
'python-statsd' changed from 'absent' to '3.2.1-2~cloud0'
'libxslt1.1' changed from 'absent' to '1.1.28-2.1ubuntu0.1'
'docutils-doc' changed from 'absent' to '0.12+dfsg-1'
'python-keyring' changed from 'absent' to '7.3-1ubuntu1'
'python-redis' changed from 'absent' to '2.10.5-1ubuntu1'
'python-oslo-utils' changed from 'absent' to '1'
'libblas.so.3' changed from 'absent' to '1'
'python2.7-simplejson' changed from 'absent' to '1'
'apache2-data' changed from 'absent' to '2.4.18-2ubuntu3.5'
'libapache2-mod-wsgi' changed from 'absent' to '4.3.0-1.1build1'
'python-unicodecsv' changed from 'absent' to '0.14.1-1'
'python-extras' changed from 'absent' to '0.0.3-3'
'python-mock' changed from 'absent' to '2.0.0-3~cloud0'
'python-rfc3986' changed from 'absent' to '0.3.1-2~cloud0'
'python-eventlet' changed from 'absent' to '0.18.4-1ubuntu1'
'python-unittest2' changed from 'absent' to '1.1.0-6.1'
'python2.7-pyparsing' changed from 'absent' to '1'
'python-oslo.log' changed from 'absent' to '3.30.0-0ubuntu1~cloud0'
'python-pyinotify' changed from 'absent' to '0.9.6-0fakesync1'
'libjpeg-turbo8' changed from 'absent' to '1.4.2-0ubuntu3'
'python-amqp' changed from 'absent' to '2.1.4-1~cloud0'
'python-cinder' changed from 'absent' to '2:11.0.2-0ubuntu1~cloud0'
'libwebp5' changed from 'absent' to '0.4.4-1'
'python-zope.interface' changed from 'absent' to '4.1.3-1build1'
'libaprutil1' changed from 'absent' to '1.5.4-1build1'
'python-numpy-abi9' changed from 'absent' to '1'
'python-vine' changed from 'absent' to '1.1.3+dfsg-2~cloud0'
'python-kazoo' changed from 'absent' to '2.2.1-1ubuntu1'
'python-decorator' changed from 'absent' to '4.0.6-1'
'libiscsi2' changed from 'absent' to '1.12.0-2'
'httpd-wsgi' changed from 'absent' to '1'
'python-oslo.messaging' changed from 'absent' to '5.30.0-0ubuntu2~cloud0'
'python-os-brick' changed from 'absent' to '1.15.2-0ubuntu1~cloud0'
'python2.7-cmd2' changed from 'absent' to '1'
'python-pycparser' changed from 'absent' to '2.14+dfsg-2build1'
'python-debtcollector' changed from 'absent' to '1.3.0-2'
'librados2' changed from 'absent' to '12.2.2-0ubuntu0.17.10.1~cloud0'
'libconfig-general-perl' changed from 'absent' to '2.60-1'
'python-json-pointer' changed from 'absent' to '1.9-3'
'python-pbr' changed from 'absent' to '2.0.0-0ubuntu1~cloud0'
'python-html5lib' changed from 'absent' to '0.999-4'
'python-swiftclient' changed from 'absent' to '1:3.4.0-0ubuntu1~cloud0'
'python-pika' changed from 'absent' to '0.10.0-1'
'python-keystoneclient' changed from 'absent' to '1:3.13.0-0ubuntu1~cloud0'
'python-greenlet' changed from 'absent' to '0.4.9-2fakesync1'
'python-sqlalchemy-ext' changed from 'absent' to '1.1.9+ds1-0ubuntu3~cloud0'
'python-oslo.service' changed from 'absent' to '1.25.0-0ubuntu1~cloud0'
'librbd1' changed from 'absent' to '12.2.2-0ubuntu0.17.10.1~cloud0'
'apache2-api-20120211' changed from 'absent' to '1'
'python-ceilometerclient' changed from 'absent' to '2.9.0-0ubuntu1~cloud0'
'liblua5.1-0' changed from 'absent' to '5.1.5-8ubuntu1'
'python-zake' changed from 'absent' to '0.1.6-1'
'python-zopeinterface' changed from 'absent' to '1'
'python-fixtures' changed from 'absent' to '3.0.0-2~cloud0'
'python-numpy-api10' changed from 'absent' to '1'
'python-keystoneauth1' changed from 'absent' to '3.1.0-0ubuntu2~cloud0'
'python-tempita' changed from 'absent' to '0.5.2-1build1'
'python-sqlalchemy' changed from 'absent' to '1.1.9+ds1-0ubuntu3~cloud0'
'python-keystonemiddleware' changed from 'absent' to '4.17.0-0ubuntu1~cloud0'
'python-zope' changed from 'absent' to '1'
'python-voluptuous' changed from 'absent' to '0.9.3-1~cloud0'
'ssl-cert' changed from 'absent' to '1.0.37'
'cinder-common' changed from 'absent' to '2:11.0.2-0ubuntu1~cloud0'
'python-oslo-rootwrap' changed from 'absent' to '1'
'python2.7-ply' changed from 'absent' to '1'
'python-netifaces' changed from 'absent' to '0.10.4-0.1build2'
'python-cryptography' changed from '1.2.3-1ubuntu0.1' to '1.9-1~cloud0'
'libjbig0' changed from 'absent' to '2.1-3.1'

2018-03-30 06:42:59,475 [salt.state       ][INFO    ][21927] Loading fresh modules for state activity
2018-03-30 06:42:59,503 [salt.state       ][INFO    ][21927] Completed state [cinder-volume] at time 06:42:59.503732 duration_in_ms=151265.248
2018-03-30 06:42:59,514 [salt.state       ][INFO    ][21927] Running state [lvm2] at time 06:42:59.514111
2018-03-30 06:42:59,514 [salt.state       ][INFO    ][21927] Executing state pkg.installed for lvm2
2018-03-30 06:43:00,328 [salt.state       ][INFO    ][21927] All specified packages are already installed
2018-03-30 06:43:00,328 [salt.state       ][INFO    ][21927] Completed state [lvm2] at time 06:43:00.328407 duration_in_ms=814.296
2018-03-30 06:43:00,328 [salt.state       ][INFO    ][21927] Running state [sysfsutils] at time 06:43:00.328644
2018-03-30 06:43:00,328 [salt.state       ][INFO    ][21927] Executing state pkg.installed for sysfsutils
2018-03-30 06:43:00,332 [salt.state       ][INFO    ][21927] All specified packages are already installed
2018-03-30 06:43:00,332 [salt.state       ][INFO    ][21927] Completed state [sysfsutils] at time 06:43:00.332793 duration_in_ms=4.149
2018-03-30 06:43:00,333 [salt.state       ][INFO    ][21927] Running state [sg3-utils] at time 06:43:00.332990
2018-03-30 06:43:00,333 [salt.state       ][INFO    ][21927] Executing state pkg.installed for sg3-utils
2018-03-30 06:43:00,336 [salt.state       ][INFO    ][21927] All specified packages are already installed
2018-03-30 06:43:00,336 [salt.state       ][INFO    ][21927] Completed state [sg3-utils] at time 06:43:00.336698 duration_in_ms=3.707
2018-03-30 06:43:00,336 [salt.state       ][INFO    ][21927] Running state [python-cinder] at time 06:43:00.336896
2018-03-30 06:43:00,337 [salt.state       ][INFO    ][21927] Executing state pkg.installed for python-cinder
2018-03-30 06:43:00,340 [salt.state       ][INFO    ][21927] All specified packages are already installed
2018-03-30 06:43:00,340 [salt.state       ][INFO    ][21927] Completed state [python-cinder] at time 06:43:00.340878 duration_in_ms=3.983
2018-03-30 06:43:00,341 [salt.state       ][INFO    ][21927] Running state [python-mysqldb] at time 06:43:00.341072
2018-03-30 06:43:00,341 [salt.state       ][INFO    ][21927] Executing state pkg.installed for python-mysqldb
2018-03-30 06:43:00,351 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:43:00,390 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-mysqldb'] in directory '/root'
2018-03-30 06:43:04,186 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:43:04,229 [salt.state       ][INFO    ][21927] Made the following changes:
'python2.7-mysqldb' changed from 'absent' to '1'
'mysql-common' changed from 'absent' to '5.7.21-0ubuntu0.16.04.1'
'mysql-common-5.6' changed from 'absent' to '1'
'libmysqlclient20' changed from 'absent' to '5.7.21-0ubuntu0.16.04.1'
'python-mysqldb' changed from 'absent' to '1.3.7-1build2'

2018-03-30 06:43:04,253 [salt.state       ][INFO    ][21927] Loading fresh modules for state activity
2018-03-30 06:43:04,291 [salt.state       ][INFO    ][21927] Completed state [python-mysqldb] at time 06:43:04.290968 duration_in_ms=3949.895
2018-03-30 06:43:04,299 [salt.state       ][INFO    ][21927] Running state [p7zip] at time 06:43:04.299603
2018-03-30 06:43:04,299 [salt.state       ][INFO    ][21927] Executing state pkg.installed for p7zip
2018-03-30 06:43:04,598 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:43:04,637 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'p7zip'] in directory '/root'
2018-03-30 06:43:05,055 [salt.utils.schedule][INFO    ][2771] Running scheduled job: __mine_interval
2018-03-30 06:43:07,197 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:43:07,275 [salt.state       ][INFO    ][21927] Made the following changes:
'p7zip' changed from 'absent' to '9.20.1~dfsg.1-4.2'

2018-03-30 06:43:07,304 [salt.state       ][INFO    ][21927] Loading fresh modules for state activity
2018-03-30 06:43:07,332 [salt.state       ][INFO    ][21927] Completed state [p7zip] at time 06:43:07.332225 duration_in_ms=3032.62
2018-03-30 06:43:07,341 [salt.state       ][INFO    ][21927] Running state [gettext-base] at time 06:43:07.341044
2018-03-30 06:43:07,341 [salt.state       ][INFO    ][21927] Executing state pkg.installed for gettext-base
2018-03-30 06:43:07,630 [salt.state       ][INFO    ][21927] All specified packages are already installed
2018-03-30 06:43:07,630 [salt.state       ][INFO    ][21927] Completed state [gettext-base] at time 06:43:07.630237 duration_in_ms=289.193
2018-03-30 06:43:07,630 [salt.state       ][INFO    ][21927] Running state [python-memcache] at time 06:43:07.630541
2018-03-30 06:43:07,630 [salt.state       ][INFO    ][21927] Executing state pkg.installed for python-memcache
2018-03-30 06:43:07,641 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:43:07,682 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-memcache'] in directory '/root'
2018-03-30 06:43:08,042 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064307351353
2018-03-30 06:43:08,054 [salt.minion      ][INFO    ][29312] Starting a new job with PID 29312
2018-03-30 06:43:08,075 [salt.minion      ][INFO    ][29312] Returning information for job: 20180330064307351353
2018-03-30 06:43:10,194 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:43:10,260 [salt.state       ][INFO    ][21927] Made the following changes:
'python-memcache' changed from 'absent' to '1.57-1'

2018-03-30 06:43:10,285 [salt.state       ][INFO    ][21927] Loading fresh modules for state activity
2018-03-30 06:43:10,312 [salt.state       ][INFO    ][21927] Completed state [python-memcache] at time 06:43:10.312708 duration_in_ms=2682.167
2018-03-30 06:43:10,321 [salt.state       ][INFO    ][21927] Running state [python-pycadf] at time 06:43:10.321501
2018-03-30 06:43:10,321 [salt.state       ][INFO    ][21927] Executing state pkg.installed for python-pycadf
2018-03-30 06:43:10,596 [salt.state       ][INFO    ][21927] All specified packages are already installed
2018-03-30 06:43:10,596 [salt.state       ][INFO    ][21927] Completed state [python-pycadf] at time 06:43:10.596604 duration_in_ms=275.104
2018-03-30 06:43:10,598 [salt.state       ][INFO    ][21927] Running state [/var/lock/cinder] at time 06:43:10.598285
2018-03-30 06:43:10,598 [salt.state       ][INFO    ][21927] Executing state file.directory for /var/lock/cinder
2018-03-30 06:43:10,599 [salt.state       ][INFO    ][21927] Directory /var/lock/cinder is in the correct state
2018-03-30 06:43:10,599 [salt.state       ][INFO    ][21927] Completed state [/var/lock/cinder] at time 06:43:10.599386 duration_in_ms=1.101
2018-03-30 06:43:10,599 [salt.state       ][INFO    ][21927] Running state [/etc/cinder/cinder.conf] at time 06:43:10.599624
2018-03-30 06:43:10,599 [salt.state       ][INFO    ][21927] Executing state file.managed for /etc/cinder/cinder.conf
2018-03-30 06:43:10,627 [salt.fileclient  ][INFO    ][21927] Fetching file from saltenv 'base', ** done ** 'cinder/files/pike/cinder.conf.volume.Debian'
2018-03-30 06:43:10,701 [salt.fileclient  ][INFO    ][21927] Fetching file from saltenv 'base', ** done ** 'cinder/files/backend/_lvm.conf'
2018-03-30 06:43:10,703 [salt.state       ][INFO    ][21927] File changed:
--- 
+++ 
@@ -1,15 +1,121 @@
+
+
 [DEFAULT]
 rootwrap_config = /etc/cinder/rootwrap.conf
 api_paste_confg = /etc/cinder/api-paste.ini
+
 iscsi_helper = tgtadm
 volume_name_template = volume-%s
-volume_group = cinder-volumes
+#volume_group = cinder
+
 verbose = True
+
+osapi_volume_workers = 4
+
 auth_strategy = keystone
+
 state_path = /var/lib/cinder
-lock_path = /var/lock/cinder
+
+use_syslog=False
+
+glance_num_retries=0
+debug=False
+
+os_region_name=RegionOne
+
+#glance_api_ssl_compression=False
+#glance_api_insecure=False
+
+osapi_volume_listen=10.167.4.52
+
+glance_api_servers = http://10.167.4.35:9292
+
+
+glance_host=10.167.4.35
+glance_port=9292
+glance_api_version=2
+
+os_privileged_user_name=cinder
+os_privileged_user_password=opnfv_secret
+os_privileged_user_tenant=service
+os_privileged_user_auth_url=http://10.167.4.35:5000/v3/
+
+volume_backend_name=DEFAULT
+
+default_volume_type=lvm-driver
+
+enabled_backends=lvm-driver
+
+# Enables the Force option on upload_to_image. This enables running
+# upload_volume on in-use volumes for backends that support it. (boolean value)
+#enable_force_upload = false
+enable_force_upload = false
+
+#RPC response timeout recommended by Hitachi
+rpc_response_timeout=3600
+
+#Rabbit
+control_exchange=cinder
+
+
+volume_clear=none
+
+
+volume_name_template = volume-%s
+
+#volume_group = vg_cinder_volume
+
 volumes_dir = /var/lib/cinder/volumes
-enabled_backends = lvm
+log_dir=/var/log/cinder
+
+# Use syslog for logging. (boolean value)
+#use_syslog=false
+
+use_syslog=false
+verbose=True
+
+nova_catalog_admin_info = compute:nova:adminURL
+nova_catalog_info = compute:nova:internalURL
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
+
+[oslo_messaging_notifications]
+
+[oslo_concurrency]
+
+lock_path=/var/lock/cinder
+
+[oslo_middleware]
+
+enable_proxy_headers_parsing = True
+
+[keystone_authtoken]
+signing_dir=/tmp/keystone-signing-cinder
+revocation_cache_time = 10
+auth_type = password
+user_domain_name = Default
+project_domain_name = Default
+project_name = service
+username = cinder
+password = opnfv_secret
+auth_uri=http://10.167.4.35:5000
+auth_url=http://10.167.4.35:35357
+
+# Temporary disabled for backward compataiblity
+#auth_uri=http://10.167.4.35/identity
+#auth_url=http://10.167.4.35/identity_v2_admin
+memcached_servers=10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
+auth_version = v3
 
 [database]
-connection = sqlite:////var/lib/cinder/cinder.sqlite
+idle_timeout=3600
+max_pool_size=30
+max_retries=-1
+max_overflow=40
+connection = mysql+pymysql://cinder:opnfv_secret@10.167.4.23/cinder?charset=utf8
+[lvm-driver]
+host=cmp001
+volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
+volume_backend_name=lvm-driver
+lvm_type = default
+iscsi_helper = tgtadm
+volume_group = vgroot

2018-03-30 06:43:10,703 [salt.state       ][INFO    ][21927] Completed state [/etc/cinder/cinder.conf] at time 06:43:10.703928 duration_in_ms=104.304
2018-03-30 06:43:10,704 [salt.state       ][INFO    ][21927] Running state [/etc/cinder/api-paste.ini] at time 06:43:10.704186
2018-03-30 06:43:10,704 [salt.state       ][INFO    ][21927] Executing state file.managed for /etc/cinder/api-paste.ini
2018-03-30 06:43:10,731 [salt.fileclient  ][INFO    ][21927] Fetching file from saltenv 'base', ** done ** 'cinder/files/pike/api-paste.ini.volume.Debian'
2018-03-30 06:43:10,763 [salt.state       ][INFO    ][21927] File changed:
--- 
+++ 
@@ -12,8 +12,8 @@
 [composite:openstack_volume_api_v1]
 use = call:cinder.api.middleware.auth:pipeline_factory
 noauth = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler noauth apiv1
-keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv1
-keystone_nolimit = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv1
+keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext  apiv1
+keystone_nolimit = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext  apiv1
 
 [composite:openstack_volume_api_v2]
 use = call:cinder.api.middleware.auth:pipeline_factory

2018-03-30 06:43:10,764 [salt.state       ][INFO    ][21927] Completed state [/etc/cinder/api-paste.ini] at time 06:43:10.764040 duration_in_ms=59.854
2018-03-30 06:43:10,764 [salt.state       ][INFO    ][21927] Running state [/etc/default/cinder-volume] at time 06:43:10.764286
2018-03-30 06:43:10,764 [salt.state       ][INFO    ][21927] Executing state file.managed for /etc/default/cinder-volume
2018-03-30 06:43:10,783 [salt.fileclient  ][INFO    ][21927] Fetching file from saltenv 'base', ** done ** 'cinder/files/default'
2018-03-30 06:43:10,785 [salt.state       ][INFO    ][21927] File changed:
New file
2018-03-30 06:43:10,785 [salt.state       ][INFO    ][21927] Completed state [/etc/default/cinder-volume] at time 06:43:10.785696 duration_in_ms=21.411
2018-03-30 06:43:10,786 [salt.state       ][INFO    ][21927] Running state [cinder-volume] at time 06:43:10.786729
2018-03-30 06:43:10,786 [salt.state       ][INFO    ][21927] Executing state service.running for cinder-volume
2018-03-30 06:43:10,787 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemctl', 'status', 'cinder-volume.service', '-n', '0'] in directory '/root'
2018-03-30 06:43:10,805 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2018-03-30 06:43:10,816 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2018-03-30 06:43:10,827 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2018-03-30 06:43:10,841 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'cinder-volume.service'] in directory '/root'
2018-03-30 06:43:10,867 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2018-03-30 06:43:10,880 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2018-03-30 06:43:10,895 [salt.loaded.int.module.cmdmod][INFO    ][21927] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2018-03-30 06:43:10,913 [salt.state       ][INFO    ][21927] {'cinder-volume': True}
2018-03-30 06:43:10,913 [salt.state       ][INFO    ][21927] Completed state [cinder-volume] at time 06:43:10.913663 duration_in_ms=126.933
2018-03-30 06:43:10,915 [salt.minion      ][INFO    ][21927] Returning information for job: 20180330064024347758
2018-03-30 06:46:38,863 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command state.sls with jid 20180330064638451405
2018-03-30 06:46:38,881 [salt.minion      ][INFO    ][29559] Starting a new job with PID 29559
2018-03-30 06:46:41,335 [salt.state       ][INFO    ][29559] Loading fresh modules for state activity
2018-03-30 06:46:41,390 [salt.fileclient  ][INFO    ][29559] Fetching file from saltenv 'base', ** done ** 'neutron/gateway.sls'
2018-03-30 06:46:41,454 [salt.fileclient  ][INFO    ][29559] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2018-03-30 06:46:42,411 [salt.state       ][INFO    ][29559] Running state [neutron-dhcp-agent] at time 06:46:42.411841
2018-03-30 06:46:42,412 [salt.state       ][INFO    ][29559] Executing state pkg.installed for neutron-dhcp-agent
2018-03-30 06:46:42,412 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:46:42,793 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['apt-cache', '-q', 'policy', 'neutron-dhcp-agent'] in directory '/root'
2018-03-30 06:46:42,887 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 06:46:48,962 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064648548003
2018-03-30 06:46:48,979 [salt.minion      ][INFO    ][29802] Starting a new job with PID 29802
2018-03-30 06:46:49,006 [salt.minion      ][INFO    ][29802] Returning information for job: 20180330064648548003
2018-03-30 06:46:57,316 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:46:57,351 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-dhcp-agent'] in directory '/root'
2018-03-30 06:46:59,194 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064658780043
2018-03-30 06:46:59,212 [salt.minion      ][INFO    ][29981] Starting a new job with PID 29981
2018-03-30 06:46:59,235 [salt.minion      ][INFO    ][29981] Returning information for job: 20180330064658780043
2018-03-30 06:47:09,403 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064708990771
2018-03-30 06:47:09,421 [salt.minion      ][INFO    ][30272] Starting a new job with PID 30272
2018-03-30 06:47:09,453 [salt.minion      ][INFO    ][30272] Returning information for job: 20180330064708990771
2018-03-30 06:47:19,619 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064719206151
2018-03-30 06:47:19,635 [salt.minion      ][INFO    ][31111] Starting a new job with PID 31111
2018-03-30 06:47:19,661 [salt.minion      ][INFO    ][31111] Returning information for job: 20180330064719206151
2018-03-30 06:47:26,254 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:47:26,298 [salt.state       ][INFO    ][29559] Made the following changes:
'python-waitress' changed from 'absent' to '0.8.10-1'
'python-neutron-fwaas' changed from 'absent' to '1:11.0.1-0ubuntu1~cloud0'
'python2.7-pymongo' changed from 'absent' to '1'
'libipset3' changed from 'absent' to '6.29-1'
'python2.7-bson' changed from 'absent' to '1'
'python-os-xenapi' changed from 'absent' to '0.2.0-0ubuntu1~cloud0'
'python-bson-ext' changed from 'absent' to '3.2-1build1'
'python-os-client-config' changed from 'absent' to '1.28.0-0ubuntu1~cloud0'
'python-logutils' changed from 'absent' to '0.3.3-5'
'ipset-6.29' changed from 'absent' to '1'
'liblua5.3-0' changed from 'absent' to '5.3.1-1ubuntu2'
'neutron-dhcp-agent' changed from 'absent' to '2:11.0.3-0ubuntu1.1~cloud0'
'python-designateclient' changed from 'absent' to '2.7.0-0ubuntu1~cloud0'
'dnsmasq-utils' changed from 'absent' to '2.78-1~cloud0'
'python-neutronclient' changed from 'absent' to '1:6.5.0-0ubuntu1.1~cloud0'
'python2.7-waitress' changed from 'absent' to '1'
'python-dogpile.cache' changed from 'absent' to '0.6.2-5~cloud0'
'python-neutron-lib' changed from 'absent' to '1.9.1-0ubuntu1~cloud0'
'python2.7-pymongo-ext' changed from 'absent' to '1'
'python-pecan' changed from 'absent' to '1.1.2-3fakesync2~cloud0'
'python-gridfs' changed from 'absent' to '3.2-1build1'
'python2.7-neutron' changed from 'absent' to '1'
'ipset' changed from 'absent' to '6.29-1'
'python-ovsdbapp' changed from 'absent' to '0.4.0-0ubuntu2~cloud0'
'python-pymongo-ext' changed from 'absent' to '3.2-1build1'
'python2.7-gridfs' changed from 'absent' to '1'
'python-singledispatch' changed from 'absent' to '3.4.0.3-2'
'python-oslo.cache' changed from 'absent' to '1.25.0-0ubuntu1~cloud0'
'python2.7-bson-ext' changed from 'absent' to '1'
'python-weakrefmethod' changed from 'absent' to '1.0-1'
'python-neutron' changed from 'absent' to '2:11.0.3-0ubuntu1.1~cloud0'
'python-ryu' changed from 'absent' to '4.15-0ubuntu1~cloud0'
'python-pymongo' changed from 'absent' to '3.2-1build1'
'python-pyroute2' changed from 'absent' to '0.4.18-0ubuntu1~cloud0'
'haproxy' changed from 'absent' to '1.6.3-1ubuntu0.1'
'python-osc-lib' changed from 'absent' to '1.7.0-0ubuntu1~cloud0'
'python-bson' changed from 'absent' to '3.2-1build1'
'python-tinyrpc' changed from 'absent' to '0.5-0ubuntu1~cloud0'
'python-webtest' changed from 'absent' to '2.0.18-1ubuntu1'
'python-appdirs' changed from 'absent' to '1.4.0-2'
'neutron-metadata-agent' changed from 'absent' to '2:11.0.3-0ubuntu1.1~cloud0'
'python-openvswitch' changed from 'absent' to '2.8.1-0ubuntu0.17.10.2~cloud0'
'python-requestsexceptions' changed from 'absent' to '1.1.2-0ubuntu1'
'neutron-common' changed from 'absent' to '2:11.0.3-0ubuntu1.1~cloud0'
'python-simplegeneric' changed from 'absent' to '0.8.1-1'
'python2.7-ryu' changed from 'absent' to '1'

2018-03-30 06:47:26,324 [salt.state       ][INFO    ][29559] Loading fresh modules for state activity
2018-03-30 06:47:26,363 [salt.state       ][INFO    ][29559] Completed state [neutron-dhcp-agent] at time 06:47:26.363245 duration_in_ms=43951.404
2018-03-30 06:47:26,372 [salt.state       ][INFO    ][29559] Running state [openvswitch-common] at time 06:47:26.372676
2018-03-30 06:47:26,373 [salt.state       ][INFO    ][29559] Executing state pkg.installed for openvswitch-common
2018-03-30 06:47:26,747 [salt.state       ][INFO    ][29559] All specified packages are already installed
2018-03-30 06:47:26,747 [salt.state       ][INFO    ][29559] Completed state [openvswitch-common] at time 06:47:26.747373 duration_in_ms=374.697
2018-03-30 06:47:26,747 [salt.state       ][INFO    ][29559] Running state [neutron-metadata-agent] at time 06:47:26.747566
2018-03-30 06:47:26,747 [salt.state       ][INFO    ][29559] Executing state pkg.installed for neutron-metadata-agent
2018-03-30 06:47:26,751 [salt.state       ][INFO    ][29559] All specified packages are already installed
2018-03-30 06:47:26,751 [salt.state       ][INFO    ][29559] Completed state [neutron-metadata-agent] at time 06:47:26.751740 duration_in_ms=4.174
2018-03-30 06:47:26,751 [salt.state       ][INFO    ][29559] Running state [neutron-openvswitch-agent] at time 06:47:26.751908
2018-03-30 06:47:26,752 [salt.state       ][INFO    ][29559] Executing state pkg.installed for neutron-openvswitch-agent
2018-03-30 06:47:26,762 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:47:26,795 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-openvswitch-agent'] in directory '/root'
2018-03-30 06:47:29,832 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064729415691
2018-03-30 06:47:29,846 [salt.minion      ][INFO    ][31874] Starting a new job with PID 31874
2018-03-30 06:47:29,867 [salt.minion      ][INFO    ][31874] Returning information for job: 20180330064729415691
2018-03-30 06:47:32,273 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:47:32,341 [salt.state       ][INFO    ][29559] Made the following changes:
'neutron-openvswitch-agent' changed from 'absent' to '2:11.0.3-0ubuntu1.1~cloud0'
'conntrack' changed from 'absent' to '1:1.4.3-3'

2018-03-30 06:47:32,367 [salt.state       ][INFO    ][29559] Loading fresh modules for state activity
2018-03-30 06:47:32,506 [salt.state       ][INFO    ][29559] Completed state [neutron-openvswitch-agent] at time 06:47:32.506090 duration_in_ms=5754.181
2018-03-30 06:47:32,512 [salt.state       ][INFO    ][29559] Running state [neutron-l3-agent] at time 06:47:32.512461
2018-03-30 06:47:32,512 [salt.state       ][INFO    ][29559] Executing state pkg.installed for neutron-l3-agent
2018-03-30 06:47:32,864 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:47:32,902 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-l3-agent'] in directory '/root'
2018-03-30 06:47:39,964 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064739552006
2018-03-30 06:47:39,981 [salt.minion      ][INFO    ][800] Starting a new job with PID 800
2018-03-30 06:47:40,004 [salt.minion      ][INFO    ][800] Returning information for job: 20180330064739552006
2018-03-30 06:47:43,209 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:47:43,275 [salt.state       ][INFO    ][29559] Made the following changes:
'libsnmp-base' changed from 'absent' to '5.7.3+dfsg-1ubuntu4.1'
'keepalived' changed from 'absent' to '1:1.2.19-1ubuntu0.2'
'ipvsadm' changed from 'absent' to '1:1.28-3'
'neutron-l3-agent' changed from 'absent' to '2:11.0.3-0ubuntu1.1~cloud0'
'libsnmp30' changed from 'absent' to '5.7.3+dfsg-1ubuntu4.1'
'iputils-arping' changed from 'absent' to '3:20121221-5ubuntu2'
'libsensors4' changed from 'absent' to '1:3.4.0-2'
'radvd' changed from 'absent' to '1:2.11-1'

2018-03-30 06:47:43,296 [salt.state       ][INFO    ][29559] Loading fresh modules for state activity
2018-03-30 06:47:43,323 [salt.state       ][INFO    ][29559] Completed state [neutron-l3-agent] at time 06:47:43.323805 duration_in_ms=10811.34
2018-03-30 06:47:43,327 [salt.state       ][INFO    ][29559] Running state [/etc/neutron/neutron.conf] at time 06:47:43.327479
2018-03-30 06:47:43,327 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/neutron/neutron.conf
2018-03-30 06:47:43,370 [salt.fileclient  ][INFO    ][29559] Fetching file from saltenv 'base', ** done ** 'neutron/files/pike/neutron-generic.conf.Debian'
2018-03-30 06:47:43,456 [salt.state       ][INFO    ][29559] File changed:
--- 
+++ 
@@ -1,5 +1,5 @@
+
 [DEFAULT]
-core_plugin = ml2
 
 #
 # From neutron
@@ -8,8 +8,9 @@
 # Where to store Neutron state files. This directory must be writable by the
 # agent. (string value)
 #state_path = /var/lib/neutron
-
-# The host IP to bind to. (unknown value)
+state_path = /var/lib/neutron
+
+# The host IP to bind to (string value)
 #bind_host = 0.0.0.0
 
 # The port to bind to (port value)
@@ -26,9 +27,15 @@
 
 # The type of authentication to use (string value)
 #auth_strategy = keystone
-
-# The core plugin Neutron will use (string value)
-#core_plugin = <None>
+auth_strategy = keystone
+
+
+
+core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
+
+service_plugins = router, metering
+
+
 
 # The service plugins Neutron will use (list value)
 #service_plugins =
@@ -59,6 +66,12 @@
 # Maximum number of host routes per subnet (integer value)
 #max_subnet_host_routes = 20
 
+# DEPRECATED: Maximum number of fixed ips per port. This option is deprecated
+# and will be removed in the Ocata release. (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#max_fixed_ips_per_port = 5
+
 # Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to
 # True to enable IPv6 Prefix Delegation for subnet allocation in a PD-capable
 # environment. Users making subnet creation requests for IPv6 subnets without
@@ -70,6 +83,7 @@
 # DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite
 # lease times. (integer value)
 #dhcp_lease_duration = 86400
+dhcp_lease_duration = 600
 
 # Domain to use for building the hostnames (string value)
 #dns_domain = openstacklocal
@@ -84,23 +98,22 @@
 # MUST be set to False if Neutron is being used in conjunction with Nova
 # security groups. (boolean value)
 #allow_overlapping_ips = false
+allow_overlapping_ips = True
 
 # Hostname to be used by the Neutron server, agents and services running on
 # this machine. All the agents and services running on this machine must use
-# the same host value. (unknown value)
+# the same host value. (string value)
 #host = example.domain
 
-# This string is prepended to the normal URL that is returned in links to the
-# OpenStack Network API. If it is empty (the default), the URLs are returned
-# unchanged. (string value)
-#network_link_prefix = <None>
 
 # Send notification to nova when port status changes (boolean value)
 #notify_nova_on_port_status_changes = true
+notify_nova_on_port_status_changes = True
 
 # Send notification to nova when port data (fixed_ips/floatingip) changes so
 # nova can update its cache. (boolean value)
 #notify_nova_on_port_data_changes = true
+notify_nova_on_port_data_changes = True
 
 # Number of seconds between sending events to nova if there are any events to
 # send. (integer value)
@@ -114,13 +127,10 @@
 # networks. (boolean value)
 #vlan_transparent = false
 
-# DEPRECATED: This will choose the web framework in which to run the Neutron
-# API server. 'pecan' is a new rewrite of the API routing components. (string
-# value)
+# This will choose the web framework in which to run the Neutron API server.
+# 'pecan' is a new experimental rewrite of the API server. (string value)
 # Allowed values: legacy, pecan
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#web_framework = pecan
+#web_framework = legacy
 
 # MTU of the underlying physical network. Neutron uses this value to calculate
 # MTU for all virtual network components. For flat and VLAN networks, neutron
@@ -129,6 +139,7 @@
 # value. Defaults to 1500, the standard value for Ethernet. (integer value)
 # Deprecated group/name - [ml2]/segment_mtu
 #global_physnet_mtu = 1500
+global_physnet_mtu = 1500
 
 # Number of backlog requests to configure the socket with (integer value)
 #backlog = 4096
@@ -175,6 +186,14 @@
 # Group (gid or name) running metadata proxy after its initialization (if
 # empty: agent effective group). (string value)
 #metadata_proxy_group =
+
+# Enable/Disable log watch by metadata proxy. It should be disabled when
+# metadata_proxy_user/group is not allowed to read/write its log file and
+# copytruncate logrotate option must be used if logrotate is enabled on
+# metadata proxy log files. Option default value is deduced from
+# metadata_proxy_user: watch log is enabled if metadata_proxy_user is agent
+# effective user id/name. (boolean value)
+#metadata_proxy_watch_log = <None>
 
 #
 # From neutron.db
@@ -239,10 +258,6 @@
 # Only admin can override. (boolean value)
 #router_distributed = false
 
-# Determine if setup is configured for DVR. If False, DVR API extension will be
-# disabled. (boolean value)
-#enable_dvr = true
-
 # Driver to use for scheduling router to a default L3 agent (string value)
 #router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
 
@@ -288,6 +303,13 @@
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
+
+# DEPRECATED: If set to false, the logging level will be set to WARNING instead
+# of the default INFO level. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#verbose = true
+verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -327,12 +349,6 @@
 # is set. (boolean value)
 #use_syslog = false
 
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
@@ -361,7 +377,7 @@
 
 # List of package logging levels in logger=LEVEL pairs. This option is ignored
 # if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
@@ -394,6 +410,7 @@
 #
 
 # Size of RPC connection pool. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
 #rpc_conn_pool_size = 30
 
 # The pool size limit for connections expiration policy (integer value)
@@ -404,24 +421,30 @@
 
 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
 # The "host" option should point or resolve to this address. (string value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
 #rpc_zmq_bind_address = *
 
 # MatchMaker driver. (string value)
 # Allowed values: redis, sentinel, dummy
+# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
 #rpc_zmq_matchmaker = redis
 
 # Number of ZeroMQ contexts, defaults to 1. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
 #rpc_zmq_contexts = 1
 
 # Maximum number of ingress messages to locally buffer per topic. Default is
 # unlimited. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
 #rpc_zmq_topic_backlog = <None>
 
 # Directory for holding IPC sockets. (string value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
 #rpc_zmq_ipc_dir = /var/run/openstack
 
 # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
 # "host" option, if running Nova. (string value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_host
 #rpc_zmq_host = localhost
 
 # Number of seconds to wait before all pending messages will be sent after
@@ -431,24 +454,30 @@
 # upper bound for the linger period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
 #zmq_linger = -1
+zmq_linger = 30
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
 #rpc_poll_timeout = 1
 
 # Expiration timeout in seconds of a name service record about existing target
 # ( < 0 means no timeout). (integer value)
+# Deprecated group/name - [DEFAULT]/zmq_target_expire
 #zmq_target_expire = 300
 
 # Update period in seconds of a name service record about existing target.
 # (integer value)
+# Deprecated group/name - [DEFAULT]/zmq_target_update
 #zmq_target_update = 180
 
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
+# Deprecated group/name - [DEFAULT]/use_pub_sub
 #use_pub_sub = false
 
 # Use ROUTER remote proxy. (boolean value)
+# Deprecated group/name - [DEFAULT]/use_router_proxy
 #use_router_proxy = false
 
 # This option makes direct connections dynamic or static. It makes sense only
@@ -463,20 +492,24 @@
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
 # Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
 #rpc_zmq_min_port = 49153
 
 # Maximal port number for random ports range. (integer value)
 # Minimum value: 1
 # Maximum value: 65536
+# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
 #rpc_zmq_max_port = 65536
 
 # Number of retries to find free port number before fail with ZMQBindError.
 # (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
 #rpc_zmq_bind_port_retries = 100
 
 # Default serialization mechanism for serializing/deserializing
 # outgoing/incoming messages (string value)
 # Allowed values: json, msgpack
+# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
 #rpc_zmq_serialization = json
 
 # This option configures round-robin mode in zmq socket. True means not keeping
@@ -540,18 +573,21 @@
 # priority then the default publishers list taken from the matchmaker. (list
 # value)
 #subscribe_on =
-
-# Size of executor thread pool when executor is threading or eventlet. (integer
-# value)
+agent_down_time = 30
+
+# Size of executor thread pool. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
 #executor_thread_pool_size = 64
+executor_thread_pool_size = 70
 
 # Seconds to wait for a response from a call. (integer value)
 #rpc_response_timeout = 60
+rpc_response_timeout=120
 
 # A URL representing the messaging driver to use and its full configuration.
 # (string value)
 #transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -599,7 +635,6 @@
 
 
 [agent]
-root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
 
 #
 # From neutron.agent
@@ -609,7 +644,7 @@
 # /etc/neutron/rootwrap.conf' to use the real root filter facility. Change to
 # 'sudo' to skip the filtering and just run the command directly. (string
 # value)
-#root_helper = sudo
+root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
 
 # Use the root helper when listing the namespaces on a system. This may not be
 # required depending on the security configuration. If the root helper is not
@@ -626,6 +661,7 @@
 # agent_down_time, best if it is half or less than agent_down_time. (floating
 # point value)
 #report_interval = 30
+report_interval = 10
 
 # Log agent heartbeats (boolean value)
 #log_agent_heartbeats = false
@@ -652,6 +688,7 @@
 
 # Availability zone of this node (string value)
 #availability_zone = nova
+availability_zone = nova
 
 
 [cors]
@@ -683,8 +720,36 @@
 #allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
 
 
+[cors.subdomain]
+
+#
+# From oslo.middleware.cors
+#
+
+# Indicate whether this resource may be shared with the domain received in the
+# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
+# slash. Example: https://horizon.example.com (list value)
+#allowed_origin = <None>
+
+# Indicate that the actual request can include user credentials (boolean value)
+#allow_credentials = true
+
+# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
+# Headers. (list value)
+#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-Volume-microversion
+
+# Maximum cache age of CORS preflight requests. (integer value)
+#max_age = 3600
+
+# Indicate which methods can be used during the actual request. (list value)
+#allow_methods = GET,PUT,POST,DELETE,PATCH
+
+# Indicate which header field names may be used during the actual request.
+# (list value)
+#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
+
+
 [database]
-connection = sqlite:////var/lib/neutron/neutron.sqlite
 
 #
 # From neutron.db
@@ -698,7 +763,16 @@
 # From oslo.db
 #
 
+# DEPRECATED: The file name to use with SQLite. (string value)
+# Deprecated group/name - [DEFAULT]/sqlite_db
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Should use config option connection or slave_connection to connect
+# the database.
+#sqlite_db = oslo.sqlite
+
 # If True, SQLite uses synchronous mode. (boolean value)
+# Deprecated group/name - [DEFAULT]/sqlite_synchronous
 #sqlite_synchronous = true
 
 # The back end to use for the database. (string value)
@@ -710,7 +784,7 @@
 # Deprecated group/name - [DEFAULT]/sql_connection
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
-#connection = <None>
+connection = sqlite:////var/lib/neutron/neutron.sqlite
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
@@ -722,10 +796,6 @@
 # (string value)
 #mysql_sql_mode = TRADITIONAL
 
-# If True, transparently enables support for handling MySQL Cluster (NDB).
-# (boolean value)
-#mysql_enable_ndb = false
-
 # Timeout before idle SQL connections are reaped. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_idle_timeout
 # Deprecated group/name - [DATABASE]/sql_idle_timeout
@@ -803,10 +873,10 @@
 # Complete "public" Identity API endpoint. This endpoint should not be an
 # "admin" endpoint, as it should be accessible by all end users.
 # Unauthenticated clients are redirected to this endpoint to authenticate.
-# Although this endpoint should ideally be unversioned, client support in the
-# wild varies. If you're using a versioned v2 endpoint here, then this should
-# *not* be the same endpoint the service user utilizes for validating tokens,
-# because normal end users may not be able to reach that endpoint. (string
+# Although this endpoint should  ideally be unversioned, client support in the
+# wild varies.  If you're using a versioned v2 endpoint here, then this  should
+# *not* be the same endpoint the service user utilizes  for validating tokens,
+# because normal end users may not be  able to reach that endpoint. (string
 # value)
 #auth_uri = <None>
 
@@ -1117,11 +1187,11 @@
 #project_domain_name = <None>
 
 # Project ID to scope to (string value)
-# Deprecated group/name - [nova]/tenant_id
+# Deprecated group/name - [nova]/tenant-id
 #project_id = <None>
 
 # Project name to scope to (string value)
-# Deprecated group/name - [nova]/tenant_name
+# Deprecated group/name - [nova]/tenant-name
 #project_name = <None>
 
 # Tenant ID (string value)
@@ -1146,7 +1216,7 @@
 #user_id = <None>
 
 # Username (string value)
-# Deprecated group/name - [nova]/user_name
+# Deprecated group/name - [nova]/user-name
 #username = <None>
 
 
@@ -1157,6 +1227,7 @@
 #
 
 # Enables or disables inter-process locks. (boolean value)
+# Deprecated group/name - [DEFAULT]/disable_process_locking
 #disable_process_locking = false
 
 # Directory to use for lock files.  For security, the specified directory
@@ -1165,7 +1236,9 @@
 # in the environment, use the Python tempfile.gettempdir function to find a
 # suitable location. If external locks are used, a lock path must be set.
 # (string value)
+# Deprecated group/name - [DEFAULT]/lock_path
 #lock_path = /tmp
+lock_path = $state_path/lock
 
 
 [oslo_messaging_amqp]
@@ -1176,64 +1249,61 @@
 
 # Name for the AMQP container. must be globally unique. Defaults to a generated
 # UUID (string value)
+# Deprecated group/name - [amqp1]/container_name
 #container_name = <None>
 
 # Timeout for inactive connections (in seconds) (integer value)
+# Deprecated group/name - [amqp1]/idle_timeout
 #idle_timeout = 0
 
 # Debug: dump AMQP frames to stdout (boolean value)
+# Deprecated group/name - [amqp1]/trace
 #trace = false
 
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
 # CA certificate PEM file used to verify the server's certificate (string
 # value)
+# Deprecated group/name - [amqp1]/ssl_ca_file
 #ssl_ca_file =
 
 # Self-identifying certificate PEM file for client authentication (string
 # value)
+# Deprecated group/name - [amqp1]/ssl_cert_file
 #ssl_cert_file =
 
 # Private key PEM file used to sign ssl_cert_file certificate (optional)
 # (string value)
+# Deprecated group/name - [amqp1]/ssl_key_file
 #ssl_key_file =
 
 # Password for decrypting ssl_key_file (if encrypted) (string value)
+# Deprecated group/name - [amqp1]/ssl_key_password
 #ssl_key_password = <None>
 
 # DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
+# Deprecated group/name - [amqp1]/allow_insecure_clients
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Not applicable - not a SSL server
 #allow_insecure_clients = false
 
 # Space separated list of acceptable SASL mechanisms (string value)
+# Deprecated group/name - [amqp1]/sasl_mechanisms
 #sasl_mechanisms =
 
 # Path to directory that contains the SASL configuration (string value)
+# Deprecated group/name - [amqp1]/sasl_config_dir
 #sasl_config_dir =
 
 # Name of configuration file (without .conf suffix) (string value)
+# Deprecated group/name - [amqp1]/sasl_config_name
 #sasl_config_name =
 
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# username.
+# User name for message broker authentication (string value)
+# Deprecated group/name - [amqp1]/username
 #username =
 
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# password.
+# Password for message broker authentication (string value)
+# Deprecated group/name - [amqp1]/password
 #password =
 
 # Seconds to pause before attempting to re-connect. (integer value)
@@ -1288,12 +1358,15 @@
 #addressing_mode = dynamic
 
 # address prefix used when sending to a specific server (string value)
+# Deprecated group/name - [amqp1]/server_request_prefix
 #server_request_prefix = exclusive
 
 # address prefix used when broadcasting to all servers (string value)
+# Deprecated group/name - [amqp1]/broadcast_prefix
 #broadcast_prefix = broadcast
 
 # address prefix when sending to any server in group (string value)
+# Deprecated group/name - [amqp1]/group_request_prefix
 #group_request_prefix = unicast
 
 # Address prefix for all generated RPC addresses (string value)
@@ -1381,7 +1454,7 @@
 # Max fetch bytes of Kafka consumer (integer value)
 #kafka_max_fetch_bytes = 1048576
 
-# Default timeout(s) for Kafka consumers (floating point value)
+# Default timeout(s) for Kafka consumers (integer value)
 #kafka_consumer_timeout = 1.0
 
 # Pool Size for Kafka Consumers (integer value)
@@ -1415,7 +1488,6 @@
 # messaging, messagingv2, routing, log, test, noop (multi valued)
 # Deprecated group/name - [DEFAULT]/notification_driver
 #driver =
-
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
 # Deprecated group/name - [DEFAULT]/notification_transport_url
@@ -1426,11 +1498,6 @@
 # Deprecated group/name - [DEFAULT]/notification_topics
 #topics = notifications
 
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
 
 [oslo_messaging_rabbit]
 
@@ -1444,31 +1511,30 @@
 #amqp_durable_queues = false
 
 # Auto-delete queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/amqp_auto_delete
 #amqp_auto_delete = false
-
-# Enable SSL (boolean value)
-#ssl = <None>
 
 # SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
 # SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
 # distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
+# Deprecated group/name - [DEFAULT]/kombu_ssl_version
+#kombu_ssl_version =
 
 # SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
+# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
+#kombu_ssl_keyfile =
 
 # SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
+# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
+#kombu_ssl_certfile =
 
 # SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
+# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
+#kombu_ssl_ca_certs =
 
 # How long to wait before reconnecting in response to an AMQP consumer cancel
 # notification. (floating point value)
+# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
 #kombu_reconnect_delay = 1.0
 
 # EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
@@ -1488,6 +1554,7 @@
 
 # DEPRECATED: The RabbitMQ broker address where a single node is used. (string
 # value)
+# Deprecated group/name - [DEFAULT]/rabbit_host
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
@@ -1497,24 +1564,32 @@
 # value)
 # Minimum value: 0
 # Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/rabbit_port
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
 #rabbit_port = 5672
 
 # DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# Deprecated group/name - [DEFAULT]/rabbit_hosts
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
 #rabbit_hosts = $rabbit_host:$rabbit_port
 
+# Connect over SSL for RabbitMQ. (boolean value)
+# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
+#rabbit_use_ssl = false
+
 # DEPRECATED: The RabbitMQ userid. (string value)
+# Deprecated group/name - [DEFAULT]/rabbit_userid
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
 #rabbit_userid = guest
 
 # DEPRECATED: The RabbitMQ password. (string value)
+# Deprecated group/name - [DEFAULT]/rabbit_password
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
@@ -1522,9 +1597,11 @@
 
 # The RabbitMQ login method. (string value)
 # Allowed values: PLAIN, AMQPLAIN, RABBIT-CR-DEMO
+# Deprecated group/name - [DEFAULT]/rabbit_login_method
 #rabbit_login_method = AMQPLAIN
 
 # DEPRECATED: The RabbitMQ virtual host. (string value)
+# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
@@ -1532,10 +1609,13 @@
 
 # How frequently to retry connecting with RabbitMQ. (integer value)
 #rabbit_retry_interval = 1
+rabbit_retry_interval = 1
 
 # How long to backoff for between retries when connecting to RabbitMQ. (integer
 # value)
+# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
 #rabbit_retry_backoff = 2
+rabbit_retry_backoff = 2
 
 # Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
 # (integer value)
@@ -1543,6 +1623,7 @@
 
 # DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
 # (infinite retry count). (integer value)
+# Deprecated group/name - [DEFAULT]/rabbit_max_retries
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #rabbit_max_retries = 0
@@ -1553,6 +1634,7 @@
 # If you just want to make sure that all queues (except those with auto-
 # generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
 # HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
+# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
 #rabbit_ha_queues = false
 
 # Positive integer representing duration in seconds for queue TTL (x-expires).
@@ -1569,12 +1651,15 @@
 # heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
 # value)
 #heartbeat_timeout_threshold = 60
+heartbeat_timeout_threshold = 0
 
 # How often times during the heartbeat_timeout_threshold we check the
 # heartbeat. (integer value)
 #heartbeat_rate = 2
+heartbeat_rate = 2
 
 # Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
+# Deprecated group/name - [DEFAULT]/fake_rabbit
 #fake_rabbit = false
 
 # Maximum number of channels to allow (integer value)
@@ -1585,6 +1670,9 @@
 
 # How often to send heartbeats for consumer's connections (integer value)
 #heartbeat_interval = 3
+
+# Enable SSL (boolean value)
+#ssl = <None>
 
 # Arguments passed to ssl.wrap_socket (dict value)
 #ssl_options = <None>
@@ -1690,24 +1778,30 @@
 
 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
 # The "host" option should point or resolve to this address. (string value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
 #rpc_zmq_bind_address = *
 
 # MatchMaker driver. (string value)
 # Allowed values: redis, sentinel, dummy
+# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
 #rpc_zmq_matchmaker = redis
 
 # Number of ZeroMQ contexts, defaults to 1. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
 #rpc_zmq_contexts = 1
 
 # Maximum number of ingress messages to locally buffer per topic. Default is
 # unlimited. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
 #rpc_zmq_topic_backlog = <None>
 
 # Directory for holding IPC sockets. (string value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
 #rpc_zmq_ipc_dir = /var/run/openstack
 
 # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
 # "host" option, if running Nova. (string value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_host
 #rpc_zmq_host = localhost
 
 # Number of seconds to wait before all pending messages will be sent after
@@ -1720,21 +1814,26 @@
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
 #rpc_poll_timeout = 1
 
 # Expiration timeout in seconds of a name service record about existing target
 # ( < 0 means no timeout). (integer value)
+# Deprecated group/name - [DEFAULT]/zmq_target_expire
 #zmq_target_expire = 300
 
 # Update period in seconds of a name service record about existing target.
 # (integer value)
+# Deprecated group/name - [DEFAULT]/zmq_target_update
 #zmq_target_update = 180
 
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
+# Deprecated group/name - [DEFAULT]/use_pub_sub
 #use_pub_sub = false
 
 # Use ROUTER remote proxy. (boolean value)
+# Deprecated group/name - [DEFAULT]/use_router_proxy
 #use_router_proxy = false
 
 # This option makes direct connections dynamic or static. It makes sense only
@@ -1749,20 +1848,24 @@
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
 # Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
 #rpc_zmq_min_port = 49153
 
 # Maximal port number for random ports range. (integer value)
 # Minimum value: 1
 # Maximum value: 65536
+# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
 #rpc_zmq_max_port = 65536
 
 # Number of retries to find free port number before fail with ZMQBindError.
 # (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
 #rpc_zmq_bind_port_retries = 100
 
 # Default serialization mechanism for serializing/deserializing
 # outgoing/incoming messages (string value)
 # Allowed values: json, msgpack
+# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
 #rpc_zmq_serialization = json
 
 # This option configures round-robin mode in zmq socket. True means not keeping
@@ -1846,9 +1949,11 @@
 #
 
 # The file that defines policies. (string value)
+# Deprecated group/name - [DEFAULT]/policy_file
 #policy_file = policy.json
 
 # Default rule. Enforced when a requested rule is not found. (string value)
+# Deprecated group/name - [DEFAULT]/policy_default_rule
 #policy_default_rule = default
 
 # Directories where policy configuration files are stored. They can be relative
@@ -1856,7 +1961,21 @@
 # absolute paths. The file defined by policy_file must exist for these
 # directories to be searched.  Missing or empty directories are ignored. (multi
 # valued)
+# Deprecated group/name - [DEFAULT]/policy_dirs
 #policy_dirs = policy.d
+
+
+[qos]
+
+#
+# From neutron.qos
+#
+
+# DEPRECATED: Drivers list to use to send the update notification. This option
+# will be unused in Pike. (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#notification_drivers = message_queue
 
 
 [quotas]
@@ -1871,15 +1990,15 @@
 
 # Number of networks allowed per tenant. A negative value means unlimited.
 # (integer value)
-#quota_network = 100
+#quota_network = 10
 
 # Number of subnets allowed per tenant, A negative value means unlimited.
 # (integer value)
-#quota_subnet = 100
+#quota_subnet = 10
 
 # Number of ports allowed per tenant. A negative value means unlimited.
 # (integer value)
-#quota_port = 500
+#quota_port = 50
 
 # Default driver to use for quota checks. (string value)
 #quota_driver = neutron.db.quota.driver.DbQuotaDriver

2018-03-30 06:47:43,457 [salt.state       ][INFO    ][29559] Completed state [/etc/neutron/neutron.conf] at time 06:47:43.457368 duration_in_ms=129.888
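The neutron.conf diff above uncomments the RabbitMQ retry knobs and sets `heartbeat_timeout_threshold = 0`, which disables AMQP application-level heartbeats entirely (the option's own comment notes "0 disable the heartbeat"). A minimal sketch of checking the rendered values, using an inline sample as a stand-in for reading the real /etc/neutron/neutron.conf:

```python
from configparser import ConfigParser

# Values copied from the [oslo_messaging_rabbit] settings the diff above
# uncomments; the inline string stands in for /etc/neutron/neutron.conf.
sample = """
[oslo_messaging_rabbit]
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
heartbeat_timeout_threshold = 0
heartbeat_rate = 2
"""

cfg = ConfigParser()
cfg.read_string(sample)
rabbit = cfg["oslo_messaging_rabbit"]

# Per the option's inline doc, 0 disables the AMQP heartbeat.
heartbeats_enabled = rabbit.getint("heartbeat_timeout_threshold") > 0
print(heartbeats_enabled)  # False
```

With heartbeats off, dead broker connections are only detected by TCP timeouts, which is why the retry interval/backoff values above still matter.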
2018-03-30 06:47:43,457 [salt.state       ][INFO    ][29559] Running state [/etc/neutron/l3_agent.ini] at time 06:47:43.457701
2018-03-30 06:47:43,457 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/neutron/l3_agent.ini
2018-03-30 06:47:43,477 [salt.fileclient  ][INFO    ][29559] Fetching file from saltenv 'base', ** done ** 'neutron/files/pike/l3_agent.ini'
2018-03-30 06:47:43,522 [salt.state       ][INFO    ][29559] File changed:
--- 
+++ 
@@ -1,3 +1,5 @@
+
+
 [DEFAULT]
 
 #
@@ -14,6 +16,7 @@
 
 # The driver used to manage the virtual interface. (string value)
 #interface_driver = <None>
+interface_driver = openvswitch
 
 # Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs
 # commands will fail with ALARMCLOCK error. (integer value)
@@ -30,19 +33,22 @@
 # must be used for an L3 agent that runs on a compute host. 'dvr_snat' - this
 # enables centralized SNAT support in conjunction with DVR.  This mode must be
 # used for an L3 agent running on a centralized node (or in single-host
-# deployments, e.g. devstack). 'dvr_no_external' - this mode enables only
-# East/West DVR routing functionality for a L3 agent that runs on a compute
-# host, the North/South functionality such as DNAT and SNAT will be provided by
-# the centralized network node that is running in 'dvr_snat' mode. This mode
-# should be used when there is no external network connectivity on the compute
-# host. (string value)
-# Allowed values: dvr, dvr_snat, legacy, dvr_no_external
+# deployments, e.g. devstack) (string value)
+# Allowed values: dvr, dvr_snat, legacy
 #agent_mode = legacy
+agent_mode = legacy
 
 # TCP Port used by Neutron metadata namespace proxy. (port value)
 # Minimum value: 0
 # Maximum value: 65535
 #metadata_port = 9697
+metadata_port = 8775
+
+# DEPRECATED: Send this many gratuitous ARPs for HA setup, if less than or
+# equal to 0, the feature is disabled (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#send_arp_for_ha = 3
 
 # Indicates that this L3 agent should also handle routers that do not have an
 # external network gateway configured. This option should be True only for a
@@ -50,13 +56,11 @@
 # routers must have an external network gateway. (boolean value)
 #handle_internal_only_routers = true
 
-# DEPRECATED: When external_network_bridge is set, each L3 agent can be
-# associated with no more than one external network. This value should be set
-# to the UUID of that external network. To allow L3 agent support multiple
-# external networks, both the external_network_bridge and
-# gateway_external_network_id must be left empty. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
+# When external_network_bridge is set, each L3 agent can be associated with no
+# more than one external network. This value should be set to the UUID of that
+# external network. To allow L3 agent support multiple external networks, both
+# the external_network_bridge and gateway_external_network_id must be left
+# empty. (string value)
 #gateway_external_network_id =
 
 # With IPv6, the network used for the external gateway does not need to have an
@@ -165,6 +169,13 @@
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
+debug = False
+
+# DEPRECATED: If set to false, the logging level will be set to WARNING instead
+# of the default INFO level. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -204,12 +215,6 @@
 # is set. (boolean value)
 #use_syslog = false
 
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
@@ -238,7 +243,7 @@
 
 # List of package logging levels in logger=LEVEL pairs. This option is ignored
 # if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false

2018-03-30 06:47:43,523 [salt.state       ][INFO    ][29559] Completed state [/etc/neutron/l3_agent.ini] at time 06:47:43.523110 duration_in_ms=65.409
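The l3_agent.ini diff above selects the OVS interface driver, keeps legacy (non-DVR) routing, and repoints `metadata_port` from the namespace-proxy default of 9697 to 8775, the conventional Nova metadata API port. A minimal sketch verifying those three rendered values (inline sample stands in for the real file):

```python
from configparser import ConfigParser

# [DEFAULT] values copied from the l3_agent.ini diff above.
sample = """
[DEFAULT]
interface_driver = openvswitch
agent_mode = legacy
metadata_port = 8775
"""

cfg = ConfigParser()
cfg.read_string(sample)
default = cfg["DEFAULT"]

# 8775 is the usual Nova metadata API port; 9697 is the agent's own default.
print(default.getint("metadata_port"))  # 8775
print(default["agent_mode"])            # legacy
```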
2018-03-30 06:47:43,523 [salt.state       ][INFO    ][29559] Running state [/etc/neutron/dhcp_agent.ini] at time 06:47:43.523384
2018-03-30 06:47:43,523 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/neutron/dhcp_agent.ini
2018-03-30 06:47:43,544 [salt.fileclient  ][INFO    ][29559] Fetching file from saltenv 'base', ** done ** 'neutron/files/pike/dhcp_agent.ini'
2018-03-30 06:47:43,586 [salt.state       ][INFO    ][29559] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -7,78 +8,80 @@
 # Name of Open vSwitch bridge to use (string value)
 #ovs_integration_bridge = br-int
 
-# Uses veth for an OVS interface or not. Support kernels with limited namespace
-# support (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. (boolean
-# value)
+# Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) so long as ovs_use_veth is set to
+# True. (boolean value)
 #ovs_use_veth = false
+
+# MTU setting for device. This option will be removed in Newton. Please use the system-wide global_physnet_mtu setting which the agents will
+# take into account when wiring VIFs. (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#network_device_mtu = <None>
 
 # The driver used to manage the virtual interface. (string value)
 #interface_driver = <None>
-
-# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs
-# commands will fail with ALARMCLOCK error. (integer value)
+interface_driver = openvswitch
+
+# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs commands will fail with ALARMCLOCK error. (integer value)
 #ovs_vsctl_timeout = 10
 
 #
 # From neutron.dhcp.agent
 #
 
-# The DHCP agent will resync its state with Neutron to recover from any
-# transient notification or RPC errors. The interval is number of seconds
-# between attempts. (integer value)
+# The DHCP agent will resync its state with Neutron to recover from any transient notification or RPC errors. The interval is number of
+# seconds between attempts. (integer value)
 #resync_interval = 5
+resync_interval = 30
 
 # The driver used to manage the DHCP server. (string value)
 #dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
-
-# The DHCP server can assist with providing metadata support on isolated
-# networks. Setting this value to True will cause the DHCP server to append
-# specific host routes to the DHCP request. The metadata service will only be
-# activated when the subnet does not contain any router port. The guest
-# instance must be configured to request host routes via DHCP (Option 121).
-# This option doesn't have any effect when force_metadata is set to True.
-# (boolean value)
+dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
+
+# The DHCP server can assist with providing metadata support on isolated networks. Setting this value to True will cause the DHCP server to
+# append specific host routes to the DHCP request. The metadata service will only be activated when the subnet does not contain any router
+# port. The guest instance must be configured to request host routes via DHCP (Option 121). This option doesn't have any effect when
+# force_metadata is set to True. (boolean value)
 #enable_isolated_metadata = false
-
-# In some cases the Neutron router is not present to provide the metadata IP
-# but the DHCP server can be used to provide this info. Setting this value will
-# force the DHCP server to append specific host routes to the DHCP request. If
-# this option is set, then the metadata service will be activated for all the
-# networks. (boolean value)
+enable_isolated_metadata = True
+
+# In some cases the Neutron router is not present to provide the metadata IP but the DHCP server can be used to provide this info. Setting
+# this value will force the DHCP server to append specific host routes to the DHCP request. If this option is set, then the metadata service
+# will be activated for all the networks. (boolean value)
 #force_metadata = false
 
-# Allows for serving metadata requests coming from a dedicated metadata access
-# network whose CIDR is 169.254.169.254/16 (or larger prefix), and is connected
-# to a Neutron router from which the VMs send metadata:1 request. In this case
-# DHCP Option 121 will not be injected in VMs, as they will be able to reach
-# 169.254.169.254 through a router. This option requires
-# enable_isolated_metadata = True. (boolean value)
+# Allows for serving metadata requests coming from a dedicated metadata access network whose CIDR is 169.254.169.254/16 (or larger prefix),
+# and is connected to a Neutron router from which the VMs send metadata:1 request. In this case DHCP Option 121 will not be injected in VMs,
+# as they will be able to reach 169.254.169.254 through a router. This option requires enable_isolated_metadata = True. (boolean value)
 #enable_metadata_network = false
-
-# Number of threads to use during sync process. Should not exceed connection
-# pool size configured on server. (integer value)
+enable_metadata_network = False
+
+# Number of threads to use during sync process. Should not exceed connection pool size configured on server. (integer value)
 #num_sync_threads = 4
 
 # Location to store DHCP server config files. (string value)
 #dhcp_confs = $state_path/dhcp
 
+# Domain to use for building the hostnames. This option is deprecated. It has been moved to neutron.conf as dns_domain. It will be removed
+# in a future release. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#dhcp_domain = openstacklocal
+
 # Override the default dnsmasq settings with this file. (string value)
 #dnsmasq_config_file =
 
-# Comma-separated list of the DNS servers which will be used as forwarders.
-# (list value)
-#dnsmasq_dns_servers =
-
-# Base log dir for dnsmasq logging. The log contains DHCP and DNS log
-# information and is useful for debugging issues with either DHCP or DNS. If
-# this section is null, disable dnsmasq log. (string value)
+# Comma-separated list of the DNS servers which will be used as forwarders. (list value)
+# Deprecated group/name - [DEFAULT]/dnsmasq_dns_server
+#dnsmasq_dns_servers = <None>
+
+# Base log dir for dnsmasq logging. The log contains DHCP and DNS log information and is useful for debugging issues with either DHCP or
+# DNS. If this section is null, disable dnsmasq log. (string value)
 #dnsmasq_base_log_dir = <None>
 
-# Enables the dnsmasq service to provide name resolution for instances via DNS
-# resolvers on the host running the DHCP agent. Effectively removes the '--no-
-# resolv' option from the dnsmasq process arguments. Adding custom DNS
-# resolvers to the 'dnsmasq_dns_servers' option disables this feature. (boolean
-# value)
+# Enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. Effectively
+# removes the '--no-resolv' option from the dnsmasq process arguments. Adding custom DNS resolvers to the 'dnsmasq_dns_servers' option
+# disables this feature. (boolean value)
 #dnsmasq_local_resolv = false
 
 # Limit number of leases to prevent a denial-of-service. (integer value)
@@ -91,133 +94,95 @@
 # From oslo.log
 #
 
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
-# Note: This option can be changed without restarting.
+# If set to true, the logging level will be set to DEBUG instead of the default INFO level. (boolean value)
 #debug = false
-
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
-# Note: This option can be changed without restarting.
+debug = False
+
+# If set to false, the logging level will be set to WARNING instead of the default INFO level. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#verbose = true
+
+# The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging configuration options are ignored (for example,
+# logging_context_format_string). (string value)
 # Deprecated group/name - [DEFAULT]/log_config
 #log_config_append = <None>
 
-# Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
+# Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
+# (string value)
 #log_date_format = %Y-%m-%d %H:%M:%S
 
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
+# (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
 #log_file = <None>
 
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
+# (Optional) The base directory used for relative log_file  paths. This option is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
 #log_dir = <None>
 
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
-# (boolean value)
+# Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified
+# path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if
+# log_config_append is set. (boolean value)
 #watch_log_file = false
 
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
+# Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
 #use_syslog = false
 
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
-# Syslog facility to receive log lines. This option is ignored if
-# log_config_append is set. (string value)
+# Syslog facility to receive log lines. This option is ignored if log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
 
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
-#use_stderr = false
+# Log output to standard error. This option is ignored if log_config_append is set. (boolean value)
+#use_stderr = true
 
 # Format string to use for log messages with context. (string value)
 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
-# Format string to use for log messages when context is undefined. (string
-# value)
+# Format string to use for log messages when context is undefined. (string value)
 #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
 
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
+# Additional data to append to log message when logging level for the message is DEBUG. (string value)
 #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
 
 # Prefix each line of exception output with this format. (string value)
 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
-# Defines the format string for %(user_identity)s that is used in
-# logging_context_format_string. (string value)
+# Defines the format string for %(user_identity)s that is used in logging_context_format_string. (string value)
 #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+# List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. (list value)
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string
-# value)
+# The format for an instance that is passed with the log message. (string value)
 #instance_format = "[instance: %(uuid)s] "
 
-# The format for an instance UUID that is passed with the log message. (string
-# value)
+# The format for an instance UUID that is passed with the log message. (string value)
 #instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
 #fatal_deprecations = false
-
-
-[agent]
-
-#
-# From neutron.az.agent
-#
-
-# Availability zone of this node (string value)
-#availability_zone = nova
+root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
+state_path=/var/lib/neutron
+
+
+[AGENT]
 
 #
 # From neutron.base.agent
 #
 
-# Seconds between nodes reporting state to server; should be less than
-# agent_down_time, best if it is half or less than agent_down_time. (floating
-# point value)
+# Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time.
+# (floating point value)
 #report_interval = 30
 
 # Log agent heartbeats (boolean value)
 #log_agent_heartbeats = false
-
 
 [ovs]
 
@@ -233,3 +198,4 @@
 # when monitoring and used for the all ovsdb commands when native
 # ovsdb_interface is enabled (string value)
 #ovsdb_connection = tcp:127.0.0.1:6640
+

2018-03-30 06:47:43,586 [salt.state       ][INFO    ][29559] Completed state [/etc/neutron/dhcp_agent.ini] at time 06:47:43.586946 duration_in_ms=63.561
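The dhcp_agent.ini diff above turns on `enable_isolated_metadata` (so dnsmasq injects host routes for metadata on router-less subnets) and raises `resync_interval` from the 5-second default to 30 seconds. A minimal sketch checking those rendered values (inline sample stands in for the real file):

```python
from configparser import ConfigParser

# [DEFAULT] values copied from the dhcp_agent.ini diff above.
sample = """
[DEFAULT]
resync_interval = 30
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
enable_metadata_network = False
"""

cfg = ConfigParser()
cfg.read_string(sample)
default = cfg["DEFAULT"]

print(default.getboolean("enable_isolated_metadata"))  # True
print(default.getint("resync_interval"))               # 30
```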
2018-03-30 06:47:43,587 [salt.state       ][INFO    ][29559] Running state [/etc/neutron/metadata_agent.ini] at time 06:47:43.587222
2018-03-30 06:47:43,587 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/neutron/metadata_agent.ini
2018-03-30 06:47:43,608 [salt.fileclient  ][INFO    ][29559] Fetching file from saltenv 'base', ** done ** 'neutron/files/pike/metadata_agent.ini'
2018-03-30 06:47:43,653 [salt.state       ][INFO    ][29559] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -18,9 +19,9 @@
 # Certificate Authority public key (CA cert) file for ssl (string value)
 #auth_ca_cert = <None>
 
-# IP address or DNS name of Nova metadata server. (unknown value)
-# Deprecated group/name - [DEFAULT]/nova_metadata_ip
-#nova_metadata_host = 127.0.0.1
+# IP address used by Nova metadata server. (string value)
+#nova_metadata_ip = 127.0.0.1
+nova_metadata_ip = 10.167.4.35
 
 # TCP Port used by Nova metadata server. (port value)
 # Minimum value: 0
@@ -33,10 +34,12 @@
 # Server. NOTE: Nova uses the same config key, but in [neutron] section.
 # (string value)
 #metadata_proxy_shared_secret =
+metadata_proxy_shared_secret = opnfv_secret
 
 # Protocol to access nova metadata, http or https (string value)
 # Allowed values: http, https
 #nova_metadata_protocol = http
+nova_metadata_protocol = http
 
 # Allow to perform insecure SSL (https) requests to nova metadata (boolean
 # value)
@@ -81,6 +84,13 @@
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
+debug = False
+
+# DEPRECATED: If set to false, the logging level will be set to WARNING instead
+# of the default INFO level. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -120,12 +130,6 @@
 # is set. (boolean value)
 #use_syslog = false
 
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
@@ -154,7 +158,7 @@
 
 # List of package logging levels in logger=LEVEL pairs. This option is ignored
 # if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
@@ -214,13 +218,12 @@
 # expiration time defined for it. (integer value)
 #expiration_time = 600
 
-# Cache backend module. For eventlet-based or environments with hundreds of
-# threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is
-# recommended. For environments with less than 100 threaded servers, Memcached
-# (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test
-# environments with a single instance of the server can use the
-# dogpile.cache.memory backend. (string value)
-# Allowed values: oslo_cache.memcache_pool, oslo_cache.dict, dogpile.cache.memcached, dogpile.cache.redis, dogpile.cache.memory, dogpile.cache.null
+# Dogpile.cache backend module. It is recommended that Memcache or Redis
+# (dogpile.cache.redis) be used in production deployments. For eventlet-based
+# or highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool)
+# is recommended. For low thread servers, dogpile.cache.memcached is
+# recommended. Test environments with a single instance of the server can use
+# the dogpile.cache.memory backend. (string value)
 #backend = dogpile.cache.null
 
 # Arguments supplied to the backend module. Specify this option once per

2018-03-30 06:47:43,653 [salt.state       ][INFO    ][29559] Completed state [/etc/neutron/metadata_agent.ini] at time 06:47:43.653650 duration_in_ms=66.428
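The diff above sets the metadata agent's proxy shared secret and the protocol used to reach the Nova metadata service in `/etc/neutron/metadata_agent.ini`. As a quick sanity sketch (values copied from the diff; a real check would read the file itself rather than an inline string), the rendered options parse like this with Python's `configparser`:

```python
# Sketch: parse an inline copy of the options the diff above applies to
# /etc/neutron/metadata_agent.ini (values taken verbatim from the diff).
import configparser

RENDERED = """
[DEFAULT]
metadata_proxy_shared_secret = opnfv_secret
nova_metadata_protocol = http
debug = False
"""

cfg = configparser.ConfigParser()
cfg.read_string(RENDERED)

default = cfg["DEFAULT"]
# The shared secret must match the one configured on the Nova side
# ([neutron] section of nova.conf, per the comment in the diff), or
# metadata requests will be rejected.
print(default["metadata_proxy_shared_secret"])  # opnfv_secret
print(default["nova_metadata_protocol"])        # http
print(default.getboolean("debug"))              # False
```

Reading the live file would only require replacing `read_string` with `cfg.read("/etc/neutron/metadata_agent.ini")`.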
2018-03-30 06:47:43,653 [salt.state       ][INFO    ][29559] Running state [/etc/neutron/plugins/ml2/openvswitch_agent.ini] at time 06:47:43.653915
2018-03-30 06:47:43,654 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/neutron/plugins/ml2/openvswitch_agent.ini
2018-03-30 06:47:43,679 [salt.fileclient  ][INFO    ][29559] Fetching file from saltenv 'base', ** done ** 'neutron/files/pike/openvswitch_agent.ini'
2018-03-30 06:47:43,734 [salt.state       ][INFO    ][29559] File changed:
--- 
+++ 
@@ -1,3 +1,5 @@
+
+
 [DEFAULT]
 
 #
@@ -8,6 +10,12 @@
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
 #debug = false
+
+# DEPRECATED: If set to false, the logging level will be set to WARNING instead
+# of the default INFO level. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#verbose = true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -47,12 +55,6 @@
 # is set. (boolean value)
 #use_syslog = false
 
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
 #syslog_log_facility = LOG_USER
@@ -81,7 +83,7 @@
 
 # List of package logging levels in logger=LEVEL pairs. This option is ignored
 # if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
 
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
@@ -127,26 +129,18 @@
 # losing communication with it. (integer value)
 #ovsdb_monitor_respawn_interval = 30
 
-# Network types supported by the agent (gre and/or vxlan). (list value)
-#tunnel_types =
-
-# The UDP port to use for VXLAN tunnels. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#vxlan_udp_port = 4789
-
-# MTU size of veth interfaces (integer value)
-#veth_mtu = 9000
-
-# Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve
-# tunnel scalability. (boolean value)
-#l2_population = false
-
-# Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2
-# l2population driver. Allows the switch (when supporting an overlay) to
-# respond to an ARP request locally without performing a costly ARP broadcast
-# into the overlay. (boolean value)
-#arp_responder = false
+# DEPRECATED: Enable suppression of ARP responses that don't match an IP
+# address that belongs to the port from which they originate. Note: This
+# prevents the VMs attached to this agent from spoofing, it doesn't protect
+# them from other devices which have the capability to spoof (e.g. bare metal
+# or VMs attached to agents without this flag set to True). Spoofing rules will
+# not be added to any ports that have port security disabled. For LinuxBridge,
+# this requires ebtables. For OVS, it requires a version that supports matching
+# ARP headers. This option will be removed in Ocata so the only way to disable
+# protection will be via the port security extension. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#prevent_arp_spoofing = true
 
 # Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying
 # GRE/VXLAN tunnel. (boolean value)
@@ -154,6 +148,7 @@
 
 # Make the l2 agent run in DVR mode. (boolean value)
 #enable_distributed_routing = false
+enable_distributed_routing = False
 
 # Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If
 # value is set to 0, rpc timeout won't be changed (integer value)
@@ -162,6 +157,7 @@
 # Reset flow table on start. Setting this to True will cause brief traffic
 # interruption. (boolean value)
 #drop_flows_on_start = false
+drop_flows_on_start = False
 
 # Set or un-set the tunnel header checksum  on outgoing IP packet carrying
 # GRE/VXLAN tunnel. (boolean value)
@@ -173,8 +169,9 @@
 #agent_type = Open vSwitch agent
 
 # Extensions list to use (list value)
-#extensions =
-
+
+
+extensions=
 
 [ovs]
 
@@ -188,9 +185,11 @@
 # VIFs are attached to this bridge and then 'patched' according to their
 # network connectivity. (string value)
 #integration_bridge = br-int
+integration_bridge = br-int
 
 # Tunnel bridge to use. (string value)
 #tunnel_bridge = br-tun
+tunnel_bridge = br-tun
 
 # Peer patch port in integration bridge for tunnel bridge. (string value)
 #int_peer_patch_port = patch-tun
@@ -213,17 +212,16 @@
 # have mappings to appropriate bridges on each agent. Note: If you remove a
 # bridge from this mapping, make sure to disconnect it from the integration
 # bridge as it won't be managed by the agent anymore. (list value)
-#bridge_mappings =
+
+bridge_mappings = physnet1:br-floating,physnet2:br-prv
 
 # Use veths instead of patch ports to interconnect the integration bridge to
 # physical networks. Support kernel without Open vSwitch patch port support so
 # long as it is set to True. (boolean value)
 #use_veth_interconnection = false
 
-# DEPRECATED: OpenFlow interface to use. (string value)
+# OpenFlow interface to use. (string value)
 # Allowed values: ovs-ofctl, native
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
 #of_interface = native
 
 # OVS datapath to use. 'system' is the default value and corresponds to the
@@ -231,9 +229,11 @@
 # (string value)
 # Allowed values: system, netdev
 #datapath_type = system
+datapath_type = netdev
 
 # OVS vhost-user socket directory. (string value)
 #vhostuser_socket_dir = /var/run/openvswitch
+vhostuser_socket_dir = /run/openvswitch-vhost
 
 # Address to listen on for OpenFlow connections. Used only for 'native' driver.
 # (IP address value)
@@ -276,6 +276,8 @@
 # should be false when using no security groups or using the nova security
 # group API. (boolean value)
 #enable_security_group = true
+firewall_driver = openvswitch
+enable_security_group = True
 
 # Use ipset to speed-up the iptables based security groups. Enabling ipset
 # support requires that ipset is installed on L2 agent node. (boolean value)

2018-03-30 06:47:43,735 [salt.state       ][INFO    ][29559] Completed state [/etc/neutron/plugins/ml2/openvswitch_agent.ini] at time 06:47:43.735095 duration_in_ms=81.179
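The `openvswitch_agent.ini` diff above switches the agent to the DPDK datapath (`datapath_type = netdev`, vhost-user sockets under `/run/openvswitch-vhost`) and maps `physnet1`/`physnet2` to the floating and private bridges. A minimal parsing sketch with values copied from the diff (the `[ovs]`/`[securitygroup]` section placement follows the standard layout of this file, which the truncated hunks do not show):

```python
# Sketch: parse the OVS-agent options applied by the diff above (values
# copied from the diff; a real check would read
# /etc/neutron/plugins/ml2/openvswitch_agent.ini directly).
import configparser

RENDERED = """
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
bridge_mappings = physnet1:br-floating,physnet2:br-prv
datapath_type = netdev
vhostuser_socket_dir = /run/openvswitch-vhost

[securitygroup]
firewall_driver = openvswitch
enable_security_group = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(RENDERED)

# bridge_mappings is a comma-separated list of physnet:bridge pairs;
# each named bridge must already exist and be connected to br-int.
mappings = dict(
    pair.split(":", 1) for pair in cfg["ovs"]["bridge_mappings"].split(",")
)
print(mappings["physnet1"])         # br-floating
print(cfg["ovs"]["datapath_type"])  # netdev
print(cfg["securitygroup"].getboolean("enable_security_group"))  # True
```

With `datapath_type = netdev`, VM ports are attached as vhost-user sockets in `vhostuser_socket_dir` instead of kernel tap devices, which is why the diff also points that directory at the DPDK-enabled vswitch's socket path.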
2018-03-30 06:47:43,735 [salt.state       ][INFO    ][29559] Running state [/etc/default/neutron-metadata-agent] at time 06:47:43.735381
2018-03-30 06:47:43,735 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/default/neutron-metadata-agent
2018-03-30 06:47:43,754 [salt.fileclient  ][INFO    ][29559] Fetching file from saltenv 'base', ** done ** 'neutron/files/default'
2018-03-30 06:47:43,756 [salt.state       ][INFO    ][29559] File changed:
New file
2018-03-30 06:47:43,756 [salt.state       ][INFO    ][29559] Completed state [/etc/default/neutron-metadata-agent] at time 06:47:43.756571 duration_in_ms=21.19
2018-03-30 06:47:43,756 [salt.state       ][INFO    ][29559] Running state [/etc/default/neutron-dhcp-agent] at time 06:47:43.756821
2018-03-30 06:47:43,756 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/default/neutron-dhcp-agent
2018-03-30 06:47:43,769 [salt.state       ][INFO    ][29559] File changed:
New file
2018-03-30 06:47:43,769 [salt.state       ][INFO    ][29559] Completed state [/etc/default/neutron-dhcp-agent] at time 06:47:43.769360 duration_in_ms=12.539
2018-03-30 06:47:43,769 [salt.state       ][INFO    ][29559] Running state [/etc/default/neutron-openvswitch-agent] at time 06:47:43.769607
2018-03-30 06:47:43,769 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/default/neutron-openvswitch-agent
2018-03-30 06:47:43,782 [salt.state       ][INFO    ][29559] File changed:
New file
2018-03-30 06:47:43,782 [salt.state       ][INFO    ][29559] Completed state [/etc/default/neutron-openvswitch-agent] at time 06:47:43.782400 duration_in_ms=12.793
2018-03-30 06:47:43,782 [salt.state       ][INFO    ][29559] Running state [/etc/default/neutron-l3-agent] at time 06:47:43.782693
2018-03-30 06:47:43,782 [salt.state       ][INFO    ][29559] Executing state file.managed for /etc/default/neutron-l3-agent
2018-03-30 06:47:43,794 [salt.state       ][INFO    ][29559] File changed:
New file
2018-03-30 06:47:43,795 [salt.state       ][INFO    ][29559] Completed state [/etc/default/neutron-l3-agent] at time 06:47:43.795097 duration_in_ms=12.405
2018-03-30 06:47:43,943 [salt.state       ][INFO    ][29559] Running state [neutron-metadata-agent] at time 06:47:43.943756
2018-03-30 06:47:43,944 [salt.state       ][INFO    ][29559] Executing state service.running for neutron-metadata-agent
2018-03-30 06:47:43,945 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'status', 'neutron-metadata-agent.service', '-n', '0'] in directory '/root'
2018-03-30 06:47:43,965 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-active', 'neutron-metadata-agent.service'] in directory '/root'
2018-03-30 06:47:43,979 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-enabled', 'neutron-metadata-agent.service'] in directory '/root'
2018-03-30 06:47:43,988 [salt.state       ][INFO    ][29559] The service neutron-metadata-agent is already running
2018-03-30 06:47:43,989 [salt.state       ][INFO    ][29559] Completed state [neutron-metadata-agent] at time 06:47:43.989205 duration_in_ms=45.449
2018-03-30 06:47:43,989 [salt.state       ][INFO    ][29559] Running state [neutron-metadata-agent] at time 06:47:43.989390
2018-03-30 06:47:43,989 [salt.state       ][INFO    ][29559] Executing state service.mod_watch for neutron-metadata-agent
2018-03-30 06:47:43,990 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-active', 'neutron-metadata-agent.service'] in directory '/root'
2018-03-30 06:47:44,002 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-enabled', 'neutron-metadata-agent.service'] in directory '/root'
2018-03-30 06:47:44,015 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-metadata-agent.service'] in directory '/root'
2018-03-30 06:47:45,137 [salt.state       ][INFO    ][29559] {'neutron-metadata-agent': True}
2018-03-30 06:47:45,137 [salt.state       ][INFO    ][29559] Completed state [neutron-metadata-agent] at time 06:47:45.137536 duration_in_ms=1148.145
2018-03-30 06:47:45,139 [salt.state       ][INFO    ][29559] Running state [neutron-dhcp-agent] at time 06:47:45.138953
2018-03-30 06:47:45,139 [salt.state       ][INFO    ][29559] Executing state service.running for neutron-dhcp-agent
2018-03-30 06:47:45,140 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'status', 'neutron-dhcp-agent.service', '-n', '0'] in directory '/root'
2018-03-30 06:47:45,161 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-active', 'neutron-dhcp-agent.service'] in directory '/root'
2018-03-30 06:47:45,177 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-enabled', 'neutron-dhcp-agent.service'] in directory '/root'
2018-03-30 06:47:45,194 [salt.state       ][INFO    ][29559] The service neutron-dhcp-agent is already running
2018-03-30 06:47:45,194 [salt.state       ][INFO    ][29559] Completed state [neutron-dhcp-agent] at time 06:47:45.194735 duration_in_ms=55.781
2018-03-30 06:47:45,195 [salt.state       ][INFO    ][29559] Running state [neutron-dhcp-agent] at time 06:47:45.195103
2018-03-30 06:47:45,195 [salt.state       ][INFO    ][29559] Executing state service.mod_watch for neutron-dhcp-agent
2018-03-30 06:47:45,196 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-active', 'neutron-dhcp-agent.service'] in directory '/root'
2018-03-30 06:47:45,211 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-enabled', 'neutron-dhcp-agent.service'] in directory '/root'
2018-03-30 06:47:45,225 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-dhcp-agent.service'] in directory '/root'
2018-03-30 06:47:45,384 [salt.state       ][INFO    ][29559] {'neutron-dhcp-agent': True}
2018-03-30 06:47:45,385 [salt.state       ][INFO    ][29559] Completed state [neutron-dhcp-agent] at time 06:47:45.385592 duration_in_ms=190.487
2018-03-30 06:47:45,387 [salt.state       ][INFO    ][29559] Running state [neutron-openvswitch-agent] at time 06:47:45.387879
2018-03-30 06:47:45,388 [salt.state       ][INFO    ][29559] Executing state service.running for neutron-openvswitch-agent
2018-03-30 06:47:45,389 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'status', 'neutron-openvswitch-agent.service', '-n', '0'] in directory '/root'
2018-03-30 06:47:45,406 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-active', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-03-30 06:47:45,423 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-enabled', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-03-30 06:47:45,437 [salt.state       ][INFO    ][29559] The service neutron-openvswitch-agent is already running
2018-03-30 06:47:45,437 [salt.state       ][INFO    ][29559] Completed state [neutron-openvswitch-agent] at time 06:47:45.437578 duration_in_ms=49.699
2018-03-30 06:47:45,437 [salt.state       ][INFO    ][29559] Running state [neutron-openvswitch-agent] at time 06:47:45.437886
2018-03-30 06:47:45,438 [salt.state       ][INFO    ][29559] Executing state service.mod_watch for neutron-openvswitch-agent
2018-03-30 06:47:45,439 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-active', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-03-30 06:47:45,453 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-enabled', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-03-30 06:47:45,468 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-openvswitch-agent.service'] in directory '/root'
2018-03-30 06:47:45,503 [salt.state       ][INFO    ][29559] {'neutron-openvswitch-agent': True}
2018-03-30 06:47:45,503 [salt.state       ][INFO    ][29559] Completed state [neutron-openvswitch-agent] at time 06:47:45.503664 duration_in_ms=65.777
2018-03-30 06:47:45,505 [salt.state       ][INFO    ][29559] Running state [neutron-l3-agent] at time 06:47:45.504998
2018-03-30 06:47:45,505 [salt.state       ][INFO    ][29559] Executing state service.running for neutron-l3-agent
2018-03-30 06:47:45,505 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'status', 'neutron-l3-agent.service', '-n', '0'] in directory '/root'
2018-03-30 06:47:45,521 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-active', 'neutron-l3-agent.service'] in directory '/root'
2018-03-30 06:47:45,533 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-enabled', 'neutron-l3-agent.service'] in directory '/root'
2018-03-30 06:47:45,545 [salt.state       ][INFO    ][29559] The service neutron-l3-agent is already running
2018-03-30 06:47:45,545 [salt.state       ][INFO    ][29559] Completed state [neutron-l3-agent] at time 06:47:45.545756 duration_in_ms=40.757
2018-03-30 06:47:45,546 [salt.state       ][INFO    ][29559] Running state [neutron-l3-agent] at time 06:47:45.546130
2018-03-30 06:47:45,546 [salt.state       ][INFO    ][29559] Executing state service.mod_watch for neutron-l3-agent
2018-03-30 06:47:45,547 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-active', 'neutron-l3-agent.service'] in directory '/root'
2018-03-30 06:47:45,561 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemctl', 'is-enabled', 'neutron-l3-agent.service'] in directory '/root'
2018-03-30 06:47:45,577 [salt.loaded.int.module.cmdmod][INFO    ][29559] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-l3-agent.service'] in directory '/root'
2018-03-30 06:47:45,608 [salt.state       ][INFO    ][29559] {'neutron-l3-agent': True}
2018-03-30 06:47:45,608 [salt.state       ][INFO    ][29559] Completed state [neutron-l3-agent] at time 06:47:45.608495 duration_in_ms=62.364
2018-03-30 06:47:45,611 [salt.minion      ][INFO    ][29559] Returning information for job: 20180330064638451405
2018-03-30 06:47:46,331 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command state.sls with jid 20180330064745919678
2018-03-30 06:47:46,346 [salt.minion      ][INFO    ][1516] Starting a new job with PID 1516
2018-03-30 06:47:48,808 [salt.state       ][INFO    ][1516] Loading fresh modules for state activity
2018-03-30 06:47:48,880 [salt.fileclient  ][INFO    ][1516] Fetching file from saltenv 'base', ** done ** 'nova/init.sls'
2018-03-30 06:47:48,907 [salt.fileclient  ][INFO    ][1516] Fetching file from saltenv 'base', ** done ** 'nova/compute.sls'
2018-03-30 06:47:50,044 [salt.state       ][INFO    ][1516] Running state [nova] at time 06:47:50.044430
2018-03-30 06:47:50,044 [salt.state       ][INFO    ][1516] Executing state group.present for nova
2018-03-30 06:47:50,097 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['groupadd', '-g 303', '-r', 'nova'] in directory '/root'
2018-03-30 06:47:50,206 [salt.state       ][INFO    ][1516] {'passwd': 'x', 'gid': 303, 'name': 'nova', 'members': []}
2018-03-30 06:47:50,207 [salt.state       ][INFO    ][1516] Completed state [nova] at time 06:47:50.207323 duration_in_ms=162.892
2018-03-30 06:47:50,207 [salt.state       ][INFO    ][1516] Running state [libvirtd] at time 06:47:50.207856
2018-03-30 06:47:50,208 [salt.state       ][INFO    ][1516] Executing state group.present for libvirtd
2018-03-30 06:47:50,209 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['groupadd', '-r', 'libvirtd'] in directory '/root'
2018-03-30 06:47:50,331 [salt.state       ][INFO    ][1516] {'passwd': 'x', 'gid': 999, 'name': 'libvirtd', 'members': []}
2018-03-30 06:47:50,332 [salt.state       ][INFO    ][1516] Completed state [libvirtd] at time 06:47:50.332546 duration_in_ms=124.688
2018-03-30 06:47:50,333 [salt.state       ][INFO    ][1516] Running state [nova] at time 06:47:50.333599
2018-03-30 06:47:50,334 [salt.state       ][INFO    ][1516] Executing state user.present for nova
2018-03-30 06:47:50,335 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['useradd', '-s', '/bin/bash', '-u', '303', '-g', '303', '-m', '-d', '/var/lib/nova', '-r', 'nova'] in directory '/root'
2018-03-30 06:47:50,467 [salt.state       ][INFO    ][1516] {'shell': '/bin/bash', 'workphone': '', 'uid': 303, 'passwd': 'x', 'roomnumber': '', 'groups': ['nova'], 'home': '/var/lib/nova', 'name': 'nova', 'gid': 303, 'fullname': '', 'homephone': ''}
2018-03-30 06:47:50,468 [salt.state       ][INFO    ][1516] Completed state [nova] at time 06:47:50.468012 duration_in_ms=134.413
2018-03-30 06:47:50,468 [salt.state       ][INFO    ][1516] Running state [nova-common] at time 06:47:50.468796
2018-03-30 06:47:50,469 [salt.state       ][INFO    ][1516] Executing state pkg.installed for nova-common
2018-03-30 06:47:50,470 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:47:50,913 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['apt-cache', '-q', 'policy', 'nova-common'] in directory '/root'
2018-03-30 06:47:51,003 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 06:47:55,821 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:47:55,863 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-common'] in directory '/root'
2018-03-30 06:47:56,435 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064756019382
2018-03-30 06:47:56,454 [salt.minion      ][INFO    ][2048] Starting a new job with PID 2048
2018-03-30 06:47:56,477 [salt.minion      ][INFO    ][2048] Returning information for job: 20180330064756019382
2018-03-30 06:48:06,636 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064806224620
2018-03-30 06:48:06,653 [salt.minion      ][INFO    ][2805] Starting a new job with PID 2805
2018-03-30 06:48:06,677 [salt.minion      ][INFO    ][2805] Returning information for job: 20180330064806224620
2018-03-30 06:48:11,720 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:48:11,788 [salt.state       ][INFO    ][1516] Made the following changes:
'python-os-traits' changed from 'absent' to '0.3.3-0ubuntu1~cloud0'
'python-pypowervm' changed from 'absent' to '1.1.6-0ubuntu2~cloud0'
'python-passlib' changed from 'absent' to '1.7.1-1~cloud0'
'python-os-vif' changed from 'absent' to '1.7.0-0ubuntu1~cloud0'
'libxmlsec1' changed from 'absent' to '1.2.20-2ubuntu4'
'nova-common' changed from 'absent' to '2:16.0.4-0ubuntu1~cloud0'
'python-cursive' changed from 'absent' to '0.1.1-1ubuntu1~cloud0'
'xmlsec1' changed from 'absent' to '1.2.20-2ubuntu4'
'python-microversion-parse' changed from 'absent' to '0.1.4-2~cloud0'
'python2.7-nova' changed from 'absent' to '1'
'python-keystone' changed from 'absent' to '2:12.0.0-0ubuntu1~cloud0'
'python-pyasn1-modules' changed from 'absent' to '0.0.7-0.1'
'python-scrypt' changed from 'absent' to '0.8.0-0ubuntu2~cloud0'
'python2.7-keystone' changed from 'absent' to '1'
'python-defusedxml' changed from 'absent' to '0.4.1-2ubuntu0.16.04.1'
'python-pysaml2' changed from 'absent' to '3.0.0-3ubuntu1.16.04.3'
'python-nova' changed from 'absent' to '2:16.0.4-0ubuntu1~cloud0'
'libxmlsec1-openssl' changed from 'absent' to '1.2.20-2ubuntu4'
'python-bcrypt' changed from 'absent' to '3.1.3-1~cloud0'
'python2.7-cinderclient' changed from 'absent' to '1'
'python-cinderclient' changed from 'absent' to '1:3.1.0-0ubuntu1~cloud0'

2018-03-30 06:48:11,816 [salt.state       ][INFO    ][1516] Loading fresh modules for state activity
2018-03-30 06:48:11,968 [salt.state       ][INFO    ][1516] Completed state [nova-common] at time 06:48:11.968805 duration_in_ms=21500.009
2018-03-30 06:48:11,975 [salt.state       ][INFO    ][1516] Running state [nova-compute-kvm] at time 06:48:11.975621
2018-03-30 06:48:11,975 [salt.state       ][INFO    ][1516] Executing state pkg.installed for nova-compute-kvm
2018-03-30 06:48:12,283 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:48:12,302 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-compute-kvm'] in directory '/root'
2018-03-30 06:48:16,838 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064816426680
2018-03-30 06:48:16,856 [salt.minion      ][INFO    ][3071] Starting a new job with PID 3071
2018-03-30 06:48:16,882 [salt.minion      ][INFO    ][3071] Returning information for job: 20180330064816426680
2018-03-30 06:48:27,054 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064826642009
2018-03-30 06:48:27,070 [salt.minion      ][INFO    ][3392] Starting a new job with PID 3392
2018-03-30 06:48:27,098 [salt.minion      ][INFO    ][3392] Returning information for job: 20180330064826642009
2018-03-30 06:48:37,267 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064836855829
2018-03-30 06:48:37,285 [salt.minion      ][INFO    ][4398] Starting a new job with PID 4398
2018-03-30 06:48:37,312 [salt.minion      ][INFO    ][4398] Returning information for job: 20180330064836855829
2018-03-30 06:48:47,484 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064847072605
2018-03-30 06:48:47,499 [salt.minion      ][INFO    ][5485] Starting a new job with PID 5485
2018-03-30 06:48:47,525 [salt.minion      ][INFO    ][5485] Returning information for job: 20180330064847072605
2018-03-30 06:48:47,526 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:48:47,587 [salt.state       ][INFO    ][1516] Made the following changes:
'libbluetooth3' changed from 'absent' to '5.37-0ubuntu5.1'
'libpixman-1-0' changed from 'absent' to '0.33.6-1'
'libavahi-common3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2'
'seabios' changed from 'absent' to '1.10.2-1ubuntu1~cloud0'
'ebtables' changed from 'absent' to '2.0.10.4-3.4ubuntu2'
'libnl-route-3-200' changed from 'absent' to '3.2.27-1ubuntu0.16.04.1'
'qemu-kvm-spice' changed from 'absent' to '1'
'libyajl2' changed from 'absent' to '2.1.0-2'
'libasound2' changed from 'absent' to '1.1.0-0ubuntu1'
'libxen-4.6' changed from 'absent' to '4.6.5-0ubuntu1.4'
'libasyncns0' changed from 'absent' to '0.8-5build1'
'libcacard0' changed from 'absent' to '1:2.5.0-2'
'libsdl1.2debian' changed from 'absent' to '1.2.15+dfsg1-3'
'libpciaccess0' changed from 'absent' to '0.13.4-1'
'libvirt-daemon' changed from 'absent' to '3.6.0-1ubuntu6.2~cloud0'
'libfdt1' changed from 'absent' to '1.4.2-1~cloud0'
'libvorbisenc2' changed from 'absent' to '1.3.5-3ubuntu0.2'
'msr-tools' changed from 'absent' to '1.3-2'
'augeas-lenses' changed from 'absent' to '1.4.0-0ubuntu1.1'
'libvirt-daemon-system' changed from 'absent' to '3.6.0-1ubuntu6.2~cloud0'
'nova-compute-libvirt' changed from 'absent' to '2:16.0.4-0ubuntu1~cloud0'
'libvirt-clients' changed from 'absent' to '3.6.0-1ubuntu6.2~cloud0'
'nova-compute-kvm' changed from 'absent' to '2:16.0.4-0ubuntu1~cloud0'
'ipxe-qemu' changed from 'absent' to '1.0.0+git-20150424.a25a16d-1ubuntu1.2'
'libogg0' changed from 'absent' to '1.3.2-1'
'libsndfile1' changed from 'absent' to '1.0.25-10ubuntu0.16.04.1'
'libasound2-data' changed from 'absent' to '1.1.0-0ubuntu1'
'kpartx' changed from 'absent' to '0.5.0+git1.656f8865-5ubuntu2.5'
'libxml2-utils' changed from 'absent' to '2.9.3+dfsg1-1ubuntu0.5'
'mkisofs' changed from 'absent' to '1'
'libvorbis0a' changed from 'absent' to '1.3.5-3ubuntu0.2'
'kvm' changed from 'absent' to '1'
'libspice-server1' changed from 'absent' to '0.12.6-4ubuntu0.3'
'qemu-system-x86' changed from 'absent' to '1:2.10+dfsg-0ubuntu3.5~cloud0'
'libaugeas0' changed from 'absent' to '1.4.0-0ubuntu1.1'
'libcaca0' changed from 'absent' to '0.99.beta19-2build2~gcc5.2'
'qemu-keymaps' changed from 'absent' to '1'
'qemu-kvm' changed from 'absent' to '1:2.10+dfsg-0ubuntu3.5~cloud0'
'libbrlapi0.6' changed from 'absent' to '5.3.1-2ubuntu2.1'
'libvirt0' changed from 'absent' to '3.6.0-1ubuntu6.2~cloud0'
'libpulse0' changed from 'absent' to '1:8.0-0ubuntu3.8'
'qemu-system-common' changed from 'absent' to '1:2.10+dfsg-0ubuntu3.5~cloud0'
'libavahi-client3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2'
'qemu-system-i386' changed from 'absent' to '1'
'nova-compute' changed from 'absent' to '2:16.0.4-0ubuntu1~cloud0'
'libnetcf1' changed from 'absent' to '1:0.2.8-1ubuntu1'
'libopus0' changed from 'absent' to '1.1.2-1ubuntu1'
'libavahi-common-data' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2'
'python-libvirt' changed from 'absent' to '3.5.0-1build1~cloud0'
'libusbredirparser1' changed from 'absent' to '0.7.1-1'
'genisoimage' changed from 'absent' to '9:1.1.11-3ubuntu1'
'nova-compute-hypervisor' changed from 'absent' to '1'
'qemu-system-x86-64' changed from 'absent' to '1'
'libvirt-bin' changed from 'absent' to '3.6.0-1ubuntu6.2~cloud0'
'libflac8' changed from 'absent' to '1.3.1-4'
'cpu-checker' changed from 'absent' to '0.7-0ubuntu7'

2018-03-30 06:48:47,609 [salt.state       ][INFO    ][1516] Loading fresh modules for state activity
2018-03-30 06:48:47,635 [salt.state       ][INFO    ][1516] Completed state [nova-compute-kvm] at time 06:48:47.635107 duration_in_ms=35659.484
2018-03-30 06:48:47,642 [salt.state       ][INFO    ][1516] Running state [python-novaclient] at time 06:48:47.642687
2018-03-30 06:48:47,642 [salt.state       ][INFO    ][1516] Executing state pkg.installed for python-novaclient
2018-03-30 06:48:47,980 [salt.state       ][INFO    ][1516] All specified packages are already installed
2018-03-30 06:48:47,980 [salt.state       ][INFO    ][1516] Completed state [python-novaclient] at time 06:48:47.980911 duration_in_ms=338.223
2018-03-30 06:48:47,981 [salt.state       ][INFO    ][1516] Running state [pm-utils] at time 06:48:47.981284
2018-03-30 06:48:47,981 [salt.state       ][INFO    ][1516] Executing state pkg.installed for pm-utils
2018-03-30 06:48:47,992 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:48:48,035 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'pm-utils'] in directory '/root'
2018-03-30 06:48:51,760 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:48:51,819 [salt.state       ][INFO    ][1516] Made the following changes:
'pm-utils' changed from 'absent' to '1.4.1-16'
'libx86-1' changed from 'absent' to '1.1+ds1-10'
'vbetool' changed from 'absent' to '1.1-3'

2018-03-30 06:48:51,843 [salt.state       ][INFO    ][1516] Loading fresh modules for state activity
2018-03-30 06:48:51,870 [salt.state       ][INFO    ][1516] Completed state [pm-utils] at time 06:48:51.870690 duration_in_ms=3889.405
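The "changed from 'absent' to <version>" summaries above are produced by comparing two package snapshots: one `dpkg-query --showformat '${Status} ${Package} ${Version} ${Architecture}\n' -W` run before the `apt-get install`, and one after. A minimal sketch of that comparison, assuming the field layout from the logged command (the sample data below is illustrative, not Salt's actual implementation):

```python
def parse_dpkg_query(output):
    """Parse lines of '${Status} ${Package} ${Version} ${Architecture}'.

    The status field expands to three words ('install ok installed'),
    so a fully installed package yields six whitespace-separated tokens.
    """
    installed = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 6 and parts[:3] == ["install", "ok", "installed"]:
            installed[parts[3]] = parts[4]  # package -> version
    return installed

def summarize_changes(before, after):
    """Report packages whose version differs between the two snapshots."""
    return {pkg: {"old": before.get(pkg, "absent"), "new": ver}
            for pkg, ver in after.items() if before.get(pkg) != ver}

# Illustrative snapshots around the pm-utils install seen in the log:
before = parse_dpkg_query("install ok installed sysfsutils 2.1.0 amd64")
after = parse_dpkg_query(
    "install ok installed sysfsutils 2.1.0 amd64\n"
    "install ok installed pm-utils 1.4.1-16 amd64"
)
print(summarize_changes(before, after))
```

A package present in neither snapshot, or unchanged between them, produces no entry, which is why only the newly installed packages appear in the "Made the following changes" blocks.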
2018-03-30 06:48:51,879 [salt.state       ][INFO    ][1516] Running state [sysfsutils] at time 06:48:51.879391
2018-03-30 06:48:51,879 [salt.state       ][INFO    ][1516] Executing state pkg.installed for sysfsutils
2018-03-30 06:48:52,166 [salt.state       ][INFO    ][1516] All specified packages are already installed
2018-03-30 06:48:52,166 [salt.state       ][INFO    ][1516] Completed state [sysfsutils] at time 06:48:52.166869 duration_in_ms=287.477
2018-03-30 06:48:52,167 [salt.state       ][INFO    ][1516] Running state [sg3-utils] at time 06:48:52.167172
2018-03-30 06:48:52,167 [salt.state       ][INFO    ][1516] Executing state pkg.installed for sg3-utils
2018-03-30 06:48:52,171 [salt.state       ][INFO    ][1516] All specified packages are already installed
2018-03-30 06:48:52,171 [salt.state       ][INFO    ][1516] Completed state [sg3-utils] at time 06:48:52.171676 duration_in_ms=4.505
2018-03-30 06:48:52,171 [salt.state       ][INFO    ][1516] Running state [libvirt-bin] at time 06:48:52.171920
2018-03-30 06:48:52,172 [salt.state       ][INFO    ][1516] Executing state pkg.installed for libvirt-bin
2018-03-30 06:48:52,175 [salt.state       ][INFO    ][1516] All specified packages are already installed
2018-03-30 06:48:52,175 [salt.state       ][INFO    ][1516] Completed state [libvirt-bin] at time 06:48:52.175829 duration_in_ms=3.909
2018-03-30 06:48:52,176 [salt.state       ][INFO    ][1516] Running state [python-memcache] at time 06:48:52.176068
2018-03-30 06:48:52,176 [salt.state       ][INFO    ][1516] Executing state pkg.installed for python-memcache
2018-03-30 06:48:52,179 [salt.state       ][INFO    ][1516] All specified packages are already installed
2018-03-30 06:48:52,180 [salt.state       ][INFO    ][1516] Completed state [python-memcache] at time 06:48:52.180034 duration_in_ms=3.965
2018-03-30 06:48:52,180 [salt.state       ][INFO    ][1516] Running state [qemu-kvm] at time 06:48:52.180281
2018-03-30 06:48:52,180 [salt.state       ][INFO    ][1516] Executing state pkg.installed for qemu-kvm
2018-03-30 06:48:52,184 [salt.state       ][INFO    ][1516] All specified packages are already installed
2018-03-30 06:48:52,184 [salt.state       ][INFO    ][1516] Completed state [qemu-kvm] at time 06:48:52.184156 duration_in_ms=3.874
2018-03-30 06:48:52,184 [salt.state       ][INFO    ][1516] Running state [python-guestfs] at time 06:48:52.184404
2018-03-30 06:48:52,184 [salt.state       ][INFO    ][1516] Executing state pkg.installed for python-guestfs
2018-03-30 06:48:52,197 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 06:48:52,238 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-guestfs'] in directory '/root'
2018-03-30 06:48:57,710 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064857297811
2018-03-30 06:48:57,727 [salt.minion      ][INFO    ][6195] Starting a new job with PID 6195
2018-03-30 06:48:57,755 [salt.minion      ][INFO    ][6195] Returning information for job: 20180330064857297811
2018-03-30 06:49:07,932 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064907520378
2018-03-30 06:49:07,949 [salt.minion      ][INFO    ][9562] Starting a new job with PID 9562
2018-03-30 06:49:07,964 [salt.minion      ][INFO    ][9562] Returning information for job: 20180330064907520378
2018-03-30 06:49:18,148 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330064917734555
2018-03-30 06:49:18,164 [salt.minion      ][INFO    ][15913] Starting a new job with PID 15913
2018-03-30 06:49:18,187 [salt.minion      ][INFO    ][15913] Returning information for job: 20180330064917734555
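The repeated `saltutil.find_job` entries are the master polling for the long-running pkg install roughly every ten seconds. The jid values it polls for are themselves timestamps: with Salt's default date-based jid format, `20180330064917734555` is YYYYMMDDHHMMSS plus microseconds. A small helper to decode them (assumes the default jid style; custom jid generators would not parse this way):

```python
from datetime import datetime

def jid_to_datetime(jid):
    """Decode a default-format Salt jid (YYYYMMDDHHMMSSffffff)."""
    return datetime.strptime(jid[:20], "%Y%m%d%H%M%S%f")

print(jid_to_datetime("20180330064917734555"))
# 2018-03-30 06:49:17.734555
```

Decoding the three jids above shows the ~10 s polling cadence directly (06:48:57, 06:49:07, 06:49:17), matching the log timestamps to within a fraction of a second.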
2018-03-30 06:49:23,565 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 06:49:23,627 [salt.state       ][INFO    ][1516] Made the following changes:
'hfsplus' changed from 'absent' to '1.0.4-13'
'scrub' changed from 'absent' to '2.6.1-1'
'syslinux-common' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libguestfs0' changed from 'absent' to '1:1.32.2-4ubuntu2'
'libguestfs-hfsplus' changed from 'absent' to '1:1.32.2-4ubuntu2'
'lzop' changed from 'absent' to '1.03-3.2'
'libhfsp0' changed from 'absent' to '1.0.4-13'
'mtools' changed from 'absent' to '4.0.18-2ubuntu0.16.04'
'reiserfsprogs' changed from 'absent' to '1:3.6.24-3.1'
'lsscsi' changed from 'absent' to '0.27-3'
'python-guestfs' changed from 'absent' to '1:1.32.2-4ubuntu2'
'syslinux' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libguestfs-xfs' changed from 'absent' to '1:1.32.2-4ubuntu2'
'libguestfs-reiserfs' changed from 'absent' to '1:1.32.2-4ubuntu2'
'python-libguestfs' changed from 'absent' to '1'
'supermin' changed from 'absent' to '5.1.14-2ubuntu1.1'
'extlinux' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libhivex0' changed from 'absent' to '1.3.13-1build3'

2018-03-30 06:49:23,647 [salt.state       ][INFO    ][1516] Loading fresh modules for state activity
2018-03-30 06:49:23,671 [salt.state       ][INFO    ][1516] Completed state [python-guestfs] at time 06:49:23.671366 duration_in_ms=31486.96
2018-03-30 06:49:23,679 [salt.state       ][INFO    ][1516] Running state [gettext-base] at time 06:49:23.679257
2018-03-30 06:49:23,679 [salt.state       ][INFO    ][1516] Executing state pkg.installed for gettext-base
2018-03-30 06:49:24,036 [salt.state       ][INFO    ][1516] All specified packages are already installed
2018-03-30 06:49:24,036 [salt.state       ][INFO    ][1516] Completed state [gettext-base] at time 06:49:24.036390 duration_in_ms=357.133
2018-03-30 06:49:24,038 [salt.state       ][INFO    ][1516] Running state [/var/log/nova] at time 06:49:24.038113
2018-03-30 06:49:24,038 [salt.state       ][INFO    ][1516] Executing state file.directory for /var/log/nova
2018-03-30 06:49:24,039 [salt.state       ][INFO    ][1516] {'group': 'nova'}
2018-03-30 06:49:24,039 [salt.state       ][INFO    ][1516] Completed state [/var/log/nova] at time 06:49:24.039270 duration_in_ms=1.157
2018-03-30 06:49:24,039 [salt.state       ][INFO    ][1516] Running state [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com] at time 06:49:24.039734
2018-03-30 06:49:24,039 [salt.state       ][INFO    ][1516] Executing state ssh_auth.present for ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com
2018-03-30 06:49:24,040 [salt.loaded.int.module.ssh][WARNING ][1516] Public Key hashing currently defaults to "md5". This will change to "sha256" in the Nitrogen release.
2018-03-30 06:49:24,042 [salt.state       ][INFO    ][1516] {'AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ': 'New'}
2018-03-30 06:49:24,042 [salt.state       ][INFO    ][1516] Completed state [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com] at time 06:49:24.042225 duration_in_ms=2.49
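The WARNING from `salt.loaded.int.module.ssh` above concerns how the public key is fingerprinted for comparison: the old default is a colon-separated MD5 hex digest, the newer style a base64-encoded SHA-256 digest. A sketch of the two formats over a key blob (the base64 fragment below is illustrative, not the full key logged above):

```python
import base64
import hashlib

key_b64 = "AAAAB3NzaC1yc2E="  # illustrative fragment of a key body
blob = base64.b64decode(key_b64)

# Legacy style: 16 hex byte-pairs joined with ':'
md5_fp = ":".join(f"{b:02x}" for b in hashlib.md5(blob).digest())

# Newer style: base64(SHA-256 digest) with '=' padding stripped
sha256_fp = base64.b64encode(hashlib.sha256(blob).digest()).rstrip(b"=").decode()

print("MD5:", md5_fp)
print("SHA256:", sha256_fp)
```

Both fingerprints identify the same key material; the warning only signals that the default presentation changes in a later release, so anything parsing the MD5 form should not assume it is stable.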
2018-03-30 06:49:24,042 [salt.state       ][INFO    ][1516] Running state [nova] at time 06:49:24.042769
2018-03-30 06:49:24,043 [salt.state       ][INFO    ][1516] Executing state user.present for nova
2018-03-30 06:49:24,044 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['usermod', '-G', 'libvirtd', 'nova'] in directory '/root'
2018-03-30 06:49:24,152 [salt.state       ][INFO    ][1516] {'groups': ['libvirtd', 'nova']}
2018-03-30 06:49:24,152 [salt.state       ][INFO    ][1516] Completed state [nova] at time 06:49:24.152909 duration_in_ms=110.14
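The `user.present` state above runs `usermod -G libvirtd nova` and reports `{'groups': ['libvirtd', 'nova']}`. Note that `usermod -G` replaces the supplementary group list wholesale (unlike `usermod -aG`, which appends), so the state must pass the complete desired list, and only when membership actually differs. A simplified sketch of that decision, not Salt's actual module code:

```python
def groups_to_set(current, desired):
    """Return the value for usermod's -G flag, or None if no change is needed.

    usermod -G overwrites all supplementary groups, so the full desired
    list is passed whenever the current set differs from the desired set.
    """
    if set(current) == set(desired):
        return None  # idempotent: nothing to do
    return ",".join(sorted(desired))

# Before the run the nova user lacked libvirtd membership:
print(groups_to_set(["nova"], ["libvirtd", "nova"]))
```

On a second highstate run the sets match, the function returns None, and no `usermod` is executed, which is what makes the state idempotent.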
2018-03-30 06:49:24,153 [salt.state       ][INFO    ][1516] Running state [/var/lib/nova/.ssh/id_rsa] at time 06:49:24.153596
2018-03-30 06:49:24,153 [salt.state       ][INFO    ][1516] Executing state file.managed for /var/lib/nova/.ssh/id_rsa
2018-03-30 06:49:24,160 [salt.state       ][INFO    ][1516] File changed:
New file
2018-03-30 06:49:24,160 [salt.state       ][INFO    ][1516] Completed state [/var/lib/nova/.ssh/id_rsa] at time 06:49:24.160827 duration_in_ms=7.229
2018-03-30 06:49:24,161 [salt.state       ][INFO    ][1516] Running state [/var/lib/nova/.ssh/config] at time 06:49:24.161377
2018-03-30 06:49:24,161 [salt.state       ][INFO    ][1516] Executing state file.managed for /var/lib/nova/.ssh/config
2018-03-30 06:49:24,166 [salt.state       ][INFO    ][1516] File changed:
New file
2018-03-30 06:49:24,166 [salt.state       ][INFO    ][1516] Completed state [/var/lib/nova/.ssh/config] at time 06:49:24.166196 duration_in_ms=4.819
2018-03-30 06:49:24,166 [salt.state       ][INFO    ][1516] Running state [/etc/nova/nova.conf] at time 06:49:24.166724
2018-03-30 06:49:24,167 [salt.state       ][INFO    ][1516] Executing state file.managed for /etc/nova/nova.conf
2018-03-30 06:49:24,217 [salt.fileclient  ][INFO    ][1516] Fetching file from saltenv 'base', ** done ** 'nova/files/pike/nova-compute.conf.Debian'
2018-03-30 06:49:24,541 [salt.state       ][INFO    ][1516] File changed:
--- 
+++ 
@@ -1,11 +1,14 @@
+
 [DEFAULT]
-log_dir = /var/log/nova
-lock_path = /var/lock/nova
-state_path = /var/lib/nova
 
 #
 # From nova.conf
 #
+compute_manager=nova.compute.manager.ComputeManager
+network_device_mtu=65000
+use_neutron = True
+security_group_api=neutron
+image_service=nova.image.glance.GlanceImageService
 
 # DEPRECATED:
 # When returning instance metadata, this is the class that is used
@@ -18,7 +21,7 @@
 #  (string value)
 # This option is deprecated for removal since 13.0.0.
 # Its value may be silently ignored in the future.
-#vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData
+#vendordata_driver=nova.api.metadata.vendordata_json.JsonFileVendorData
 
 # DEPRECATED:
 # This option is used to enable or disable quota checking for tenant networks.
@@ -32,7 +35,7 @@
 # Reason:
 # CRUD operations on tenant networks are only available when using nova-network
 # and nova-network is itself deprecated.
-#enable_network_quota = false
+#enable_network_quota=false
 
 # DEPRECATED:
 # This option controls the number of private networks that can be created per
@@ -48,52 +51,40 @@
 # Reason:
 # CRUD operations on tenant networks are only available when using nova-network
 # and nova-network is itself deprecated.
-#quota_networks = 3
-
-#
-# Availability zone for internal services.
-#
-# This option determines the availability zone for the various internal nova
-# services, such as 'nova-scheduler', 'nova-conductor', etc.
-#
-# Possible values:
-#
-# * Any string representing an existing availability zone name.
-#  (string value)
-#internal_service_availability_zone = internal
-
-#
-# Default availability zone for compute services.
-#
-# This option determines the default availability zone for 'nova-compute'
-# services, which will be used if the service(s) do not belong to aggregates
-# with
-# availability zone metadata.
-#
-# Possible values:
-#
-# * Any string representing an existing availability zone name.
-#  (string value)
-#default_availability_zone = nova
-
-#
-# Default availability zone for instances.
-#
-# This option determines the default availability zone for instances, which will
-# be used when a user does not specify one when creating an instance. The
-# instance(s) will be bound to this availability zone for their lifetime.
-#
-# Possible values:
-#
-# * Any string representing an existing availability zone name.
-# * None, which means that the instance can move from one availability zone to
-#   another during its lifetime if it is moved from one compute node to another.
-#  (string value)
-#default_schedule_zone = <None>
+#quota_networks=3
+
+#
+# This option specifies the name of the availability zone for the
+# internal services. Services like nova-scheduler, nova-network,
+# nova-conductor are internal services. These services will appear in
+# their own internal availability_zone.
+#
+# Possible values:
+#
+# * Any string representing an availability zone name
+# * 'internal' is the default value
+#
+#  (string value)
+#internal_service_availability_zone=internal
+
+#
+# Default compute node availability_zone.
+#
+# This option determines the availability zone to be used when it is not
+# specified in the VM creation request. If this option is not set,
+# the default availability zone 'nova' is used.
+#
+# Possible values:
+#
+# * Any string representing an availability zone name
+# * 'nova' is the default value
+#
+#  (string value)
+#default_availability_zone=nova
 
 # Length of generated instance admin passwords. (integer value)
 # Minimum value: 0
-#password_length = 12
+#password_length=12
 
 #
 # Time period to generate instance usages for. It is possible to define optional
@@ -106,14 +97,16 @@
 # *  period with offset, example: ``month@15`` will result in monthly audits
 #    starting on 15th day of month.
 #  (string value)
-#instance_usage_audit_period = month
-
+#instance_usage_audit_period=month
+
+instance_usage_audit = True
+instance_usage_audit_period = hour
 #
 # Start and use a daemon that can run the commands that need to be run with
 # root privileges. This option is usually enabled on nodes that run nova compute
 # processes.
 #  (boolean value)
-#use_rootwrap_daemon = false
+#use_rootwrap_daemon=false
 
 #
 # Path to the rootwrap configuration file.
@@ -123,10 +116,11 @@
 # The configuration file used here must match the one defined in the sudoers
 # entry.
 #  (string value)
-#rootwrap_config = /etc/nova/rootwrap.conf
+#rootwrap_config=/etc/nova/rootwrap.conf
+rootwrap_config=/etc/nova/rootwrap.conf
 
 # Explicitly specify the temporary working directory. (string value)
-#tempdir = <None>
+#tempdir=<None>
 
 #
 # Determine if monkey patching should be applied.
@@ -136,7 +130,7 @@
 # * ``monkey_patch_modules``: This must have values set for this option to
 #   have any effect
 #  (boolean value)
-#monkey_patch = false
+#monkey_patch=false
 
 #
 # List of modules/decorators to monkey patch.
@@ -155,7 +149,7 @@
 # * ``monkey_patch``: This must be set to ``True`` for this option to
 #   have any effect
 #  (list value)
-#monkey_patch_modules = nova.compute.api:nova.notifications.notify_decorator
+#monkey_patch_modules=nova.compute.api:nova.notifications.notify_decorator
 
 #
 # Defines which driver to use for controlling virtualization.
@@ -169,16 +163,31 @@
 # * ``vmwareapi.VMwareVCDriver``
 # * ``hyperv.HyperVDriver``
 #  (string value)
-#compute_driver = <None>
+#compute_driver=<None>
+compute_driver = libvirt.LibvirtDriver
 
 #
 # Allow destination machine to match source for resize. Useful when
 # testing in single-host environments. By default it is not allowed
 # to resize to the same host. Setting this option to true will add
-# the same host to the destination options. Also set to true
-# if you allow the ServerGroupAffinityFilter and need to resize.
+# the same host to the destination options.
 #  (boolean value)
-#allow_resize_to_same_host = false
+#allow_resize_to_same_host=false
+allow_resize_to_same_host=true
+
+#
+# Availability zone to use when user doesn't specify one.
+#
+# This option is used by the scheduler to determine which availability
+# zone to place a new VM instance into if the user did not specify one
+# at the time of VM boot request.
+#
+# Possible values:
+#
+# * Any string representing an availability zone name
+# * Default value is None.
+#  (string value)
+#default_schedule_zone=<None>
 
 #
 # Image properties that should not be inherited from the instance
@@ -195,7 +204,7 @@
 #   doesn't need them.
 # * Default list: ['cache_in_nova', 'bittorrent']
 #  (list value)
-#non_inheritable_image_properties = cache_in_nova,bittorrent
+#non_inheritable_image_properties=cache_in_nova,bittorrent
 
 # DEPRECATED:
 # This option is used to decide when an image should have no external
@@ -211,7 +220,7 @@
 # 'nokernel', Nova assumes the image doesn't require an external kernel and
 # ramdisk. This option allows user to change the API behaviour which should not
 # be allowed and this value "nokernel" should be hard coded.
-#null_kernel = nokernel
+#null_kernel=nokernel
 
 # DEPRECATED:
 # When creating multiple instances with a single request using the
@@ -229,7 +238,7 @@
 # Reason:
 # This config changes API behaviour. All changes in API behaviour should be
 # discoverable.
-#multi_instance_display_name_template = %(name)s-%(count)d
+#multi_instance_display_name_template=%(name)s-%(count)d
 
 #
 # Maximum number of devices that will result in a local image being
@@ -251,7 +260,7 @@
 # * Positive number: Allows only these many number of local discs.
 #                        (Default value is 3).
 #  (integer value)
-#max_local_block_devices = 3
+#max_local_block_devices=3
 
 #
 # A list of monitors that can be used for getting compute metrics.
@@ -279,7 +288,7 @@
 # * ``xfs``
 # * ``ntfs`` (only for Windows guests)
 #  (string value)
-#default_ephemeral_format = <None>
+#default_ephemeral_format=<None>
 
 #
 # Determine if instance should boot or fail on VIF plugging timeout.
@@ -300,7 +309,8 @@
 # * True: Instances should fail after VIF plugging timeout
 # * False: Instances should continue booting after VIF plugging timeout
 #  (boolean value)
-#vif_plugging_is_fatal = true
+#vif_plugging_is_fatal=true
+vif_plugging_is_fatal=true
 
 #
 # Timeout for Neutron VIF plugging event message arrival.
@@ -315,7 +325,8 @@
 #   arrive at all.
 #  (integer value)
 # Minimum value: 0
-#vif_plugging_timeout = 300
+#vif_plugging_timeout=300
+vif_plugging_timeout=300
 
 # Path to '/etc/network/interfaces' template.
 #
@@ -343,7 +354,7 @@
 # * ``flat_inject``: This must be set to ``True`` to ensure nova embeds network
 #   configuration information in the metadata provided through the config drive.
 #  (string value)
-#injected_network_template = $pybasedir/nova/virt/interfaces.template
+#injected_network_template=$pybasedir/nova/virt/interfaces.template
 
 #
 # The image preallocation mode to use.
@@ -361,7 +372,7 @@
 # * "space" => storage is fully allocated at instance start
 #  (string value)
 # Allowed values: none, space
-#preallocate_images = none
+#preallocate_images=none
 
 #
 # Enable use of copy-on-write (cow) images.
@@ -369,7 +380,7 @@
 # QEMU/KVM allow the use of qcow2 as backing files. By disabling this,
 # backing files will not be used.
 #  (boolean value)
-#use_cow_images = true
+#use_cow_images=true
 
 #
 # Force conversion of backing images to raw format.
@@ -383,7 +394,8 @@
 #
 # * ``compute_driver``: Only the libvirt driver uses this option.
 #  (boolean value)
-#force_raw_images = true
+#force_raw_images=true
+force_raw_images=true
 
 #
 # Name of the mkfs commands for ephemeral device.
@@ -400,11 +412,11 @@
 # contains a recent version of cloud-init. Possible mechanisms require the nbd
 # driver (for qcow and raw), or loop (for raw).
 #  (boolean value)
-#resize_fs_using_block_device = false
+#resize_fs_using_block_device=false
 
 # Amount of time, in seconds, to wait for NBD device start up. (integer value)
 # Minimum value: 0
-#timeout_nbd = 10
+#timeout_nbd=10
 
 #
 # Location of cached images.
@@ -412,15 +424,16 @@
 # This is NOT the full path - just a folder name relative to '$instances_path'.
 # For per-compute-host cached images, set to '_base_$my_ip'
 #  (string value)
-#image_cache_subdirectory_name = _base
+#image_cache_subdirectory_name=_base
 
 # Should unused base images be removed? (boolean value)
-#remove_unused_base_images = true
+#remove_unused_base_images=true
 
 #
 # Unused unresized base images younger than this will not be removed.
 #  (integer value)
-#remove_unused_original_minimum_age_seconds = 86400
+#remove_unused_original_minimum_age_seconds=86400
+remove_unused_original_minimum_age_seconds=86400
 
 #
 # Generic property to specify the pointer type.
@@ -445,7 +458,7 @@
 #   configured as HVM.
 #   (string value)
 # Allowed values: <None>, ps2mouse, usbtablet
-#pointer_model = usbtablet
+#pointer_model=usbtablet
 
 #
 # Defines which physical CPUs (pCPUs) can be used by instance
@@ -460,7 +473,8 @@
 #
 #     vcpu_pin_set = "4-12,^8,15"
 #  (string value)
-#vcpu_pin_set = <None>
+#vcpu_pin_set=<None>
+vcpu_pin_set=5-7,13-15
 
 #
 # Number of huge/large memory pages to reserved per NUMA host cell.
@@ -476,7 +490,7 @@
 #   In this example we are reserving on NUMA node 0 64 pages of 2MiB
 #   and on NUMA node 1 1 page of 1GiB.
 #  (dict value)
-#reserved_huge_pages = <None>
+#reserved_huge_pages=<None>
 
 #
 # Amount of disk resources in MB to make them always available to host. The
@@ -490,7 +504,7 @@
 #   for the host.
 #  (integer value)
 # Minimum value: 0
-#reserved_host_disk_mb = 0
+#reserved_host_disk_mb=0
 
 #
 # Amount of memory in MB to reserve for the host so that it is always available
@@ -506,21 +520,8 @@
 #   for the host.
 #  (integer value)
 # Minimum value: 0
-#reserved_host_memory_mb = 512
-
-#
-# Number of physical CPUs to reserve for the host. The host resources usage is
-# reported back to the scheduler continuously from nova-compute running on the
-# compute node. To prevent the host CPU from being considered as available,
-# this option is used to reserve random pCPU(s) for the host.
-#
-# Possible values:
-#
-# * Any positive integer representing number of physical CPUs to reserve
-#   for the host.
-#  (integer value)
-# Minimum value: 0
-#reserved_host_cpus = 0
+#reserved_host_memory_mb=512
+reserved_host_memory_mb = 512
 
 #
 # This option helps you specify virtual CPU to physical CPU allocation ratio.
@@ -536,17 +537,14 @@
 #
 # NOTE: This can be set per-compute, or if set to 0.0, the value
 # set on the scheduler node(s) or compute node(s) will be used
-# and defaulted to 16.0.
-#
-# NOTE: As of the 16.0.0 Pike release, this configuration option is ignored
-# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
 # and defaulted to 16.0.
 #
 # Possible values:
 #
 # * Any valid positive integer or float value
 #  (floating point value)
 # Minimum value: 0
-#cpu_allocation_ratio = 0.0
+#cpu_allocation_ratio=0.0
 
 #
 # This option helps you specify virtual RAM to physical RAM
@@ -565,15 +563,27 @@
 # set on the scheduler node(s) or compute node(s) will be used and
 # defaulted to 1.5.
 #
-# NOTE: As of the 16.0.0 Pike release, this configuration option is ignored
-# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
-#
 # Possible values:
 #
 # * Any valid positive integer or float value
 #  (floating point value)
 # Minimum value: 0
-#ram_allocation_ratio = 0.0
+#ram_allocation_ratio=0.0
+
+#
+# Defines which physical CPUs (pCPUs) can be used by instance
+# virtual CPUs (vCPUs).
+#
+# Possible values:
+#
+# * A comma-separated list of physical CPU numbers that virtual CPUs can be
+#   allocated to by default. Each element should be either a single CPU number,
+#   a range of CPU numbers, or a caret followed by a CPU number to be
+#   excluded from a previous range. For example:
+#
+#     vcpu_pin_set = "4-12,^8,15"
+#  (string value)
+vcpu_pin_set = 5-7,13-15
 
 #
 # This option helps you specify virtual disk to physical disk
@@ -594,17 +604,14 @@
 #
 # NOTE: This can be set per-compute, or if set to 0.0, the value
 # set on the scheduler node(s) or compute node(s) will be used and
-# defaulted to 1.0.
-#
-# NOTE: As of the 16.0.0 Pike release, this configuration option is ignored
-# for the ironic.IronicDriver compute driver and is hardcoded to 1.0.
 # defaulted to 1.0.
 #
 # Possible values:
 #
 # * Any valid positive integer or float value
 #  (floating point value)
 # Minimum value: 0
-#disk_allocation_ratio = 0.0
+#disk_allocation_ratio=0.0
 
 #
 # Console proxy host to be used to connect to instances on this host. It is the
@@ -614,7 +621,7 @@
 #
 # * Current hostname (default) or any string representing hostname.
 #  (string value)
-#console_host = <current_hostname>
+#console_host=socket.gethostname()
 
 #
 # Name of the network to be used to set access IPs for instances. If there are
@@ -625,13 +632,13 @@
 # * None (default)
 # * Any string representing network name.
 #  (string value)
-#default_access_ip_network_name = <None>
+#default_access_ip_network_name=<None>
 
 #
 # Whether to batch up the application of IPTables rules during a host restart
 # and apply all at the end of the init phase.
 #  (boolean value)
-#defer_iptables_apply = false
+#defer_iptables_apply=false
 
 #
 # Specifies where instances are stored on the hypervisor's disk.
@@ -643,14 +650,15 @@
 #   the top-level directory for maintaining nova's state. (default) or
 #   Any string representing directory path.
 #  (string value)
-#instances_path = $state_path/instances
+#instances_path=$state_path/instances
+instances_path = $state_path/instances
 
 #
 # This option enables periodic compute.instance.exists notifications. Each
 # compute node must be configured to generate system usage data. These
 # notifications are consumed by OpenStack Telemetry service.
 #  (boolean value)
-#instance_usage_audit = false
+#instance_usage_audit=false
 
 #
 # Maximum number of 1 second retries in live_migration. It specifies number
@@ -663,14 +671,15 @@
 # * Any positive integer representing retry count.
 #  (integer value)
 # Minimum value: 0
-#live_migration_retry_count = 30
+#live_migration_retry_count=30
 
 #
 # This option specifies whether to start guests that were running before the
 # host rebooted. It ensures that all of the instances on a Nova compute node
 # resume their state each time the compute node boots or restarts.
 #  (boolean value)
-#resume_guests_state_on_host_boot = false
+#resume_guests_state_on_host_boot=false
+resume_guests_state_on_host_boot=True
 
 #
 # Number of times to retry network allocation. It is required to attempt network
@@ -681,7 +690,7 @@
 # * Any positive integer representing retry count.
 #  (integer value)
 # Minimum value: 0
-#network_allocate_retries = 0
+#network_allocate_retries=0
 
 #
 # Limits the maximum number of instance builds to run concurrently by
@@ -696,7 +705,7 @@
 # * Any positive integer representing maximum concurrent builds.
 #  (integer value)
 # Minimum value: 0
-#max_concurrent_builds = 10
+#max_concurrent_builds=10
 
 #
 # Maximum number of live migrations to run concurrently. This limit is enforced
@@ -711,7 +720,7 @@
 # * Any positive integer representing maximum number of live migrations
 #   to run concurrently.
 #  (integer value)
-#max_concurrent_live_migrations = 1
+#max_concurrent_live_migrations=1
 
 #
 # Number of times to retry block device allocation on failures. Starting with
@@ -726,7 +735,8 @@
 # * Any negative value is treated as 0.
 # * For any value > 0, total attempts are (value + 1)
 #  (integer value)
-#block_device_allocate_retries = 60
+#block_device_allocate_retries=60
+block_device_allocate_retries=600
 
 #
 # Number of greenthreads available for use to sync power states.
@@ -739,7 +749,7 @@
 #
 # * Any positive integer representing greenthreads count.
 #  (integer value)
-#sync_power_state_pool_size = 1000
+#sync_power_state_pool_size=1000
 
 #
 # Number of seconds to wait between runs of the image cache manager.
@@ -750,7 +760,8 @@
 # * Any other value
 #  (integer value)
 # Minimum value: -1
-#image_cache_manager_interval = 2400
+#image_cache_manager_interval=2400
+image_cache_manager_interval=0
 
 #
 # Interval to pull network bandwidth usage info.
@@ -764,7 +775,7 @@
 # * Any value < 0: Disables the option.
 # * Any positive integer in seconds.
 #  (integer value)
-#bandwidth_poll_interval = 600
+#bandwidth_poll_interval=600
 
 #
 # Interval to sync power states between the database and the hypervisor.
@@ -789,7 +800,7 @@
 #   of sync between the hypervisor and the Nova database will have
 #   to be synchronized manually.
 #  (integer value)
-#sync_power_state_interval = 600
+#sync_power_state_interval=600
 
 #
 # Interval between instance network information cache updates.
@@ -807,7 +818,8 @@
 # * Any positive integer in seconds.
 # * Any value <=0 will disable the sync. This is not recommended.
 #  (integer value)
-#heal_instance_info_cache_interval = 60
+#heal_instance_info_cache_interval=60
+heal_instance_info_cache_interval = 60
 
 #
 # Interval for reclaiming deleted instances.
@@ -833,7 +845,7 @@
 #   this option.
 # * Any value <=0 will disable the option.
 #  (integer value)
-#reclaim_instance_interval = 0
+#reclaim_instance_interval=0
 
 #
 # Interval for gathering volume usages.
@@ -847,7 +859,7 @@
 #   this option.
 # * Any value <=0 will disable the option.
 #  (integer value)
-#volume_usage_poll_interval = 0
+#volume_usage_poll_interval=0
 
 #
 # Interval for polling shelved instances to offload.
@@ -867,7 +879,7 @@
 #
 # * ``shelved_offload_time``
 #  (integer value)
-#shelved_poll_interval = 3600
+#shelved_poll_interval=3600
 
 #
 # Time before a shelved instance is eligible for removal from a host.
@@ -887,7 +899,7 @@
 # * Any positive integer in seconds: The instance will exist for
 #   the specified number of seconds before being offloaded.
 #  (integer value)
-#shelved_offload_time = 0
+#shelved_offload_time=0
 
 #
 # Interval for retrying failed instance file deletes.
@@ -908,7 +920,7 @@
 # * ``maximum_instance_delete_attempts`` from instance_cleaning_opts
 #   group.
 #  (integer value)
-#instance_delete_interval = 300
+#instance_delete_interval=300
 
 #
 # Interval (in seconds) between block device allocation retries on failures.
@@ -927,7 +939,8 @@
 # * ``block_device_allocate_retries`` in compute_manager_opts group.
 #  (integer value)
 # Minimum value: 0
-#block_device_allocate_retries_interval = 3
+#block_device_allocate_retries_interval=3
+block_device_allocate_retries_interval=10
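The polling here is a fixed-interval loop, not exponential backoff, so the worst-case time spent waiting for a block device is simply the retry count times this interval. Quick arithmetic, assuming the companion option `block_device_allocate_retries` is left at its usual default of 60:

```python
# Worst-case wait for block device allocation is retries * interval,
# since the poll is a fixed-interval loop. The retry count of 60 is an
# assumed default for block_device_allocate_retries, not set here.
def worst_case_wait(retries, interval_s):
    return retries * interval_s

print(worst_case_wait(60, 3))   # stock interval: 180 seconds
print(worst_case_wait(60, 10))  # with the override above: 600 seconds
```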
 
 #
 # Interval between sending the scheduler a list of current instance UUIDs to
@@ -953,7 +966,7 @@
 # * This option has no impact if ``scheduler_tracks_instance_changes``
 #   is set to False.
 #  (integer value)
-#scheduler_instance_sync_interval = 120
+#scheduler_instance_sync_interval=120
 
 #
 # Interval for updating compute resources.
@@ -970,7 +983,7 @@
 # * Any value < 0: Disables the option.
 # * Any positive integer in seconds.
 #  (integer value)
-#update_resources_interval = 0
+#update_resources_interval=0
 
 #
 # Time interval after which an instance is hard rebooted automatically.
@@ -987,7 +1000,7 @@
 # * Any positive integer in seconds: Enables the option.
 #  (integer value)
 # Minimum value: 0
-#reboot_timeout = 0
+#reboot_timeout=0
 
 #
 # Maximum time in seconds that an instance can take to build.
@@ -1002,7 +1015,7 @@
 # * Any positive integer in seconds: Enables the option.
 #  (integer value)
 # Minimum value: 0
-#instance_build_timeout = 0
+#instance_build_timeout=0
 
 #
 # Interval to wait before un-rescuing an instance stuck in RESCUE.
@@ -1013,7 +1026,7 @@
 # * Any positive integer in seconds: Enables the option.
 #  (integer value)
 # Minimum value: 0
-#rescue_timeout = 0
+#rescue_timeout=0
 
 #
 # Automatically confirm resizes after N seconds.
@@ -1032,7 +1045,7 @@
 # * Any positive integer in seconds: Enables the option.
 #  (integer value)
 # Minimum value: 0
-#resize_confirm_window = 0
+#resize_confirm_window=0
 
 #
 # Total time to wait in seconds for an instance to perform a clean
@@ -1054,7 +1067,7 @@
 # * Any positive integer in seconds (default value is 60).
 #  (integer value)
 # Minimum value: 1
-#shutdown_timeout = 60
+#shutdown_timeout=60
 
 #
 # The compute service periodically checks for instances that have been
@@ -1072,11 +1085,11 @@
 #
 # Related options:
 #
-# * running_deleted_instance_poll_interval
+# * running_deleted_instance_poll_interval
 # * running_deleted_instance_timeout
 #  (string value)
 # Allowed values: noop, log, shutdown, reap
-#running_deleted_instance_action = reap
+#running_deleted_instance_action=reap
 
 #
 # Time interval in seconds to wait between runs for the clean up action.
@@ -1093,7 +1106,7 @@
 #
 # * running_deleted_instance_action
 #  (integer value)
-#running_deleted_instance_poll_interval = 1800
+#running_deleted_instance_poll_interval=1800
 
 #
 # Time interval in seconds to wait for the instances that have
@@ -1107,7 +1120,7 @@
 #
 # * "running_deleted_instance_action"
 #  (integer value)
-#running_deleted_instance_timeout = 0
+#running_deleted_instance_timeout=0
 
 #
 # The number of times to attempt to reap an instance's files.
@@ -1125,7 +1138,25 @@
 # * ``instance_delete_interval`` in interval_opts group can be used to disable
 #   this option.
 #  (integer value)
-#maximum_instance_delete_attempts = 5
+#maximum_instance_delete_attempts=5
+
+# DEPRECATED:
+# This is the message queue topic that the compute service 'listens' on. It is
+# used when the compute service is started up to configure the queue, and
+# whenever an RPC call to the compute service is made.
+#
+# Possible values:
+#
+# * Any string, but there is almost never any reason to ever change this value
+#   from its default of 'compute'.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#compute_topic=compute
 
 #
 # Sets the scope of the check for unique instance names.
@@ -1150,24 +1181,21 @@
 #osapi_compute_unique_server_name_scope =
 
 #
-# Enable new nova-compute services on this host automatically.
-#
-# When a new nova-compute service starts up, it gets
+# Enable new services on this host automatically.
+#
+# When a new service (for example "nova-compute") starts up, it gets
 # registered in the database as an enabled service. Sometimes it can be useful
-# to register new compute services in disabled state and then enabled them at a
-# later point in time. This option only sets this behavior for nova-compute
-# services, it does not auto-disable other services like nova-conductor,
-# nova-scheduler, nova-consoleauth, or nova-osapi_compute.
-#
-# Possible values:
-#
-# * ``True``: Each new compute service is enabled as soon as it registers
-# itself.
-# * ``False``: Compute services must be enabled via an os-services REST API call
-#   or with the CLI with ``nova service-enable <hostname> <binary>``, otherwise
-#   they are not ready to use.
+# to register new services in disabled state and then enable them at a later
+# point in time. This option can set this behavior for all services per host.
+#
+# Possible values:
+#
+# * ``True``: Each new service is enabled as soon as it registers itself.
+# * ``False``: Services must be enabled via a REST API call or with the CLI
+#   with ``nova service-enable <hostname> <binary>``, otherwise they are not
+#   ready to use.
 #  (boolean value)
-#enable_new_services = true
+#enable_new_services=true
 
 #
 # Template string to be used to generate instance names.
@@ -1191,7 +1219,7 @@
 #
 # * not to be confused with: ``multi_instance_display_name_template``
 #  (string value)
-#instance_name_template = instance-%08x
+#instance_name_template=instance-%08x
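The template is ordinary Python %-formatting applied to the instance's integer database ID, so the default `instance-%08x` renders the ID as zero-padded 8-digit hex:

```python
# The default instance_name_template is plain Python %-formatting over
# the instance's integer database ID: %08x = zero-padded 8-digit hex.
template = "instance-%08x"
print(template % 42)    # instance-0000002a
print(template % 4096)  # instance-00001000
```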
 
 #
 # Number of times to retry live-migration before failing.
@@ -1203,7 +1231,7 @@
 # * Integer greater than 0
 #  (integer value)
 # Minimum value: -1
-#migrate_max_retries = -1
+#migrate_max_retries=-1
 
 #
 # Configuration drive format
@@ -1234,7 +1262,8 @@
 #   drive, set config_drive_cdrom option at hyperv section, to true.
 #  (string value)
 # Allowed values: iso9660, vfat
-#config_drive_format = iso9660
+#config_drive_format=iso9660
+config_drive_format=vfat
 
 #
 # Force injection to take place on a config drive
@@ -1261,7 +1290,8 @@
 #   configuration section to the full path to a qemu-img command
 #   installation.
 #  (boolean value)
-#force_config_drive = false
+#force_config_drive=false
+force_config_drive=true
 
 #
 # Name or path of the tool used for ISO image creation
@@ -1288,12 +1318,68 @@
 #   value in the hyperv configuration section to the full path to a qemu-img
 #   command installation.
 #  (string value)
-#mkisofs_cmd = genisoimage
+#mkisofs_cmd=genisoimage
+
+# DEPRECATED:
+# nova-console-proxy is used to set up multi-tenant VM console access.
+# This option allows pluggable driver program for the console session
+# and represents driver to use for the console proxy.
+#
+# Possible values:
+#
+# * A string representing the fully qualified class name of the console driver.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# This option no longer does anything. Previously this option had only two
+# valid, in-tree values: nova.console.xvp.XVPConsoleProxy and
+# nova.console.fake.FakeConsoleProxy. The latter of these was only used in
+# tests and has since been replaced.
+#console_driver=nova.console.xvp.XVPConsoleProxy
+
+# DEPRECATED:
+# Represents the message queue topic name used by nova-console
+# service when communicating via the AMQP server. The Nova API uses a message
+# queue to communicate with nova-console to retrieve a console URL for that
+# host.
+#
+# Possible values:
+#
+# * A string representing topic exchange name
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#console_topic=console
+
+# DEPRECATED:
+# This option allows you to change the message topic used by nova-consoleauth
+# service when communicating via the AMQP server. Nova Console Authentication
+# server authenticates nova consoles. Users can then access their instances
+# through VNC clients. The Nova API service uses a message queue to
+# communicate with nova-consoleauth to get a VNC console.
+#
+# Possible Values:
+#
+# * 'consoleauth' (default) or Any string representing topic exchange name.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#consoleauth_topic=consoleauth
 
 # DEPRECATED: The driver to use for database access (string value)
 # This option is deprecated for removal since 13.0.0.
 # Its value may be silently ignored in the future.
-#db_driver = nova.db
+#db_driver=nova.db
 
 # DEPRECATED:
 # Default flavor to use for the EC2 API only.
@@ -1302,9 +1388,107 @@
 # This option is deprecated for removal since 14.0.0.
 # Its value may be silently ignored in the future.
 # Reason: The EC2 API is deprecated.
-#default_flavor = m1.small
+#default_flavor=m1.small
+
+#
+# Default pool for floating IPs.
+#
+# This option specifies the default floating IP pool for allocating floating
+# IPs.
+#
+# While allocating a floating IP, users can optionally pass in the name of the
+# pool they want to allocate from, otherwise it will be pulled from the
+# default pool.
+#
+# If this option is not set, then 'nova' is used as default floating pool.
+#
+# Possible values:
+#
+# * Any string representing a floating IP pool name
+#  (string value)
+#default_floating_pool=nova
 
 # DEPRECATED:
+# Autoassigning floating IP to VM
+#
+# When set to True, floating IP is auto allocated and associated
+# to the VM upon creation.
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+#  (boolean value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#auto_assign_floating_ip=false
+
+# DEPRECATED:
+# Full class name for the DNS Manager for floating IPs.
+#
+# This option specifies the class of the driver that provides functionality
+# to manage DNS entries associated with floating IPs.
+#
+# When a user adds a DNS entry for a specified domain to a floating IP,
+# nova will add a DNS entry using the specified floating DNS driver.
+# When a floating IP is deallocated, its DNS entry will automatically be
+# deleted.
+#
+# Possible values:
+#
+# * Full Python path to the class to be used
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#floating_ip_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
+
+# DEPRECATED:
+# Full class name for the DNS Manager for instance IPs.
+#
+# This option specifies the class of the driver that provides functionality
+# to manage DNS entries for instances.
+#
+# On instance creation, nova will add DNS entries for the instance name and
+# id, using the specified instance DNS driver and domain. On instance deletion,
+# nova will remove the DNS entries.
+#
+# Possible values:
+#
+# * Full Python path to the class to be used
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#instance_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
+
+# DEPRECATED:
+# If specified, Nova checks if the availability_zone of every instance matches
+# what the database says the availability_zone should be for the specified
+# dns_domain.
+#
+# Related options:
+#
+# * use_neutron: this option only works with nova-network.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# nova-network is deprecated, as are any related configuration options.
+#instance_dns_domain =
+
+#
 # Abstracts out IPv6 address generation to pluggable backends.
 #
 # nova-network can be put into dual-stack mode, so that it uses
@@ -1318,11 +1502,7 @@
 # * use_ipv6: this option only works if ipv6 is enabled for nova-network.
 #  (string value)
 # Allowed values: rfc2462, account_identifier
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ipv6_backend = rfc2462
+#ipv6_backend=rfc2462
 
 #
 # The IP address which the host is using to connect to the management network.
@@ -1338,7 +1518,8 @@
 # * routing_source_ip
 # * vpn_ip
 #  (string value)
-#my_ip = <host_ipv4>
+#my_ip=10.89.104.70
+my_ip=10.167.4.52
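When `my_ip` is left unset, Nova defaults it to the host's IPv4 address on the outbound interface. A common way to reproduce that default (a hedged sketch, not Nova's own code) is the connected-UDP-socket trick, which asks the kernel which local address it would route from without sending any packet:

```python
# Sketch of how a <host_ipv4> default like my_ip's is typically derived:
# connect() a UDP socket towards a routable address (no packet is sent)
# and read back the local address the kernel selected.
import socket


def default_my_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no route available: fall back to loopback
    finally:
        s.close()
```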
 
 #
 # The IP address which is used to connect to the block storage network.
@@ -1351,41 +1532,27 @@
 #
 # * my_ip - if my_block_storage_ip is not set, then my_ip value is used.
 #  (string value)
-#my_block_storage_ip = $my_ip
-
-#
-# Hostname, FQDN or IP address of this host.
-#
-# Used as:
-#
-# * the oslo.messaging queue name for nova-compute worker
-# * we use this value for the binding_host sent to neutron. This means if you
-# use
-#   a neutron agent, it should have the same value for host.
-# * cinder host attachment information
-#
-# Must be valid within AMQP key.
+#my_block_storage_ip=$my_ip
+
+#
+# Hostname, FQDN or IP address of this host. Must be valid within AMQP key.
 #
 # Possible values:
 #
 # * String with hostname, FQDN or IP address. Default is hostname of this host.
 #  (string value)
-#host = <current_hostname>
-
-# DEPRECATED:
+#host=lcy01-22
+
+#
 # Assign IPv6 and IPv4 addresses when creating instances.
 #
 # Related options:
 #
 # * use_neutron: this only works with nova-network.
 #  (boolean value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#use_ipv6 = false
-
-# DEPRECATED:
+#use_ipv6=false
+
+#
 # This option is a list of full paths to one or more configuration files for
 # dhcpbridge. In most cases the default path of '/etc/nova/nova-dhcpbridge.conf'
 # should be sufficient, but if you have special needs for configuring
@@ -1394,16 +1561,12 @@
 #
 # Possible values
 #
-# * A list of strings, where each string is the full path to a dhcpbridge
-#   configuration file.
+#     A list of strings, where each string is the full path to a dhcpbridge
+#     configuration file.
 #  (multi valued)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#dhcpbridge_flagfile = /etc/nova/nova-dhcpbridge.conf
-
-# DEPRECATED:
+dhcpbridge_flagfile=/etc/nova/nova.conf
+
+#
 # The location where the network configuration files will be kept. The default
 # is
 # the 'networks' directory off of the location where nova's Python module is
@@ -1411,76 +1574,55 @@
 #
 # Possible values
 #
-# * A string containing the full path to the desired configuration directory
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#networks_path = $state_path/networks
-
-# DEPRECATED:
+#     A string containing the full path to the desired configuration directory
+#  (string value)
+#networks_path=$state_path/networks
+
+#
 # This is the name of the network interface for public IP addresses. The default
 # is 'eth0'.
 #
 # Possible values:
 #
-# * Any string representing a network interface name
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#public_interface = eth0
-
-# DEPRECATED:
+#     Any string representing a network interface name
+#  (string value)
+#public_interface=eth0
+
+#
 # The location of the binary nova-dhcpbridge. By default it is the binary named
 # 'nova-dhcpbridge' that is installed with all the other nova binaries.
 #
 # Possible values:
 #
-# * Any string representing the full path to the binary for dhcpbridge
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#dhcpbridge = $bindir/nova-dhcpbridge
-
-# DEPRECATED:
-# The public IP address of the network host.
-#
-# This is used when creating an SNAT rule.
-#
-# Possible values:
-#
-# * Any valid IP address
-#
-# Related options:
-#
-# * ``force_snat_range``
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#routing_source_ip = $my_ip
-
-# DEPRECATED:
+#     Any string representing the full path to the binary for dhcpbridge
+#  (string value)
+dhcpbridge=/usr/bin/nova-dhcpbridge
+
+#
+# This is the public IP address of the network host. It is used when creating a
+# SNAT rule.
+#
+# Possible values:
+#
+#     Any valid IP address
+#
+# Related options:
+#
+#     force_snat_range
+#  (string value)
+#routing_source_ip=$my_ip
+
+#
 # The lifetime of a DHCP lease, in seconds. The default is 86400 (one day).
 #
 # Possible values:
 #
-# * Any positive integer value.
+#     Any positive integer value.
 #  (integer value)
 # Minimum value: 1
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#dhcp_lease_time = 86400
-
-# DEPRECATED:
+#dhcp_lease_time=86400
+
+#
 # Despite the singular form of the name of this option, it is actually a list of
 # zero or more server addresses that dnsmasq will use for DNS nameservers. If
 # this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use
@@ -1491,19 +1633,15 @@
 #
 # Possible values:
 #
-# * A list of strings, where each string is either an IP address or a FQDN.
-#
-# Related options:
-#
-# * ``use_network_dns_servers``
+#     A list of strings, where each string is either an IP address or a FQDN.
+#
+# Related options:
+#
+#     use_network_dns_servers
 #  (multi valued)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
 #dns_server =
 
-# DEPRECATED:
+#
 # When this option is set to True, the dns1 and dns2 servers for the network
 # specified by the user on boot will be used for DNS, as well as any specified
 # in
@@ -1511,62 +1649,46 @@
 #
 # Related options:
 #
-# * ``dns_server``
+#     dns_server
 #  (boolean value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#use_network_dns_servers = false
-
-# DEPRECATED:
+#use_network_dns_servers=false
+
+#
 # This option is a list of zero or more IP address ranges in your network's DMZ
 # that should be accepted.
 #
 # Possible values:
 #
-# * A list of strings, each of which should be a valid CIDR.
+#     A list of strings, each of which should be a valid CIDR.
 #  (list value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
 #dmz_cidr =
 
-# DEPRECATED:
+#
 # This is a list of zero or more IP ranges that traffic from the
 # `routing_source_ip` will be SNATted to. If the list is empty, then no SNAT
 # rules are created.
 #
 # Possible values:
 #
-# * A list of strings, each of which should be a valid CIDR.
-#
-# Related options:
-#
-# * ``routing_source_ip``
+#     A list of strings, each of which should be a valid CIDR.
+#
+# Related options:
+#
+#     routing_source_ip
 #  (multi valued)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
 #force_snat_range =
 
-# DEPRECATED:
+#
 # The path to the custom dnsmasq configuration file, if any.
 #
 # Possible values:
 #
-# * The full path to the configuration file, or an empty string if there is no
-#   custom dnsmasq configuration file.
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
+#     The full path to the configuration file, or an empty string if there is no
+#     custom dnsmasq configuration file.
+#  (string value)
 #dnsmasq_config_file =
 
-# DEPRECATED:
+#
 # This is the class used as the ethernet device driver for linuxnet bridge
 # operations. The default value should be all you need for most cases, but if
 # you
@@ -1575,27 +1697,19 @@
 #
 # Possible values:
 #
-# * Any string representing a dot-separated class path that Nova can import.
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver
-
-# DEPRECATED:
+#     Any string representing a dot-separated class path that Nova can import.
+#  (string value)
+#linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
+
+#
 # The name of the Open vSwitch bridge that is used with linuxnet when connecting
 # with Open vSwitch.
 #
 # Possible values:
 #
-# * Any string representing a valid bridge name.
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#linuxnet_ovs_integration_bridge = br-int
+#     Any string representing a valid bridge name.
+#  (string value)
+#linuxnet_ovs_integration_bridge=br-int
 
 #
 # When True, when a device starts up, and upon binding floating IP addresses,
@@ -1605,9 +1719,9 @@
 #
 # Related options:
 #
-# * ``send_arp_for_ha_count``
+#     send_arp_for_ha_count
 #  (boolean value)
-#send_arp_for_ha = false
+#send_arp_for_ha=false
 
 #
 # When arp messages are configured to be sent, they will be sent with the count
@@ -1616,108 +1730,84 @@
 #
 # Possible values:
 #
-# * Any integer greater than or equal to 0
-#
-# Related options:
-#
-# * ``send_arp_for_ha``
-#  (integer value)
-#send_arp_for_ha_count = 3
-
-# DEPRECATED:
+#     Any integer greater than or equal to 0
+#
+# Related options:
+#
+#     send_arp_for_ha
+#  (integer value)
+#send_arp_for_ha_count=3
+
+#
 # When set to True, only the first NIC of a VM will get its default gateway from
 # the DHCP server.
 #  (boolean value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#use_single_default_gateway = false
-
-# DEPRECATED:
+#use_single_default_gateway=false
+
+#
 # One or more interfaces that bridges can forward traffic to. If any of the
 # items
 # in this list is the special keyword 'all', then all traffic will be forwarded.
 #
 # Possible values:
 #
-# * A list of zero or more interface names, or the word 'all'.
+#     A list of zero or more interface names, or the word 'all'.
 #  (multi valued)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#forward_bridge_interface = all
+#forward_bridge_interface=all
 
 #
 # This option determines the IP address for the network metadata API server.
 #
-# This is really the client side of the metadata host equation that allows
-# nova-network to find the metadata server when doing a default multi host
-# networking.
-#
-# Possible values:
-#
-# * Any valid IP address. The default is the address of the Nova API server.
-#
-# Related options:
-#
-# * ``metadata_port``
-#  (string value)
-#metadata_host = $my_ip
-
-# DEPRECATED:
+# Possible values:
+#
+#    * Any valid IP address. The default is the address of the Nova API server.
+#
+# Related options:
+#
+#     * metadata_port
+#  (string value)
+#metadata_host=$my_ip
+
+#
 # This option determines the port used for the metadata API server.
 #
 # Related options:
 #
-# * ``metadata_host``
+#     * metadata_host
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#metadata_port = 8775
-
-# DEPRECATED:
+#metadata_port=8775
+
+#
 # This expression, if defined, will select any matching iptables rules and place
 # them at the top when applying metadata changes to the rules.
 #
 # Possible values:
 #
-# * Any string representing a valid regular expression, or an empty string
-#
-# Related options:
-#
-# * ``iptables_bottom_regex``
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
+#     * Any string representing a valid regular expression, or an empty string
+#
+# Related options:
+#
+#     * iptables_bottom_regex
+#  (string value)
 #iptables_top_regex =
 
-# DEPRECATED:
+#
 # This expression, if defined, will select any matching iptables rules and place
 # them at the bottom when applying metadata changes to the rules.
 #
 # Possible values:
 #
-# * Any string representing a valid regular expression, or an empty string
-#
-# Related options:
-#
-# * iptables_top_regex
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
+#     * Any string representing a valid regular expression, or an empty string
+#
+# Related options:
+#
+#     * iptables_top_regex
+#  (string value)
 #iptables_bottom_regex =
 
-# DEPRECATED:
+#
 # By default, packets that do not pass the firewall are DROPped. In many cases,
 # though, an operator may find it more useful to change this from DROP to
 # REJECT,
@@ -1726,15 +1816,11 @@
 #
 # Possible values:
 #
-# * A string representing an iptables chain. The default is DROP.
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#iptables_drop_action = DROP
-
-# DEPRECATED:
+#     * A string representing an iptables chain. The default is DROP.
+#  (string value)
+#iptables_drop_action=DROP
+
+#
 # This option represents the period of time, in seconds, that the ovs_vsctl
 # calls
 # will wait for a response from the database before timing out. A setting of 0
@@ -1742,46 +1828,34 @@
 #
 # Possible values:
 #
-# * Any positive integer if a limited timeout is desired, or zero if the calls
-#   should wait forever for a response.
+#     * Any positive integer if a limited timeout is desired, or zero if the
+#     calls should wait forever for a response.
 #  (integer value)
 # Minimum value: 0
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ovs_vsctl_timeout = 120
-
-# DEPRECATED:
+#ovs_vsctl_timeout=120
+
+#
 # This option is used mainly in testing to avoid calls to the underlying network
 # utilities.
 #  (boolean value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#fake_network = false
-
-# DEPRECATED:
+#fake_network=false
+
+#
 # This option determines the number of times to retry ebtables commands before
 # giving up. The minimum number of retries is 1.
 #
 # Possible values:
 #
-# * Any positive integer
-#
-# Related options:
-#
-# * ``ebtables_retry_interval``
+#     * Any positive integer
+#
+# Related options:
+#
+#     * ebtables_retry_interval
 #  (integer value)
 # Minimum value: 1
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ebtables_exec_attempts = 3
-
-# DEPRECATED:
+#ebtables_exec_attempts=3
+
+#
 # This option determines the time, in seconds, that the system will sleep in
 # between ebtables retries. Note that each successive retry waits a multiple of
 # this value, so for example, if this is set to the default of 1.0 seconds, and
@@ -1792,97 +1866,81 @@
 #
 # Possible values:
 #
-# * Any non-negative float or integer. Setting this to zero will result in no
-#   waiting between attempts.
-#
-# Related options:
-#
-# * ebtables_exec_attempts
+#     * Any non-negative float or integer. Setting this to zero will result
+#     in no waiting between attempts.
+#
+# Related options:
+#
+#     * ebtables_exec_attempts
 #  (floating point value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ebtables_retry_interval = 1.0
+#ebtables_retry_interval=1.0
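Per the comment above, each successive ebtables retry sleeps an increasing multiple of this interval. A small sketch of the resulting wait schedule (my reading of the documented semantics, not Nova's code):

```python
# Documented ebtables retry pacing: the wait before retry n is
# n * ebtables_retry_interval, so sleeps grow as multiples of the base.
def retry_waits(attempts, interval_s):
    # one wait between each pair of attempts: 1*i, 2*i, ...
    return [n * interval_s for n in range(1, attempts)]

print(retry_waits(3, 1.0))  # [1.0, 2.0]
```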
+
+#
+# This option determines whether the network setup information is injected into
+# the VM before it is booted. While it was originally designed to be used only
+# by
+# nova-network, it is also used by the vmware and xenapi virt drivers to control
+# whether network information is injected into a VM.
+#  (boolean value)
+#flat_injected=false
 
 # DEPRECATED:
-# Enable neutron as the backend for networking.
-#
-# Determine whether to use Neutron or Nova Network as the back end. Set to true
-# to use neutron.
-#  (boolean value)
+# This option determines the bridge used for simple network interfaces when no
+# bridge is specified in the VM creation request.
+#
+# Please note that this option is only used when using nova-network instead of
+# Neutron in your deployment.
+#
+# Possible values:
+#
+#     Any string representing a valid network bridge, such as 'br100'
+#
+# Related options:
+#
+#     ``use_neutron``
+#  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#use_neutron = true
-
-#
-# This option determines whether the network setup information is injected into
-# the VM before it is booted. While it was originally designed to be used only
-# by nova-network, it is also used by the vmware and xenapi virt drivers to
-# control whether network information is injected into a VM. The libvirt virt
-# driver also uses it when we use config_drive to configure network to control
-# whether network information is injected into a VM.
-#  (boolean value)
-#flat_injected = false
+#flat_network_bridge=<None>
 
 # DEPRECATED:
-# This option determines the bridge used for simple network interfaces when no
-# bridge is specified in the VM creation request.
+# This is the address of the DNS server for a simple network. If this option is
+# not specified, the default of '8.8.4.4' is used.
 #
 # Please note that this option is only used when using nova-network instead of
 # Neutron in your deployment.
 #
 # Possible values:
 #
-# * Any string representing a valid network bridge, such as 'br100'
-#
-# Related options:
-#
-# * ``use_neutron``
+#     Any valid IP address.
+#
+# Related options:
+#
+#     ``use_neutron``
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#flat_network_bridge = <None>
+#flat_network_dns=8.8.4.4
 
 # DEPRECATED:
-# This is the address of the DNS server for a simple network. If this option is
-# not specified, the default of '8.8.4.4' is used.
-#
-# Please note that this option is only used when using nova-network instead of
-# Neutron in your deployment.
-#
-# Possible values:
-#
-# * Any valid IP address.
-#
-# Related options:
-#
-# * ``use_neutron``
+# This option is the name of the virtual interface of the VM on which the bridge
+# will be built. While it was originally designed to be used only by
+# nova-network, it is also used by libvirt for the bridge interface name.
+#
+# Possible values:
+#
+#     Any valid virtual interface name, such as 'eth0'
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#flat_network_dns = 8.8.4.4
-
-# DEPRECATED:
-# This option is the name of the virtual interface of the VM on which the bridge
-# will be built. While it was originally designed to be used only by
-# nova-network, it is also used by libvirt for the bridge interface name.
-#
-# Possible values:
-#
-# * Any valid virtual interface name, such as 'eth0'
-#  (string value)
-# This option is deprecated for removal since 15.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#flat_interface = <None>
+#flat_interface=<None>
 
 # DEPRECATED:
 # This is the VLAN number used for private networks. Note that when creating
@@ -1897,13 +1955,12 @@
 #
 # Possible values:
 #
-# * Any integer between 1 and 4094. Values outside of that range will raise a
-#   ValueError exception.
-#
-# Related options:
-#
-# * ``network_manager``
-# * ``use_neutron``
+#     Any integer between 1 and 4094. Values outside of that range will raise a
+#     ValueError exception. Default = 100.
+#
+# Related options:
+#
+#     ``network_manager``, ``use_neutron``
 #  (integer value)
 # Minimum value: 1
 # Maximum value: 4094
@@ -1911,7 +1968,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#vlan_start = 100
+#vlan_start=100
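
The 1-4094 bound on `vlan_start` reflects the 12-bit 802.1Q VLAN ID space (0 and 4095 are reserved). A minimal sketch of the validation described above; `validate_vlan_start` is an illustrative name, not Nova code:

```python
# Range check matching the documented ValueError behaviour for vlan_start.
# `validate_vlan_start` is a hypothetical helper, not part of Nova.

def validate_vlan_start(vlan_start):
    if not 1 <= vlan_start <= 4094:
        raise ValueError("vlan_start must be in [1, 4094], got %r" % vlan_start)
    return vlan_start

print(validate_vlan_start(100))  # 100, the default shown above
```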
 
 # DEPRECATED:
 # This option is the name of the virtual interface of the VM on which the VLAN
@@ -1925,7 +1982,7 @@
 #
 # Possible values:
 #
-# * Any valid virtual interface name, such as 'eth0'
+#     Any valid virtual interface name, such as 'eth0'
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
@@ -1934,7 +1991,7 @@
 # this option has an effect when using neutron, it incorrectly overrides the
 # value provided by neutron and should therefore not be used.
-#vlan_interface = <None>
+#vlan_interface=<None>
 
 # DEPRECATED:
 # This option represents the number of networks to create if not explicitly
@@ -1950,25 +2007,25 @@
 #
 # Possible values:
 #
-# * Any positive integer is technically valid, although there are practical
-#   limits based upon available IP address space and virtual interfaces.
-#
-# Related options:
-#
-# * ``use_neutron``
-# * ``network_size``
+#     Any positive integer is technically valid, although there are practical
+#     limits based upon available IP address space and virtual interfaces. The
+#     default is 1.
+#
+# Related options:
+#
+#     ``use_neutron``, ``network_size``
 #  (integer value)
 # Minimum value: 1
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#num_networks = 1
+#num_networks=1
 
 # DEPRECATED:
-# This option is no longer used since the /os-cloudpipe API was removed in the
-# 16.0.0 Pike release. This is the public IP address for the cloudpipe VPN
-# servers. It defaults to the IP address of the host.
+# This is the public IP address for the cloudpipe VPN servers. It defaults to
+# the IP address of the host.
 #
 # Please note that this option is only used when using nova-network instead of
 # Neutron in your deployment. It also will be ignored if the configuration
@@ -1978,19 +2035,17 @@
 #
 # Possible values:
 #
-# * Any valid IP address. The default is ``$my_ip``, the IP address of the VM.
-#
-# Related options:
-#
-# * ``network_manager``
-# * ``use_neutron``
-# * ``vpn_start``
+#     Any valid IP address. The default is $my_ip, the IP address of the VM.
+#
+# Related options:
+#
+#     ``network_manager``, ``use_neutron``, ``vpn_start``
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#vpn_ip = $my_ip
+#vpn_ip=$my_ip
 
 # DEPRECATED:
 # This is the port number to use as the first VPN port for private networks.
@@ -2004,13 +2059,11 @@
 #
 # Possible values:
 #
-# * Any integer representing a valid port number. The default is 1000.
-#
-# Related options:
-#
-# * ``use_neutron``
-# * ``vpn_ip``
-# * ``network_manager``
+#     Any integer representing a valid port number. The default is 1000.
+#
+# Related options:
+#
+#     ``use_neutron``, ``vpn_ip``, ``network_manager``
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
@@ -2018,7 +2071,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#vpn_start = 1000
+#vpn_start=1000
 
 # DEPRECATED:
 # This option determines the number of addresses in each private subnet.
@@ -2028,21 +2081,21 @@
 #
 # Possible values:
 #
-# * Any positive integer that is less than or equal to the available network
-#   size. Note that if you are creating multiple networks, they must all fit in
-#   the available IP address space. The default is 256.
-#
-# Related options:
-#
-# * ``use_neutron``
-# * ``num_networks``
+#     Any positive integer that is less than or equal to the available network
+#     size. Note that if you are creating multiple networks, they must all
+#     fit in the available IP address space. The default is 256.
+#
+# Related options:
+#
+#     ``use_neutron``, ``num_networks``
 #  (integer value)
 # Minimum value: 1
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#network_size = 256
+#network_size=256
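
The note above that multiple networks must all fit in the available address space amounts to a simple arithmetic check. This sketch uses the stdlib `ipaddress` module; `networks_fit` is an illustrative name, not Nova code:

```python
import ipaddress

# Feasibility check implied by num_networks / network_size: the requested
# private networks must fit inside the fixed-IP range. Illustrative only;
# `networks_fit` is not part of Nova.

def networks_fit(fixed_range, num_networks=1, network_size=256):
    available = ipaddress.ip_network(fixed_range).num_addresses
    return num_networks * network_size <= available

print(networks_fit("10.0.0.0/16", num_networks=4))  # True: 1024 <= 65536
print(networks_fit("10.0.0.0/24", num_networks=2))  # False: 512 > 256
```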
 
 # DEPRECATED:
 # This option determines the fixed IPv6 address block when creating a network.
@@ -2052,17 +2105,17 @@
 #
 # Possible values:
 #
-# * Any valid IPv6 CIDR
-#
-# Related options:
-#
-# * ``use_neutron``
+#     Any valid IPv6 CIDR. The default value is "fd00::/48".
+#
+# Related options:
+#
+#     ``use_neutron``
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#fixed_range_v6 = fd00::/48
+#fixed_range_v6=fd00::/48
 
 # DEPRECATED:
 # This is the default IPv4 gateway. It is used only in the testing suite.
@@ -2072,18 +2125,17 @@
 #
 # Possible values:
 #
-# * Any valid IP address.
-#
-# Related options:
-#
-# * ``use_neutron``
-# * ``gateway_v6``
+#     Any valid IP address.
+#
+# Related options:
+#
+#     ``use_neutron``, ``gateway_v6``
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#gateway = <None>
+#gateway=<None>
 
 # DEPRECATED:
 # This is the default IPv6 gateway. It is used only in the testing suite.
@@ -2093,18 +2145,17 @@
 #
 # Possible values:
 #
-# * Any valid IP address.
-#
-# Related options:
-#
-# * ``use_neutron``
-# * ``gateway``
+#     Any valid IP address.
+#
+# Related options:
+#
+#     ``use_neutron``, ``gateway``
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#gateway_v6 = <None>
+#gateway_v6=<None>
 
 # DEPRECATED:
 # This option represents the number of IP addresses to reserve at the top of the
@@ -2114,19 +2165,18 @@
 #
 # Possible values:
 #
-# * Any integer, 0 or greater.
-#
-# Related options:
-#
-# * ``use_neutron``
-# * ``network_manager``
+#     Any integer, 0 or greater. The default is 0.
+#
+# Related options:
+#
+#     ``use_neutron``, ``network_manager``
 #  (integer value)
 # Minimum value: 0
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#cnt_vpn_clients = 0
+#cnt_vpn_clients=0
 
 # DEPRECATED:
 # This is the number of seconds to wait before disassociating a deallocated
@@ -2136,18 +2186,18 @@
 #
 # Possible values:
 #
-# * Any integer, zero or greater.
-#
-# Related options:
-#
-# * ``use_neutron``
+#     Any integer, zero or greater. The default is 600 (10 minutes).
+#
+# Related options:
+#
+#     ``use_neutron``
 #  (integer value)
 # Minimum value: 0
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#fixed_ip_disassociate_timeout = 600
+#fixed_ip_disassociate_timeout=600
 
 # DEPRECATED:
 # This option determines how many times nova-network will attempt to create a
@@ -2156,18 +2206,18 @@
 #
 # Possible values:
 #
-# * Any positive integer. The default is 5.
-#
-# Related options:
-#
-# * ``use_neutron``
+#     Any positive integer. The default is 5.
+#
+# Related options:
+#
+#     ``use_neutron``
 #  (integer value)
 # Minimum value: 1
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#create_unique_mac_address_attempts = 5
+#create_unique_mac_address_attempts=5
 
 # DEPRECATED:
 # Determines whether unused gateway devices, both VLAN and bridge, are deleted
@@ -2176,15 +2226,13 @@
 #
 # Related options:
 #
-# * ``use_neutron``
-# * ``vpn_ip``
-# * ``fake_network``
+#     ``use_neutron``, ``vpn_ip``, ``fake_network``
 #  (boolean value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#teardown_unused_network_gateway = false
+#teardown_unused_network_gateway=false
 
 # DEPRECATED:
 # When this option is True, a call is made to release the DHCP for the instance
@@ -2192,13 +2240,13 @@
 #
 # Related options:
 #
-# * ``use_neutron``
+#     ``use_neutron``
 #  (boolean value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#force_dhcp_release = true
+force_dhcp_release=true
 
 # DEPRECATED:
 # When this option is True, whenever a DNS entry must be updated, a fanout cast
@@ -2207,13 +2255,13 @@
 #
 # Related options:
 #
-# * ``use_neutron``
+#     ``use_neutron``
 #  (boolean value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#update_dns_entries = false
+#update_dns_entries=false
 
 # DEPRECATED:
 # This option determines the time, in seconds, to wait between refreshing DNS
@@ -2221,54 +2269,56 @@
 #
 # Possible values:
 #
-# * A positive integer
-# * -1 to disable updates
-#
-# Related options:
-#
-# * ``use_neutron``
+#     Either -1 (the default), or any positive integer. A value of -1
+#     disables the updates.
+#
+# Related options:
+#
+#     ``use_neutron``
 #  (integer value)
 # Minimum value: -1
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#dns_update_periodic_interval = -1
+#dns_update_periodic_interval=-1
 
 # DEPRECATED:
 # This option allows you to specify the domain for the DHCP server.
 #
 # Possible values:
 #
-# * Any string that is a valid domain name.
-#
-# Related options:
-#
-# * ``use_neutron``
+#     Any string that is a valid domain name.
+#
+# Related options:
+#
+#     ``use_neutron``
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#dhcp_domain = novalocal
+#dhcp_domain=novalocal
+dhcp_domain=novalocal
 
 # DEPRECATED:
 # This option allows you to specify the L3 management library to be used.
 #
 # Possible values:
 #
-# * Any dot-separated string that represents the import path to an L3 networking
-#   library.
-#
-# Related options:
-#
-# * ``use_neutron``
+#     Any dot-separated string that represents the import path to an L3
+#     networking library.
+#
+# Related options:
+#
+#     ``use_neutron``
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#l3_lib = nova.network.l3.LinuxNetL3
+#l3_lib=nova.network.l3.LinuxNetL3
 
 # DEPRECATED:
 # THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK.
@@ -2286,73 +2336,58 @@
 #  (boolean value)
 # This option is deprecated for removal since 2014.2.
 # Its value may be silently ignored in the future.
-#share_dhcp_address = false
-
-# DEPRECATED:
-# URL for LDAP server which will store DNS entries
-#
-# Possible values:
-#
-# * A valid LDAP URL representing the server
-#  (uri value)
-# This option is deprecated for removal since 16.0.0.
+#share_dhcp_address=false
+
+# DEPRECATED: Whether to use Neutron or Nova Network as the back end for
+# networking. Defaults to False (indicating Nova network). Set to True to use
+# neutron. (boolean value)
+# This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#ldap_dns_url = ldap://ldap.example.com:389
-
-# DEPRECATED: Bind user for LDAP server (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_user = uid=admin,ou=people,dc=example,dc=org
-
-# DEPRECATED: Bind user's password for LDAP server (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_password = password
-
-# DEPRECATED:
+#use_neutron=true
+
+#
+# URL for LDAP server which will store DNS entries
+#
+# Possible values:
+#
+# * A valid LDAP URL representing the server
+#  (uri value)
+#ldap_dns_url=ldap://ldap.example.com:389
+
+# Bind user for LDAP server (string value)
+#ldap_dns_user=uid=admin,ou=people,dc=example,dc=org
+
+# Bind user's password for LDAP server (string value)
+#ldap_dns_password=password
+
+#
 # Hostmaster for LDAP DNS driver Statement of Authority
 #
 # Possible values:
 #
 # * Any valid string representing LDAP DNS hostmaster.
 #  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_soa_hostmaster = hostmaster@example.org
-
-# DEPRECATED:
+#ldap_dns_soa_hostmaster=hostmaster@example.org
+
+#
 # DNS Servers for LDAP DNS driver
 #
 # Possible values:
 #
 # * A valid URL representing a DNS server
 #  (multi valued)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_servers = dns.example.org
-
-# DEPRECATED:
+#ldap_dns_servers=dns.example.org
+
+#
 # Base distinguished name for the LDAP search query
 #
 # This option helps to decide where to look up the host in LDAP.
 #  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_base_dn = ou=hosts,dc=example,dc=org
-
-# DEPRECATED:
+#ldap_dns_base_dn=ou=hosts,dc=example,dc=org
+
+#
 # Refresh interval (in seconds) for LDAP DNS driver Start of Authority
 #
 # Time interval, a secondary/slave DNS server waits before requesting for
@@ -2361,48 +2396,41 @@
 #
 # NOTE: Lower values would cause more traffic.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_soa_refresh = 1800
-
-# DEPRECATED:
+#ldap_dns_soa_refresh=1800
+
+#
 # Retry interval (in seconds) for LDAP DNS driver Start of Authority
 #
 # Time interval, a secondary/slave DNS server should wait, if an
 # attempt to transfer zone failed during the previous refresh interval.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_soa_retry = 3600
-
-# DEPRECATED:
+#ldap_dns_soa_retry=3600
+
+#
 # Expiry interval (in seconds) for LDAP DNS driver Start of Authority
 #
 # Time interval, a secondary/slave DNS server holds the information
 # before it is no longer considered authoritative.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_soa_expiry = 86400
-
-# DEPRECATED:
+#ldap_dns_soa_expiry=86400
+
+#
 # Minimum interval (in seconds) for LDAP DNS driver Start of Authority
 #
 # It is the minimum time-to-live that applies to all resource records in the
 # zone file. This value tells other servers how long they
 # should keep the data in cache.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
+#ldap_dns_soa_minimum=7200
+
+# DEPRECATED: The topic network nodes listen on (string value)
+# This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
-# nova-network is deprecated, as are any related configuration options.
-#ldap_dns_soa_minimum = 7200
+# There is no need to let users choose the RPC topic for all services - there
+# is little gain from this. Furthermore, it makes it really easy to break Nova
+# by using this option.
+#network_topic=network
 
 # DEPRECATED:
 # Default value for multi_host in networks.
@@ -2422,13 +2450,13 @@
 #
 # Related options:
 #
-# * ``use_neutron``
+# * use_neutron
 #  (boolean value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#multi_host = false
+#multi_host=false
 
 # DEPRECATED:
 # Driver to use for network creation.
@@ -2446,26 +2474,29 @@
 #
 # Related options:
 #
-# * ``use_neutron``
+# * use_neutron
 #  (string value)
 # This option is deprecated for removal since 15.0.0.
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#network_driver = nova.network.linux_net
-
-# DEPRECATED:
+#network_driver=nova.network.linux_net
+
+#
 # Firewall driver to use with ``nova-network`` service.
 #
 # This option only applies when using the ``nova-network`` service. When using
 # another networking service, such as Neutron, this should be set to the
 # ``nova.virt.firewall.NoopFirewallDriver``.
 #
-# Possible values:
-#
-# * ``nova.virt.firewall.IptablesFirewallDriver``
-# * ``nova.virt.firewall.NoopFirewallDriver``
-# * ``nova.virt.libvirt.firewall.IptablesFirewallDriver``
+# If unset (the default), this will default to the hypervisor-specified
+# default driver.
+#
+# Possible values:
+#
+# * nova.virt.firewall.IptablesFirewallDriver
+# * nova.virt.firewall.NoopFirewallDriver
+# * nova.virt.libvirt.firewall.IptablesFirewallDriver
 # * [...]
 #
 # Related options:
@@ -2473,13 +2504,10 @@
 # * ``use_neutron``: This must be set to ``False`` to enable ``nova-network``
 #   networking
 #  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#firewall_driver = nova.virt.firewall.NoopFirewallDriver
-
-# DEPRECATED:
+#firewall_driver=<None>
+firewall_driver=nova.virt.firewall.NoopFirewallDriver
+
+#
 # Determine whether to allow network traffic from same network.
 #
 # When set to true, hosts on the same subnet are not filtered and are allowed
@@ -2506,144 +2534,34 @@
 #   ``nova.virt.libvirt.firewall.IptablesFirewallDriver`` to ensure the
 #   libvirt firewall driver is enabled.
 #  (boolean value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#allow_same_net_traffic = true
-
-# DEPRECATED:
-# Default pool for floating IPs.
-#
-# This option specifies the default floating IP pool for allocating floating
-# IPs.
-#
-# While allocating a floating ip, users can optionally pass in the name of the
-# pool they want to allocate from, otherwise it will be pulled from the
-# default pool.
-#
-# If this option is not set, then 'nova' is used as default floating pool.
-#
-# Possible values:
-#
-# * Any string representing a floating IP pool name
-#  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# This option was used for two purposes: to set the floating IP pool name for
-# nova-network and to do the same for neutron. nova-network is deprecated, as
-# are
-# any related configuration options. Users of neutron, meanwhile, should use the
-# 'default_floating_pool' option in the '[neutron]' group.
-#default_floating_pool = nova
-
-# DEPRECATED:
-# Autoassigning floating IP to VM
-#
-# When set to True, floating IP is auto allocated and associated
-# to the VM upon creation.
-#
-# Related options:
-#
-# * use_neutron: this options only works with nova-network.
-#  (boolean value)
-# This option is deprecated for removal since 15.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#auto_assign_floating_ip = false
-
-# DEPRECATED:
-# Full class name for the DNS Manager for floating IPs.
-#
-# This option specifies the class of the driver that provides functionality
-# to manage DNS entries associated with floating IPs.
-#
-# When a user adds a DNS entry for a specified domain to a floating IP,
-# nova will add a DNS entry using the specified floating DNS driver.
-# When a floating IP is deallocated, its DNS entry will automatically be
-# deleted.
-#
-# Possible values:
-#
-# * Full Python path to the class to be used
-#
-# Related options:
-#
-# * use_neutron: this options only works with nova-network.
-#  (string value)
-# This option is deprecated for removal since 15.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver
-
-# DEPRECATED:
-# Full class name for the DNS Manager for instance IPs.
-#
-# This option specifies the class of the driver that provides functionality
-# to manage DNS entries for instances.
-#
-# On instance creation, nova will add DNS entries for the instance name and
-# id, using the specified instance DNS driver and domain. On instance deletion,
-# nova will remove the DNS entries.
-#
-# Possible values:
-#
-# * Full Python path to the class to be used
-#
-# Related options:
-#
-# * use_neutron: this options only works with nova-network.
-#  (string value)
-# This option is deprecated for removal since 15.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver
-
-# DEPRECATED:
-# If specified, Nova checks if the availability_zone of every instance matches
-# what the database says the availability_zone should be for the specified
-# dns_domain.
-#
-# Related options:
-#
-# * use_neutron: this options only works with nova-network.
-#  (string value)
-# This option is deprecated for removal since 15.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# nova-network is deprecated, as are any related configuration options.
-#instance_dns_domain =
+#allow_same_net_traffic=true
 
 #
 # Filename that will be used for storing websocket frames received
 # and sent by a proxy service (like VNC, spice, serial) running on this host.
 # If this is not set, no recording will be done.
 #  (string value)
-#record = <None>
+#record=<None>
 
 # Run as a background process. (boolean value)
-#daemon = false
+#daemon=false
 
 # Disallow non-encrypted connections. (boolean value)
-#ssl_only = false
+#ssl_only=false
 
 # Set to True if source host is addressed with IPv6. (boolean value)
-#source_is_ipv6 = false
+#source_is_ipv6=false
 
 # Path to SSL certificate file. (string value)
-#cert = self.pem
+#cert=self.pem
 
 # SSL key file (if separate from cert). (string value)
-#key = <None>
+#key=<None>
 
 #
 # Path to directory with content which will be served by a web server.
 #  (string value)
-#web = /usr/share/spice-html5
+#web=/usr/share/spice-html5
 
 #
 # The directory where the Nova python modules are installed.
@@ -2661,7 +2579,7 @@
 #
 # * ``state_path``
 #  (string value)
-#pybasedir = /build/nova-aJ1u2D/nova-16.0.4
+#pybasedir=/build/nova-elxmSs/nova-15.0.2
 
 #
 # The directory where the Nova binaries are installed.
@@ -2675,7 +2593,7 @@
 #
 # * The full path to a directory.
 #  (string value)
-#bindir = /usr/local/bin
+#bindir=/usr/local/bin
 
 #
 # The top-level directory for maintaining Nova's state.
@@ -2691,7 +2609,7 @@
 #
 # * The full path to a directory. Defaults to value provided in ``pybasedir``.
 #  (string value)
-#state_path = $pybasedir
+state_path=/var/lib/nova
 
 #
 # Number of seconds indicating how frequently the state of services on a
@@ -2705,7 +2623,8 @@
 #   is less than report_interval, services will routinely be considered down,
 #   because they report in too rarely.
 #  (integer value)
-#report_interval = 10
+#report_interval=10
+report_interval=60
 
 #
 # Maximum time in seconds since last check-in for up service
@@ -2718,7 +2637,8 @@
 #
 # * report_interval (service_down_time should not be less than report_interval)
 #  (integer value)
-#service_down_time = 60
+#service_down_time=60
+service_down_time=90
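
The `report_interval` / `service_down_time` relationship spelled out above can be sketched directly: a service counts as "up" only if its last check-in is no older than `service_down_time`, so `service_down_time` (90 here) must exceed `report_interval` (60 here). `service_is_up` is an illustrative helper, not Nova's implementation:

```python
# Liveness check implied by report_interval / service_down_time.
# `service_is_up` is a hypothetical sketch, not part of Nova.

def service_is_up(last_report, now, service_down_time=90):
    """True if the last check-in is recent enough for the service to count as up."""
    return (now - last_report) <= service_down_time

# With report_interval=60 and service_down_time=90 as set above,
# one missed report does not yet mark the service down:
print(service_is_up(0.0, 61.0))  # True
print(service_is_up(0.0, 91.0))  # False
```

This is why the comment warns that a `service_down_time` smaller than `report_interval` would make healthy services routinely appear down.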
 
 #
 # Enable periodic tasks.
@@ -2730,7 +2650,7 @@
 # periodic tasks on only one host - in this case disable this option for all
 # hosts but one.
 #  (boolean value)
-#periodic_enable = true
+#periodic_enable=true
 
 #
 # Number of seconds to randomly delay when starting the periodic task
@@ -2748,10 +2668,10 @@
 # * 0 : disable the random delay
 #  (integer value)
 # Minimum value: 0
-#periodic_fuzzy_delay = 60
+#periodic_fuzzy_delay=60
 
 # List of APIs to be enabled by default. (list value)
-#enabled_apis = osapi_compute,metadata
+enabled_apis=osapi_compute,metadata
 
 #
 # List of APIs with enabled SSL.
@@ -2767,7 +2687,7 @@
 # The OpenStack API service listens on this IP address for incoming
 # requests.
 #  (string value)
-#osapi_compute_listen = 0.0.0.0
+#osapi_compute_listen=0.0.0.0
 
 #
 # Port on which the OpenStack API will listen.
@@ -2777,7 +2697,7 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#osapi_compute_listen_port = 8774
+#osapi_compute_listen_port=8774
 
 #
 # Number of workers for OpenStack API service. The default will be the number
@@ -2794,7 +2714,7 @@
 # * None (default value)
 #  (integer value)
 # Minimum value: 1
-#osapi_compute_workers = <None>
+#osapi_compute_workers=<None>
 
 #
 # IP address on which the metadata API will listen.
@@ -2802,7 +2722,7 @@
 # The metadata API service listens on this IP address for incoming
 # requests.
 #  (string value)
-#metadata_listen = 0.0.0.0
+#metadata_listen=0.0.0.0
 
 #
 # Port on which the metadata API will listen.
@@ -2812,7 +2732,7 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#metadata_listen_port = 8775
+#metadata_listen_port=8775
 
 #
 # Number of workers for metadata service. If not specified the number of
@@ -2829,11 +2749,11 @@
 # * None (default value)
 #  (integer value)
 # Minimum value: 1
-#metadata_workers = <None>
+#metadata_workers=<None>
 
 # Full class name for the Manager for network (string value)
 # Allowed values: nova.network.manager.FlatManager, nova.network.manager.FlatDHCPManager, nova.network.manager.VlanManager
-#network_manager = nova.network.manager.VlanManager
+#network_manager=nova.network.manager.VlanManager
 
 #
 # This option specifies the driver to be used for the servicegroup service.
@@ -2857,7 +2777,7 @@
 #     * service_down_time (maximum time since last check-in for up service)
 #  (string value)
 # Allowed values: db, mc
-#servicegroup_driver = db
+#servicegroup_driver=db
 
 #
 # From oslo.log
@@ -2866,7 +2786,15 @@
 # If set to true, the logging level will be set to DEBUG instead of the default
 # INFO level. (boolean value)
 # Note: This option can be changed without restarting.
-#debug = false
+#debug=false
+debug=false
+
+# DEPRECATED: If set to false, the logging level will be set to WARNING instead
+# of the default INFO level. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#verbose=true
+verbose=true
 
 # The name of a logging configuration file. This file is appended to any
 # existing logging configuration files. For details about logging configuration
@@ -2876,131 +2804,132 @@
 # example, logging_context_format_string). (string value)
 # Note: This option can be changed without restarting.
 # Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
+#log_config_append=<None>
 
 # Defines the format string for %%(asctime)s in log records. Default:
 # %(default)s . This option is ignored if log_config_append is set. (string
 # value)
-#log_date_format = %Y-%m-%d %H:%M:%S
+#log_date_format=%Y-%m-%d %H:%M:%S
 
 # (Optional) Name of log file to send logging output to. If no default is set,
 # logging will go to stderr as defined by use_stderr. This option is ignored if
 # log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
+#log_file=<None>
 
 # (Optional) The base directory used for relative log_file  paths. This option
 # is ignored if log_config_append is set. (string value)
 # Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
+log_dir=/var/log/nova
 
 # Uses logging handler designed to watch file system. When log file is moved or
 # removed this handler will open a new log file with specified path
 # instantaneously. It makes sense only if log_file option is specified and Linux
 # platform is used. This option is ignored if log_config_append is set. (boolean
 # value)
-#watch_log_file = false
+#watch_log_file=false
 
 # Use syslog for logging. Existing syslog format is DEPRECATED and will be
 # changed later to honor RFC5424. This option is ignored if log_config_append is
 # set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol which
-# includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
+#use_syslog=false
 
 # Syslog facility to receive log lines. This option is ignored if
 # log_config_append is set. (string value)
-#syslog_log_facility = LOG_USER
+#syslog_log_facility=LOG_USER
 
 # Log output to standard error. This option is ignored if log_config_append is
 # set. (boolean value)
-#use_stderr = false
+#use_stderr=false
 
 # Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
 
 # Format string to use for log messages when context is undefined. (string
 # value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
 
 # Additional data to append to log message when logging level for the message is
 # DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
 
 # Prefix each line of exception output with this format. (string value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
 
 # Defines the format string for %(user_identity)s that is used in
 # logging_context_format_string. (string value)
-#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+#logging_user_identity_format=%(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
 
 # List of package logging levels in logger=LEVEL pairs. This option is ignored
 # if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+#default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
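The `default_log_levels` value above is a comma-separated list of `logger=LEVEL` pairs. As a rough illustration only (this is not oslo.log's actual parser; `parse_log_levels` is a name made up here), the value reduces to a mapping of logger name to level:

```python
# Minimal sketch: turn a default_log_levels-style value into a dict of
# logger name -> level, which is the shape the logging setup ultimately applies.
def parse_log_levels(value):
    levels = {}
    for pair in value.split(","):
        name, sep, level = pair.partition("=")
        if sep:  # skip malformed entries without an '='
            levels[name.strip()] = level.strip()
    return levels

levels = parse_log_levels("amqp=WARN,sqlalchemy=WARN,oslo.messaging=INFO")
```

A quick check like this can catch a stray space or missing `=` before the value is deployed.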
 
 # Enables or disables publication of error events. (boolean value)
-#publish_errors = false
+#publish_errors=false
 
 # The format for an instance that is passed with the log message. (string value)
-#instance_format = "[instance: %(uuid)s] "
+#instance_format="[instance: %(uuid)s] "
 
 # The format for an instance UUID that is passed with the log message. (string
 # value)
-#instance_uuid_format = "[instance: %(uuid)s] "
+#instance_uuid_format="[instance: %(uuid)s] "
 
 # Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
+#rate_limit_interval=0
 
 # Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
+#rate_limit_burst=0
 
 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
 # empty string. Logs with level greater or equal to rate_limit_except_level are
 # not filtered. An empty string means that all levels are filtered. (string
 # value)
-#rate_limit_except_level = CRITICAL
+#rate_limit_except_level=CRITICAL
 
 # Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
+#fatal_deprecations=false
 
 #
 # From oslo.messaging
 #
 
 # Size of RPC connection pool. (integer value)
-#rpc_conn_pool_size = 30
+# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
+#rpc_conn_pool_size=30
 
 # The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
+#conn_pool_min_size=2
 
 # The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
+#conn_pool_ttl=1200
 
 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
 # The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
+# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
+#rpc_zmq_bind_address=*
 
 # MatchMaker driver. (string value)
 # Allowed values: redis, sentinel, dummy
-#rpc_zmq_matchmaker = redis
+# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
+#rpc_zmq_matchmaker=redis
 
 # Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
+# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
+#rpc_zmq_contexts=1
 
 # Maximum number of ingress messages to locally buffer per topic. Default is
 # unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
+# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
+#rpc_zmq_topic_backlog=<None>
 
 # Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
+# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
+#rpc_zmq_ipc_dir=/var/run/openstack
 
 # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
 # "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
+# Deprecated group/name - [DEFAULT]/rpc_zmq_host
+#rpc_zmq_host=localhost
 
 # Number of seconds to wait before all pending messages will be sent after
 # closing a socket. The default value of -1 specifies an infinite linger period.
@@ -3008,138 +2937,146 @@
 # immediately when the socket is closed. Positive values specify an upper bound
 # for the linger period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
+#zmq_linger=-1
+zmq_linger=30
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
+# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
+#rpc_poll_timeout=1
 
 # Expiration timeout in seconds of a name service record about existing target (
 # < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
+# Deprecated group/name - [DEFAULT]/zmq_target_expire
+#zmq_target_expire=300
 
 # Update period in seconds of a name service record about existing target.
 # (integer value)
-#zmq_target_update = 180
+# Deprecated group/name - [DEFAULT]/zmq_target_update
+#zmq_target_update=180
 
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
-#use_pub_sub = false
+# Deprecated group/name - [DEFAULT]/use_pub_sub
+#use_pub_sub=false
 
 # Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
+# Deprecated group/name - [DEFAULT]/use_router_proxy
+#use_router_proxy=false
 
 # This option makes direct connections dynamic or static. It makes sense only
 # with use_router_proxy=False which means to use direct connections for direct
 # message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
+#use_dynamic_connections=false
 
 # How many additional connections to a host will be made for failover reasons.
 # This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
+#zmq_failover_connections=2
 
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#rpc_zmq_min_port = 49153
+# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
+#rpc_zmq_min_port=49153
 
 # Maximal port number for random ports range. (integer value)
 # Minimum value: 1
 # Maximum value: 65536
-#rpc_zmq_max_port = 65536
+# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
+#rpc_zmq_max_port=65536
 
 # Number of retries to find free port number before fail with ZMQBindError.
 # (integer value)
-#rpc_zmq_bind_port_retries = 100
+# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
+#rpc_zmq_bind_port_retries=100
 
 # Default serialization mechanism for serializing/deserializing
 # outgoing/incoming messages (string value)
 # Allowed values: json, msgpack
-#rpc_zmq_serialization = json
+# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
+#rpc_zmq_serialization=json
 
 # This option configures round-robin mode in zmq socket. True means not keeping
 # a queue when server side disconnects. False means to keep queue and messages
 # even if server is disconnected, when the server appears we send all
 # accumulated messages to it. (boolean value)
-#zmq_immediate = true
+#zmq_immediate=true
 
 # Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
 # other negative value) means to skip any overrides and leave it to OS default;
 # 0 and 1 (or any other positive value) mean to disable and enable the option
 # respectively. (integer value)
-#zmq_tcp_keepalive = -1
+#zmq_tcp_keepalive=-1
 
 # The duration between two keepalive transmissions in idle condition. The unit
 # is platform dependent, for example, seconds in Linux, milliseconds in Windows
 # etc. The default value of -1 (or any other negative value and 0) means to skip
 # any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
+#zmq_tcp_keepalive_idle=-1
 
 # The number of retransmissions to be carried out before declaring that remote
 # end is not available. The default value of -1 (or any other negative value and
 # 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
+#zmq_tcp_keepalive_cnt=-1
 
 # The duration between two successive keepalive retransmissions, if
 # acknowledgement to the previous keepalive transmission is not received. The
 # unit is platform dependent, for example, seconds in Linux, milliseconds in
 # Windows etc. The default value of -1 (or any other negative value and 0) means
 # to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
+#zmq_tcp_keepalive_intvl=-1
 
 # Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
+#rpc_thread_pool_size=100
 
 # Expiration timeout in seconds of a sent/received message after which it is not
 # tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
+#rpc_message_ttl=300
 
 # Wait for message acknowledgements from receivers. This mechanism works only
 # via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
+#rpc_use_acks=false
 
 # Number of seconds to wait for an ack from a cast/call. After each retry
 # attempt this timeout is multiplied by some specified multiplier. (integer
 # value)
-#rpc_ack_timeout_base = 15
+#rpc_ack_timeout_base=15
 
 # Number to multiply base ack timeout by after each retry attempt. (integer
 # value)
-#rpc_ack_timeout_multiplier = 2
+#rpc_ack_timeout_multiplier=2
 
 # Default number of message sending attempts in case of any problems occurred:
 # positive value N means at most N retries, 0 means no retries, None or -1 (or
 # any other negative values) mean to retry forever. This option is used only if
 # acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
+#rpc_retry_attempts=3
 
 # List of publisher hosts SubConsumer can subscribe on. This option has higher
 # priority then the default publishers list taken from the matchmaker. (list
 # value)
 #subscribe_on =
 
-# Size of executor thread pool when executor is threading or eventlet. (integer
-# value)
+# Size of executor thread pool. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
-#executor_thread_pool_size = 64
+#executor_thread_pool_size=64
+executor_thread_pool_size=70
 
 # Seconds to wait for a response from a call. (integer value)
-#rpc_response_timeout = 60
-
-# A URL representing the messaging driver to use and its full configuration.
-# (string value)
-#transport_url = <None>
+#rpc_response_timeout=60
+rpc_response_timeout=3600
+transport_url=rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
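The `transport_url` set above lists three RabbitMQ hosts separated by commas, each with its own credentials, followed by the virtual host. As a minimal sketch (not oslo.messaging's real parser; `parse_transport_url` and the sample credentials below are hypothetical), the URL decomposes like this:

```python
# Minimal sketch: split a multi-host transport_url into its scheme,
# per-host (user, password, host, port) tuples, and virtual host.
def parse_transport_url(url):
    scheme, rest = url.split("://", 1)
    authority, _, vhost = rest.partition("/")  # vhost follows the first '/'
    hosts = []
    for part in authority.split(","):
        creds, _, hostport = part.rpartition("@")
        user, _, password = creds.partition(":")
        host, _, port = hostport.partition(":")
        hosts.append((user, password, host, int(port) if port else None))
    return scheme, hosts, vhost

scheme, hosts, vhost = parse_transport_url(
    "rabbit://openstack:secret@10.0.0.1:5672,openstack:secret@10.0.0.2:5672/vhost"
)
```

Each comma-separated entry must carry its own `user:password@host:port`; credentials are not inherited from the first entry.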
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rpc_backend = rabbit
+#rpc_backend=rabbit
 
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
-#control_exchange = openstack
+#control_exchange=openstack
 
 #
 # From oslo.service.periodic_task
@@ -3147,7 +3084,7 @@
 
 # Some periodic tasks can be run in a separate process. Should we run them here?
 # (boolean value)
-#run_external_periodic_tasks = true
+#run_external_periodic_tasks=true
 
 #
 # From oslo.service.service
@@ -3159,21 +3096,21 @@
 # is in use); and <start>:<end> results in listening on the smallest unused port
 # number within the specified range of port numbers.  The chosen port is
 # displayed in the service's log file. (string value)
-#backdoor_port = <None>
+#backdoor_port=<None>
 
 # Enable eventlet backdoor, using the provided path as a unix socket that can
 # receive connections. This option is mutually exclusive with 'backdoor_port' in
 # that only one should be provided. If both are provided then the existence of
 # this option overrides the usage of that option. (string value)
-#backdoor_socket = <None>
+#backdoor_socket=<None>
 
 # Enables or disables logging values of all registered options when starting a
 # service (at DEBUG level). (boolean value)
-#log_options = true
+#log_options=true
 
 # Specify a timeout after which a gracefully shutdown server will exit. Zero
 # value means endless wait. (integer value)
-#graceful_shutdown_timeout = 60
+#graceful_shutdown_timeout=60
 
 
 [api]
@@ -3191,7 +3128,9 @@
 # specified as the username.
 #  (string value)
 # Allowed values: keystone, noauth2
-#auth_strategy = keystone
+# Deprecated group/name - [DEFAULT]/auth_strategy
+#auth_strategy=keystone
+auth_strategy=keystone
 
 #
 # When True, the 'X-Forwarded-For' header is treated as the canonical remote
@@ -3199,7 +3138,8 @@
 #
 # You should only enable this if you have an HTML sanitizing proxy.
 #  (boolean value)
-#use_forwarded_for = false
+# Deprecated group/name - [DEFAULT]/use_forwarded_for
+#use_forwarded_for=false
 
 #
 # When gathering the existing metadata for a config drive, the EC2-style
@@ -3223,7 +3163,8 @@
 #
 # * Any string that represents zero or more versions, separated by spaces.
 #  (string value)
-#config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
+# Deprecated group/name - [DEFAULT]/config_drive_skip_versions
+#config_drive_skip_versions=1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
 
 #
 # A list of vendordata providers.
@@ -3257,6 +3198,7 @@
 # * vendordata_dynamic_read_timeout
 # * vendordata_dynamic_failure_fatal
 #  (list value)
+# Deprecated group/name - [DEFAULT]/vendordata_providers
 #vendordata_providers =
 
 #
@@ -3267,6 +3209,7 @@
 # services and querying them for information about the instance. This behaviour
 # is documented in the vendordata.rst file in the nova developer reference.
 #  (list value)
+# Deprecated group/name - [DEFAULT]/vendordata_dynamic_targets
 #vendordata_dynamic_targets =
 
 #
@@ -3285,6 +3228,7 @@
 # * vendordata_dynamic_read_timeout
 # * vendordata_dynamic_failure_fatal
 #  (string value)
+# Deprecated group/name - [DEFAULT]/vendordata_dynamic_ssl_certfile
 #vendordata_dynamic_ssl_certfile =
 
 #
@@ -3305,7 +3249,8 @@
 # * vendordata_dynamic_failure_fatal
 #  (integer value)
 # Minimum value: 3
-#vendordata_dynamic_connect_timeout = 5
+# Deprecated group/name - [DEFAULT]/vendordata_dynamic_connect_timeout
+#vendordata_dynamic_connect_timeout=5
 
 #
 # Maximum wait time for an external REST service to return data once connected.
@@ -3324,7 +3269,8 @@
 # * vendordata_dynamic_failure_fatal
 #  (integer value)
 # Minimum value: 0
-#vendordata_dynamic_read_timeout = 5
+# Deprecated group/name - [DEFAULT]/vendordata_dynamic_read_timeout
+#vendordata_dynamic_read_timeout=5
 
 #
 # Should failures to fetch dynamic vendordata be fatal to instance boot?
@@ -3337,7 +3283,7 @@
 # * vendordata_dynamic_connect_timeout
 # * vendordata_dynamic_read_timeout
 #  (boolean value)
-#vendordata_dynamic_failure_fatal = false
+#vendordata_dynamic_failure_fatal=false
 
 #
 # This option is the time (in seconds) to cache metadata. When set to 0,
@@ -3347,7 +3293,8 @@
 # usage, and result in longer times for host metadata changes to take effect.
 #  (integer value)
 # Minimum value: 0
-#metadata_cache_expiration = 15
+# Deprecated group/name - [DEFAULT]/metadata_cache_expiration
+#metadata_cache_expiration=15
 
 #
 # Cloud providers may store custom data in vendor data file that will then be
@@ -3361,7 +3308,8 @@
 # * Any string representing the path to the data file, or an empty string
 #     (default).
 #  (string value)
-#vendordata_jsonfile_path = <None>
+# Deprecated group/name - [DEFAULT]/vendordata_jsonfile_path
+#vendordata_jsonfile_path=<None>
 
 #
 # As a query can potentially return many thousands of items, you can limit the
@@ -3369,7 +3317,7 @@
 #  (integer value)
 # Minimum value: 0
 # Deprecated group/name - [DEFAULT]/osapi_max_limit
-#max_limit = 1000
+#max_limit=1000
 
 #
 # This string is prepended to the normal URL that is returned in links to the
@@ -3381,7 +3329,7 @@
 # * Any string, including an empty string (the default).
 #  (string value)
 # Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
-#compute_link_prefix = <None>
+#compute_link_prefix=<None>
 
 #
 # This string is prepended to the normal URL that is returned in links to
@@ -3393,20 +3341,15 @@
 # * Any string, including an empty string (the default).
 #  (string value)
 # Deprecated group/name - [DEFAULT]/osapi_glance_link_prefix
-#glance_link_prefix = <None>
-
-# DEPRECATED:
+#glance_link_prefix=<None>
+
+#
 # Operators can turn off the ability for a user to take snapshots of their
 # instances by setting this option to False. When disabled, any attempt to
 # take a snapshot will result in a HTTP 400 response ("Bad Request").
 #  (boolean value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: This option disables the createImage server action API in a non-
-# discoverable way and is thus a barrier to interoperability. Also, it is not
-# used for other APIs that create snapshots like shelve or createBackup.
-# Disabling snapshots should be done via policy if so desired.
-#allow_instance_snapshots = true
+# Deprecated group/name - [DEFAULT]/allow_instance_snapshots
+#allow_instance_snapshots=true
 
 #
 # This option is a list of all instance states for which network address
@@ -3431,10 +3374,11 @@
 # * "shelved_offloaded"
 #  (list value)
 # Deprecated group/name - [DEFAULT]/osapi_hide_server_address_states
-#hide_server_address_states = building
+#hide_server_address_states=building
 
 # The full path to the fping binary. (string value)
-#fping_path = /usr/sbin/fping
+# Deprecated group/name - [DEFAULT]/fping_path
+#fping_path=/usr/sbin/fping
 
 #
 # When True, the TenantNetworkController will query the Neutron API to get the
@@ -3444,7 +3388,8 @@
 #
 # * neutron_default_tenant_id
 #  (boolean value)
-#use_neutron_default_nets = false
+# Deprecated group/name - [DEFAULT]/use_neutron_default_nets
+#use_neutron_default_nets=false
 
 #
 # Tenant ID for getting the default network from Neutron API (also referred in
@@ -3454,7 +3399,8 @@
 #
 # * use_neutron_default_nets
 #  (string value)
-#neutron_default_tenant_id = default
+# Deprecated group/name - [DEFAULT]/neutron_default_tenant_id
+#neutron_default_tenant_id=default
 
 #
 # Enables returning of the instance password by the relevant server API calls
@@ -3462,11 +3408,11 @@
 # support password injection, then the password returned will not be correct,
 # so if your hypervisor does not support password injection, set this to False.
 #  (boolean value)
-#enable_instance_password = true
+# Deprecated group/name - [DEFAULT]/enable_instance_password
+#enable_instance_password=true
 
 
 [api_database]
-connection = sqlite:////var/lib/nova/nova_api.sqlite
 #
 # The *Nova API Database* is a separate database which is used for information
 # which is used across *cells*. This database is mandatory since the Mitaka
@@ -3478,101 +3424,73 @@
 
 # The SQLAlchemy connection string to use to connect to the database. (string
 # value)
-#connection = <None>
+connection=sqlite:////var/lib/nova/nova.sqlite
 
 # If True, SQLite uses synchronous mode. (boolean value)
-#sqlite_synchronous = true
+#sqlite_synchronous=true
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
-#slave_connection = <None>
+#slave_connection=<None>
 
 # The SQL mode to be used for MySQL sessions. This option, including the
 # default, overrides any server-set SQL mode. To use whatever SQL mode is set by
 # the server configuration, set this to no value. Example: mysql_sql_mode=
 # (string value)
-#mysql_sql_mode = TRADITIONAL
+#mysql_sql_mode=TRADITIONAL
 
 # Timeout before idle SQL connections are reaped. (integer value)
-#idle_timeout = 3600
+#idle_timeout=3600
 
 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0
 # indicates no limit. (integer value)
-#max_pool_size = <None>
+#max_pool_size=<None>
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
-#max_retries = 10
+#max_retries=10
 
 # Interval between retries of opening a SQL connection. (integer value)
-#retry_interval = 10
+#retry_interval=10
 
 # If set, use this value for max_overflow with SQLAlchemy. (integer value)
-#max_overflow = <None>
+#max_overflow=<None>
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
-#connection_debug = 0
+#connection_debug=0
 
 # Add Python stack traces to SQL as comment strings. (boolean value)
-#connection_trace = false
+#connection_trace=false
 
 # If set, use this value for pool_timeout with SQLAlchemy. (integer value)
-#pool_timeout = <None>
-
-
-[barbican]
+#pool_timeout=<None>
+
+[cache]
 
 #
 # From nova.conf
 #
-
-# Use this endpoint to connect to Barbican, for example:
-# "http://localhost:9311/" (string value)
-#barbican_endpoint = <None>
-
-# Version of the Barbican API, for example: "v1" (string value)
-#barbican_api_version = <None>
-
-# Use this endpoint to connect to Keystone (string value)
-#auth_endpoint = http://localhost/identity/v3
-
-# Number of seconds to wait before retrying poll for key creation completion
-# (integer value)
-#retry_delay = 1
-
-# Number of times to retry poll for key creation completion (integer value)
-#number_of_retries = 60
-
-# Specifies if insecure TLS (https) requests. If False, the server's certificate
-# will not be validated (boolean value)
-#verify_ssl = true
-
-
-[cache]
-
-#
-# From nova.conf
-#
-
+backend = oslo_cache.memcache_pool
+enabled = true
+memcache_servers=10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
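The `memcache_servers` list above uses the `host:port,host:port` format that the memcache pool backend expects. A small validation sketch (an illustration only; `parse_memcache_servers` is a made-up helper, not part of oslo.cache) can confirm the value is well-formed before rollout:

```python
# Minimal sketch: validate a memcache_servers value of the form
# "host:port,host:port,..." and return it as (host, port) tuples.
def parse_memcache_servers(value):
    servers = []
    for entry in value.split(","):
        host, sep, port = entry.strip().rpartition(":")
        if not sep or not port.isdigit():
            raise ValueError("expected host:port, got %r" % entry)
        servers.append((host, int(port)))
    return servers

servers = parse_memcache_servers("10.0.0.1:11211,10.0.0.2:11211")
```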
 # Prefix for building the configuration dictionary for the cache region. This
 # should not need to be changed unless there is another dogpile.cache region
 # with the same configuration name. (string value)
-#config_prefix = cache.oslo
+#config_prefix=cache.oslo
 
 # Default TTL, in seconds, for any cached item in the dogpile.cache region. This
 # applies to any cached method that doesn't have an explicit cache expiration
 # time defined for it. (integer value)
-#expiration_time = 600
-
-# Cache backend module. For eventlet-based or environments with hundreds of
-# threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is
-# recommended. For environments with less than 100 threaded servers, Memcached
-# (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test
-# environments with a single instance of the server can use the
+#expiration_time=600
+
+# Dogpile.cache backend module. It is recommended that Memcache or Redis
+# (dogpile.cache.redis) be used in production deployments. For eventlet-based or
+# highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is
+# recommended. For low thread servers, dogpile.cache.memcached is recommended.
+# Test environments with a single instance of the server can use the
 # dogpile.cache.memory backend. (string value)
-# Allowed values: oslo_cache.memcache_pool, oslo_cache.dict, dogpile.cache.memcached, dogpile.cache.redis, dogpile.cache.memory, dogpile.cache.null
-#backend = dogpile.cache.null
+#backend=dogpile.cache.null
 
 # Arguments supplied to the backend module. Specify this option once per
 # argument to be passed to the dogpile.cache backend. Example format:
@@ -3585,45 +3503,44 @@
 #proxies =
 
 # Global toggle for caching. (boolean value)
-#enabled = false
+#enabled=false
 
 # Extra debugging from the cache backend (cache keys, get/set/delete/etc calls).
 # This is only really useful if you need to see the specific cache-backend
 # get/set/delete calls with the keys/values.  Typically this should be left set
 # to false. (boolean value)
-#debug_cache_backend = false
+#debug_cache_backend=false
 
 # Memcache servers in the format of "host:port". (dogpile.cache.memcache and
 # oslo_cache.memcache_pool backends only). (list value)
-#memcache_servers = localhost:11211
+#memcache_servers=localhost:11211
 
 # Number of seconds memcached server is considered dead before it is tried
 # again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
 # (integer value)
-#memcache_dead_retry = 300
+#memcache_dead_retry=300
 
 # Timeout in seconds for every call to a server. (dogpile.cache.memcache and
 # oslo_cache.memcache_pool backends only). (integer value)
-#memcache_socket_timeout = 3
+#memcache_socket_timeout=3
 
 # Max total number of open connections to every memcached server.
 # (oslo_cache.memcache_pool backend only). (integer value)
-#memcache_pool_maxsize = 10
+#memcache_pool_maxsize=10
 
 # Number of seconds a connection to memcached is held unused in the pool before
 # it is closed. (oslo_cache.memcache_pool backend only). (integer value)
-#memcache_pool_unused_timeout = 60
+#memcache_pool_unused_timeout=60
 
 # Number of seconds that an operation will wait to get a memcache client
 # connection. (integer value)
-#memcache_pool_connection_get_timeout = 10
+#memcache_pool_connection_get_timeout=10
 
 
 [cells]
-enable = False
-#
-# DEPRECATED: Cells options allow you to use cells v1 functionality in an
-# OpenStack deployment.
+#
+# Cells options allow you to use cells functionality in an OpenStack
+# deployment.
 #
 # Note that the options in this group are only for cells v1 functionality, which
 # is considered experimental and not recommended for new deployments. Cells v1
@@ -3636,6 +3553,24 @@
 #
 
 # DEPRECATED:
+# Topic.
+#
+# This is the message queue topic that cells nodes listen on. It is
+# used when the cells service is started up to configure the queue,
+# and whenever an RPC call to the scheduler is made.
+#
+# Possible values:
+#
+# * cells: This is the recommended and the default value.
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# Configurable RPC topics provide little value and can result in a wide variety
+# of errors. They should not be used.
+#topic=cells
+
+#
 # Enable cell v1 functionality.
 #
 # Note that cells v1 is considered experimental and not recommended for new
@@ -3660,12 +3595,9 @@
 #   is enabled.
 # * cell_type: Cell type should be defined for all cells.
 #  (boolean value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#enable = false
-
-# DEPRECATED:
+enable=False
+
+#
 # Name of the current cell.
 #
 # This value must be unique for each cell. Name of a cell is used as
@@ -3677,12 +3609,9 @@
 # * enabled: This option is meaningful only when cells service
 #   is enabled
 #  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#name = nova
-
-# DEPRECATED:
+#name=nova
+
+#
 # Cell capabilities.
 #
 # List of arbitrary key=value pairs defining capabilities of the
@@ -3694,12 +3623,9 @@
 # * key=value pairs list for example;
 #   ``hypervisor=xenserver;kvm,os=linux;windows``
 #  (list value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#capabilities = hypervisor=xenserver;kvm,os=linux;windows
-
-# DEPRECATED:
+#capabilities=hypervisor=xenserver;kvm,os=linux;windows
+
+#
 # Call timeout.
 #
 # Cell messaging module waits for response(s) to be put into the
@@ -3711,12 +3637,9 @@
 # * An integer, corresponding to the interval time in seconds.
 #  (integer value)
 # Minimum value: 0
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#call_timeout = 60
-
-# DEPRECATED:
+#call_timeout=60
+
+#
 # Reserve percentage
 #
 # Percentage of cell capacity to hold in reserve, so the minimum
@@ -3736,12 +3659,9 @@
 # * An integer or float, corresponding to the percentage of cell capacity to
 #   be held in reserve.
 #  (floating point value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#reserve_percent = 10.0
-
-# DEPRECATED:
+#reserve_percent=10.0
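As a hypothetical sketch (plain Python, invented names; not nova code), the reserve arithmetic described above works out like this:

```python
# Illustrative only: how a reserve percentage shrinks the capacity a cell
# reports as schedulable. With the default 10% reserve, a cell with 200
# free units offers 180.
def usable_units(free_units, reserve_percent=10.0):
    """Capacity left for scheduling after holding back the reserve."""
    reserved = free_units * reserve_percent / 100.0
    return max(0.0, free_units - reserved)
```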
+
+#
 # Type of cell.
 #
 # When the cells feature is enabled, the hosts in the OpenStack Compute
@@ -3755,12 +3675,9 @@
 #   (nova.quota.NoopQuotaDriver)
 #  (string value)
 # Allowed values: api, compute
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#cell_type = compute
-
-# DEPRECATED:
+#cell_type=compute
+
+#
 # Mute child interval.
 #
 # Number of seconds after which a lack of capability and capacity
@@ -3771,12 +3688,9 @@
 #
 # * An integer, corresponding to the interval time in seconds.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#mute_child_interval = 300
-
-# DEPRECATED:
+#mute_child_interval=300
+
+#
 # Bandwidth update interval.
 #
 # Seconds between bandwidth usage cache updates for cells.
@@ -3785,12 +3699,9 @@
 #
 # * An integer, corresponding to the interval time in seconds.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#bandwidth_update_interval = 600
-
-# DEPRECATED:
+#bandwidth_update_interval=600
+
+#
 # Instance update sync database limit.
 #
 # Number of instances to pull from the database at one time for
@@ -3801,12 +3712,9 @@
 #
 # * An integer, corresponding to a number of instances.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#instance_update_sync_database_limit = 100
-
-# DEPRECATED:
+#instance_update_sync_database_limit=100
+
+#
 # Mute weight multiplier.
 #
 # Multiplier used to weigh mute children. Mute children cells are
@@ -3817,12 +3725,9 @@
 #
 # * A negative number
 #  (floating point value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#mute_weight_multiplier = -10000.0
-
-# DEPRECATED:
+#mute_weight_multiplier=-10000.0
+
+#
 # Ram weight multiplier.
 #
 # Multiplier used for weighing ram. Negative numbers indicate that
@@ -3833,12 +3738,9 @@
 #
 # * Numeric multiplier
 #  (floating point value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#ram_weight_multiplier = 10.0
-
-# DEPRECATED:
+#ram_weight_multiplier=10.0
+
+#
 # Offset weight multiplier
 #
 # Multiplier used to weigh offset weigher. Cells with higher
@@ -3851,12 +3753,9 @@
 #
 # * Numeric multiplier
 #  (floating point value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#offset_weight_multiplier = 1.0
-
-# DEPRECATED:
+#offset_weight_multiplier=1.0
+
+#
 # Instance updated at threshold
 #
 # Number of seconds after an instance was updated or deleted to
@@ -3874,12 +3773,9 @@
 # * This value is used with the ``instance_update_num_instances``
 #   value in a periodic task run.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#instance_updated_at_threshold = 3600
-
-# DEPRECATED:
+#instance_updated_at_threshold=3600
+
+#
 # Instance update num instances
 #
 # On every run of the periodic task, nova cells manager will attempt to
@@ -3897,12 +3793,9 @@
 # * This value is used with the ``instance_updated_at_threshold``
 #   value in a periodic task run.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#instance_update_num_instances = 1
-
-# DEPRECATED:
+#instance_update_num_instances=1
+
+#
 # Maximum hop count
 #
 # When processing a targeted message, if the local cell is not the
@@ -3914,24 +3807,18 @@
 #
 # * Positive integer value
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#max_hop_count = 10
-
-# DEPRECATED:
+#max_hop_count=10
+
+#
 # Cells scheduler.
 #
 # The class of the driver used by the cells scheduler. This should be
 # the full Python path to the class to be used. If nothing is specified
 # in this option, the CellsScheduler is used.
 #  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#scheduler = nova.cells.scheduler.CellsScheduler
-
-# DEPRECATED:
+#scheduler=nova.cells.scheduler.CellsScheduler
+
+#
 # RPC driver queue base.
 #
 # When sending a message to another cell by JSON-ifying the message
@@ -3943,12 +3830,9 @@
 #
 # * The base queue name to be used when communicating between cells.
 #  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#rpc_driver_queue_base = cells.intercell
-
-# DEPRECATED:
+#rpc_driver_queue_base=cells.intercell
+
+#
 # Scheduler filter classes.
 #
 # Filter classes the cells scheduler should use. An entry of
@@ -3977,12 +3861,9 @@
 # to a particular cell.
 #
 #  (list value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#scheduler_filter_classes = nova.cells.filters.all_filters
-
-# DEPRECATED:
+#scheduler_filter_classes=nova.cells.filters.all_filters
+
+#
 # Scheduler weight classes.
 #
 # Weigher classes the cells scheduler should use. An entry of
@@ -4012,12 +3893,9 @@
 # is set to a very high value (for example, '999999999999999'), it is
 # likely to be picked if another cell does not have a higher weight.
 #  (list value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#scheduler_weight_classes = nova.cells.weights.all_weighers
-
-# DEPRECATED:
+#scheduler_weight_classes=nova.cells.weights.all_weighers
+
+#
 # Scheduler retries.
 #
 # How many retries when no cells are available. Specifies how many
@@ -4033,12 +3911,9 @@
 # * This value is used with the ``scheduler_retry_delay`` value
 #   while retrying to find a suitable cell.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#scheduler_retries = 10
-
-# DEPRECATED:
+#scheduler_retries=10
+
+#
 # Scheduler retry delay.
 #
 # Specifies the delay (in seconds) between scheduling retries when no
@@ -4056,12 +3931,9 @@
 # * This value is used with the ``scheduler_retries`` value
 #   while retrying to find a suitable cell.
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#scheduler_retry_delay = 2
-
-# DEPRECATED:
+#scheduler_retry_delay=2
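The two options above jointly define a retry budget; a hypothetical sketch (no sleeping, invented names, not nova code) of the worst-case wait:

```python
# Illustrative arithmetic only: with the defaults of 10 retries and a
# 2-second delay, scheduling waits roughly 20 seconds in total before
# giving up on finding a cell.
def worst_case_wait(scheduler_retries=10, scheduler_retry_delay=2):
    """Total seconds spent waiting across all scheduling retries."""
    return scheduler_retries * scheduler_retry_delay
```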
+
+#
 # DB check interval.
 #
 # Cell state manager updates cell status for all cells from the DB
@@ -4074,12 +3946,9 @@
 # * Interval time, in seconds.
 #
 #  (integer value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#db_check_interval = 60
-
-# DEPRECATED:
+#db_check_interval=60
+
+#
 # Optional cells configuration.
 #
 # Configuration file from which to read cells configuration. If given,
@@ -4130,10 +3999,7 @@
 #     }
 #
 #  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason: Cells v1 is being replaced with Cells v2.
-#cells_config = <None>
+#cells_config=<None>
 
 
 [cinder]
@@ -4141,7 +4007,8 @@
 #
 # From nova.conf
 #
-
+os_region_name = RegionOne
+catalog_info=volumev2:cinderv2:internalURL
 #
 # Info to match when looking for cinder in the service catalog.
 #
@@ -4157,7 +4024,7 @@
 #
 # * endpoint_template - Setting this option will override catalog_info
 #  (string value)
-#catalog_info = volumev3:cinderv3:publicURL
+#catalog_info=volumev2:cinderv2:publicURL
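The value is a colon-separated triple. As a hypothetical helper (illustrative Python, not the actual nova lookup code), splitting it looks like:

```python
# Illustrative parser for the catalog_info format
# <service_type>:<service_name>:<endpoint_type>.
def parse_catalog_info(value):
    service_type, service_name, endpoint_type = value.split(':')
    return {
        'service_type': service_type,    # e.g. volumev2
        'service_name': service_name,    # e.g. cinderv2
        'endpoint_type': endpoint_type,  # e.g. internalURL
    }
```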
 
 #
 # If this option is set then it will override service catalog lookup with
@@ -4175,7 +4042,7 @@
 #
 # * catalog_info - If endpoint_template is not set, catalog_info will be used.
 #  (string value)
-#endpoint_template = <None>
+#endpoint_template=<None>
 
 #
 # Region name of this node. This is used when picking the URL in the service
@@ -4185,7 +4052,7 @@
 #
 # * Any string representing region name
 #  (string value)
-#os_region_name = <None>
+#os_region_name=<None>
 
 #
 # Number of times cinderclient should retry on any failed http call.
@@ -4198,7 +4065,7 @@
 # * Any integer value. 0 means connection is attempted only once
 #  (integer value)
 # Minimum value: 0
-#http_retries = 3
+#http_retries=3
 
 #
 # Allow attach between instance and volume in different availability zones.
@@ -4214,51 +4081,102 @@
 # the build request.
 # By default there is no availability zone restriction on volume attach.
 #  (boolean value)
-#cross_az_attach = true
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-
-[compute]
+#cross_az_attach=true
+
+
+[cloudpipe]
 
 #
 # From nova.conf
 #
 
 #
-# Number of consecutive failed builds that result in disabling a compute
-# service.
-#
-# This option will cause nova-compute to set itself to a disabled state
-# if a certain number of consecutive build failures occur. This will
-# prevent the scheduler from continuing to send builds to a compute node that is
-# consistently failing. Note that all failures qualify and count towards this
-# score, including reschedules that may have been due to racy scheduler
-# behavior.
-# Since the failures must be consecutive, it is unlikely that occasional
-# expected
-# reschedules will actually disable a compute node.
-#
-# Possible values:
-#
-# * Any positive integer representing a build failure count.
-# * Zero to never auto-disable.
-#  (integer value)
-#consecutive_build_service_disable_threshold = 10
+# Image ID used when starting up a cloudpipe VPN client.
+#
+# An empty instance is created and configured with OpenVPN using
+# boot_script_template. This instance would be snapshotted and stored
+# in glance. The ID of the stored image is used in 'vpn_image_id' to
+# create cloudpipe VPN client.
+#
+# Possible values:
+#
+# * Any valid ID of a VPN image
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vpn_image_id
+#vpn_image_id=0
+
+#
+# Flavor for VPN instances.
+#
+# Possible values:
+#
+# * Any valid flavor name
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vpn_flavor
+#vpn_flavor=m1.tiny
+
+#
+# Template for cloudpipe instance boot script.
+#
+# Possible values:
+#
+# * Any valid path to a cloudpipe instance boot script template
+#
+# Related options:
+#
+# The following options are required to configure cloudpipe-managed
+# OpenVPN server.
+#
+# * dmz_net
+# * dmz_mask
+# * cnt_vpn_clients
+#  (string value)
+# Deprecated group/name - [DEFAULT]/boot_script_template
+#boot_script_template=$pybasedir/nova/cloudpipe/bootscript.template
+
+#
+# Network to push into OpenVPN config.
+#
+# Note: Above mentioned OpenVPN config can be found at
+# /etc/openvpn/server.conf.
+#
+# Possible values:
+#
+# * Any valid IPv4/IPV6 address
+#
+# Related options:
+#
+# * boot_script_template - dmz_net is pushed into bootscript.template
+#   to configure cloudpipe-managed OpenVPN server
+#  (IP address value)
+# Deprecated group/name - [DEFAULT]/dmz_net
+#dmz_net=10.0.0.0
+
+#
+# Netmask to push into OpenVPN config.
+#
+# Possible values:
+#
+# * Any valid IPv4/IPV6 netmask
+#
+# Related options:
+#
+# * dmz_net - dmz_net and dmz_mask is pushed into bootscript.template
+#   to configure cloudpipe-managed OpenVPN server
+# * boot_script_template
+#  (IP address value)
+# Deprecated group/name - [DEFAULT]/dmz_mask
+#dmz_mask=255.255.255.0
+
+#
+# Suffix to add to project name for VPN key and secgroups
+#
+# Possible values:
+#
+# * Any string value representing the VPN key suffix
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vpn_key_suffix
+#vpn_key_suffix=-vpn
 
 
 [conductor]
@@ -4280,13 +4198,13 @@
 # There is no need to let users choose the RPC topic for all services - there
 # is little gain from this. Furthermore, it makes it really easy to break Nova
 # by using this option.
-#topic = conductor
+#topic=conductor
 
 #
 # Number of workers for OpenStack Conductor service. The default will be the
 # number of CPUs available.
 #  (integer value)
-#workers = <None>
+#workers=<None>
 
 
 [console]
@@ -4331,7 +4249,7 @@
 #  (integer value)
 # Minimum value: 0
 # Deprecated group/name - [DEFAULT]/console_token_ttl
-#token_ttl = 600
+#token_ttl=600
 
 
 [cors]
@@ -4343,24 +4261,53 @@
 # Indicate whether this resource may be shared with the domain received in the
 # request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
 # slash. Example: https://horizon.example.com (list value)
-#allowed_origin = <None>
+#allowed_origin=<None>
 
 # Indicate that the actual request can include user credentials (boolean value)
-#allow_credentials = true
+#allow_credentials=true
 
 # Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
 # Headers. (list value)
-#expose_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Service-Token
+#expose_headers=X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Service-Token
 
 # Maximum cache age of CORS preflight requests. (integer value)
-#max_age = 3600
+#max_age=3600
 
 # Indicate which methods can be used during the actual request. (list value)
-#allow_methods = GET,PUT,POST,DELETE,PATCH
+#allow_methods=GET,PUT,POST,DELETE,PATCH
 
 # Indicate which header field names may be used during the actual request. (list
 # value)
-#allow_headers = X-Auth-Token,X-Openstack-Request-Id,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id
+#allow_headers=X-Auth-Token,X-Openstack-Request-Id,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id
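As a hypothetical sketch (not oslo.middleware itself; names invented), the origin check that `allowed_origin` drives amounts to membership in the configured list of exact `scheme://host[:port]` strings:

```python
# Illustrative only: an origin is shared only if it appears verbatim in the
# configured allowed_origin list; an unset option shares with nobody.
def origin_allowed(request_origin, allowed_origin):
    if not allowed_origin:
        return False
    return request_origin in allowed_origin
```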
+
+
+[cors.subdomain]
+
+#
+# From oslo.middleware
+#
+
+# Indicate whether this resource may be shared with the domain received in the
+# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
+# slash. Example: https://horizon.example.com (list value)
+#allowed_origin=<None>
+
+# Indicate that the actual request can include user credentials (boolean value)
+#allow_credentials=true
+
+# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
+# Headers. (list value)
+#expose_headers=X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Service-Token
+
+# Maximum cache age of CORS preflight requests. (integer value)
+#max_age=3600
+
+# Indicate which methods can be used during the actual request. (list value)
+#allow_methods=GET,PUT,POST,DELETE,PATCH
+
+# Indicate which header field names may be used during the actual request. (list
+# value)
+#allow_headers=X-Auth-Token,X-Openstack-Request-Id,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id
 
 
 [crypto]
@@ -4381,7 +4328,8 @@
 #
 # * ca_path
 #  (string value)
-#ca_file = cacert.pem
+# Deprecated group/name - [DEFAULT]/ca_file
+#ca_file=cacert.pem
 
 #
 # Filename of a private key.
@@ -4390,7 +4338,8 @@
 #
 # * keys_path
 #  (string value)
-#key_file = private/cakey.pem
+# Deprecated group/name - [DEFAULT]/key_file
+#key_file=private/cakey.pem
 
 #
 # Filename of root Certificate Revocation List (CRL). This is a list of
@@ -4401,7 +4350,8 @@
 #
 # * ca_path
 #  (string value)
-#crl_file = crl.pem
+# Deprecated group/name - [DEFAULT]/crl_file
+#crl_file=crl.pem
 
 #
 # Directory path where keys are located.
@@ -4410,7 +4360,8 @@
 #
 # * key_file
 #  (string value)
-#keys_path = $state_path/keys
+# Deprecated group/name - [DEFAULT]/keys_path
+#keys_path=$state_path/keys
 
 #
 # Directory path where root CA is located.
@@ -4419,125 +4370,133 @@
 #
 # * ca_file
 #  (string value)
-#ca_path = $state_path/CA
+# Deprecated group/name - [DEFAULT]/ca_path
+#ca_path=$state_path/CA
 
 # Option to enable/disable use of CA for each project. (boolean value)
-#use_project_ca = false
+# Deprecated group/name - [DEFAULT]/use_project_ca
+#use_project_ca=false
 
 #
 # Subject for certificate for users, %s for
 # project, user, timestamp
 #  (string value)
-#user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s
+# Deprecated group/name - [DEFAULT]/user_cert_subject
+#user_cert_subject=/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s
 
 #
 # Subject for certificate for projects, %s for
 # project, timestamp
 #  (string value)
-#project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s
+# Deprecated group/name - [DEFAULT]/project_cert_subject
+#project_cert_subject=/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s
 
 
 [database]
-connection = sqlite:////var/lib/nova/nova.sqlite
 
 #
 # From oslo.db
 #
 
+# DEPRECATED: The file name to use with SQLite. (string value)
+# Deprecated group/name - [DEFAULT]/sqlite_db
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Should use config option connection or slave_connection to connect the
+# database.
+#sqlite_db=oslo.sqlite
+
 # If True, SQLite uses synchronous mode. (boolean value)
-#sqlite_synchronous = true
+# Deprecated group/name - [DEFAULT]/sqlite_synchronous
+#sqlite_synchronous=true
 
 # The back end to use for the database. (string value)
 # Deprecated group/name - [DEFAULT]/db_backend
-#backend = sqlalchemy
+#backend=sqlalchemy
 
 # The SQLAlchemy connection string to use to connect to the database. (string
 # value)
 # Deprecated group/name - [DEFAULT]/sql_connection
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
-#connection = <None>
+#connection=<None>
 
 # The SQLAlchemy connection string to use to connect to the slave database.
 # (string value)
-#slave_connection = <None>
+#slave_connection=<None>
 
 # The SQL mode to be used for MySQL sessions. This option, including the
 # default, overrides any server-set SQL mode. To use whatever SQL mode is set by
 # the server configuration, set this to no value. Example: mysql_sql_mode=
 # (string value)
-#mysql_sql_mode = TRADITIONAL
-
-# If True, transparently enables support for handling MySQL Cluster (NDB).
-# (boolean value)
-#mysql_enable_ndb = false
+#mysql_sql_mode=TRADITIONAL
 
 # Timeout before idle SQL connections are reaped. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_idle_timeout
 # Deprecated group/name - [DATABASE]/sql_idle_timeout
 # Deprecated group/name - [sql]/idle_timeout
-#idle_timeout = 3600
+#idle_timeout=3600
 
 # Minimum number of SQL connections to keep open in a pool. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
 # Deprecated group/name - [DATABASE]/sql_min_pool_size
-#min_pool_size = 1
+#min_pool_size=1
 
 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0
 # indicates no limit. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
-#max_pool_size = 5
+#max_pool_size=5
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
-#max_retries = 10
+#max_retries=10
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
 # Deprecated group/name - [DATABASE]/reconnect_interval
-#retry_interval = 10
+#retry_interval=10
 
 # If set, use this value for max_overflow with SQLAlchemy. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
-#max_overflow = 50
+#max_overflow=50
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
 # Minimum value: 0
 # Maximum value: 100
 # Deprecated group/name - [DEFAULT]/sql_connection_debug
-#connection_debug = 0
+#connection_debug=0
 
 # Add Python stack traces to SQL as comment strings. (boolean value)
 # Deprecated group/name - [DEFAULT]/sql_connection_trace
-#connection_trace = false
+#connection_trace=false
 
 # If set, use this value for pool_timeout with SQLAlchemy. (integer value)
 # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
-#pool_timeout = <None>
+#pool_timeout=<None>
 
 # Enable the experimental use of database reconnect on connection lost. (boolean
 # value)
-#use_db_reconnect = false
+#use_db_reconnect=false
 
 # Seconds between retries of a database transaction. (integer value)
-#db_retry_interval = 1
+#db_retry_interval=1
 
 # If True, increases the interval between retries of a database operation up to
 # db_max_retry_interval. (boolean value)
-#db_inc_retry_interval = true
+#db_inc_retry_interval=true
 
 # If db_inc_retry_interval is set, the maximum seconds between retries of a
 # database operation. (integer value)
-#db_max_retry_interval = 10
+#db_max_retry_interval=10
 
 # Maximum retries in case of connection error or deadlock error before error is
 # raised. Set to -1 to specify an infinite retry count. (integer value)
-#db_max_retries = 20
+#db_max_retries=20
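The three `db_*retry*` options above describe a capped, optionally increasing backoff. A hypothetical outline (invented names; not oslo.db itself):

```python
# Illustrative pacing only: start at db_retry_interval, double after each
# failed attempt when db_inc_retry_interval is set, and never exceed
# db_max_retry_interval. With the defaults the first five waits are
# 1, 2, 4, 8 and 10 seconds.
def retry_intervals(attempts, db_retry_interval=1,
                    db_inc_retry_interval=True, db_max_retry_interval=10):
    interval, waits = db_retry_interval, []
    for _ in range(attempts):
        waits.append(interval)
        if db_inc_retry_interval:
            interval = min(interval * 2, db_max_retry_interval)
    return waits
```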
 
 #
 # From oslo.db.concurrency
@@ -4546,7 +4505,7 @@
 # Enable the experimental use of thread pooling for all DB API calls (boolean
 # value)
 # Deprecated group/name - [DEFAULT]/dbapi_use_tpool
-#use_tpool = false
+#use_tpool=false
 
 
 [ephemeral_storage_encryption]
@@ -4558,21 +4517,19 @@
 #
 # Enables/disables LVM ephemeral storage encryption.
 #  (boolean value)
-#enabled = false
+#enabled=false
 
 #
 # Cipher-mode string to be used.
 #
 # The cipher and mode to be used to encrypt ephemeral storage. The set of
-# cipher-mode combinations available depends on kernel support. According
-# to the dm-crypt documentation, the cipher is expected to be in the format:
-# "<cipher>-<chainmode>-<ivmode>".
+# cipher-mode combinations available depends on kernel support.
 #
 # Possible values:
 #
 # * Any crypto option listed in ``/proc/crypto``.
 #  (string value)
-#cipher = aes-xts-plain64
+#cipher=aes-xts-plain64
 
 #
 # Encryption key length in bits.
@@ -4581,7 +4538,7 @@
 # In XTS mode only half of the bits are used for encryption key.
 #  (integer value)
 # Minimum value: 1
-#key_size = 512
+#key_size=512
 
 
 [filter_scheduler]
@@ -4613,7 +4570,7 @@
 #  (integer value)
 # Minimum value: 1
 # Deprecated group/name - [DEFAULT]/scheduler_host_subset_size
-#host_subset_size = 1
+#host_subset_size=1
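The behaviour described above can be sketched hypothetically (illustrative Python, seeded for determinism; not nova's implementation):

```python
# Illustrative only: keep the best N ranked hosts and pick one at random,
# so simultaneous scheduling requests spread out instead of all racing for
# the single top host.
import random

def pick_host(ranked_hosts, host_subset_size=1, seed=0):
    subset = ranked_hosts[:host_subset_size]
    return random.Random(seed).choice(subset)
```

With the default subset size of 1 the top-ranked host is always chosen, which restores the strictly deterministic behaviour the comment warns can cause contention.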
 
 #
 # The number of instances that can be actively performing IO on a host.
@@ -4630,7 +4587,8 @@
 # * An integer, where the integer corresponds to the max number of instances
 #   that can be actively performing IO on any given host.
 #  (integer value)
-#max_io_ops_per_host = 8
+# Deprecated group/name - [DEFAULT]/max_io_ops_per_host
+#max_io_ops_per_host=8
 
 #
 # Maximum number of instances that can be active on a host.
@@ -4650,8 +4608,8 @@
 # * An integer, where the integer corresponds to the max instances that can be
 #   scheduled on a host.
 #  (integer value)
-# Minimum value: 1
-#max_instances_per_host = 50
+# Deprecated group/name - [DEFAULT]/max_instances_per_host
+#max_instances_per_host=50
 
 #
 # Enable querying of individual hosts for instance information.
@@ -4670,14 +4628,9 @@
 #
 # This option is only used by the FilterScheduler and its subclasses; if you use
 # a different scheduler, this option has no effect.
-#
-# NOTE: In a multi-cell (v2) setup where the cell MQ is separated from the
-# top-level, computes cannot directly communicate with the scheduler. Thus,
-# this option cannot be enabled in that scenario. See also the
-# [workarounds]/disable_group_policy_check_upcall option.
 #  (boolean value)
 # Deprecated group/name - [DEFAULT]/scheduler_tracks_instance_changes
-#track_instance_changes = true
+#track_instance_changes=true
 
 #
 # Filters that the scheduler can use.
@@ -4702,13 +4655,15 @@
 # * scheduler_enabled_filters
 #  (multi valued)
 # Deprecated group/name - [DEFAULT]/scheduler_available_filters
-#available_filters = nova.scheduler.filters.all_filters
+#available_filters=nova.scheduler.filters.all_filters
 
 #
 # Filters that the scheduler will use.
 #
 # An ordered list of filter class names that will be used for filtering
-# hosts. These filters will be applied in the order they are listed so
+# hosts. Ignore the word 'default' in the name of this option: these filters will
+# *always* be applied, and they will be applied in the order they are listed so
 # place your most restrictive filters first to make the filtering process more
 # efficient.
 #
@@ -4727,9 +4682,9 @@
 #   exception will be raised.
 #  (list value)
 # Deprecated group/name - [DEFAULT]/scheduler_default_filters
-#enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
-
-# DEPRECATED:
+#enabled_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
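Why ordering matters can be illustrated with a hypothetical sketch (toy filters and invented names, not nova's scheduler): each filter only sees the hosts the previous one passed, so a restrictive filter placed first shrinks the work done by everything after it.

```python
# Illustrative only: run filters in order, counting per-host checks.
def run_filters(hosts, filters):
    checks = 0
    for accept in filters:
        kept = []
        for host in hosts:
            checks += 1
            if accept(host):
                kept.append(host)
        hosts = kept
    return hosts, checks

hosts = list(range(100))
strict = lambda h: h < 5        # passes 5 of 100 hosts
loose = lambda h: h % 2 == 0    # passes half

survivors_a, checks_a = run_filters(hosts, [strict, loose])
survivors_b, checks_b = run_filters(hosts, [loose, strict])
# Same survivors either way, but strict-first does 105 checks vs 150.
```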
+
+#
 # Filters used for filtering baremetal hosts.
 #
 # Filters are applied in order, so place your most restrictive filters first to
@@ -4749,16 +4704,9 @@
 #   no effect.
 #  (list value)
 # Deprecated group/name - [DEFAULT]/baremetal_scheduler_default_filters
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# These filters were used to overcome some of the baremetal scheduling
-# limitations in Nova prior to the use of the Placement API. Now scheduling will
-# use the custom resource class defined for each baremetal node to make its
-# selection.
-#baremetal_enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
-
-# DEPRECATED:
+#baremetal_enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
+
+#
 # Enable baremetal filters.
 #
 # Set this to True to tell the nova scheduler that it should use the filters
@@ -4775,14 +4723,7 @@
 #   specified in 'scheduler_enabled_filters'.
 #  (boolean value)
 # Deprecated group/name - [DEFAULT]/scheduler_use_baremetal_filters
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason:
-# These filters were used to overcome some of the baremetal scheduling
-# limitations in Nova prior to the use of the Placement API. Now scheduling will
-# use the custom resource class defined for each baremetal node to make its
-# selection.
-#use_baremetal_filters = false
+#use_baremetal_filters=false
 
 #
 # Weighers that the scheduler will use.
@@ -4804,7 +4745,7 @@
 #   a weigher that will be used for selecting a host
 #  (list value)
 # Deprecated group/name - [DEFAULT]/scheduler_weight_classes
-#weight_classes = nova.scheduler.weights.all_weighers
+#weight_classes=nova.scheduler.weights.all_weighers
 
 #
 # Ram weight multiplier ratio.
@@ -4828,7 +4769,8 @@
 # * An integer or float value, where the value corresponds to the multiplier
 #   ratio for this weigher.
 #  (floating point value)
-#ram_weight_multiplier = 1.0
+# Deprecated group/name - [DEFAULT]/ram_weight_multiplier
+#ram_weight_multiplier=1.0
 
 #
 # Disk weight multiplier ratio.
@@ -4838,14 +4780,15 @@
 #
 # This option is only used by the FilterScheduler and its subclasses; if you use
 # a different scheduler, this option has no effect. Also note that this setting
-# only affects scheduling if the 'disk' weigher is enabled.
+# only affects scheduling if the 'disk' weigher is enabled.
 #
 # Possible values:
 #
 # * An integer or float value, where the value corresponds to the multiplier
 #   ratio for this weigher.
 #  (floating point value)
-#disk_weight_multiplier = 1.0
+# Deprecated group/name - [DEFAULT]/disk_weight_multiplier
+#disk_weight_multiplier=1.0
 
 #
 # IO operations weight multiplier ratio.
@@ -4870,25 +4813,8 @@
 # * An integer or float value, where the value corresponds to the multiplier
 #   ratio for this weigher.
 #  (floating point value)
-#io_ops_weight_multiplier = -1.0
-
-#
-# PCI device affinity weight multiplier.
-#
-# The PCI device affinity weighter computes a weighting based on the number of
-# PCI devices on the host and the number of PCI devices requested by the
-# instance. The ``NUMATopologyFilter`` filter must be enabled for this to have
-# any significance. For more information, refer to the filter documentation:
-#
-#     https://docs.openstack.org/developer/nova/filter_scheduler.html
-#
-# Possible values:
-#
-# * A positive integer or float value, where the value corresponds to the
-#   multiplier ratio for this weigher.
-#  (floating point value)
-# Minimum value: 0
-#pci_weight_multiplier = 1.0
+# Deprecated group/name - [DEFAULT]/io_ops_weight_multiplier
+#io_ops_weight_multiplier=-1.0
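How the `*_weight_multiplier` options combine can be sketched hypothetically (toy numbers and invented names, not nova internals): each weigher's normalised score is multiplied by its option and summed, and a negative multiplier, such as the io_ops default of -1.0, turns high activity into a penalty.

```python
# Illustrative only: weighted sum of per-weigher scores.
def weigh_host(scores, multipliers):
    return sum(multipliers[name] * value for name, value in scores.items())

mult = {'ram': 1.0, 'disk': 1.0, 'io_ops': -1.0}
idle = {'ram': 0.8, 'disk': 0.9, 'io_ops': 0.1}
busy = {'ram': 0.8, 'disk': 0.9, 'io_ops': 0.9}
# The idle host outscores the busy one purely through the io_ops penalty.
```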
 
 #
 # Multiplier used for weighing hosts for group soft-affinity.
@@ -4899,7 +4825,8 @@
 #   for hosts with group soft affinity. Only positive values are meaningful, as
 #   negative values would make this behave as a soft anti-affinity weigher.
 #  (floating point value)
-#soft_affinity_weight_multiplier = 1.0
+# Deprecated group/name - [DEFAULT]/soft_affinity_weight_multiplier
+#soft_affinity_weight_multiplier=1.0
 
 #
 # Multiplier used for weighing hosts for group soft-anti-affinity.
@@ -4911,7 +4838,8 @@
 #   meaningful, as negative values would make this behave as a soft affinity
 #   weigher.
 #  (floating point value)
-#soft_anti_affinity_weight_multiplier = 1.0
+# Deprecated group/name - [DEFAULT]/soft_anti_affinity_weight_multiplier
+#soft_anti_affinity_weight_multiplier=1.0
 
 #
 # List of UUIDs for images that can only be run on certain hosts.
@@ -4933,6 +4861,7 @@
 # * scheduler/isolated_hosts
 # * scheduler/restrict_isolated_hosts_to_isolated_images
 #  (list value)
+# Deprecated group/name - [DEFAULT]/isolated_images
 #isolated_images =
 
 #
@@ -4954,6 +4883,7 @@
 # * scheduler/isolated_images
 # * scheduler/restrict_isolated_hosts_to_isolated_images
 #  (list value)
+# Deprecated group/name - [DEFAULT]/isolated_hosts
 #isolated_hosts =
 
 #
@@ -4970,7 +4900,8 @@
 # * scheduler/isolated_images
 # * scheduler/isolated_hosts
 #  (boolean value)
-#restrict_isolated_hosts_to_isolated_images = true
+# Deprecated group/name - [DEFAULT]/restrict_isolated_hosts_to_isolated_images
+#restrict_isolated_hosts_to_isolated_images=true
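#
# Illustrative example (the image UUIDs and host names below are hypothetical,
# not taken from this deployment): to pin two images to two dedicated hosts
# and keep all other images off those hosts, the three options above work
# together like this:
#
#     isolated_images = 11111111-2222-3333-4444-555555555555,66666666-7777-8888-9999-000000000000
#     isolated_hosts = isolated-host-1,isolated-host-2
#     restrict_isolated_hosts_to_isolated_images = true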
 
 #
 # Image property namespace for use in the host aggregate.
@@ -4998,7 +4929,8 @@
 #
 # * aggregate_image_properties_isolation_separator
 #  (string value)
-#aggregate_image_properties_isolation_namespace = <None>
+# Deprecated group/name - [DEFAULT]/aggregate_image_properties_isolation_namespace
+#aggregate_image_properties_isolation_namespace=<None>
 
 #
 # Separator character(s) for image property namespace and name.
@@ -5022,7 +4954,8 @@
 #
 # * aggregate_image_properties_isolation_namespace
 #  (string value)
-#aggregate_image_properties_isolation_separator = .
+# Deprecated group/name - [DEFAULT]/aggregate_image_properties_isolation_separator
+#aggregate_image_properties_isolation_separator=.
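#
# Illustrative example (the namespace and metadata key below are hypothetical):
# with
#
#     aggregate_image_properties_isolation_namespace = isolation
#     aggregate_image_properties_isolation_separator = .
#
# the AggregateImagePropertiesIsolation filter only compares host aggregate
# metadata keys that start with "isolation." (e.g. "isolation.os_type")
# against the corresponding image properties; all other keys are ignored.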
 
 
 [glance]
@@ -5043,7 +4976,17 @@
 # "scheme://hostname:port[/path]"
 #   (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image").
 #  (list value)
-#api_servers = <None>
+#api_servers=<None>
+api_servers = http://10.167.4.35:9292
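# Note: api_servers is a list option, so several glance endpoints may be given
# for simple client-side balancing. Illustrative only, the second endpoint
# below is a hypothetical example, not part of this deployment:
#
#     api_servers = http://10.167.4.35:9292,http://10.0.1.0:9292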
+
+
+#
+# Enable insecure SSL (https) requests to glance.
+#
+# This setting can be used to turn off verification of the glance server
+# certificate against the certificate authorities.
+#  (boolean value)
+#api_insecure=false
 
 #
 # Enable glance operation retries.
@@ -5052,7 +4995,7 @@
 # an image to / from glance. 0 means no retries.
 #  (integer value)
 # Minimum value: 0
-#num_retries = 0
+#num_retries=0
 
 #
 # List of url schemes that can be directly accessed.
@@ -5083,74 +5026,11 @@
 #
 # * The options in the `key_manager` group, as the key_manager is used
 #   for the signature validation.
-# * Both enable_certificate_validation and default_trusted_certificate_ids
-#   below depend on this option being enabled.
 #  (boolean value)
-#verify_glance_signatures = false
-
-# DEPRECATED:
-# Enable certificate validation for image signature verification.
-#
-# During image signature verification nova will first verify the validity of the
-# image's signing certificate using the set of trusted certificates associated
-# with the instance. If certificate validation fails, signature verification
-# will not be performed and the image will be placed into an error state. This
-# provides end users with stronger assurances that the image data is unmodified
-# and trustworthy. If left disabled, image signature verification can still
-# occur but the end user will not have any assurance that the signing
-# certificate used to generate the image signature is still trustworthy.
-#
-# Related options:
-#
-# * This option only takes effect if verify_glance_signatures is enabled.
-# * The value of default_trusted_certificate_ids may be used when this option
-#   is enabled.
-#  (boolean value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# This option is intended to ease the transition for deployments leveraging
-# image signature verification. The intended state long-term is for signature
-# verification and certificate validation to always happen together.
-#enable_certificate_validation = false
-
-#
-# List of certificate IDs for certificates that should be trusted.
-#
-# May be used as a default list of trusted certificate IDs for certificate
-# validation. The value of this option will be ignored if the user provides a
-# list of trusted certificate IDs with an instance API request. The value of
-# this option will be persisted with the instance data if signature verification
-# and certificate validation are enabled and if the user did not provide an
-# alternative list. If left empty when certificate validation is enabled the
-# user must provide a list of trusted certificate IDs otherwise certificate
-# validation will fail.
-#
-# Related options:
-#
-# * The value of this option may be used if both verify_glance_signatures and
-#   enable_certificate_validation are enabled.
-#  (list value)
-#default_trusted_certificate_ids =
+#verify_glance_signatures=false
 
 # Enable or disable debug logging with glanceclient. (boolean value)
-#debug = false
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
+#debug=false
 
 
 [guestfs]
@@ -5180,7 +5060,7 @@
 #     * libvirt.inject_partition
 #     * libvirt.inject_password
 #  (boolean value)
-#debug = false
+#debug=false
 
 
 [healthcheck]
@@ -5192,10 +5072,10 @@
 # DEPRECATED: The path to respond to healthcheck requests on. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
-#path = /healthcheck
+#path=/healthcheck
 
 # Show more detailed information as part of the response (boolean value)
-#detailed = false
+#detailed=false
 
 # Additional backends that can perform health checks and report that information
 # back as part of a request. (list value)
@@ -5203,7 +5083,7 @@
 
 # Check the presence of a file to determine if an application is running on a
 # port. Used by DisableByFileHealthcheck plugin. (string value)
-#disable_by_file_path = <None>
+#disable_by_file_path=<None>
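# Illustrative example (the path below is hypothetical): point this option at
# a sentinel file, then create that file to make the healthcheck middleware
# report the service as unavailable:
#
#     disable_by_file_path = /var/run/nova/healthcheck_disable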
 
 # Check the presence of a file based on a port to determine if an application is
 # running on a port. Expects a "port:path" list of strings. Used by
@@ -5235,7 +5115,7 @@
 # * Float values greater than 1.0: Enables allocation of total implied
 #   RAM divided by this value for startup.
 #  (floating point value)
-#dynamic_memory_ratio = 1.0
+#dynamic_memory_ratio=1.0
 
 #
 # Enable instance metrics collection
@@ -5244,7 +5124,7 @@
 # metric APIs. Collected data can by retrieved by other apps and
 # services, e.g.: Ceilometer.
 #  (boolean value)
-#enable_instance_metrics_collection = false
+#enable_instance_metrics_collection=false
 
 #
 # Instances path share
@@ -5273,7 +5153,7 @@
 # different CPU features and checked during instance creation
 # in order to limit the CPU features used by the instance.
 #  (boolean value)
-#limit_cpu_features = false
+#limit_cpu_features=false
 
 #
 # Mounted disk query retry count
@@ -5293,7 +5173,7 @@
 #   "mounted_disk_query_retry_interval" option.
 #  (integer value)
 # Minimum value: 0
-#mounted_disk_query_retry_count = 10
+#mounted_disk_query_retry_count=10
 
 #
 # Mounted disk query retry interval
@@ -5312,7 +5192,7 @@
 #   mounted_disk_query_retry_interval configuration options.
 #  (integer value)
 # Minimum value: 0
-#mounted_disk_query_retry_interval = 5
+#mounted_disk_query_retry_interval=5
 
 #
 # Power state check timeframe
@@ -5326,7 +5206,7 @@
 # * Timeframe in seconds (Default: 60).
 #  (integer value)
 # Minimum value: 0
-#power_state_check_timeframe = 60
+#power_state_check_timeframe=60
 
 #
 # Power state event polling interval
@@ -5342,7 +5222,7 @@
 # * Time in seconds (Default: 2).
 #  (integer value)
 # Minimum value: 0
-#power_state_event_polling_interval = 2
+#power_state_event_polling_interval=2
 
 #
 # qemu-img command
@@ -5371,7 +5251,7 @@
 #   set the mkisofs_cmd value to the full path to an mkisofs.exe
 #   installation.
 #  (string value)
-#qemu_img_cmd = qemu-img.exe
+#qemu_img_cmd=qemu-img.exe
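# Illustrative example (the installation path below is hypothetical): if
# qemu-img.exe is not on the PATH of the compute service, set the full path:
#
#     qemu_img_cmd = C:\Program Files\qemu\qemu-img.exe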
 
 #
 # External virtual switch name
@@ -5391,7 +5271,7 @@
 #   is used. This list is queried using WQL.
 # * Virtual switch name.
 #  (string value)
-#vswitch_name = <None>
+#vswitch_name=<None>
 
 #
 # Wait soft reboot seconds
@@ -5405,7 +5285,7 @@
 # * Time in seconds (Default: 60).
 #  (integer value)
 # Minimum value: 0
-#wait_soft_reboot_seconds = 60
+#wait_soft_reboot_seconds=60
 
 #
 # Configuration drive cdrom
@@ -5434,7 +5314,8 @@
 # * You can configure the Compute service to always create a configuration
 #   drive by setting the force_config_drive option to 'True'.
 #  (boolean value)
-#config_drive_cdrom = false
+#config_drive_cdrom=false
+config_drive_cdrom=false
 
 #
 # Configuration drive inject password
@@ -5447,7 +5328,8 @@
 #   configuration drive usage with Hyper-V, such as force_config_drive.
 # * Currently, the only accepted config_drive_format is 'iso9660'.
 #  (boolean value)
-#config_drive_inject_password = false
+#config_drive_inject_password=false
+config_drive_inject_password=false
 
 #
 # Volume attach retry count
@@ -5465,7 +5347,7 @@
 #   volume_attach_retry_interval option.
 #  (integer value)
 # Minimum value: 0
-#volume_attach_retry_count = 10
+#volume_attach_retry_count=10
 
 #
 # Volume attach retry interval
@@ -5484,7 +5366,7 @@
 #   volume_attach_retry_interval configuration options.
 #  (integer value)
 # Minimum value: 0
-#volume_attach_retry_interval = 5
+#volume_attach_retry_interval=5
 
 #
 # Enable RemoteFX feature
@@ -5512,7 +5394,7 @@
 #
 #     64, 128, 256, 512, 1024
 #  (boolean value)
-#enable_remotefx = false
+#enable_remotefx=false
 
 #
 # Use multipath connections when attaching iSCSI or FC disks.
@@ -5520,7 +5402,7 @@
 # This requires the Multipath IO Windows feature to be enabled. MPIO must be
 # configured to claim such devices.
 #  (boolean value)
-#use_multipath_io = false
+#use_multipath_io=false
 
 #
 # List of iSCSI initiators that will be used for establishing iSCSI sessions.
@@ -5531,113 +5413,25 @@
 #iscsi_initiator_list =
 
 
-[ironic]
-#
-# Configuration options for Ironic driver (Bare Metal).
-# If using the Ironic driver following options must be set:
-# * auth_type
-# * auth_url
-# * project_name
-# * username
-# * password
-# * project_domain_id or project_domain_name
-# * user_domain_id or user_domain_name
+[image_file_url]
 
 #
 # From nova.conf
 #
 
-# URL override for the Ironic API endpoint. (uri value)
-#api_endpoint = http://ironic.example.org:6385/
-
-#
-# The number of times to retry when a request conflicts.
-# If set to 0, only try once, no retries.
-#
-# Related options:
-#
-# * api_retry_interval
-#  (integer value)
-# Minimum value: 0
-#api_max_retries = 60
-
-#
-# The number of seconds to wait before retrying the request.
-#
-# Related options:
-#
-# * api_max_retries
-#  (integer value)
-# Minimum value: 0
-#api_retry_interval = 2
-
-# Timeout (seconds) to wait for node serial console state changed. Set to 0 to
-# disable timeout. (integer value)
-# Minimum value: 0
-#serial_console_state_timeout = 10
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Authentication type to load (string value)
-# Deprecated group/name - [ironic]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
-
-# Authentication URL (string value)
-#auth_url = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-#project_id = <None>
-
-# Project name to scope to (string value)
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
-
-# User ID (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [ironic]/user_name
-#username = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User's password (string value)
-#password = <None>
+# DEPRECATED:
+# List of file systems that are configured in this file in the
+# image_file_url:<list entry name> sections
+#  (list value)
+# This option is deprecated for removal since 14.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The feature to download images from glance via filesystem is not used and will
+# be removed in the future.
+#filesystems =
+
+
+
 
 
 [key_manager]
@@ -5653,95 +5447,71 @@
 #
 # * Empty string or a key in hex value
 #  (string value)
-#fixed_key = <None>
+# Deprecated group/name - [keymgr]/fixed_key
+#fixed_key=<None>
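# Illustrative example (do not use this literal key): fixed_key takes a
# hex-encoded key, e.g. a 256-bit key written as 64 hex characters:
#
#     fixed_key = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef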
 
 # The full class name of the key manager API class (string value)
-#api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager
 
 # The type of authentication credential to create. Possible values are 'token',
 # 'password', 'keystone_token', and 'keystone_password'. Required if no context
 # is passed to the credential factory. (string value)
-#auth_type = <None>
+#auth_type=<None>
 
 # Token for authentication. Required for 'token' and 'keystone_token' auth_type
 # if no context is passed to the credential factory. (string value)
-#token = <None>
+#token=<None>
 
 # Username for authentication. Required for 'password' auth_type. Optional for
 # the 'keystone_password' auth_type. (string value)
-#username = <None>
+#username=<None>
 
 # Password for authentication. Required for 'password' and 'keystone_password'
 # auth_type. (string value)
-#password = <None>
+#password=<None>
 
 # User ID for authentication. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#user_id = <None>
+#user_id=<None>
 
 # User's domain ID for authentication. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#user_domain_id = <None>
+#user_domain_id=<None>
 
 # User's domain name for authentication. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#user_domain_name = <None>
+#user_domain_name=<None>
 
 # Trust ID for trust scoping. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#trust_id = <None>
+#trust_id=<None>
 
 # Domain ID for domain scoping. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#domain_id = <None>
+#domain_id=<None>
 
 # Domain name for domain scoping. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#domain_name = <None>
+#domain_name=<None>
 
 # Project ID for project scoping. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#project_id = <None>
+#project_id=<None>
 
 # Project name for project scoping. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#project_name = <None>
+#project_name=<None>
 
 # Project's domain ID for project. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#project_domain_id = <None>
+#project_domain_id=<None>
 
 # Project's domain name for project. Optional for 'keystone_token' and
 # 'keystone_password' auth_type. (string value)
-#project_domain_name = <None>
+#project_domain_name=<None>
 
 # Allow fetching a new token if the current one is going to expire. Optional for
 # 'keystone_token' and 'keystone_password' auth_type. (boolean value)
-#reauthenticate = true
-
-
-[keystone]
-# Configuration options for the identity service
-
-#
-# From nova.conf
-#
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
+#reauthenticate=true
 
 
 [keystone_authtoken]
@@ -5749,52 +5519,62 @@
 #
 # From keystonemiddleware.auth_token
 #
-
+signing_dirname=/tmp/keystone-signing-nova
+revocation_cache_time = 10
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = nova
+password = opnfv_secret
+auth_uri=http://10.167.4.35:5000
+auth_url=http://10.167.4.35:35357
+memcached_servers=10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
 # Complete "public" Identity API endpoint. This endpoint should not be an
 # "admin" endpoint, as it should be accessible by all end users. Unauthenticated
 # clients are redirected to this endpoint to authenticate. Although this
-# endpoint should ideally be unversioned, client support in the wild varies. If
-# you're using a versioned v2 endpoint here, then this should *not* be the same
-# endpoint the service user utilizes for validating tokens, because normal end
-# users may not be able to reach that endpoint. (string value)
-#auth_uri = <None>
+# endpoint should ideally be unversioned, client support in the wild varies.
+# If you're using a versioned v2 endpoint here, then this should *not* be the
+# same endpoint the service user utilizes for validating tokens, because normal
+# end users may not be able to reach that endpoint. (string value)
+#auth_uri=<None>
 
 # API version of the admin Identity API endpoint. (string value)
-#auth_version = <None>
+#auth_version=<None>
 
 # Do not handle authorization requests within the middleware, but delegate the
 # authorization decision to downstream WSGI components. (boolean value)
-#delay_auth_decision = false
+#delay_auth_decision=false
 
 # Request timeout value for communicating with Identity API server. (integer
 # value)
-#http_connect_timeout = <None>
+#http_connect_timeout=<None>
 
 # How many times are we trying to reconnect when communicating with Identity API
 # Server. (integer value)
-#http_request_max_retries = 3
+#http_request_max_retries=3
 
 # Request environment key where the Swift cache object is stored. When
 # auth_token middleware is deployed with a Swift cache, use this option to have
 # the middleware share a caching backend with swift. Otherwise, use the
 # ``memcached_servers`` option instead. (string value)
-#cache = <None>
+#cache=<None>
 
 # Required if identity server requires client certificate (string value)
-#certfile = <None>
+#certfile=<None>
 
 # Required if identity server requires client certificate (string value)
-#keyfile = <None>
+#keyfile=<None>
 
 # A PEM encoded Certificate Authority to use when verifying HTTPs connections.
 # Defaults to system CAs. (string value)
-#cafile = <None>
+#cafile=<None>
 
 # Verify HTTPS connections. (boolean value)
-#insecure = false
+#insecure=false
 
 # The region in which the identity server can be found. (string value)
-#region_name = <None>
+#region_name=<None>
 
 # DEPRECATED: Directory used to cache files related to PKI tokens. This option
 # has been deprecated in the Ocata release and will be removed in the P release.
@@ -5802,17 +5582,17 @@
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
-#signing_dir = <None>
+#signing_dir=<None>
 
 # Optionally specify a list of memcached server(s) to use for caching. If left
 # undefined, tokens will instead be cached in-process. (list value)
 # Deprecated group/name - [keystone_authtoken]/memcache_servers
-#memcached_servers = <None>
+#memcached_servers=<None>
 
 # In order to prevent excessive effort spent validating tokens, the middleware
 # caches previously-seen tokens for a configurable duration (in seconds). Set to
 # -1 to disable caching completely. (integer value)
-#token_cache_time = 300
+#token_cache_time=300
 
 # DEPRECATED: Determines the frequency at which the list of revoked tokens is
 # retrieved from the Identity service (in seconds). A high number of revocation
@@ -5822,7 +5602,7 @@
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
-#revocation_cache_time = 10
+#revocation_cache_time=10
 
 # (Optional) If defined, indicate whether token data should be authenticated or
 # authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
@@ -5830,40 +5610,40 @@
 # cache. If the value is not one of these options or empty, auth_token will
 # raise an exception on initialization. (string value)
 # Allowed values: None, MAC, ENCRYPT
-#memcache_security_strategy = None
+#memcache_security_strategy=None
 
 # (Optional, mandatory if memcache_security_strategy is defined) This string is
 # used for key derivation. (string value)
-#memcache_secret_key = <None>
+#memcache_secret_key=<None>
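# Illustrative example (the secret below is a placeholder): to authenticate
# and encrypt token data in the memcached cache, set both options together:
#
#     memcache_security_strategy = ENCRYPT
#     memcache_secret_key = replace-with-a-long-random-secret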
 
 # (Optional) Number of seconds memcached server is considered dead before it is
 # tried again. (integer value)
-#memcache_pool_dead_retry = 300
+#memcache_pool_dead_retry=300
 
 # (Optional) Maximum total number of open connections to every memcached server.
 # (integer value)
-#memcache_pool_maxsize = 10
+#memcache_pool_maxsize=10
 
 # (Optional) Socket timeout in seconds for communicating with a memcached
 # server. (integer value)
-#memcache_pool_socket_timeout = 3
+#memcache_pool_socket_timeout=3
 
 # (Optional) Number of seconds a connection to memcached is held unused in the
 # pool before it is closed. (integer value)
-#memcache_pool_unused_timeout = 60
+#memcache_pool_unused_timeout=60
 
 # (Optional) Number of seconds that an operation will wait to get a memcached
 # client connection from the pool. (integer value)
-#memcache_pool_conn_get_timeout = 10
+#memcache_pool_conn_get_timeout=10
 
 # (Optional) Use the advanced (eventlet safe) memcached client pool. The
 # advanced pool will only work under python 2.x. (boolean value)
-#memcache_use_advanced_pool = false
+#memcache_use_advanced_pool=false
 
 # (Optional) Indicate whether to set the X-Service-Catalog header. If False,
 # middleware will not ask for service catalog on token validation and will not
 # set the X-Service-Catalog header. (boolean value)
-#include_service_catalog = true
+#include_service_catalog=true
 
 # Used to control the use and type of token binding. Can be set to: "disabled"
 # to not check token binding. "permissive" (default) to validate binding
@@ -5872,7 +5652,7 @@
 # be rejected. "required" any form of token binding is needed to be allowed.
 # Finally the name of a binding method that must be present in tokens. (string
 # value)
-#enforce_token_bind = permissive
+#enforce_token_bind=permissive
 
 # DEPRECATED: If true, the revocation list will be checked for cached tokens.
 # This requires that PKI tokens are configured on the identity server. (boolean
@@ -5880,7 +5660,7 @@
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
-#check_revocations_for_cached = false
+#check_revocations_for_cached=false
 
 # DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
 # single algorithm or multiple. The algorithms are those supported by Python
@@ -5893,7 +5673,7 @@
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
-#hash_algorithms = md5
+#hash_algorithms=md5
 
 # A choice of roles that must be present in a service token. Service tokens are
 # allowed to request that an expired token can be used and so this check should
@@ -5901,13 +5681,13 @@
 # here are applied as an ANY check so any role in this list must be present. For
 # backwards compatibility reasons this currently only affects the allow_expired
 # check. (list value)
-#service_token_roles = service
+#service_token_roles=service
 
 # For backwards compatibility reasons we must let valid service tokens pass that
 # don't pass the service_token_roles check as valid. Setting this true will
 # become the default in a future release and should be enabled if possible.
 # (boolean value)
-#service_token_roles_required = false
+#service_token_roles_required=false
 
 # Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
 # (string value)
@@ -5915,43 +5695,43 @@
 
 # Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
 # (string value)
-#auth_host = 127.0.0.1
+#auth_host=127.0.0.1
 
 # Port of the admin Identity API endpoint. Deprecated, use identity_uri.
 # (integer value)
-#auth_port = 35357
+#auth_port=35357
 
 # Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
 # (string value)
 # Allowed values: http, https
-#auth_protocol = https
+#auth_protocol=https
 
 # Complete admin Identity API endpoint. This should specify the unversioned root
 # endpoint e.g. https://localhost:35357/ (string value)
-#identity_uri = <None>
+#identity_uri=<None>
 
 # This option is deprecated and may be removed in a future release. Single
 # shared secret with the Keystone configuration used for bootstrapping a
 # Keystone installation, or otherwise bypassing the normal authentication
 # process. This option should not be used, use `admin_user` and `admin_password`
 # instead. (string value)
-#admin_token = <None>
+#admin_token=<None>
 
 # Service username. (string value)
-#admin_user = <None>
+#admin_user=<None>
 
 # Service user password. (string value)
-#admin_password = <None>
+#admin_password=<None>
 
 # Service tenant name. (string value)
-#admin_tenant_name = admin
+#admin_tenant_name=admin
 
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
-#auth_type = <None>
+#auth_type=<None>
 
 # Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#auth_section=<None>
 
 
 [libvirt]
@@ -5966,7 +5746,15 @@
 #
 # From nova.conf
 #
-
+cpu_mode = host-passthrough
+virt_type = kvm
+inject_partition=-2
+inject_password=True
+disk_cachemodes="file=directsync,block=none"
+block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC
+live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
+inject_key=True
+vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
 #
 # The ID of the image to boot from to rescue data from a corrupted instance.
 #
@@ -5992,7 +5780,7 @@
 #   specified. This is the case when *Amazon*'s AMI/AKI/ARI image
 #   format is used for the rescue image.
 #  (string value)
-#rescue_image_id = <None>
+#rescue_image_id=<None>
 
 #
 # The ID of the kernel (AKI) image to use with the rescue image.
@@ -6011,7 +5799,7 @@
 # * ``rescue_image_id``: If that option points to an image in *Amazon*'s
 #   AMI/AKI/ARI image format, it's useful to use ``rescue_kernel_id`` too.
 #  (string value)
-#rescue_kernel_id = <None>
+#rescue_kernel_id=<None>
 
 #
 # The ID of the RAM disk (ARI) image to use with the rescue image.
@@ -6030,7 +5818,7 @@
 # * ``rescue_image_id``: If that option points to an image in *Amazon*'s
 #   AMI/AKI/ARI image format, it's useful to use ``rescue_ramdisk_id`` too.
 #  (string value)
-#rescue_ramdisk_id = <None>
+#rescue_ramdisk_id=<None>
 
 #
 # Describes the virtualization type (or so called domain type) libvirt should
@@ -6051,7 +5839,7 @@
 # * ``cpu_model``: depends on this
 #  (string value)
 # Allowed values: kvm, lxc, qemu, uml, xen, parallels
-#virt_type = kvm
+#virt_type=kvm
 
 #
 # Overrides the default libvirt URI of the chosen virtualization type.
@@ -6093,7 +5881,7 @@
 # * ``inject_partition``: That option will decide about the discovery and usage
 #   of the file system. It also can disable the injection at all.
 #  (boolean value)
-#inject_password = false
+#inject_password=false
 
 #
 # Allow the injection of an SSH key at boot time.
@@ -6115,7 +5903,7 @@
 # * ``inject_partition``: That option will decide about the discovery and usage
 #   of the file system. It also can disable the injection at all.
 #  (boolean value)
-#inject_key = false
+#inject_key=false
 
 #
 # Determines the way how the file system is chosen to inject data into it.
@@ -6146,7 +5934,7 @@
 #   single partition image
 #  (integer value)
 # Minimum value: -2
-#inject_partition = -2
+#inject_partition=-2
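# Illustrative example (not this deployment's settings): file injection is
# controlled by inject_partition together with inject_password and inject_key;
# to let libvirt inspect the disk for a usable partition and inject both the
# admin password and an SSH key:
#
#     inject_partition = -1
#     inject_password = true
#     inject_key = true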
 
 # DEPRECATED:
 # Enable a mouse cursor within a graphical VNC or SPICE sessions.
@@ -6164,7 +5952,7 @@
 # This option is deprecated for removal since 14.0.0.
 # Its value may be silently ignored in the future.
 # Reason: This option is being replaced by the 'pointer_model' option.
-#use_usb_tablet = true
+#use_usb_tablet=true
 
 #
 # The IP address or hostname to be used as the target for live migration
@@ -6181,13 +5969,8 @@
 # Possible values:
 #
 # * A valid IP address or hostname, else None.
-#
-# Related options:
-#
-# * ``live_migration_tunnelled``: The live_migration_inbound_addr value is
-#   ignored if tunneling is enabled.
-#  (string value)
-#live_migration_inbound_addr = <None>
+#  (string value)
+#live_migration_inbound_addr=<None>
 
 # DEPRECATED:
 # Live migration target URI to use.
@@ -6197,20 +5980,16 @@
 # hostname.
 #
 # If this option is set to None (which is the default), Nova will automatically
-# generate the `live_migration_uri` value based on only 4 supported `virt_type`
+# generate the `live_migration_uri` value based on only 3 supported `virt_type`
 # in following list:
-#
 # * 'kvm': 'qemu+tcp://%s/system'
 # * 'qemu': 'qemu+tcp://%s/system'
 # * 'xen': 'xenmigr://%s/system'
-# * 'parallels': 'parallels+tcp://%s/system'
-#
-# Related options:
-#
+#
+# Related options:
 # * ``live_migration_inbound_addr``: If ``live_migration_inbound_addr`` value
-#   is not None and ``live_migration_tunnelled`` is False, the ip/hostname
-#   address of target compute node is used instead of ``live_migration_uri`` as
-#   the uri for live migration.
+#   is not None, the ip/hostname address of target compute node is used instead
+#   of ``live_migration_uri`` as the uri for live migration.
 # * ``live_migration_scheme``: If ``live_migration_uri`` is not set, the scheme
 #   used for live migration is taken from ``live_migration_scheme`` instead.
 #  (string value)
@@ -6222,25 +6001,24 @@
 # allow to change live migration scheme and target URI:
 # ``live_migration_scheme``
 # and ``live_migration_inbound_addr`` respectively.
-#live_migration_uri = <None>
-
-#
-# URI scheme used for live migration.
-#
-# Override the default libvirt live migration scheme (which is dependent on
+#live_migration_uri=<None>
+
+#
+# Scheme used for live migration.
+#
+# Override the default libvirt live migration scheme (which is dependent on
 # virt_type). If this option is set to None, nova will automatically choose a
 # sensible default based on the hypervisor. It is not recommended that you
 # change this unless you are very sure that the hypervisor supports a
 # particular scheme.
 #
 # Related options:
-#
 # * ``virt_type``: This option is meaningful only when ``virt_type`` is set to
 #   `kvm` or `qemu`.
 # * ``live_migration_uri``: If ``live_migration_uri`` value is not None, the
 #   scheme used for live migration is taken from ``live_migration_uri`` instead.
 #  (string value)
-#live_migration_scheme = <None>
+#live_migration_scheme=<None>
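# Illustrative example (assumes TLS is already configured for the libvirt
# daemons on both hosts): with
#
#     live_migration_scheme = tls
#
# nova builds a migration URI of the form "qemu+tls://<host>/system" instead
# of the default "qemu+tcp://<host>/system".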
 
 #
 # Enable tunnelled migration.
@@ -6251,17 +6029,17 @@
 # the network to allow direct hypervisor to hypervisor communication.
 # If False, use the native transport. If not set, Nova will choose a
 # sensible default based on, for example the availability of native
-# encryption support in the hypervisor. Enabling this option will definitely
+# encryption support in the hypervisor. Enabling this option will definitely
 # impact performance massively.
 #
 # Note that this option is NOT compatible with use of block migration.
 #
-# Related options:
-#
-# * ``live_migration_inbound_addr``: The live_migration_inbound_addr value is
-#   ignored if tunneling is enabled.
+# Note: this option supersedes and (if set) overrides the deprecated
+# 'live_migration_flag' and 'block_migration_flag' options to enable
+# tunnelled migration.
 #  (boolean value)
-#live_migration_tunnelled = false
+#live_migration_tunnelled=false
 
 #
# Maximum bandwidth (in MiB/s) to be used during migration.
@@ -6270,7 +6048,7 @@
 # do not support this feature and will return an error if bandwidth is not 0.
 # Please refer to the libvirt documentation for further details.
 #  (integer value)
-#live_migration_bandwidth = 0
+#live_migration_bandwidth=0
 
 #
 # Maximum permitted downtime, in milliseconds, for live migration
@@ -6285,26 +6063,23 @@
 #
 # * live_migration_completion_timeout
 #  (integer value)
-# Minimum value: 100
-#live_migration_downtime = 500
+#live_migration_downtime=500
 
 #
 # Number of incremental steps to reach max downtime value.
 #
 # Will be rounded up to a minimum of 3 steps.
 #  (integer value)
-# Minimum value: 3
-#live_migration_downtime_steps = 10
+#live_migration_downtime_steps=10
 
 #
 # Time to wait, in seconds, between each step increase of the migration
 # downtime.
 #
-# Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be
+# Minimum delay is 10 seconds. Value is per GiB of guest RAM + disk to be
 # transferred, with lower bound of a minimum of 2 GiB per device.
 #  (integer value)
-# Minimum value: 3
-#live_migration_downtime_delay = 75
+#live_migration_downtime_delay=75
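# Worked example (defaults above; the 8 GiB guest size is hypothetical): with
# live_migration_downtime_delay=75 and 8 GiB of RAM + disk to transfer, the
# wait between each downtime step increase is 75 * 8 = 600 seconds.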
 
 #
 # Time to wait, in seconds, for migration to successfully complete transferring
@@ -6321,7 +6096,7 @@
 # * live_migration_downtime_delay
 #  (integer value)
 # Note: This option can be changed without restarting.
-#live_migration_completion_timeout = 800
+#live_migration_completion_timeout=800
 
 # DEPRECATED:
 # Time to wait, in seconds, for migration to make forward progress in
@@ -6337,7 +6112,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Serious bugs found in this feature.
-#live_migration_progress_timeout = 0
+#live_migration_progress_timeout=0
 
 #
 # This option allows nova to switch an on-going live migration to post-copy
@@ -6361,7 +6136,7 @@
 #
 #     * live_migration_permit_auto_converge
 #  (boolean value)
-#live_migration_permit_post_copy = false
+#live_migration_permit_post_copy=false
 
 #
 # This option allows nova to start live migration with auto converge on.
@@ -6369,13 +6144,14 @@
# Auto converge throttles down CPU if progress of the on-going live migration
 # is slow. Auto converge will only be used if this flag is set to True and
 # post copy is not permitted or post copy is unavailable due to the version
-# of libvirt and QEMU in use.
+# of libvirt and QEMU in use. Auto converge requires libvirt>=1.2.3 and
+# QEMU>=1.6.0.
 #
 # Related options:
 #
 #     * live_migration_permit_post_copy
 #  (boolean value)
-#live_migration_permit_auto_converge = false
+#live_migration_permit_auto_converge=false
 
 #
 # Determine the snapshot image format when sending to the image service.
@@ -6393,7 +6169,7 @@
 # * If not set, defaults to same type as source image.
 #  (string value)
 # Allowed values: raw, qcow2, vmdk, vdi
-#snapshot_image_format = <None>
+#snapshot_image_format=<None>
 
 #
 # Override the default disk prefix for the devices attached to an instance.
@@ -6412,12 +6188,12 @@
 # * ``virt_type``: Influences which device type is used, which determines
 #   the default disk prefix.
 #  (string value)
-#disk_prefix = <None>
+#disk_prefix=<None>
 
 # Number of seconds to wait for instance to shut down after soft reboot request
 # is made. We fall back to hard reboot if instance does not shutdown within this
 # window. (integer value)
-#wait_soft_reboot_seconds = 120
+#wait_soft_reboot_seconds=120
 
 #
 # Is used to set the CPU mode an instance should have.
@@ -6439,7 +6215,7 @@
 #   be launched.
 #  (string value)
 # Allowed values: host-model, host-passthrough, custom, none
-#cpu_mode = <None>
+#cpu_mode=<None>
 
 #
 # Set the name of the libvirt CPU model the instance should use.
@@ -6454,85 +6230,38 @@
 #   This would result in an error and the instance won't be launched.
 # * ``virt_type``: Only the virtualization types ``kvm`` and ``qemu`` use this.
 #  (string value)
-#cpu_model = <None>
+#cpu_model=<None>
 
 # Location where libvirt driver will store snapshots before uploading them to
 # image service (string value)
-#snapshots_directory = $instances_path/snapshots
+#snapshots_directory=$instances_path/snapshots
 
 # Location where the Xen hvmloader is kept (string value)
-#xen_hvmloader_path = /usr/lib/xen/boot/hvmloader
-
-#
-# Specific cache modes to use for different disk types.
-#
-# For example: file=directsync,block=none,network=writeback
-#
-# For local or direct-attached storage, it is recommended that you use
-# writethrough (default) mode, as it ensures data integrity and has acceptable
-# I/O performance for applications running in the guest, especially for read
-# operations. However, caching mode none is recommended for remote NFS storage,
-# because direct I/O operations (O_DIRECT) perform better than synchronous I/O
-# operations (with O_SYNC). Caching mode none effectively turns all guest I/O
-# operations into direct I/O operations on the host, which is the NFS client in
-# this environment.
-#
-# Possible cache modes:
-#
-# * default: Same as writethrough.
-# * none: With caching mode set to none, the host page cache is disabled, but
-#   the disk write cache is enabled for the guest. In this mode, the write
-#   performance in the guest is optimal because write operations bypass the host
-#   page cache and go directly to the disk write cache. If the disk write cache
-#   is battery-backed, or if the applications or storage stack in the guest
-#   transfer data properly (either through fsync operations or file system
-#   barriers), then data integrity can be ensured. However, because the host
-#   page cache is disabled, the read performance in the guest would not be as
-#   good as in the modes where the host page cache is enabled, such as
-#   writethrough mode.
-# * writethrough: writethrough mode is the default caching mode. With
-#   caching set to writethrough mode, the host page cache is enabled, but the
-#   disk write cache is disabled for the guest. Consequently, this caching mode
-#   ensures data integrity even if the applications and storage stack in the
-#   guest do not transfer data to permanent storage properly (either through
-#   fsync operations or file system barriers). Because the host page cache is
-#   enabled in this mode, the read performance for applications running in the
-#   guest is generally better. However, the write performance might be reduced
-#   because the disk write cache is disabled.
-# * writeback: With caching set to writeback mode, both the host page cache
-#   and the disk write cache are enabled for the guest. Because of this, the
-#   I/O performance for applications running in the guest is good, but the data
-#   is not protected in a power failure. As a result, this caching mode is
-#   recommended only for temporary data where potential data loss is not a
-#   concern.
-# * directsync: Like "writethrough", but it bypasses the host page cache.
-# * unsafe: Caching mode of unsafe ignores cache transfer operations
-#   completely. As its name implies, this caching mode should be used only for
-#   temporary data where data loss is not a concern. This mode can be useful for
-#   speeding up guest installations, but you should switch to another caching
-#   mode in production environments.
-#  (list value)
+#xen_hvmloader_path=/usr/lib/xen/boot/hvmloader
+
+# Specific cache modes to use for different disk types, e.g.:
+# file=directsync,block=none (list value)
 #disk_cachemodes =
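# Example (values taken from the sample above): direct synchronous I/O for
# file-backed disks, no host caching for block devices:
#disk_cachemodes=file=directsync,block=none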
 
# A path to a device that will be used as a source of entropy on the host.
# Permitted options are: /dev/random or /dev/hwrng (string value)
-#rng_dev_path = <None>
+#rng_dev_path=<None>
 
 # For qemu or KVM guests, set this option to specify a default machine type per
 # host architecture. You can find a list of supported machine types in your
# environment by checking the output of the "virsh capabilities" command. The
 # format of the value for this config option is host-arch=machine-type. For
 # example: x86_64=machinetype1,armv7l=machinetype2 (list value)
-#hw_machine_type = <None>
+#hw_machine_type=<None>
 
# The data source used to populate the host "serial" UUID exposed to the guest
# in the virtual BIOS. (string value)
 # Allowed values: none, os, hardware, auto
-#sysinfo_serial = auto
+#sysinfo_serial=auto
 
# Number of seconds in the memory usage statistics period. A zero or negative
# value disables memory usage statistics. (integer value)
-#mem_stats_period_seconds = 10
+#mem_stats_period_seconds=10
 
# List of uid targets and ranges. Syntax is guest-uid:host-uid:count. Maximum
# of 5 allowed. (list value)
@@ -6544,7 +6273,7 @@
 
# In a realtime host context, vCPUs for the guest will run at this scheduling
# priority. Priority depends on the host kernel (usually 1-99) (integer value)
-#realtime_scheduler_priority = 1
+#realtime_scheduler_priority=1
 
 #
# This is a performance event list which could be used as a monitor. These events
@@ -6573,7 +6302,7 @@
 # * images_volume_group
 #  (string value)
 # Allowed values: raw, flat, qcow2, lvm, rbd, ploop, default
-#images_type = default
+#images_type=default
 
 #
 # LVM Volume Group that is used for VM images, when you specify images_type=lvm
@@ -6582,15 +6311,15 @@
 #
 # * images_type
 #  (string value)
-#images_volume_group = <None>
+#images_volume_group=<None>
 
 #
 # Create sparse logical volumes (with virtualsize) if this flag is set to True.
 #  (boolean value)
-#sparse_logical_volumes = false
+#sparse_logical_volumes=false
 
 # The RADOS pool in which rbd volumes are stored (string value)
-#images_rbd_pool = rbd
+#images_rbd_pool=rbd
 
 # Path to the ceph configuration file to use (string value)
 #images_rbd_ceph_conf =
@@ -6605,32 +6334,32 @@
 # * Qemu >= 1.6 (qcow2 format)
 #  (string value)
 # Allowed values: ignore, unmap
-#hw_disk_discard = <None>
+#hw_disk_discard=<None>
 
 # DEPRECATED: Allows image information files to be stored in non-standard
 # locations (string value)
 # This option is deprecated for removal since 14.0.0.
 # Its value may be silently ignored in the future.
 # Reason: Image info files are no longer used by the image cache
-#image_info_filename_pattern = $instances_path/$image_cache_subdirectory_name/%(image)s.info
+#image_info_filename_pattern=$instances_path/$image_cache_subdirectory_name/%(image)s.info
 
 # Unused resized base images younger than this will not be removed (integer
 # value)
-#remove_unused_resized_minimum_age_seconds = 3600
+#remove_unused_resized_minimum_age_seconds=3600
 
 # DEPRECATED: Write a checksum for files in _base to disk (boolean value)
 # This option is deprecated for removal since 14.0.0.
 # Its value may be silently ignored in the future.
 # Reason: The image cache no longer periodically calculates checksums of stored
 # images. Data integrity can be checked at the block or filesystem level.
-#checksum_base_images = false
+#checksum_base_images=false
 
 # DEPRECATED: How frequently to checksum base images (integer value)
 # This option is deprecated for removal since 14.0.0.
 # Its value may be silently ignored in the future.
 # Reason: The image cache no longer periodically calculates checksums of stored
 # images. Data integrity can be checked at the block or filesystem level.
-#checksum_interval_seconds = 3600
+#checksum_interval_seconds=3600
 
 #
 # Method used to wipe ephemeral disks when they are deleted. Only takes effect
@@ -6648,7 +6377,7 @@
 # * volume_clear_size
 #  (string value)
 # Allowed values: none, zero, shred
-#volume_clear = zero
+#volume_clear=zero
 
 #
 # Size of area in MiB, counting from the beginning of the allocated volume,
@@ -6666,7 +6395,7 @@
 #   for this option to have any impact
 #  (integer value)
 # Minimum value: 0
-#volume_clear_size = 0
+#volume_clear_size=0
 
 #
 # Enable snapshot compression for ``qcow2`` images.
@@ -6679,10 +6408,24 @@
 #
 # * snapshot_image_format
 #  (boolean value)
-#snapshot_compression = false
+#snapshot_compression=false
 
 # Use virtio for bridge interfaces with KVM/QEMU (boolean value)
-#use_virtio_for_bridges = true
+#use_virtio_for_bridges=true
+
+#
+# Protocols listed here will be accessed directly from QEMU.
+#
+# If gluster is present in qemu_allowed_storage_drivers, the glusterfs backend
+# will pass a disk configuration to QEMU. This allows QEMU to access the
+# volume using libgfapi rather than mounting GlusterFS via FUSE.
+#
+# Possible values:
+#
+# * [gluster]
+#  (list value)
+#qemu_allowed_storage_drivers =
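+# Example (the value documented above): allow QEMU to open Gluster volumes
+# directly via libgfapi instead of a FUSE mount:
+#qemu_allowed_storage_drivers=gluster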
 
 #
 # Use multipath connection of the iSCSI or FC volume
@@ -6691,13 +6434,7 @@
 # provide high availability and fault tolerance.
 #  (boolean value)
 # Deprecated group/name - [libvirt]/iscsi_use_multipath
-#volume_use_multipath = false
-
-#
-# Number of times to scan given storage protocol to find volume.
-#  (integer value)
-# Deprecated group/name - [libvirt]/num_iscsi_scan_tries
-#num_volume_scan_tries = 5
+#volume_use_multipath=false
 
 #
 # Number of times to rediscover AoE target to find volume.
@@ -6706,7 +6443,18 @@
 # Ethernet). This option allows the user to specify the maximum number of retry
 # attempts that can be made to discover the AoE device.
 #  (integer value)
-#num_aoe_discover_tries = 3
+#num_aoe_discover_tries=3
+
+#
+# Absolute path to the directory where the glusterfs volume is mounted on the
+# compute node.
+#  (string value)
+#glusterfs_mount_point_base=$state_path/mnt
+
+#
+# Number of times to scan iSCSI target to find volume.
+#  (integer value)
+#num_iscsi_scan_tries=5
 
 #
 # The iSCSI transport iface to use to connect to target in case offload support
@@ -6719,7 +6467,7 @@
 # provided here with the actual transport name.
 #  (string value)
 # Deprecated group/name - [libvirt]/iscsi_transport
-#iscsi_iface = <None>
+#iscsi_iface=<None>
 
 #
 # Number of times to scan iSER target to find volume.
@@ -6729,7 +6477,7 @@
 # maximum
 # number of scan attempts that can be made to find iSER volume.
 #  (integer value)
-#num_iser_scan_tries = 5
+#num_iser_scan_tries=5
 
 #
 # Use multipath connection of the iSER volume.
@@ -6737,7 +6485,7 @@
 # iSER volumes can be connected as multipath devices. This will provide high
 # availability and fault tolerance.
 #  (boolean value)
-#iser_use_multipath = false
+#iser_use_multipath=false
 
 #
# The RADOS client name for accessing rbd (RADOS Block Devices) volumes.
@@ -6745,12 +6493,12 @@
 # Libvirt will refer to this user when connecting and authenticating with
 # the Ceph RBD server.
 #  (string value)
-#rbd_user = <None>
+#rbd_user=<None>
 
 #
 # The libvirt UUID of the secret for the rbd_user volumes.
 #  (string value)
-#rbd_secret_uuid = <None>
+#rbd_secret_uuid=<None>
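# Example (hypothetical values): pair the Ceph client user with the libvirt
# secret holding its key:
#rbd_user=cinder
#rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337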
 
 #
 # Directory where the NFS volume is mounted on the compute node.
@@ -6763,7 +6511,7 @@
 #
 # * A string representing absolute path of mount point.
 #  (string value)
-#nfs_mount_point_base = $state_path/mnt
+#nfs_mount_point_base=$state_path/mnt
 
 #
 # Mount options passed to the NFS client. See section of the nfs man page
@@ -6777,7 +6525,7 @@
 # * Any string representing mount options separated by commas.
 # * Example string: vers=3,lookupcache=pos
 #  (string value)
-#nfs_mount_options = <None>
+#nfs_mount_options=<None>
 
 #
 # Directory where the Quobyte volume is mounted on the compute node.
@@ -6790,15 +6538,38 @@
 #
 # * A string representing absolute path of mount point.
 #  (string value)
-#quobyte_mount_point_base = $state_path/mnt
+#quobyte_mount_point_base=$state_path/mnt
 
 # Path to a Quobyte Client configuration file. (string value)
-#quobyte_client_cfg = <None>
+#quobyte_client_cfg=<None>
+
+#
+# Path or URL to Scality SOFS (Scale-Out File Server) configuration file.
+#
+# The Scality SOFS provides OpenStack users the option of storing their
+# data on a high capacity, replicated, highly available Scality Ring object
+# storage cluster.
+#  (string value)
+#scality_sofs_config=<None>
+
+#
+# Base dir where Scality SOFS shall be mounted.
+#
+# The Scality volume driver in Nova mounts SOFS and lets the hypervisor access
+# the volumes.
+#
+# Possible values:
+#
+# * $state_path/scality, where state_path is a config option that specifies
+#   the top-level directory for maintaining nova's state, or any string
+#   containing the full directory path.
+#  (string value)
+#scality_sofs_mount_point=$state_path/scality
 
 #
 # Directory where the SMBFS shares are mounted on the compute node.
 #  (string value)
-#smbfs_mount_point_base = $state_path/mnt
+#smbfs_mount_point_base=$state_path/mnt
 
 #
 # Mount options passed to the SMBFS client.
@@ -6821,7 +6592,7 @@
 # * copying file to remote host
 #  (string value)
 # Allowed values: ssh, rsync
-#remote_filesystem_transport = ssh
+#remote_filesystem_transport=ssh
 
 #
 # Directory where the Virtuozzo Storage clusters are mounted on the compute
@@ -6833,7 +6604,7 @@
 #
 # * vzstorage_mount_* group of parameters
 #  (string value)
-#vzstorage_mount_point_base = $state_path/mnt
+#vzstorage_mount_point_base=$state_path/mnt
 
 #
 # Mount owner user name.
@@ -6844,7 +6615,7 @@
 #
 # * vzstorage_mount_* group of parameters
 #  (string value)
-#vzstorage_mount_user = stack
+#vzstorage_mount_user=stack
 
 #
 # Mount owner group name.
@@ -6855,7 +6626,7 @@
 #
 # * vzstorage_mount_* group of parameters
 #  (string value)
-#vzstorage_mount_group = qemu
+#vzstorage_mount_group=qemu
 
 #
 # Mount access mode.
@@ -6869,7 +6640,7 @@
 #
 # * vzstorage_mount_* group of parameters
 #  (string value)
-#vzstorage_mount_perms = 0770
+#vzstorage_mount_perms=0770
 
 #
 # Path to vzstorage client log.
@@ -6882,7 +6653,7 @@
 #
 # * vzstorage_mount_opts may include more detailed logging options.
 #  (string value)
-#vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz
+#vzstorage_log_path=/var/log/pstorage/%(cluster_name)s/nova.log.gz
 
 #
 # Path to the SSD cache file.
@@ -6905,7 +6676,7 @@
 #
 # * vzstorage_mount_opts may include more detailed cache options.
 #  (string value)
-#vzstorage_cache_path = <None>
+#vzstorage_cache_path=<None>
 
 #
 # Extra mount options for pstorage-mount
@@ -6934,7 +6705,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
+#host=127.0.0.1
 
 # DEPRECATED: Use this port to connect to redis host. (port value)
 # Minimum value: 0
@@ -6942,7 +6713,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
+#port=6379
 
 # DEPRECATED: Password for Redis server (optional). (string value)
 # This option is deprecated for removal.
@@ -6958,16 +6729,16 @@
 #sentinel_hosts =
 
 # Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
+#sentinel_group_name=oslo-messaging-zeromq
 
 # Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
+#wait_timeout=2000
 
 # Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
+#check_timeout=20000
 
 # Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
+#socket_timeout=10000
 
 
 [metrics]
@@ -7009,7 +6780,7 @@
 #
 # * weight_of_unavailable
 #  (floating point value)
-#weight_multiplier = 1.0
+#weight_multiplier=1.0
 
 #
 # This setting specifies the metrics to be weighed and the relative ratios for
@@ -7064,7 +6835,7 @@
 #
 # * weight_of_unavailable
 #  (boolean value)
-#required = true
+#required=true
 
 #
 # When any of the following conditions are met, this value will be used in place
@@ -7089,7 +6860,7 @@
 # * required
 # * weight_multiplier
 #  (floating point value)
-#weight_of_unavailable = -10000.0
+#weight_of_unavailable=-10000.0
 
 
 [mks]
@@ -7116,15 +6887,14 @@
 #
 # Possible values:
 #
-# * Must be a valid URL of the form:``http://host:port/`` or
-#   ``https://host:port/``
-#  (uri value)
-#mksproxy_base_url = http://127.0.0.1:6090/
+# * Must be a valid URL of the form:``http://host:port/``
+#  (string value)
+#mksproxy_base_url=http://127.0.0.1:6090/
 
 #
 # Enables graphical console access for virtual machines.
 #  (boolean value)
-#enabled = false
+#enabled=false
 
 
 [neutron]
@@ -7134,7 +6904,17 @@
 #
 # From nova.conf
 #
-
username=neutron
password=opnfv_secret
project_name=service
auth_url=http://10.167.4.35:35357/v3
url=http://10.167.4.35:9696
region_name=RegionOne
extension_sync_interval=600
auth_type=v3password
project_domain_name=Default
user_domain_name=Default
timeout=30
 #
 # This option specifies the URL for connecting to Neutron.
 #
@@ -7144,7 +6924,7 @@
 #   This typically matches the URL returned for the 'network' service type
 #   from the Keystone service catalog.
 #  (uri value)
-#url = http://127.0.0.1:9696
+#url=http://127.0.0.1:9696
 
 #
 # Region name for connecting to Neutron in admin context.
@@ -7156,25 +6936,17 @@
 # to Keystone, the Keystone service uses the region_name to determine the
 # region the request is coming from.
 #  (string value)
-#region_name = RegionOne
-
-#
-# Default name for the Open vSwitch integration bridge.
+#region_name=RegionOne
+
 #
 # Specifies the name of an integration bridge interface used by OpenvSwitch.
-# This option is only used if Neutron does not specify the OVS bridge name in
-# port binding responses.
-#  (string value)
-#ovs_bridge = br-int
-
-#
-# Default name for the floating IP pool.
-#
-# Specifies the name of floating IP pool used for allocating floating IPs. This
-# option is only used if Neutron does not specify the floating IP pool name in
-# port binding reponses.
-#  (string value)
-#default_floating_pool = nova
+# This option is used only if Neutron does not specify the OVS bridge name.
+#
+# Possible values:
+#
+# * Any string representing OVS bridge name.
+#  (string value)
+#ovs_bridge=br-int
 
 #
 # Integer value representing the number of seconds to wait before querying
@@ -7184,7 +6956,7 @@
 # extensions with no wait.
 #  (integer value)
 # Minimum value: 0
-#extension_sync_interval = 600
+#extension_sync_interval=600
 
 #
 # When set to True, this option indicates that Neutron will be used to proxy
@@ -7195,7 +6967,7 @@
 #
 # * metadata_proxy_shared_secret
 #  (boolean value)
-#service_metadata_proxy = false
+#service_metadata_proxy=false
 
 #
 # This option holds the shared secret string used to validate proxy requests to
@@ -7210,82 +6982,83 @@
 
 # PEM encoded Certificate Authority to use when verifying HTTPs connections.
 # (string value)
-#cafile = <None>
+#cafile=<None>
 
 # PEM encoded client certificate cert file (string value)
-#certfile = <None>
+#certfile=<None>
 
 # PEM encoded client certificate key file (string value)
-#keyfile = <None>
+#keyfile=<None>
 
 # Verify HTTPS connections. (boolean value)
-#insecure = false
+#insecure=false
 
 # Timeout value for http requests (integer value)
-#timeout = <None>
+#timeout=<None>
timeout=300
 
 # Authentication type to load (string value)
 # Deprecated group/name - [neutron]/auth_plugin
-#auth_type = <None>
+#auth_type=<None>
 
 # Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#auth_section=<None>
 
 # Authentication URL (string value)
-#auth_url = <None>
+#auth_url=<None>
 
 # Domain ID to scope to (string value)
-#domain_id = <None>
+#domain_id=<None>
 
 # Domain name to scope to (string value)
-#domain_name = <None>
+#domain_name=<None>
 
 # Project ID to scope to (string value)
-#project_id = <None>
+#project_id=<None>
 
 # Project name to scope to (string value)
-#project_name = <None>
+#project_name=<None>
 
 # Domain ID containing project (string value)
-#project_domain_id = <None>
+#project_domain_id=<None>
 
 # Domain name containing project (string value)
-#project_domain_name = <None>
+#project_domain_name=<None>
 
 # Trust ID (string value)
-#trust_id = <None>
+#trust_id=<None>
 
 # Optional domain ID to use with v3 and v2 parameters. It will be used for both
 # the user and project domain in v3 and ignored in v2 authentication. (string
 # value)
-#default_domain_id = <None>
+#default_domain_id=<None>
 
 # Optional domain name to use with v3 API and v2 parameters. It will be used for
 # both the user and project domain in v3 and ignored in v2 authentication.
 # (string value)
-#default_domain_name = <None>
+#default_domain_name=<None>
 
 # User ID (string value)
-#user_id = <None>
+#user_id=<None>
 
 # Username (string value)
-# Deprecated group/name - [neutron]/user_name
-#username = <None>
+# Deprecated group/name - [neutron]/user-name
+#username=<None>
 
 # User's domain id (string value)
-#user_domain_id = <None>
+#user_domain_id=<None>
 
 # User's domain name (string value)
-#user_domain_name = <None>
+#user_domain_name=<None>
 
 # User's password (string value)
-#password = <None>
+#password=<None>
 
 # Tenant ID (string value)
-#tenant_id = <None>
+#tenant_id=<None>
 
 # Tenant Name (string value)
-#tenant_name = <None>
+#tenant_name=<None>
 
 
 [notifications]
@@ -7300,37 +7073,35 @@
 #
 
 #
-# If set, send compute.instance.update notifications on
-# instance state changes.
-#
-# Please refer to
-# https://docs.openstack.org/nova/latest/reference/notifications.html for
+# If set, send compute.instance.update notifications on instance state
+# changes.
+#
+# Please refer to https://wiki.openstack.org/wiki/SystemUsageData for
 # additional information on notifications.
 #
 # Possible values:
 #
 # * None - no notifications
-# * "vm_state" - notifications are sent with VM state transition information in
-#   the ``old_state`` and ``state`` fields. The ``old_task_state`` and
-#   ``new_task_state`` fields will be set to the current task_state of the
-#   instance.
-# * "vm_and_task_state" - notifications are sent with VM and task state
-#   transition information.
+# * "vm_state" - notifications on VM state changes
+# * "vm_and_task_state" - notifications on VM and task state changes
 #  (string value)
 # Allowed values: <None>, vm_state, vm_and_task_state
-#notify_on_state_change = <None>
+# Deprecated group/name - [DEFAULT]/notify_on_state_change
+#notify_on_state_change=<None>
notify_on_state_change=vm_and_task_state
 
 #
 # If enabled, send api.fault notifications on caught exceptions in the
 # API service.
 #  (boolean value)
 # Deprecated group/name - [DEFAULT]/notify_api_faults
-#notify_on_api_faults = false
+#notify_on_api_faults=false
+notify_on_api_faults=false
 
 # Default notification level for outgoing notifications. (string value)
 # Allowed values: DEBUG, INFO, WARN, ERROR, CRITICAL
 # Deprecated group/name - [DEFAULT]/default_notification_level
-#default_level = INFO
+#default_level=INFO
 
 #
 # Default publisher_id for outgoing notifications. If you consider routing
@@ -7345,7 +7116,8 @@
 #
 # *  my_ip - IP address of this host
 #  (string value)
-#default_publisher_id = $my_ip
+# Deprecated group/name - [DEFAULT]/default_publisher_id
+#default_publisher_id=$my_ip
 
 #
 # Specifies which notification format shall be used by nova.
@@ -7365,36 +7137,59 @@
 # http://docs.openstack.org/developer/nova/notifications.html
 #  (string value)
 # Allowed values: unversioned, versioned, both
-#notification_format = both
-
-#
-# Specifies the topics for the versioned notifications issued by nova.
-#
-# The default value is fine for most deployments and rarely needs to be changed.
-# However, if you have a third-party service that consumes versioned
-# notifications, it might be worth getting a topic for that service.
-# Nova will send a message containing a versioned notification payload to each
-# topic queue in this list.
-#
-# The list of versioned notifications is visible in
-# http://docs.openstack.org/developer/nova/notifications.html
+# Deprecated group/name - [DEFAULT]/notification_format
+#notification_format=both
+
+
+[osapi_v21]
+
+#
+# From nova.conf
+#
+
+# DEPRECATED:
+# This option is a list of all of the v2.1 API extensions to never load.
+#
+# Possible values:
+#
+# * A list of strings, each being the alias of an extension that you do not
+#   wish to load.
+#
+# Related options:
+#
+# * enabled
+# * extensions_whitelist
 #  (list value)
-#versioned_notifications_topics = versioned_notifications
-
-#
-# If enabled, include block device information in the versioned notification
-# payload. Sending block device information is disabled by default as providing
-# that information can incur some overhead on the system since the information
-# may need to be loaded from the database.
-#  (boolean value)
-#bdms_in_notifications = false
-
-
-[osapi_v21]
-
-#
-# From nova.conf
-#
+# This option is deprecated for removal since 12.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# API extensions are now part of the standard API. API extensions should be
+# disabled using policy, rather than via these configuration options.
+#extensions_blacklist =
+
+# DEPRECATED:
+# This is a list of extensions. If it is empty, then *all* extensions except
+# those specified in the extensions_blacklist option will be loaded. If it is
+# not empty, then only those extensions in this list will be loaded, provided
+# that they are also not in the extensions_blacklist option.
+#
+# Possible values:
+#
+# * A list of strings, each being the alias of an extension that you wish to
+#   load, or an empty list, which indicates that all extensions are to be run.
+#
+# Related options:
+#
+# * enabled
+# * extensions_blacklist
+#  (list value)
+# This option is deprecated for removal since 12.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# API extensions are now part of the standard API. API extensions should be
+# disabled using policy, rather than via these configuration options.
+#extensions_whitelist =
 
 # DEPRECATED:
 # This option is a string representing a regular expression (regex) that matches
@@ -7412,7 +7207,7 @@
 # dashes. If your installation uses IDs outside of this range, you should use
 # this option to provide your own regex and give you time to migrate offending
 # projects to valid IDs before the next release.
-#project_id_regex = <None>
+#project_id_regex=<None>
 
 
 [oslo_concurrency]
@@ -7422,14 +7217,16 @@
 #
 
 # Enables or disables inter-process locks. (boolean value)
-#disable_process_locking = false
+# Deprecated group/name - [DEFAULT]/disable_process_locking
+#disable_process_locking=false
 
 # Directory to use for lock files.  For security, the specified directory should
 # only be writable by the user running the processes that need locking. Defaults
 # to environment variable OSLO_LOCK_PATH. If OSLO_LOCK_PATH is not set in the
 # environment, use the Python tempfile.gettempdir function to find a suitable
 # location. If external locks are used, a lock path must be set. (string value)
-#lock_path = /tmp
+# Deprecated group/name - [DEFAULT]/lock_path
lock_path=/var/lib/nova/tmp
 
 
 [oslo_messaging_amqp]
@@ -7440,104 +7237,103 @@
 
# Name for the AMQP container. Must be globally unique. Defaults to a generated
# UUID (string value)
-#container_name = <None>
+# Deprecated group/name - [amqp1]/container_name
+#container_name=<None>
 
 # Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
+# Deprecated group/name - [amqp1]/idle_timeout
+#idle_timeout=0
 
 # Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
+# Deprecated group/name - [amqp1]/trace
+#trace=false
 
 # CA certificate PEM file used to verify the server's certificate (string value)
+# Deprecated group/name - [amqp1]/ssl_ca_file
 #ssl_ca_file =
 
 # Self-identifying certificate PEM file for client authentication (string value)
+# Deprecated group/name - [amqp1]/ssl_cert_file
 #ssl_cert_file =
 
 # Private key PEM file used to sign ssl_cert_file certificate (optional) (string
 # value)
+# Deprecated group/name - [amqp1]/ssl_key_file
 #ssl_key_file =
 
 # Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
+# Deprecated group/name - [amqp1]/ssl_key_password
+#ssl_key_password=<None>
 
 # DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
+# Deprecated group/name - [amqp1]/allow_insecure_clients
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
+#allow_insecure_clients=false
 
 # Space separated list of acceptable SASL mechanisms (string value)
+# Deprecated group/name - [amqp1]/sasl_mechanisms
 #sasl_mechanisms =
 
 # Path to directory that contains the SASL configuration (string value)
+# Deprecated group/name - [amqp1]/sasl_config_dir
 #sasl_config_dir =
 
 # Name of configuration file (without .conf suffix) (string value)
+# Deprecated group/name - [amqp1]/sasl_config_name
 #sasl_config_name =
 
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the username.
+# User name for message broker authentication (string value)
+# Deprecated group/name - [amqp1]/username
 #username =
 
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the password.
+# Password for message broker authentication (string value)
+# Deprecated group/name - [amqp1]/password
 #password =
 
 # Seconds to pause before attempting to re-connect. (integer value)
 # Minimum value: 1
-#connection_retry_interval = 1
+#connection_retry_interval=1
 
 # Increase the connection_retry_interval by this many seconds after each
 # unsuccessful failover attempt. (integer value)
 # Minimum value: 0
-#connection_retry_backoff = 2
+#connection_retry_backoff=2
 
 # Maximum limit for connection_retry_interval + connection_retry_backoff
 # (integer value)
 # Minimum value: 1
-#connection_retry_interval_max = 30
+#connection_retry_interval_max=30
 
 # Time to pause between re-connecting an AMQP 1.0 link that failed due to a
 # recoverable error. (integer value)
 # Minimum value: 1
-#link_retry_delay = 10
+#link_retry_delay=10
 
 # The maximum number of attempts to re-send a reply message which failed due to
 # a recoverable error. (integer value)
 # Minimum value: -1
-#default_reply_retry = 0
+#default_reply_retry=0
 
 # The deadline for an rpc reply message delivery. (integer value)
 # Minimum value: 5
-#default_reply_timeout = 30
+#default_reply_timeout=30
 
 # The deadline for an rpc cast or call message delivery. Only used when caller
 # does not provide a timeout expiry. (integer value)
 # Minimum value: 5
-#default_send_timeout = 30
+#default_send_timeout=30
 
 # The deadline for a sent notification message delivery. Only used when caller
 # does not provide a timeout expiry. (integer value)
 # Minimum value: 5
-#default_notify_timeout = 30
+#default_notify_timeout=30
 
 # The duration to schedule a purge of idle sender links. Detach link after
 # expiry. (integer value)
 # Minimum value: 1
-#default_sender_link_timeout = 600
+#default_sender_link_timeout=600
 
 # Indicates the addressing mode used by the driver.
 # Permitted values:
@@ -7545,36 +7341,39 @@
 # 'routable' - use routable addresses
 # 'dynamic'  - use legacy addresses if the message bus does not support routing
 # otherwise use routable addressing (string value)
-#addressing_mode = dynamic
+#addressing_mode=dynamic
 
 # address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
+# Deprecated group/name - [amqp1]/server_request_prefix
+#server_request_prefix=exclusive
 
 # address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
+# Deprecated group/name - [amqp1]/broadcast_prefix
+#broadcast_prefix=broadcast
 
 # address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
+# Deprecated group/name - [amqp1]/group_request_prefix
+#group_request_prefix=unicast
 
 # Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
+#rpc_address_prefix=openstack.org/om/rpc
 
 # Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
+#notify_address_prefix=openstack.org/om/notify
 
 # Appended to the address prefix when sending a fanout message. Used by the
 # message bus to identify fanout messages. (string value)
-#multicast_address = multicast
+#multicast_address=multicast
 
 # Appended to the address prefix when sending to a particular RPC/Notification
 # server. Used by the message bus to identify messages sent to a single
 # destination. (string value)
-#unicast_address = unicast
+#unicast_address=unicast
 
 # Appended to the address prefix when sending to a group of consumers. Used by
 # the message bus to identify messages that should be delivered in a round-robin
 # fashion across consumers. (string value)
-#anycast_address = anycast
+#anycast_address=anycast
 
 # Exchange name used in notification addresses.
 # Exchange name resolution precedence:
@@ -7582,7 +7381,7 @@
 # else default_notification_exchange if set
 # else control_exchange if set
 # else 'notify' (string value)
-#default_notification_exchange = <None>
+#default_notification_exchange=<None>
 
 # Exchange name used in RPC addresses.
 # Exchange name resolution precedence:
@@ -7590,19 +7389,19 @@
 # else default_rpc_exchange if set
 # else control_exchange if set
 # else 'rpc' (string value)
-#default_rpc_exchange = <None>
+#default_rpc_exchange=<None>
 
 # Window size for incoming RPC Reply messages. (integer value)
 # Minimum value: 1
-#reply_link_credit = 200
+#reply_link_credit=200
 
 # Window size for incoming RPC Request messages (integer value)
 # Minimum value: 1
-#rpc_server_credit = 100
+#rpc_server_credit=100
 
 # Window size for incoming Notification messages (integer value)
 # Minimum value: 1
-#notify_server_credit = 100
+#notify_server_credit=100
 
 # Send messages of this type pre-settled.
 # Pre-settled messages will not receive acknowledgement
@@ -7614,8 +7413,8 @@
 # 'rpc-cast' - Send RPC Casts pre-settled
 # 'notify'   - Send Notifications pre-settled
 #  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
+#pre_settled=rpc-cast
+#pre_settled=rpc-reply
 
 
 [oslo_messaging_kafka]
@@ -7628,7 +7427,7 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
+#kafka_default_host=localhost
 
 # DEPRECATED: Default Kafka broker Port (port value)
 # Minimum value: 0
@@ -7636,33 +7435,33 @@
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
+#kafka_default_port=9092
 
 # Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
+#kafka_max_fetch_bytes=1048576
+
+# Default timeout(s) for Kafka consumers (floating point value)
+#kafka_consumer_timeout=1.0
 
 # Pool Size for Kafka Consumers (integer value)
-#pool_size = 10
+#pool_size=10
 
 # The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
+#conn_pool_min_size=2
 
 # The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
+#conn_pool_ttl=1200
 
 # Group id for Kafka consumer. Consumers in one group will coordinate message
 # consumption (string value)
-#consumer_group = oslo_messaging_consumer
+#consumer_group=oslo_messaging_consumer
 
 # Upper bound on the delay for KafkaProducer batching in seconds (floating point
 # value)
-#producer_batch_timeout = 0.0
+#producer_batch_timeout=0.0
 
 # Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
+#producer_batch_size=16384
 
 
 [oslo_messaging_notifications]
@@ -7670,6 +7469,7 @@
 #
 # From oslo.messaging
 #
+driver = messagingv2
 
 # The Driver(s) to handle sending notifications. Possible values are messaging,
 # messagingv2, routing, log, test, noop (multi valued)
@@ -7679,17 +7479,12 @@
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
 # Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
+#transport_url=<None>
 
 # AMQP topic used for OpenStack notifications. (list value)
 # Deprecated group/name - [rpc_notifier2]/topics
 # Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
+#topics=notifications
 
 
 [oslo_messaging_rabbit]
@@ -7701,110 +7496,122 @@
 # Use durable queues in AMQP. (boolean value)
 # Deprecated group/name - [DEFAULT]/amqp_durable_queues
 # Deprecated group/name - [DEFAULT]/rabbit_durable_queues
-#amqp_durable_queues = false
+#amqp_durable_queues=false
 
 # Auto-delete queues in AMQP. (boolean value)
-#amqp_auto_delete = false
-
-# Enable SSL (boolean value)
-#ssl = <None>
+# Deprecated group/name - [DEFAULT]/amqp_auto_delete
+#amqp_auto_delete=false
 
 # SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
 # SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
 # distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
+# Deprecated group/name - [DEFAULT]/kombu_ssl_version
+#kombu_ssl_version =
 
 # SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
+# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
+#kombu_ssl_keyfile =
 
 # SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
+# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
+#kombu_ssl_certfile =
 
 # SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
+# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
+#kombu_ssl_ca_certs =
 
 # How long to wait before reconnecting in response to an AMQP consumer cancel
 # notification. (floating point value)
-#kombu_reconnect_delay = 1.0
+# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
+#kombu_reconnect_delay=1.0
 
 # EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
 # be used. This option may not be available in future versions. (string value)
-#kombu_compression = <None>
+#kombu_compression=<None>
 
 # How long to wait for a missing client before abandoning the attempt to send
 # it its replies. This value should not be longer than rpc_response_timeout.
 # (integer value)
 # Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
-#kombu_missing_consumer_retry_timeout = 60
+#kombu_missing_consumer_retry_timeout=60
 
 # Determines how the next RabbitMQ node is chosen in case the one we are
 # currently connected to becomes unavailable. Takes effect only if more than one
 # RabbitMQ node is provided in config. (string value)
 # Allowed values: round-robin, shuffle
-#kombu_failover_strategy = round-robin
+#kombu_failover_strategy=round-robin
 
 # DEPRECATED: The RabbitMQ broker address where a single node is used. (string
 # value)
+# Deprecated group/name - [DEFAULT]/rabbit_host
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_host = localhost
+#rabbit_host=localhost
 
 # DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
 # Minimum value: 0
 # Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/rabbit_port
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_port = 5672
+#rabbit_port=5672
 
 # DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# Deprecated group/name - [DEFAULT]/rabbit_hosts
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_hosts = $rabbit_host:$rabbit_port
+#rabbit_hosts=$rabbit_host:$rabbit_port
+
+# Connect over SSL for RabbitMQ. (boolean value)
+# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
+#rabbit_use_ssl=false
 
 # DEPRECATED: The RabbitMQ userid. (string value)
+# Deprecated group/name - [DEFAULT]/rabbit_userid
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_userid = guest
+#rabbit_userid=guest
 
 # DEPRECATED: The RabbitMQ password. (string value)
+# Deprecated group/name - [DEFAULT]/rabbit_password
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_password = guest
+#rabbit_password=guest
 
 # The RabbitMQ login method. (string value)
 # Allowed values: PLAIN, AMQPLAIN, RABBIT-CR-DEMO
-#rabbit_login_method = AMQPLAIN
+# Deprecated group/name - [DEFAULT]/rabbit_login_method
+#rabbit_login_method=AMQPLAIN
 
 # DEPRECATED: The RabbitMQ virtual host. (string value)
+# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 # Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_virtual_host = /
+#rabbit_virtual_host=/
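
# The deprecated rabbit_* options above are all replaced by a single
# [DEFAULT]/transport_url setting; a minimal sketch of the equivalent URL form
# (the host, credentials, and vhost below are placeholder values, not taken
# from this deployment):

```ini
[DEFAULT]
# One URL replaces rabbit_userid, rabbit_password, rabbit_host, rabbit_port
# and rabbit_virtual_host; comma-separated host:port pairs replace
# rabbit_hosts for an HA cluster. All values here are illustrative.
transport_url = rabbit://openstack:rpc_secret@192.0.2.10:5672/
```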
 
 # How frequently to retry connecting with RabbitMQ. (integer value)
-#rabbit_retry_interval = 1
+#rabbit_retry_interval=1
 
 # How long to backoff for between retries when connecting to RabbitMQ. (integer
 # value)
-#rabbit_retry_backoff = 2
+# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
+#rabbit_retry_backoff=2
 
 # Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
 # (integer value)
-#rabbit_interval_max = 30
+#rabbit_interval_max=30
 
 # DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
 # (infinite retry count). (integer value)
+# Deprecated group/name - [DEFAULT]/rabbit_max_retries
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
-#rabbit_max_retries = 0
+#rabbit_max_retries=0
 
 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
 # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
@@ -7812,131 +7619,138 @@
 # you just want to make sure that all queues (except those with auto-generated
 # names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
 # '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
-#rabbit_ha_queues = false
+# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
+#rabbit_ha_queues=false
 
 # Positive integer representing duration in seconds for queue TTL (x-expires).
 # Queues which are unused for the duration of the TTL are automatically deleted.
 # The parameter affects only reply and fanout queues. (integer value)
 # Minimum value: 1
-#rabbit_transient_queues_ttl = 1800
+#rabbit_transient_queues_ttl=1800
 
 # Specifies the number of messages to prefetch. Setting to zero allows unlimited
 # messages. (integer value)
-#rabbit_qos_prefetch_count = 0
+#rabbit_qos_prefetch_count=0
 
 # Number of seconds after which the Rabbit broker is considered down if
 # heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL (integer
 # value)
-#heartbeat_timeout_threshold = 60
+#heartbeat_timeout_threshold=60
 
 # How many times during the heartbeat_timeout_threshold we check the heartbeat.
 # (integer value)
-#heartbeat_rate = 2
+#heartbeat_rate=2
 
 # Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
-#fake_rabbit = false
+# Deprecated group/name - [DEFAULT]/fake_rabbit
+#fake_rabbit=false
 
 # Maximum number of channels to allow (integer value)
-#channel_max = <None>
+#channel_max=<None>
 
 # The maximum byte size for an AMQP frame (integer value)
-#frame_max = <None>
+#frame_max=<None>
 
 # How often to send heartbeats for consumer's connections (integer value)
-#heartbeat_interval = 3
+#heartbeat_interval=3
+
+# Enable SSL (boolean value)
+#ssl=<None>
 
 # Arguments passed to ssl.wrap_socket (dict value)
-#ssl_options = <None>
+#ssl_options=<None>
 
 # Set socket timeout in seconds for connection's socket (floating point value)
-#socket_timeout = 0.25
+#socket_timeout=0.25
 
 # Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value)
-#tcp_user_timeout = 0.25
+#tcp_user_timeout=0.25
 
 # Set delay for reconnection to some host which has connection error (floating
 # point value)
-#host_connection_reconnect_delay = 0.25
+#host_connection_reconnect_delay=0.25
 
 # Connection factory implementation (string value)
 # Allowed values: new, single, read_write
-#connection_factory = single
+#connection_factory=single
 
 # Maximum number of connections to keep queued. (integer value)
-#pool_max_size = 30
+#pool_max_size=30
 
 # Maximum number of connections to create above `pool_max_size`. (integer value)
-#pool_max_overflow = 0
+#pool_max_overflow=0
 
 # Default number of seconds to wait for a connection to become available
 # (integer value)
-#pool_timeout = 30
+#pool_timeout=30
 
 # Lifetime of a connection (since creation) in seconds or None for no recycling.
 # Expired connections are closed on acquire. (integer value)
-#pool_recycle = 600
+#pool_recycle=600
 
 # Threshold at which inactive (since release) connections are considered stale
 # in seconds or None for no staleness. Stale connections are closed on acquire.
 # (integer value)
-#pool_stale = 60
+#pool_stale=60
 
 # Default serialization mechanism for serializing/deserializing
 # outgoing/incoming messages (string value)
 # Allowed values: json, msgpack
-#default_serializer_type = json
+#default_serializer_type=json
 
 # Persist notification messages. (boolean value)
-#notification_persistence = false
+#notification_persistence=false
 
 # Exchange name for sending notifications (string value)
-#default_notification_exchange = ${control_exchange}_notification
+#default_notification_exchange=${control_exchange}_notification
 
 # Max number of unacknowledged messages which RabbitMQ can send to the
 # notification listener. (integer value)
-#notification_listener_prefetch_count = 100
+#notification_listener_prefetch_count=100
 
 # Reconnecting retry count in case of connectivity problem during sending
 # notification, -1 means infinite retry. (integer value)
-#default_notification_retry_attempts = -1
+#default_notification_retry_attempts=-1
 
 # Reconnecting retry delay in case of connectivity problem during sending
 # notification message (floating point value)
-#notification_retry_delay = 0.25
+#notification_retry_delay=0.25
 
 # Time to live for rpc queues without consumers in seconds. (integer value)
-#rpc_queue_expiration = 60
+#rpc_queue_expiration=60
 
 # Exchange name for sending RPC messages (string value)
-#default_rpc_exchange = ${control_exchange}_rpc
+#default_rpc_exchange=${control_exchange}_rpc
 
 # Exchange name for receiving RPC replies (string value)
-#rpc_reply_exchange = ${control_exchange}_rpc_reply
+#rpc_reply_exchange=${control_exchange}_rpc_reply
 
 # Max number of unacknowledged messages which RabbitMQ can send to the rpc
 # listener. (integer value)
-#rpc_listener_prefetch_count = 100
+#rpc_listener_prefetch_count=100
 
 # Max number of unacknowledged messages which RabbitMQ can send to the rpc
 # reply listener. (integer value)
-#rpc_reply_listener_prefetch_count = 100
+#rpc_reply_listener_prefetch_count=100
 
 # Reconnecting retry count in case of connectivity problem during sending reply.
 # -1 means infinite retry during rpc_timeout (integer value)
-#rpc_reply_retry_attempts = -1
+#rpc_reply_retry_attempts=-1
 
 # Reconnecting retry delay in case of connectivity problem during sending reply.
 # (floating point value)
-#rpc_reply_retry_delay = 0.25
+#rpc_reply_retry_delay=0.25
 
 # Reconnecting retry count in case of connectivity problem during sending RPC
 # message, -1 means infinite retry. If the actual number of retry attempts is
 # not 0, the rpc request could be processed more than once (integer value)
-#default_rpc_retry_attempts = -1
+#default_rpc_retry_attempts=-1
 
 # Reconnecting retry delay in case of connectivity problem during sending RPC
 # message (floating point value)
-#rpc_retry_delay = 0.25
+#rpc_retry_delay=0.25
 
 
 [oslo_messaging_zmq]
@@ -7947,25 +7761,31 @@
 
 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
 # The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
+# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
+#rpc_zmq_bind_address=*
 
 # MatchMaker driver. (string value)
 # Allowed values: redis, sentinel, dummy
-#rpc_zmq_matchmaker = redis
+# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
+#rpc_zmq_matchmaker=redis
 
 # Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
+# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
+#rpc_zmq_contexts=1
 
 # Maximum number of ingress messages to locally buffer per topic. Default is
 # unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
+# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
+#rpc_zmq_topic_backlog=<None>
 
 # Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
+# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
+#rpc_zmq_ipc_dir=/var/run/openstack
 
 # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
 # "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
+# Deprecated group/name - [DEFAULT]/rpc_zmq_host
+#rpc_zmq_host=localhost
 
 # Number of seconds to wait before all pending messages will be sent after
 # closing a socket. The default value of -1 specifies an infinite linger period.
@@ -7973,110 +7793,119 @@
 # immediately when the socket is closed. Positive values specify an upper bound
 # for the linger period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
+#zmq_linger=-1
 
 # The default number of seconds that poll should wait. Poll raises timeout
 # exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
+# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
+#rpc_poll_timeout=1
 
 # Expiration timeout in seconds of a name service record about an existing
 # target (< 0 means no timeout). (integer value)
-#zmq_target_expire = 300
+# Deprecated group/name - [DEFAULT]/zmq_target_expire
+#zmq_target_expire=300
 
 # Update period in seconds of a name service record about existing target.
 # (integer value)
-#zmq_target_update = 180
+# Deprecated group/name - [DEFAULT]/zmq_target_update
+#zmq_target_update=180
 
 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
 # value)
-#use_pub_sub = false
+# Deprecated group/name - [DEFAULT]/use_pub_sub
+#use_pub_sub=false
 
 # Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
+# Deprecated group/name - [DEFAULT]/use_router_proxy
+#use_router_proxy=false
 
 # This option makes direct connections dynamic or static. It makes sense only
 # with use_router_proxy=False which means to use direct connections for direct
 # message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
+#use_dynamic_connections=false
 
 # How many additional connections to a host will be made for failover reasons.
 # This option only applies in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
+#zmq_failover_connections=2
 
 # Minimal port number for random ports range. (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#rpc_zmq_min_port = 49153
+# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
+#rpc_zmq_min_port=49153
 
 # Maximal port number for random ports range. (integer value)
 # Minimum value: 1
 # Maximum value: 65536
-#rpc_zmq_max_port = 65536
+# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
+#rpc_zmq_max_port=65536
 
 # Number of retries to find free port number before fail with ZMQBindError.
 # (integer value)
-#rpc_zmq_bind_port_retries = 100
+# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
+#rpc_zmq_bind_port_retries=100
 
 # Default serialization mechanism for serializing/deserializing
 # outgoing/incoming messages (string value)
 # Allowed values: json, msgpack
-#rpc_zmq_serialization = json
+# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
+#rpc_zmq_serialization=json
 
 # This option configures round-robin mode in zmq socket. True means not keeping
 # a queue when the server side disconnects. False means to keep the queue and
 # messages even if the server is disconnected; when the server reappears, all
 # accumulated messages are sent to it. (boolean value)
-#zmq_immediate = true
+#zmq_immediate=true
 
 # Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
 # other negative value) means to skip any overrides and leave it to OS default;
 # 0 and 1 (or any other positive value) mean to disable and enable the option
 # respectively. (integer value)
-#zmq_tcp_keepalive = -1
+#zmq_tcp_keepalive=-1
 
 # The duration between two keepalive transmissions in idle condition. The unit
 # is platform dependent, for example, seconds in Linux, milliseconds in Windows
 # etc. The default value of -1 (or any other negative value and 0) means to skip
 # any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
+#zmq_tcp_keepalive_idle=-1
 
 # The number of retransmissions to be carried out before declaring that remote
 # end is not available. The default value of -1 (or any other negative value and
 # 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
+#zmq_tcp_keepalive_cnt=-1
 
 # The duration between two successive keepalive retransmissions, if
 # acknowledgement to the previous keepalive transmission is not received. The
 # unit is platform dependent, for example, seconds in Linux, milliseconds in
 # Windows etc. The default value of -1 (or any other negative value and 0) means
 # to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
+#zmq_tcp_keepalive_intvl=-1
 
 # Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
+#rpc_thread_pool_size=100
 
 # Expiration timeout in seconds of a sent/received message after which it is not
 # tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
+#rpc_message_ttl=300
 
 # Wait for message acknowledgements from receivers. This mechanism works only
 # via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
+#rpc_use_acks=false
 
 # Number of seconds to wait for an ack from a cast/call. After each retry
 # attempt this timeout is multiplied by some specified multiplier. (integer
 # value)
-#rpc_ack_timeout_base = 15
+#rpc_ack_timeout_base=15
 
 # Number to multiply base ack timeout by after each retry attempt. (integer
 # value)
-#rpc_ack_timeout_multiplier = 2
+#rpc_ack_timeout_multiplier=2
 
 # Default number of message sending attempts in case of any problems occurred:
 # positive value N means at most N retries, 0 means no retries, None or -1 (or
 # any other negative values) mean to retry forever. This option is used only if
 # acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
+#rpc_retry_attempts=3
 
 # List of publisher hosts SubConsumer can subscribe on. This option has higher
 # priority than the default publishers list taken from the matchmaker. (list
@@ -8093,18 +7922,18 @@
 # The maximum body size for each request, in bytes. (integer value)
 # Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
 # Deprecated group/name - [DEFAULT]/max_request_body_size
-#max_request_body_size = 114688
+#max_request_body_size=114688
 
 # DEPRECATED: The HTTP Header that will be used to determine what the original
 # request protocol scheme was, even if it was hidden by a SSL termination proxy.
 # (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
-#secure_proxy_ssl_header = X-Forwarded-Proto
+#secure_proxy_ssl_header=X-Forwarded-Proto
 
 # Whether the application is behind a proxy or not. This determines if the
 # middleware should parse the headers or not. (boolean value)
-#enable_proxy_headers_parsing = false
+#enable_proxy_headers_parsing=false
 
 
 [oslo_policy]
@@ -8114,17 +7943,20 @@
 #
 
 # The file that defines policies. (string value)
-#policy_file = policy.json
+# Deprecated group/name - [DEFAULT]/policy_file
+#policy_file=policy.json
 
 # Default rule. Enforced when a requested rule is not found. (string value)
-#policy_default_rule = default
+# Deprecated group/name - [DEFAULT]/policy_default_rule
+#policy_default_rule=default
 
 # Directories where policy configuration files are stored. They can be relative
 # to any directory in the search path defined by the config_dir option, or
 # absolute paths. The file defined by policy_file must exist for these
 # directories to be searched.  Missing or empty directories are ignored. (multi
 # valued)
-#policy_dirs = policy.d
+# Deprecated group/name - [DEFAULT]/policy_dirs
+#policy_dirs=policy.d
 
 
 [pci]
@@ -8227,13 +8059,19 @@
 # Deprecated group/name - [DEFAULT]/pci_passthrough_whitelist
 #passthrough_whitelist =
 
-
 [placement]
-os_region_name = openstack
 
 #
 # From nova.conf
 #
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = nova
+password = opnfv_secret
+auth_url = http://10.167.4.35:35357/v3
+os_interface = internal
 
 #
 # Region name of this node. This is used when picking the URL in the service
@@ -8243,92 +8081,93 @@
 #
 # * Any string representing region name
 #  (string value)
-#os_region_name = <None>
+#os_region_name = openstack
+os_region_name = RegionOne
 
 #
 # Endpoint interface for this node. This is used when picking the URL in the
 # service catalog.
 #  (string value)
-#os_interface = <None>
+#os_interface=<None>
 
 # PEM encoded Certificate Authority to use when verifying HTTPs connections.
 # (string value)
-#cafile = <None>
+#cafile=<None>
 
 # PEM encoded client certificate cert file (string value)
-#certfile = <None>
+#certfile=<None>
 
 # PEM encoded client certificate key file (string value)
-#keyfile = <None>
+#keyfile=<None>
 
 # Verify HTTPS connections. (boolean value)
-#insecure = false
+#insecure=false
 
 # Timeout value for http requests (integer value)
-#timeout = <None>
+#timeout=<None>
 
 # Authentication type to load (string value)
 # Deprecated group/name - [placement]/auth_plugin
-#auth_type = <None>
+#auth_type=<None>
 
 # Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#auth_section=<None>
 
 # Authentication URL (string value)
-#auth_url = <None>
+#auth_url=<None>
 
 # Domain ID to scope to (string value)
-#domain_id = <None>
+#domain_id=<None>
 
 # Domain name to scope to (string value)
-#domain_name = <None>
+#domain_name=<None>
 
 # Project ID to scope to (string value)
-#project_id = <None>
+#project_id=<None>
 
 # Project name to scope to (string value)
-#project_name = <None>
+#project_name=<None>
 
 # Domain ID containing project (string value)
-#project_domain_id = <None>
+#project_domain_id=<None>
 
 # Domain name containing project (string value)
-#project_domain_name = <None>
+#project_domain_name=<None>
 
 # Trust ID (string value)
-#trust_id = <None>
+#trust_id=<None>
 
 # Optional domain ID to use with v3 and v2 parameters. It will be used for both
 # the user and project domain in v3 and ignored in v2 authentication. (string
 # value)
-#default_domain_id = <None>
+#default_domain_id=<None>
 
 # Optional domain name to use with v3 API and v2 parameters. It will be used for
 # both the user and project domain in v3 and ignored in v2 authentication.
 # (string value)
-#default_domain_name = <None>
+#default_domain_name=<None>
 
 # User ID (string value)
-#user_id = <None>
+#user_id=<None>
 
 # Username (string value)
-# Deprecated group/name - [placement]/user_name
-#username = <None>
+# Deprecated group/name - [placement]/user-name
+#username=<None>
 
 # User's domain id (string value)
-#user_domain_id = <None>
+#user_domain_id=<None>
 
 # User's domain name (string value)
-#user_domain_name = <None>
+#user_domain_name=<None>
 
 # User's password (string value)
-#password = <None>
+#password=<None>
 
 # Tenant ID (string value)
-#tenant_id = <None>
+#tenant_id=<None>
 
 # Tenant Name (string value)
-#tenant_name = <None>
+#tenant_name=<None>
 
 
 [quota]
@@ -8349,7 +8188,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_instances
-#instances = 10
+#instances=10
 
 #
 # The number of instance cores or vCPUs allowed per project.
@@ -8361,7 +8200,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_cores
-#cores = 20
+#cores=20
 
 #
 # The number of megabytes of instance RAM allowed per project.
@@ -8373,7 +8212,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_ram
-#ram = 51200
+#ram=51200
 
 # DEPRECATED:
 # The number of floating IPs allowed per project.
@@ -8394,7 +8233,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#floating_ips = 10
+#floating_ips=10
 
 # DEPRECATED:
 # The number of fixed IPs allowed per project.
@@ -8414,7 +8253,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#fixed_ips = -1
+#fixed_ips=-1
 
 #
 # The number of metadata items allowed per instance.
@@ -8429,7 +8268,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_metadata_items
-#metadata_items = 128
+#metadata_items=128
 
 #
 # The number of injected files allowed.
@@ -8449,7 +8288,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_injected_files
-#injected_files = 5
+#injected_files=5
 
 #
 # The number of bytes allowed per injected file.
@@ -8461,7 +8300,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_injected_file_content_bytes
-#injected_file_content_bytes = 10240
+#injected_file_content_bytes=10240
 
 #
 # The maximum allowed injected file path length.
@@ -8473,7 +8312,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_injected_file_path_length
-#injected_file_path_length = 255
+#injected_file_path_length=255
 
 # DEPRECATED:
 # The number of security groups per project.
@@ -8489,7 +8328,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#security_groups = 10
+#security_groups=10
 
 # DEPRECATED:
 # The number of security rules per security group.
@@ -8509,7 +8348,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#security_group_rules = 20
+#security_group_rules=20
 
 #
 # The maximum number of key pairs allowed per user.
@@ -8524,7 +8363,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_key_pairs
-#key_pairs = 100
+#key_pairs=100
 
 #
 # The maximum number of server groups per project.
@@ -8541,7 +8380,7 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_server_groups
-#server_groups = 10
+#server_groups=10
 
 #
 # The maximum number of servers per server group.
@@ -8553,14 +8392,15 @@
 #  (integer value)
 # Minimum value: -1
 # Deprecated group/name - [DEFAULT]/quota_server_group_members
-#server_group_members = 10
+#server_group_members=10
 
 #
 # The number of seconds until a reservation expires.
 #
 # This quota represents the time period for invalidating quota reservations.
 #  (integer value)
-#reservation_expire = 86400
+# Deprecated group/name - [DEFAULT]/reservation_expire
+#reservation_expire=86400
 
 #
 # The count of reservations until usage is refreshed.
@@ -8570,7 +8410,8 @@
 # issues.
 #  (integer value)
 # Minimum value: 0
-#until_refresh = 0
+# Deprecated group/name - [DEFAULT]/until_refresh
+#until_refresh=0
 
 #
 # The number of seconds between subsequent usage refreshes.
@@ -8581,7 +8422,8 @@
 # on a new reservation if max_age has passed since the last reservation.
 #  (integer value)
 # Minimum value: 0
-#max_age = 0
+# Deprecated group/name - [DEFAULT]/max_age
+#max_age=0
 
 # DEPRECATED:
 # The quota enforcer driver.
@@ -8597,34 +8439,7 @@
 # Deprecated group/name - [DEFAULT]/quota_driver
 # This option is deprecated for removal since 14.0.0.
 # Its value may be silently ignored in the future.
-#driver = nova.quota.DbQuotaDriver
-
-#
-# Recheck quota after resource creation to prevent allowing quota to be
-# exceeded.
-#
-# This defaults to True (recheck quota after resource creation) but can be set
-# to
-# False to avoid additional load if allowing quota to be exceeded because of
-# racing requests is considered acceptable. For example, when set to False, if a
-# user makes highly parallel REST API requests to create servers, it will be
-# possible for them to create more servers than their allowed quota during the
-# race. If their quota is 10 servers, they might be able to create 50 during the
-# burst. After the burst, they will not be able to create any more servers but
-# they will be able to keep their 50 servers until they delete them.
-#
-# The initial quota check is done before resources are created, so if multiple
-# parallel requests arrive at the same time, all could pass the quota check and
-# create resources, potentially exceeding quota. When recheck_quota is True,
-# quota will be checked a second time after resources have been created and if
-# the resource is over quota, it will be deleted and OverQuota will be raised,
-# usually resulting in a 403 response to the REST API user. This makes it
-# impossible for a user to exceed their quota with the caveat that it will,
-# however, be possible for a REST API user to be rejected with a 403 response in
-# the event of a collision close to reaching their quota limit, even if the user
-# has enough quota available when they made the request.
-#  (boolean value)
-#recheck_quota = true
+#driver=nova.quota.DbQuotaDriver
 
 
 [rdp]
@@ -8654,7 +8469,7 @@
 # * ``compute_driver``: Must be hyperv.
 #
 #  (boolean value)
-#enabled = false
+#enabled=false
 
 #
 # The URL an end user would use to connect to the RDP HTML5 console proxy.
@@ -8676,7 +8491,7 @@
 # * <scheme>://<ip-address>:<port-number>/
 #
 #   The scheme must be identical to the scheme configured for the RDP HTML5
-#   console proxy service. It is ``http`` or ``https``.
+#   console proxy service.
 #
 #   The IP address must be identical to the address on which the RDP HTML5
 #   console proxy service is listening.
@@ -8688,8 +8503,8 @@
 #
 # * ``rdp.enabled``: Must be set to ``True`` for ``html5_proxy_base_url`` to be
 #   effective.
-#  (uri value)
-#html5_proxy_base_url = http://127.0.0.1:6083/
+#  (string value)
+#html5_proxy_base_url=http://127.0.0.1:6083/
 
 
 [remote_debug]
@@ -8715,8 +8530,8 @@
 #
 #     /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
 #     --remote_debug-host <IP address where the debugger is running>
-#  (unknown value)
-#host = <None>
+#  (string value)
+#host=<None>
 
 #
 # Debug port to connect to. This command line parameter allows you to specify
@@ -8738,7 +8553,7 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#port = <None>
+#port=<None>
 
 
 [scheduler]
@@ -8756,36 +8571,29 @@
 #  (string value)
 # Allowed values: host_manager, ironic_host_manager
 # Deprecated group/name - [DEFAULT]/scheduler_host_manager
-#host_manager = host_manager
-
-#
-# The class of the driver used by the scheduler. This should be chosen from one
-# of the entrypoints under the namespace 'nova.scheduler.driver' of file
-# 'setup.cfg'. If nothing is specified in this option, the 'filter_scheduler' is
-# used.
-#
-# Other options are:
-#
-# * 'caching_scheduler' which aggressively caches the system state for better
-#   individual scheduler performance at the risk of more retries when running
-#   multiple schedulers. [DEPRECATED]
-# * 'chance_scheduler' which simply picks a host at random. [DEPRECATED]
-# * 'fake_scheduler' which is used for testing.
-#
-# Possible values:
-#
-# * Any of the drivers included in Nova:
-# ** filter_scheduler
-# ** caching_scheduler
-# ** chance_scheduler
-# ** fake_scheduler
-# * You may also set this to the entry point name of a custom scheduler driver,
-#   but you will be responsible for creating and maintaining it in your
-# setup.cfg
-#   file.
-#  (string value)
+#host_manager=host_manager
+
+#
+# The class of the driver used by the scheduler.
+#
+# The options are chosen from the entry points under the namespace
+# 'nova.scheduler.driver' in 'setup.cfg'.
+#
+# Possible values:
+#
+# * A string, where the string corresponds to the class name of a scheduler
+#   driver. There are a number of options available:
+# ** 'caching_scheduler', which aggressively caches the system state for better
+#    individual scheduler performance at the risk of more retries when running
+#    multiple schedulers
+# ** 'chance_scheduler', which simply picks a host at random
+# ** 'fake_scheduler', which is used for testing
+# ** A custom scheduler driver. In this case, you will be responsible for
+#    creating and maintaining the entry point in your 'setup.cfg' file
+#  (string value)
+# Allowed values: filter_scheduler, caching_scheduler, chance_scheduler, fake_scheduler
 # Deprecated group/name - [DEFAULT]/scheduler_driver
-#driver = filter_scheduler
+#driver=filter_scheduler
 
 #
 # Periodic task interval.
@@ -8812,7 +8620,7 @@
 # * ``nova-service service_down_time``
 #  (integer value)
 # Deprecated group/name - [DEFAULT]/scheduler_driver_task_period
-#periodic_task_interval = 60
+#periodic_task_interval=60
 
 #
 # Maximum number of schedule attempts for a chosen host.
@@ -8831,7 +8639,7 @@
 #          (integer value)
 # Minimum value: 1
 # Deprecated group/name - [DEFAULT]/scheduler_max_attempts
-#max_attempts = 3
+#max_attempts=3
 
 #
 # Periodic task interval.
@@ -8840,14 +8648,15 @@
 # to discover new hosts that have been added to cells. If negative (the
 # default), no automatic discovery will occur.
 #
-# Deployments where compute nodes come and go frequently may want this
-# enabled, where others may prefer to manually discover hosts when one
-# is added to avoid any overhead from constantly checking. If enabled,
-# every time this runs, we will select any unmapped hosts out of each
-# cell database on every run.
+# Small deployments may want this periodic task enabled, as surveying the
+# cells for new hosts is likely to be lightweight enough to not cause undue
+# burden to the scheduler. However, larger clouds (and those that are not
+# adding hosts regularly) will likely want to disable this automatic
+# behavior and instead use the `nova-manage cell_v2 discover_hosts` command
+# when hosts have been added to a cell.
 #  (integer value)
 # Minimum value: -1
-#discover_hosts_in_cells_interval = -1
+#discover_hosts_in_cells_interval=-1
 
 
 [serial_console]
@@ -8866,7 +8675,7 @@
 # In order to use this feature, the service ``nova-serialproxy`` needs to run.
 # This service is typically executed on the controller node.
 #  (boolean value)
-#enabled = false
+#enabled=false
 
 #
 # A range of TCP ports a guest can use for its backend.
@@ -8881,7 +8690,7 @@
 #   Be sure that the first port number is lower than the second port number
 #   and that both are in range from 0 to 65535.
 #  (string value)
-#port_range = 10000:20000
+#port_range=10000:20000
 
 #
 # The URL an end user would use to connect to the ``nova-serialproxy`` service.
@@ -8900,7 +8709,7 @@
 #   with ``wss://`` instead of the unsecured ``ws://``. The options ``cert``
 #   and ``key`` in the ``[DEFAULT]`` section have to be set for that.
 #  (uri value)
-#base_url = ws://127.0.0.1:6083/
+#base_url=ws://127.0.0.1:6083/
 
 #
 # The IP address to which proxy clients (like ``nova-serialproxy``) should
@@ -8908,7 +8717,7 @@
 #
 # This is typically the IP address of the host of a ``nova-compute`` service.
 #  (string value)
-#proxyclient_address = 127.0.0.1
+#proxyclient_address=127.0.0.1
 
 #
 # The IP address which is used by the ``nova-serialproxy`` service to listen
@@ -8922,7 +8731,7 @@
 # * Ensure that this is the same IP address which is defined in the option
 #   ``base_url`` of this section or use ``0.0.0.0`` to listen on all addresses.
 #  (string value)
-#serialproxy_host = 0.0.0.0
+#serialproxy_host=0.0.0.0
 
 #
 # The port number which is used by the ``nova-serialproxy`` service to listen
@@ -8938,7 +8747,7 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#serialproxy_port = 6083
+#serialproxy_port=6083
 
 
 [service_user]
@@ -8955,7 +8764,7 @@
 # When True, if sending a user token to an REST API, also send a service token.
 #
 # Nova often reuses the user token provided to the nova-api to talk to other
-# REST APIs, such as Cinder, Glance and Neutron. It is possible that while the
+# REST APIs, such as Cinder and Neutron. It is possible that while the
 # user token was valid when the request was made to Nova, the token may expire
 # before it reaches the other service. To avoid any failures, and to
 # make it clear it is Nova calling the service on the user's behalf, we include
@@ -8966,86 +8775,86 @@
 # This feature is currently experimental, and as such is turned off by default
 # while full testing and performance tuning of this feature is completed.
 #  (boolean value)
-#send_service_user_token = false
+#send_service_user_token=false
 
 # PEM encoded Certificate Authority to use when verifying HTTPS connections.
 # (string value)
-#cafile = <None>
+#cafile=<None>
 
 # PEM encoded client certificate cert file (string value)
-#certfile = <None>
+#certfile=<None>
 
 # PEM encoded client certificate key file (string value)
-#keyfile = <None>
+#keyfile=<None>
 
 # Verify HTTPS connections. (boolean value)
-#insecure = false
+#insecure=false
 
 # Timeout value for http requests (integer value)
-#timeout = <None>
+#timeout=<None>
 
 # Authentication type to load (string value)
 # Deprecated group/name - [service_user]/auth_plugin
-#auth_type = <None>
+#auth_type=<None>
 
 # Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#auth_section=<None>
 
 # Authentication URL (string value)
-#auth_url = <None>
+#auth_url=<None>
 
 # Domain ID to scope to (string value)
-#domain_id = <None>
+#domain_id=<None>
 
 # Domain name to scope to (string value)
-#domain_name = <None>
+#domain_name=<None>
 
 # Project ID to scope to (string value)
-#project_id = <None>
+#project_id=<None>
 
 # Project name to scope to (string value)
-#project_name = <None>
+#project_name=<None>
 
 # Domain ID containing project (string value)
-#project_domain_id = <None>
+#project_domain_id=<None>
 
 # Domain name containing project (string value)
-#project_domain_name = <None>
+#project_domain_name=<None>
 
 # Trust ID (string value)
-#trust_id = <None>
+#trust_id=<None>
 
 # Optional domain ID to use with v3 and v2 parameters. It will be used for both
 # the user and project domain in v3 and ignored in v2 authentication. (string
 # value)
-#default_domain_id = <None>
+#default_domain_id=<None>
 
 # Optional domain name to use with v3 API and v2 parameters. It will be used for
 # both the user and project domain in v3 and ignored in v2 authentication.
 # (string value)
-#default_domain_name = <None>
+#default_domain_name=<None>
 
 # User ID (string value)
-#user_id = <None>
+#user_id=<None>
 
 # Username (string value)
-# Deprecated group/name - [service_user]/user_name
-#username = <None>
+# Deprecated group/name - [service_user]/user-name
+#username=<None>
 
 # User's domain id (string value)
-#user_domain_id = <None>
+#user_domain_id=<None>
 
 # User's domain name (string value)
-#user_domain_name = <None>
+#user_domain_name=<None>
 
 # User's password (string value)
-#password = <None>
+#password=<None>
 
 # Tenant ID (string value)
-#tenant_id = <None>
+#tenant_id=<None>
 
 # Tenant Name (string value)
-#tenant_name = <None>
+#tenant_name=<None>
 
 
 [spice]
@@ -9060,7 +8869,8 @@
 # * vnc.enabled set to False
 # * update html5proxy_base_url
 # * update server_proxyclient_address
-
+enabled = false
+html5proxy_base_url = https://172.30.10.101:6080/spice_auto.html
 #
 # From nova.conf
 #
@@ -9073,7 +8883,7 @@
 # * VNC must be explicitly disabled to get access to the SPICE console. Set the
 #   enabled option to False in the [vnc] section to disable the VNC console.
 #  (boolean value)
-#enabled = false
+#enabled=false
 
 #
 # Enable the SPICE guest agent support on the instances.
@@ -9091,7 +8901,7 @@
 #   needing to click inside the console or press keys to release it. The
 #   performance of mouse movement is also improved.
 #  (boolean value)
-#agent_enabled = true
+#agent_enabled=true
 
 #
 # Location of the SPICE HTML5 console proxy.
@@ -9115,7 +8925,7 @@
 #   The access URL returned by the compute node must have the host
 #   and port where the ``nova-spicehtml5proxy`` service is listening.
 #  (uri value)
-#html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html
+#html5proxy_base_url=http://127.0.0.1:6082/spice_auto.html
 
 #
 # The address where the SPICE server running on the instances should listen.
@@ -9128,7 +8938,7 @@
 #
 # * IP address to listen on.
 #  (string value)
-#server_listen = 127.0.0.1
+#server_listen=127.0.0.1
 
 #
 # The address used by ``nova-spicehtml5proxy`` client to connect to instance
@@ -9148,7 +8958,7 @@
 #   The proxy client must be able to access the address specified in
 #   ``server_listen`` using the value of this option.
 #  (string value)
-#server_proxyclient_address = 127.0.0.1
+#server_proxyclient_address=127.0.0.1
 
 #
 # A keyboard layout which is supported by the underlying hypervisor on this
@@ -9159,7 +8969,7 @@
 #   use QEMU as hypervisor, you should find the list of supported keyboard
 #   layouts at /usr/share/qemu/keymaps.
 #  (string value)
-#keymap = en-us
+#keymap=en-us
 
 #
 # IP address or a hostname on which the ``nova-spicehtml5proxy`` service
@@ -9170,8 +8980,8 @@
 # * This option depends on the ``html5proxy_base_url`` option.
 #   The ``nova-spicehtml5proxy`` service must be listening on a host that is
 #   accessible from the HTML5 client.
-#  (unknown value)
-#html5proxy_host = 0.0.0.0
+#  (string value)
+#html5proxy_host=0.0.0.0
 
 #
 # Port on which the ``nova-spicehtml5proxy`` service listens for incoming
@@ -9185,7 +8995,33 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#html5proxy_port = 6082
+#html5proxy_port=6082
+
+
+[ssl]
+
+#
+# From nova.conf
+#
+
+# CA certificate file to use to verify connecting clients. (string value)
+# Deprecated group/name - [DEFAULT]/ssl_ca_file
+#ca_file=<None>
+
+# Certificate file to use when starting the server securely. (string value)
+# Deprecated group/name - [DEFAULT]/ssl_cert_file
+#cert_file=<None>
+
+# Private key file to use when starting the server securely. (string value)
+# Deprecated group/name - [DEFAULT]/ssl_key_file
+#key_file=<None>
+
+# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
+# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
+# distributions. (string value)
+#version=<None>
+
+# Sets the list of available ciphers. Value should be a string in the OpenSSL
+# cipher list format. (string value)
+#ciphers=<None>
 
 
 [trusted_computing]
@@ -9196,7 +9032,7 @@
 # From nova.conf
 #
 
-# DEPRECATED:
+#
 # The host to use as the attestation server.
 #
 # Cloud computing pools can involve thousands of compute nodes located at
@@ -9222,13 +9058,10 @@
 # * attestation_auth_blob
 # * attestation_auth_timeout
 # * attestation_insecure_ssl
-#  (unknown value)
-# This option is deprecated for removal since Pike.
-# Its value may be silently ignored in the future.
-# Reason: Incomplete filter
-#attestation_server = <None>
-
-# DEPRECATED:
+#  (string value)
+#attestation_server=<None>
+
+#
 # The absolute path to the certificate to use for authentication when connecting
 # to the attestation server. See the `attestation_server` help text for more
 # information about host verification.
@@ -9251,12 +9084,9 @@
 # * attestation_auth_timeout
 # * attestation_insecure_ssl
 #  (string value)
-# This option is deprecated for removal since Pike.
-# Its value may be silently ignored in the future.
-# Reason: Incomplete filter
-#attestation_server_ca_file = <None>
-
-# DEPRECATED:
+#attestation_server_ca_file=<None>
+
+#
 # The port to use when connecting to the attestation server. See the
 # `attestation_server` help text for more information about host verification.
 #
@@ -9275,12 +9105,9 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-# This option is deprecated for removal since Pike.
-# Its value may be silently ignored in the future.
-# Reason: Incomplete filter
-#attestation_port = 8443
-
-# DEPRECATED:
+#attestation_port=8443
+
+#
 # The URL on the attestation server to use. See the `attestation_server` help
 # text for more information about host verification.
 #
@@ -9305,12 +9132,9 @@
 # * attestation_auth_timeout
 # * attestation_insecure_ssl
 #  (string value)
-# This option is deprecated for removal since Pike.
-# Its value may be silently ignored in the future.
-# Reason: Incomplete filter
-#attestation_api_url = /OpenAttestationWebServices/V1.0
-
-# DEPRECATED:
+#attestation_api_url=/OpenAttestationWebServices/V1.0
+
+#
 # Attestation servers require a specific blob that is used to authenticate. The
 # content and format of the blob are determined by the particular attestation
 # server being used. There is no default value; you must supply the value as
@@ -9335,12 +9159,9 @@
 # * attestation_auth_timeout
 # * attestation_insecure_ssl
 #  (string value)
-# This option is deprecated for removal since Pike.
-# Its value may be silently ignored in the future.
-# Reason: Incomplete filter
-#attestation_auth_blob = <None>
-
-# DEPRECATED:
+#attestation_auth_blob=<None>
+
+#
 # This value controls how long a successful attestation is cached. Once this
 # period has elapsed, a new attestation request will be made. See the
 # `attestation_server` help text for more information about host verification.
@@ -9364,13 +9185,9 @@
 # * attestation_auth_blob
 # * attestation_insecure_ssl
 #  (integer value)
-# Minimum value: 0
-# This option is deprecated for removal since Pike.
-# Its value may be silently ignored in the future.
-# Reason: Incomplete filter
-#attestation_auth_timeout = 60
-
-# DEPRECATED:
+#attestation_auth_timeout=60
+
+#
 # When set to True, the SSL certificate verification is skipped for the
 # attestation service. See the `attestation_server` help text for more
 # information about host verification.
@@ -9388,10 +9205,7 @@
 # * attestation_auth_blob
 # * attestation_auth_timeout
 #  (boolean value)
-# This option is deprecated for removal since Pike.
-# Its value may be silently ignored in the future.
-# Reason: Incomplete filter
-#attestation_insecure_ssl = false
+#attestation_insecure_ssl=false
 
 
 [upgrade_levels]
@@ -9438,34 +9252,34 @@
 # * An OpenStack release name, in lower case, such as 'mitaka' or
 #   'liberty'.
 #  (string value)
-#compute = <None>
+#compute=<None>
 
 # Cells RPC API version cap (string value)
-#cells = <None>
+#cells=<None>
 
 # Intercell RPC API version cap (string value)
-#intercell = <None>
+#intercell=<None>
 
 # Cert RPC API version cap (string value)
-#cert = <None>
+#cert=<None>
 
 # Scheduler RPC API version cap (string value)
-#scheduler = <None>
+#scheduler=<None>
 
 # Conductor RPC API version cap (string value)
-#conductor = <None>
+#conductor=<None>
 
 # Console RPC API version cap (string value)
-#console = <None>
+#console=<None>
 
 # Consoleauth RPC API version cap (string value)
-#consoleauth = <None>
+#consoleauth=<None>
 
 # Network RPC API version cap (string value)
-#network = <None>
+#network=<None>
 
 # Base API RPC API version cap (string value)
-#baseapi = <None>
+#baseapi=<None>
 
 
 [vendordata_dynamic_auth]
@@ -9479,82 +9293,82 @@
 
 # PEM encoded Certificate Authority to use when verifying HTTPS connections.
 # (string value)
-#cafile = <None>
+#cafile=<None>
 
 # PEM encoded client certificate cert file (string value)
-#certfile = <None>
+#certfile=<None>
 
 # PEM encoded client certificate key file (string value)
-#keyfile = <None>
+#keyfile=<None>
 
 # Verify HTTPS connections. (boolean value)
-#insecure = false
+#insecure=false
 
 # Timeout value for http requests (integer value)
-#timeout = <None>
+#timeout=<None>
 
 # Authentication type to load (string value)
 # Deprecated group/name - [vendordata_dynamic_auth]/auth_plugin
-#auth_type = <None>
+#auth_type=<None>
 
 # Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#auth_section=<None>
 
 # Authentication URL (string value)
-#auth_url = <None>
+#auth_url=<None>
 
 # Domain ID to scope to (string value)
-#domain_id = <None>
+#domain_id=<None>
 
 # Domain name to scope to (string value)
-#domain_name = <None>
+#domain_name=<None>
 
 # Project ID to scope to (string value)
-#project_id = <None>
+#project_id=<None>
 
 # Project name to scope to (string value)
-#project_name = <None>
+#project_name=<None>
 
 # Domain ID containing project (string value)
-#project_domain_id = <None>
+#project_domain_id=<None>
 
 # Domain name containing project (string value)
-#project_domain_name = <None>
+#project_domain_name=<None>
 
 # Trust ID (string value)
-#trust_id = <None>
+#trust_id=<None>
 
 # Optional domain ID to use with v3 and v2 parameters. It will be used for both
 # the user and project domain in v3 and ignored in v2 authentication. (string
 # value)
-#default_domain_id = <None>
+#default_domain_id=<None>
 
 # Optional domain name to use with v3 API and v2 parameters. It will be used for
 # both the user and project domain in v3 and ignored in v2 authentication.
 # (string value)
-#default_domain_name = <None>
+#default_domain_name=<None>
 
 # User ID (string value)
-#user_id = <None>
+#user_id=<None>
 
 # Username (string value)
-# Deprecated group/name - [vendordata_dynamic_auth]/user_name
-#username = <None>
+# Deprecated group/name - [vendordata_dynamic_auth]/user-name
+#username=<None>
 
 # User's domain id (string value)
-#user_domain_id = <None>
+#user_domain_id=<None>
 
 # User's domain name (string value)
-#user_domain_name = <None>
+#user_domain_name=<None>
 
 # User's password (string value)
-#password = <None>
+#password=<None>
 
 # Tenant ID (string value)
-#tenant_id = <None>
+#tenant_id=<None>
 
 # Tenant Name (string value)
-#tenant_name = <None>
+#tenant_name=<None>
 
 
 [vmware]
@@ -9583,7 +9397,7 @@
 #
 # * Any valid string representing VLAN interface name
 #  (string value)
-#vlan_interface = vmnic0
+#vlan_interface=vmnic0
 
 #
 # This option should be configured only when using the NSX-MH Neutron
@@ -9595,14 +9409,14 @@
 #
 # * Any valid string representing the name of the integration bridge
 #  (string value)
-#integration_bridge = <None>
+#integration_bridge=<None>
 
 #
 # Set this value if affected by an increased network latency causing
 # repeated characters when typing in a remote console.
 #  (integer value)
 # Minimum value: 0
-#console_delay_seconds = <None>
+#console_delay_seconds=<None>
 
 #
 # Identifies the remote system where the serial port traffic will
@@ -9617,7 +9431,7 @@
 #
 # * Any valid URI
 #  (string value)
-#serial_port_service_uri = <None>
+#serial_port_service_uri=<None>
 
 #
 # Identifies a proxy service that provides network access to the
@@ -9625,34 +9439,34 @@
 #
 # Possible values:
 #
-# * Any valid URI (The scheme is 'telnet' or 'telnets'.)
+# * Any valid URI
 #
 # Related options:
 # This option is ignored if serial_port_service_uri is not specified.
 # * serial_port_service_uri
-#  (uri value)
-#serial_port_proxy_uri = <None>
-
-#
-# Hostname or IP address for connection to VMware vCenter host. (unknown value)
-#host_ip = <None>
+#  (string value)
+#serial_port_proxy_uri=<None>
+
+#
+# Hostname or IP address for connection to VMware vCenter host. (string value)
+#host_ip=<None>
 
 # Port for connection to VMware vCenter host. (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#host_port = 443
+#host_port=443
 
 # Username for connection to VMware vCenter host. (string value)
-#host_username = <None>
+#host_username=<None>
 
 # Password for connection to VMware vCenter host. (string value)
-#host_password = <None>
+#host_password=<None>
 
 #
 # Specifies the CA bundle file to be used in verifying the vCenter
 # server certificate.
 #  (string value)
-#ca_file = <None>
+#ca_file=<None>
 
 #
 # If true, the vCenter server certificate is not verified. If false,
@@ -9661,10 +9475,10 @@
 # Related options:
 # * ca_file: This option is ignored if "ca_file" is set.
 #  (boolean value)
-#insecure = false
+#insecure=false
 
 # Name of a VMware Cluster ComputeResource. (string value)
-#cluster_name = <None>
+#cluster_name=<None>
 
 #
 # Regular expression pattern to match the name of datastore.
@@ -9680,20 +9494,20 @@
 #
 # * Any matching regular expression to a datastore must be given
 #  (string value)
-#datastore_regex = <None>
+#datastore_regex=<None>
 
 #
 # Time interval in seconds to poll remote tasks invoked on
 # VMware VC server.
 #  (floating point value)
-#task_poll_interval = 0.5
+#task_poll_interval=0.5
 
 #
 # Number of times VMware vCenter server API must be retried on connection
 # failures, e.g. socket error, etc.
 #  (integer value)
 # Minimum value: 0
-#api_retry_count = 10
+#api_retry_count=10
 
 #
 # This option specifies VNC starting port.
@@ -9713,13 +9527,13 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#vnc_port = 5900
+#vnc_port=5900
 
 #
 # Total number of VNC ports.
 #  (integer value)
 # Minimum value: 0
-#vnc_port_total = 10000
+#vnc_port_total=10000
 
 #
 # This option enables/disables the use of linked clone.
@@ -9737,7 +9551,27 @@
 # is avoided as it creates copy of the virtual machine that shares
 # virtual disks with its parent VM.
 #  (boolean value)
-#use_linked_clone = true
+#use_linked_clone=true
+
+# DEPRECATED:
+# This option specifies VIM Service WSDL Location
+#
+# If vSphere API version 5.1 or later is being used, this section can
+# be ignored. If version is less than 5.1, WSDL files must be hosted
+# locally and their location must be specified in the above section.
+#
+# Optional over-ride to default location for bug work-arounds.
+#
+# Possible values:
+#
+# * http://<server>/vimService.wsdl
+# * file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason: Only vCenter versions earlier than 5.1 require this option and the
+# current minimum version is 5.1.
+#wsdl_location=<None>
 
 #
 # This option enables or disables storage policy based placement
@@ -9747,7 +9581,7 @@
 #
 # * pbm_default_policy
 #  (boolean value)
-#pbm_enabled = false
+#pbm_enabled=false
 
 #
 # This option specifies the PBM service WSDL file location URL.
@@ -9760,7 +9594,7 @@
 # * Any valid file path
 #   e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl
 #  (string value)
-#pbm_wsdl_location = <None>
+#pbm_wsdl_location=<None>
 
 #
 # This option specifies the default policy to be used.
@@ -9776,7 +9610,7 @@
 #
 # * pbm_enabled
 #  (string value)
-#pbm_default_policy = <None>
+#pbm_default_policy=<None>
 
 #
 # This option specifies the limit on the maximum number of objects to
@@ -9788,7 +9622,7 @@
 # Any remaining objects may be retrieved with additional requests.
 #  (integer value)
 # Minimum value: 0
-#maximum_objects = 100
+#maximum_objects=100
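As the note above says, any remaining objects are retrieved with additional requests. A small illustrative paging loop (the fetch logic is invented for the sketch and is not Nova's actual vSphere retrieval code):

```python
# Hypothetical sketch of maximum_objects-style pagination: one large listing
# becomes repeated capped requests, as described in the option text above.
def fetch_all(items, maximum_objects=100):
    """Gather all objects in pages of at most maximum_objects each."""
    results, requests = [], 0
    offset = 0
    while offset < len(items):
        page = items[offset:offset + maximum_objects]  # one capped "request"
        results.extend(page)
        requests += 1
        offset += len(page)
    return results, requests

objs, n_requests = fetch_all(list(range(250)))
print(n_requests)  # 3 requests for 250 objects at the default cap of 100
```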
 
 #
 # This option adds a prefix to the folder where cached images are stored
@@ -9803,7 +9637,7 @@
 #
 # * Any string representing the cache prefix to the folder
 #  (string value)
-#cache_prefix = <None>
+#cache_prefix=<None>
 
 
 [vnc]
@@ -9814,6 +9648,12 @@
 #
 # From nova.conf
 #
+enabled = true
+novncproxy_base_url=https://172.30.10.101:6080/vnc_auto.html
+novncproxy_port=6080
+vncserver_listen=0.0.0.0
+vncserver_proxyclient_address=10.167.4.52
+keymap = en-us
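The uncommented `[vnc]` values added above are plain INI and can be read back with the standard library parser. This sketch reproduces just those lines as a string, rather than reading the live file, to show how they parse:

```python
import configparser

# A copy of the explicit [vnc] settings from the diff above, parsed with
# the stdlib INI parser that understands the same key = value syntax.
snippet = """
[vnc]
enabled = true
novncproxy_base_url = https://172.30.10.101:6080/vnc_auto.html
novncproxy_port = 6080
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.167.4.52
keymap = en-us
"""
cfg = configparser.ConfigParser()
cfg.read_string(snippet)
print(cfg.getboolean("vnc", "enabled"))      # True
print(cfg.getint("vnc", "novncproxy_port"))  # 6080
```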
 
 #
 # Enable VNC related features.
@@ -9822,7 +9662,7 @@
 # (for example Horizon) can then establish a VNC connection to the guest.
 #  (boolean value)
 # Deprecated group/name - [DEFAULT]/vnc_enabled
-#enabled = true
+#enabled=true
 
 #
 # Keymap for VNC.
@@ -9838,13 +9678,14 @@
 #   of supported keyboard layouts at ``/usr/share/qemu/keymaps``.
 #  (string value)
 # Deprecated group/name - [DEFAULT]/vnc_keymap
-#keymap = en-us
+#keymap=en-us
 
 #
 # The IP address or hostname on which an instance should listen to for
 # incoming VNC connection requests on this node.
-#  (unknown value)
-#vncserver_listen = 127.0.0.1
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vncserver_listen
+#vncserver_listen=127.0.0.1
 
 #
 # Private, internal IP address or hostname of VNC console proxy.
@@ -9854,8 +9695,9 @@
 #
 # This option sets the private address to which proxy clients, such as
 # ``nova-xvpvncproxy``, should connect.
-#  (unknown value)
-#vncserver_proxyclient_address = 127.0.0.1
+#  (string value)
+# Deprecated group/name - [DEFAULT]/vncserver_proxyclient_address
+#vncserver_proxyclient_address=127.0.0.1
 
 #
 # Public address of noVNC VNC console proxy.
@@ -9873,7 +9715,8 @@
 # * novncproxy_host
 # * novncproxy_port
 #  (uri value)
-#novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html
+# Deprecated group/name - [DEFAULT]/novncproxy_base_url
+#novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html
 
 #
 # IP address or hostname that the XVP VNC console proxy should bind to.
@@ -9891,8 +9734,9 @@
 #
 # * xvpvncproxy_port
 # * xvpvncproxy_base_url
-#  (unknown value)
-#xvpvncproxy_host = 0.0.0.0
+#  (string value)
+# Deprecated group/name - [DEFAULT]/xvpvncproxy_host
+#xvpvncproxy_host=0.0.0.0
 
 #
 # Port that the XVP VNC console proxy should bind to.
@@ -9913,7 +9757,8 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#xvpvncproxy_port = 6081
+# Deprecated group/name - [DEFAULT]/xvpvncproxy_port
+#xvpvncproxy_port=6081
 
 #
 # Public URL address of XVP VNC console proxy.
@@ -9933,7 +9778,8 @@
 # * xvpvncproxy_host
 # * xvpvncproxy_port
 #  (uri value)
-#xvpvncproxy_base_url = http://127.0.0.1:6081/console
+# Deprecated group/name - [DEFAULT]/xvpvncproxy_base_url
+#xvpvncproxy_base_url=http://127.0.0.1:6081/console
 
 #
 # IP address that the noVNC console proxy should bind to.
@@ -9950,7 +9796,8 @@
 # * novncproxy_port
 # * novncproxy_base_url
 #  (string value)
-#novncproxy_host = 0.0.0.0
+# Deprecated group/name - [DEFAULT]/novncproxy_host
+#novncproxy_host=0.0.0.0
 
 #
 # Port that the noVNC console proxy should bind to.
@@ -9969,7 +9816,8 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#novncproxy_port = 6080
+# Deprecated group/name - [DEFAULT]/novncproxy_port
+#novncproxy_port=6080
 
 
 [workarounds]
@@ -10001,7 +9849,7 @@
 #
 # * Any options that affect 'rootwrap' will be ignored.
 #  (boolean value)
-#disable_rootwrap = false
+#disable_rootwrap=false
 
 #
 # Disable live snapshots when using the libvirt driver.
@@ -10026,7 +9874,8 @@
 # * False: Live snapshots are always used when snapshotting (as long as
 #   there is a new enough libvirt and the backend storage supports it)
 #  (boolean value)
-#disable_libvirt_livesnapshot = true
+#disable_libvirt_livesnapshot=true
+disable_libvirt_livesnapshot=true
 
 #
 # Enable handling of events emitted from compute drivers.
@@ -10058,24 +9907,7 @@
 #   then instances that get out of sync between the hypervisor and the Nova
 #   database will have to be synchronized manually.
 #  (boolean value)
-#handle_virt_lifecycle_events = true
-
-#
-# Disable the server group policy check upcall in compute.
-#
-# In order to detect races with server group affinity policy, the compute
-# service attempts to validate that the policy was not violated by the
-# scheduler. It does this by making an upcall to the API database to list
-# the instances in the server group for one that it is booting, which violates
-# our api/cell isolation goals. Eventually this will be solved by proper
-# affinity
-# guarantees in the scheduler and placement service, but until then, this late
-# check is needed to ensure proper affinity policy.
-#
-# Operators that desire api/cell isolation over this check should
-# enable this flag, which will avoid making that upcall from compute.
-#  (boolean value)
-#disable_group_policy_check_upcall = false
+#handle_virt_lifecycle_events=true
 
 
 [wsgi]
@@ -10094,16 +9926,15 @@
 #
 # * A string representing file name for the paste.deploy config.
 #  (string value)
-#api_paste_config = api-paste.ini
-
-# DEPRECATED:
+# Deprecated group/name - [DEFAULT]/api_paste_config
+api_paste_config=/etc/nova/api-paste.ini
+
+#
 # It represents a python format string that is used as the template to generate
 # log lines. The following values can be formatted into it: client_ip,
 # date_time, request_line, status_code, body_length, wall_seconds.
 #
-# This option is used for building custom request loglines when running
-# nova-api under eventlet. If used under uwsgi or apache, this option
-# has no effect.
+# This option is used for building custom request loglines.
 #
 # Possible values:
 #
@@ -10111,14 +9942,8 @@
 #   'len: %(body_length)s time: %(wall_seconds).7f' (default)
 # * Any formatted string formed by specific values.
 #  (string value)
-# This option is deprecated for removal since 16.0.0.
-# Its value may be silently ignored in the future.
-# Reason:
-# This option only works when running nova-api under eventlet, and
-# encodes very eventlet specific pieces of information. Starting in Pike
-# the preferred model for running nova-api is under uwsgi or apache
-# mod_wsgi.
-#wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
+# Deprecated group/name - [DEFAULT]/wsgi_log_format
+#wsgi_log_format=%(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
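The commented default above shows the template in full. It is an ordinary Python %-style format string; this sketch renders it with invented sample values for the six documented keys:

```python
# The default wsgi_log_format template, rendered with made-up sample values
# for the six keys the option text documents.
fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
       'len: %(body_length)s time: %(wall_seconds).7f')
line = fmt % {
    "client_ip": "10.0.0.1",
    "request_line": "GET /v2.1/servers HTTP/1.1",
    "status_code": 200,
    "body_length": 1536,
    "wall_seconds": 0.0123456,
}
print(line)
```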
 
 #
 # This option specifies the HTTP header used to determine the protocol scheme
@@ -10126,25 +9951,11 @@
 #
 # Possible values:
 #
-# * None (default) - the request scheme is not influenced by any HTTP headers
+# * None (default) - the request scheme is not influenced by any HTTP headers.
 # * Valid HTTP header, like HTTP_X_FORWARDED_PROTO
-#
-# WARNING: Do not set this unless you know what you are doing.
-#
-# Make sure ALL of the following are true before setting this (assuming the
-# values from the example above):
-# * Your API is behind a proxy.
-# * Your proxy strips the X-Forwarded-Proto header from all incoming requests.
-#   In other words, if end users include that header in their requests, the
-# proxy
-#   will discard it.
-# * Your proxy sets the X-Forwarded-Proto header and sends it to API, but only
-#   for requests that originally come in via HTTPS.
-#
-# If any of those are not true, you should keep this setting set to None.
-#
-#  (string value)
-#secure_proxy_ssl_header = <None>
+#  (string value)
+# Deprecated group/name - [DEFAULT]/secure_proxy_ssl_header
+#secure_proxy_ssl_header=<None>
 
 #
 # This option allows setting path to the CA certificate file that should be used
@@ -10158,7 +9969,8 @@
 #
 # * enabled_ssl_apis
 #  (string value)
-#ssl_ca_file = <None>
+# Deprecated group/name - [DEFAULT]/ssl_ca_file
+#ssl_ca_file=<None>
 
 #
 # This option allows setting path to the SSL certificate of API server.
@@ -10171,7 +9983,8 @@
 #
 # * enabled_ssl_apis
 #  (string value)
-#ssl_cert_file = <None>
+# Deprecated group/name - [DEFAULT]/ssl_cert_file
+#ssl_cert_file=<None>
 
 #
 # This option specifies the path to the file where SSL private key of API
@@ -10185,7 +9998,8 @@
 #
 # * enabled_ssl_apis
 #  (string value)
-#ssl_key_file = <None>
+# Deprecated group/name - [DEFAULT]/ssl_key_file
+#ssl_key_file=<None>
 
 #
 # This option sets the value of TCP_KEEPIDLE in seconds for each server socket.
@@ -10198,7 +10012,8 @@
 # * keep_alive
 #  (integer value)
 # Minimum value: 0
-#tcp_keepidle = 600
+# Deprecated group/name - [DEFAULT]/tcp_keepidle
+#tcp_keepidle=600
 
 #
 # This option specifies the size of the pool of greenthreads used by wsgi.
@@ -10207,7 +10022,7 @@
 #  (integer value)
 # Minimum value: 0
 # Deprecated group/name - [DEFAULT]/wsgi_default_pool_size
-#default_pool_size = 1000
+#default_pool_size=1000
 
 #
 # This option specifies the maximum line size of message headers to be accepted.
@@ -10220,7 +10035,8 @@
 # self-defined message length.
 #  (integer value)
 # Minimum value: 0
-#max_header_line = 16384
+# Deprecated group/name - [DEFAULT]/max_header_line
+#max_header_line=16384
 
 #
 # This option allows using the same TCP connection to send and receive multiple
@@ -10237,7 +10053,7 @@
 # * tcp_keepidle
 #  (boolean value)
 # Deprecated group/name - [DEFAULT]/wsgi_keep_alive
-#keep_alive = true
+#keep_alive=true
 
 #
 # This option specifies the timeout for client connections' socket operations.
@@ -10246,7 +10062,8 @@
 # connection. To wait forever set to 0.
 #  (integer value)
 # Minimum value: 0
-#client_socket_timeout = 900
+# Deprecated group/name - [DEFAULT]/client_socket_timeout
+#client_socket_timeout=900
 
 
 [xenserver]
@@ -10293,7 +10110,7 @@
 #
 #  (integer value)
 # Minimum value: 0
-#agent_timeout = 30
+#agent_timeout=30
 
 #
 # Number of seconds to wait for agent's reply to version request.
@@ -10311,7 +10128,7 @@
 # operational.
 #  (integer value)
 # Minimum value: 0
-#agent_version_timeout = 300
+#agent_version_timeout=300
 
 #
 # Number of seconds to wait for agent's reply to resetnetwork
@@ -10322,7 +10139,7 @@
 # agent communication ``agent_timeout`` is ignored in this case.
 #  (integer value)
 # Minimum value: 0
-#agent_resetnetwork_timeout = 60
+#agent_resetnetwork_timeout=60
 
 #
 # Path to locate guest agent on the server.
@@ -10337,7 +10154,7 @@
 # * ``compute_driver`` should be set to ``xenapi.XenAPIDriver``
 #
 #  (string value)
-#agent_path = usr/sbin/xe-update-networking
+#agent_path=usr/sbin/xe-update-networking
 
 #
 # Disables the use of XenAPI agent.
@@ -10352,7 +10169,7 @@
 # * ``use_agent_default``
 #
 #  (boolean value)
-#disable_agent = false
+#disable_agent=false
 
 #
 # Whether or not to use the agent by default when its usage is enabled but not
@@ -10373,11 +10190,11 @@
 # * ``disable_agent``
 #
 #  (boolean value)
-#use_agent_default = false
+#use_agent_default=false
 
 # Timeout in seconds for XenAPI login. (integer value)
 # Minimum value: 0
-#login_timeout = 10
+#login_timeout=10
 
 #
 # Maximum number of concurrent XenAPI connections.
@@ -10387,7 +10204,94 @@
 # session, which allows you to make concurrent XenAPI connections.
 #  (integer value)
 # Minimum value: 1
-#connection_concurrent = 5
+#connection_concurrent=5
+
+# DEPRECATED:
+# Base URL for torrent files; must contain a slash character (see RFC 1808,
+# step 6).
+#  (string value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_base_url=<None>
+
+# DEPRECATED: Probability that peer will become a seeder (1.0 = 100%) (floating
+# point value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_seed_chance=1.0
+
+# DEPRECATED:
+# Number of seconds after downloading an image via BitTorrent that it should
+# be seeded for other peers.
+#  (integer value)
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_seed_duration=3600
+
+# DEPRECATED:
+# Cached torrent files not accessed within this number of seconds can be reaped.
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_max_last_accessed=86400
+
+# DEPRECATED: Beginning of port range to listen on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_listen_port_start=6881
+
+# DEPRECATED: End of port range to listen on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_listen_port_end=6891
+
+# DEPRECATED:
+# Number of seconds a download can remain at the same progress percentage w/o
+# being considered a stall.
+#  (integer value)
+# Minimum value: 0
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_download_stall_cutoff=600
+
+# DEPRECATED:
+# Maximum number of seeder processes to run concurrently within a given dom0
+# (-1 = no limit).
+#  (integer value)
+# Minimum value: -1
+# This option is deprecated for removal since 15.0.0.
+# Its value may be silently ignored in the future.
+# Reason:
+# The torrent feature has not been tested nor maintained, and as such is being
+# removed.
+#torrent_max_seeder_processes_per_host=1
 
 #
 # Cache glance images locally.
@@ -10405,7 +10309,7 @@
 # * `none`: turns off caching entirely.
 #  (string value)
 # Allowed values: all, some, none
-#cache_images = all
+#cache_images=all
 
 #
 # Compression level for images.
@@ -10424,14 +10328,14 @@
 #  (integer value)
 # Minimum value: 1
 # Maximum value: 9
-#image_compression_level = <None>
+#image_compression_level=<None>
 
 # Default OS type used when uploading an image to glance (string value)
-#default_os_type = linux
+#default_os_type=linux
 
 # Time in secs to wait for a block device to be created (integer value)
 # Minimum value: 1
-#block_device_creation_timeout = 10
+#block_device_creation_timeout=10
 
 #
 # Maximum size in bytes of kernel or ramdisk images.
@@ -10439,7 +10343,7 @@
 # Specifying the maximum size of kernel or ramdisk will avoid copying
 # large files to dom0 and fill up /boot/guest.
 #  (integer value)
-#max_kernel_ramdisk_size = 16777216
+#max_kernel_ramdisk_size=16777216
 
 #
 # Filter for finding the SR to be used to install guest instances on.
@@ -10453,21 +10357,37 @@
 # * To fall back on the Default SR, as displayed by XenCenter,
 #   set this flag to: default-sr:true.
 #  (string value)
-#sr_matching_filter = default-sr:true
+#sr_matching_filter=default-sr:true
 
 #
 # Whether to use sparse_copy for copying data on a resize down.
 # (False will use standard dd). This speeds up resizes down
 # considerably since large runs of zeros won't have to be rsynced.
 #  (boolean value)
-#sparse_copy = true
+#sparse_copy=true
 
 #
 # Maximum number of retries to unplug VBD.
 # If set to 0, should try once, no retries.
 #  (integer value)
 # Minimum value: 0
-#num_vbd_unplug_retries = 10
+#num_vbd_unplug_retries=10
+
+#
+# Whether or not to download images via Bit Torrent.
+#
+# The value for this option must be chosen from the choices listed
+# here. Configuring a value other than these will default to 'none'.
+#
+# Possible values:
+#
+# * `all`: will download all images.
+# * `some`: will only download images that have the image_property
+#           `bittorrent=true`.
+# * `none`: will turn off downloading images via Bit Torrent.
+#  (string value)
+# Allowed values: all, some, none
+#torrent_images=none
 
 #
 # Name of network to use for booting iPXE ISOs.
@@ -10483,7 +10403,7 @@
 # * `ipxe_boot_menu_url`
 # * `ipxe_mkisofs_cmd`
 #  (string value)
-#ipxe_network_name = <None>
+#ipxe_network_name=<None>
 
 #
 # URL to the iPXE boot menu.
@@ -10499,7 +10419,7 @@
 # * `ipxe_network_name`
 # * `ipxe_mkisofs_cmd`
 #  (string value)
-#ipxe_boot_menu_url = <None>
+#ipxe_boot_menu_url=<None>
 
 #
 # Name and optionally path of the tool used for ISO image creation.
@@ -10516,7 +10436,7 @@
 # * `ipxe_network_name`
 # * `ipxe_boot_menu_url`
 #  (string value)
-#ipxe_mkisofs_cmd = mkisofs
+#ipxe_mkisofs_cmd=mkisofs
 
 #
 # URL for connection to XenServer/Xen Cloud Platform. A special value
@@ -10528,13 +10448,13 @@
 #   generally the management network IP address of the XenServer.
 # * This option must be set if you chose the XenServer driver.
 #  (string value)
-#connection_url = <None>
+#connection_url=<None>
 
 # Username for connection to XenServer/Xen Cloud Platform (string value)
-#connection_username = root
+#connection_username=root
 
 # Password for connection to XenServer/Xen Cloud Platform (string value)
-#connection_password = <None>
+#connection_password=<None>
 
 #
 # The interval used for polling of coalescing vhds.
@@ -10548,7 +10468,7 @@
 # * `vhd_coalesce_max_attempts`
 #  (floating point value)
 # Minimum value: 0
-#vhd_coalesce_poll_interval = 5.0
+#vhd_coalesce_poll_interval=5.0
 
 #
 # Ensure compute service is running on host XenAPI connects to.
@@ -10565,7 +10485,7 @@
 #
 # * `independent_compute`
 #  (boolean value)
-#check_host = true
+#check_host=true
 
 #
 # Max number of times to poll for VHD to coalesce.
@@ -10578,10 +10498,10 @@
 # * `vhd_coalesce_poll_interval`
 #  (integer value)
 # Minimum value: 0
-#vhd_coalesce_max_attempts = 20
+#vhd_coalesce_max_attempts=20
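Taken together, the two coalesce options bound the total wait at roughly `vhd_coalesce_poll_interval * vhd_coalesce_max_attempts` seconds. A hedged sketch of that loop, where `check` stands in for the real XenAPI coalesce-status query and sleeping is elided:

```python
# Illustrative poll loop combining the two options above; not Nova's
# actual implementation. `check` is a placeholder status callable.
def wait_for_coalesce(check, poll_interval=5.0, max_attempts=20):
    """Poll until check() is truthy or max_attempts is exhausted."""
    for attempt in range(max_attempts):
        if check():
            return attempt  # coalesced on this attempt
        # time.sleep(poll_interval) would go here in a real loop
    raise TimeoutError("VHD did not coalesce after %.1f seconds"
                       % (poll_interval * max_attempts))
```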
 
 # Base path to the storage repository on the XenServer host. (string value)
-#sr_base_path = /var/run/sr-mount
+#sr_base_path=/var/run/sr-mount
 
 #
 # The iSCSI Target Host.
@@ -10593,8 +10513,8 @@
 # Possible values:
 #
 # * Any string that represents hostname/ip of Target.
-#  (unknown value)
-#target_host = <None>
+#  (string value)
+#target_host=<None>
 
 #
 # The iSCSI Target Port.
@@ -10605,7 +10525,7 @@
 #  (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#target_port = 3260
+#target_port=3260
 
 # DEPRECATED:
 # Used to enable the remapping of VBD dev.
@@ -10617,7 +10537,7 @@
 # This option provided a workaround for issues in Ubuntu Maverick, which
 # was released in April 2010 and was dropped from support in April 2012.
 # There's no reason to continue supporting this option.
-#remap_vbd_dev = false
+#remap_vbd_dev=false
 
 #
 # Specify prefix to remap VBD dev to (ex. /dev/xvdb -> /dev/sdb).
@@ -10626,7 +10546,7 @@
 #
 # * If `remap_vbd_dev` is set to False this option has no impact.
 #  (string value)
-#remap_vbd_dev_prefix = sd
+#remap_vbd_dev_prefix=sd
 
 #
 # Used to prevent attempts to attach VBDs locally, so Nova can
@@ -10641,7 +10561,7 @@
 # * Swap disks for Windows VMs (will error if attempted)
 # * Nova-based auto_configure_disk (will error if attempted)
 #  (boolean value)
-#independent_compute = false
+#independent_compute=false
 
 #
 # Wait time for instances to go to running state.
@@ -10661,7 +10581,7 @@
 # state.
 #  (integer value)
 # Minimum value: 0
-#running_timeout = 60
+#running_timeout=60
 
 # DEPRECATED:
 # The XenAPI VIF driver using XenServer Network APIs.
@@ -10695,7 +10615,7 @@
 # which is the default configuration for Nova since the 15.0.0 Ocata release. In
 # the future the "use_neutron" configuration option will be used to determine
 # which vif driver to use.
-#vif_driver = nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
+#vif_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
 
 #
 # Dom0 plugin driver used to handle image uploads.
@@ -10708,7 +10628,7 @@
 # plugin driver. This driver is then called to upload images to the
 # GlanceStore.
 #  (string value)
-#image_upload_handler = nova.virt.xenapi.image.glance.GlanceStore
+#image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
 
 #
 # Number of seconds to wait for SR to settle if the VDI
@@ -10720,7 +10640,7 @@
 # before raising VDI not found exception.
 #  (integer value)
 # Minimum value: 0
-#introduce_vdi_retry_wait = 20
+#introduce_vdi_retry_wait=20
 
 #
 # The name of the integration Bridge that is used with xenapi
@@ -10734,7 +10654,7 @@
 #
 # * Any string that represents a bridge name.
 #  (string value)
-#ovs_integration_bridge = <None>
+#ovs_integration_bridge=<None>
 
 #
 # When adding a new host to a pool, this will append a --force flag to the
@@ -10747,16 +10667,17 @@
 # Despite this effort to level differences between CPUs, it is still possible
 # that adding a new host will fail, thus the option to force join was introduced.
 #  (boolean value)
-#use_join_force = true
+#use_join_force=true
 
 #
 # Publicly visible name for this console host.
 #
 # Possible values:
 #
-# * Current hostname (default) or any string representing hostname.
-#  (string value)
-#console_public_hostname = <current_hostname>
+# * A string representing a valid hostname
+#  (string value)
+# Deprecated group/name - [DEFAULT]/console_public_hostname
+#console_public_hostname=lcy01-22
 
 
 [xvp]
@@ -10771,18 +10692,23 @@
 #
 
 # XVP conf template (string value)
-#console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template
+# Deprecated group/name - [DEFAULT]/console_xvp_conf_template
+#console_xvp_conf_template=$pybasedir/nova/console/xvp.conf.template
 
 # Generated XVP conf file (string value)
-#console_xvp_conf = /etc/xvp.conf
+# Deprecated group/name - [DEFAULT]/console_xvp_conf
+#console_xvp_conf=/etc/xvp.conf
 
 # XVP master process pid file (string value)
-#console_xvp_pid = /var/run/xvp.pid
+# Deprecated group/name - [DEFAULT]/console_xvp_pid
+#console_xvp_pid=/var/run/xvp.pid
 
 # XVP log file (string value)
-#console_xvp_log = /var/log/xvp.log
+# Deprecated group/name - [DEFAULT]/console_xvp_log
+#console_xvp_log=/var/log/xvp.log
 
 # Port for XVP to multiplex VNC connections on (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#console_xvp_multiplex_port = 5900
+# Deprecated group/name - [DEFAULT]/console_xvp_multiplex_port
+#console_xvp_multiplex_port=5900

2018-03-30 06:49:24,544 [salt.state       ][INFO    ][1516] Completed state [/etc/nova/nova.conf] at time 06:49:24.544659 duration_in_ms=377.929
2018-03-30 06:49:24,545 [salt.state       ][INFO    ][1516] Running state [/etc/default/nova-compute] at time 06:49:24.545288
2018-03-30 06:49:24,545 [salt.state       ][INFO    ][1516] Executing state file.managed for /etc/default/nova-compute
2018-03-30 06:49:24,567 [salt.fileclient  ][INFO    ][1516] Fetching file from saltenv 'base', ** done ** 'nova/files/default'
2018-03-30 06:49:24,570 [salt.state       ][INFO    ][1516] File changed:
New file
2018-03-30 06:49:24,570 [salt.state       ][INFO    ][1516] Completed state [/etc/default/nova-compute] at time 06:49:24.570830 duration_in_ms=25.542
2018-03-30 06:49:24,572 [salt.state       ][INFO    ][1516] Running state [nova-compute] at time 06:49:24.572656
2018-03-30 06:49:24,572 [salt.state       ][INFO    ][1516] Executing state service.running for nova-compute
2018-03-30 06:49:24,573 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'status', 'nova-compute.service', '-n', '0'] in directory '/root'
2018-03-30 06:49:24,593 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'is-active', 'nova-compute.service'] in directory '/root'
2018-03-30 06:49:24,611 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'is-enabled', 'nova-compute.service'] in directory '/root'
2018-03-30 06:49:24,633 [salt.state       ][INFO    ][1516] The service nova-compute is already running
2018-03-30 06:49:24,633 [salt.state       ][INFO    ][1516] Completed state [nova-compute] at time 06:49:24.633896 duration_in_ms=61.24
2018-03-30 06:49:24,634 [salt.state       ][INFO    ][1516] Running state [nova-compute] at time 06:49:24.634155
2018-03-30 06:49:24,634 [salt.state       ][INFO    ][1516] Executing state service.mod_watch for nova-compute
2018-03-30 06:49:24,635 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'is-active', 'nova-compute.service'] in directory '/root'
2018-03-30 06:49:24,654 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'is-enabled', 'nova-compute.service'] in directory '/root'
2018-03-30 06:49:24,674 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'nova-compute.service'] in directory '/root'
2018-03-30 06:49:24,725 [salt.state       ][INFO    ][1516] {'nova-compute': True}
2018-03-30 06:49:24,725 [salt.state       ][INFO    ][1516] Completed state [nova-compute] at time 06:49:24.725360 duration_in_ms=91.206
2018-03-30 06:49:24,725 [salt.state       ][INFO    ][1516] Running state [/etc/default/libvirtd] at time 06:49:24.725884
2018-03-30 06:49:24,726 [salt.state       ][INFO    ][1516] Executing state file.managed for /etc/default/libvirtd
2018-03-30 06:49:24,748 [salt.fileclient  ][INFO    ][1516] Fetching file from saltenv 'base', ** done ** 'nova/files/pike/libvirt.Debian'
2018-03-30 06:49:24,754 [salt.state       ][INFO    ][1516] File changed:
--- 
+++ 
@@ -1,17 +1,13 @@
-# Defaults for libvirtd initscript (/etc/init.d/libvirtd)
+# Defaults for libvirt-bin initscript (/etc/init.d/libvirt-bin)
 # This is a POSIX shell fragment
 
 # Start libvirtd to handle qemu/kvm:
 start_libvirtd="yes"
 
 # options passed to libvirtd, add "-l" to listen on tcp
-#libvirtd_opts=""
-
+# Don't use "-d" option with systemd
+libvirtd_opts="-l"
+LIBVIRTD_ARGS="--listen"
 # pass in location of kerberos keytab
 #export KRB5_KTNAME=/etc/libvirt/libvirt.keytab
 
-# Whether to mount a systemd like cgroup layout (only
-# useful when not running systemd)
-#mount_cgroups=yes
-# Which cgroups to mount
-#cgroups="memory devices"

2018-03-30 06:49:24,754 [salt.state       ][INFO    ][1516] Completed state [/etc/default/libvirtd] at time 06:49:24.754765 duration_in_ms=28.879
2018-03-30 06:49:24,756 [salt.state       ][INFO    ][1516] Running state [service.systemctl_reload] at time 06:49:24.756297
2018-03-30 06:49:24,756 [salt.state       ][INFO    ][1516] Executing state module.wait for service.systemctl_reload
2018-03-30 06:49:24,756 [salt.state       ][INFO    ][1516] No changes made for service.systemctl_reload
2018-03-30 06:49:24,757 [salt.state       ][INFO    ][1516] Completed state [service.systemctl_reload] at time 06:49:24.757022 duration_in_ms=0.725
2018-03-30 06:49:24,757 [salt.state       ][INFO    ][1516] Running state [service.systemctl_reload] at time 06:49:24.757206
2018-03-30 06:49:24,757 [salt.state       ][INFO    ][1516] Executing state module.mod_watch for service.systemctl_reload
2018-03-30 06:49:24,757 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', '--system', 'daemon-reload'] in directory '/root'
2018-03-30 06:49:24,853 [salt.state       ][INFO    ][1516] {'ret': True}
2018-03-30 06:49:24,853 [salt.state       ][INFO    ][1516] Completed state [service.systemctl_reload] at time 06:49:24.853885 duration_in_ms=96.677
2018-03-30 06:49:24,855 [salt.state       ][INFO    ][1516] Running state [/etc/libvirt/qemu.conf] at time 06:49:24.855044
2018-03-30 06:49:24,855 [salt.state       ][INFO    ][1516] Executing state file.managed for /etc/libvirt/qemu.conf
2018-03-30 06:49:24,882 [salt.fileclient  ][INFO    ][1516] Fetching file from saltenv 'base', ** done ** 'nova/files/pike/qemu.conf.Debian'
2018-03-30 06:49:25,011 [salt.state       ][INFO    ][1516] File changed:
--- 
+++ 
@@ -1,52 +1,8 @@
+
 # Master configuration file for the QEMU driver.
 # All settings described here are optional - if omitted, sensible
 # defaults are used.
 
-# Use of TLS requires that x509 certificates be issued. The default is
-# to keep them in /etc/pki/qemu. This directory must contain
-#
-#  ca-cert.pem - the CA master certificate
-#  server-cert.pem - the server certificate signed with ca-cert.pem
-#  server-key.pem  - the server private key
-#
-# and optionally may contain
-#
-#  dh-params.pem - the DH params configuration file
-#
-#default_tls_x509_cert_dir = "/etc/pki/qemu"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client who does not have a
-# certificate signed by the CA in /etc/pki/qemu/ca-cert.pem
-#
-# The default_tls_x509_cert_dir directory must also contain
-#
-#  client-cert.pem - the client certificate signed with the ca-cert.pem
-#  client-key.pem - the client private key
-#
-#default_tls_x509_verify = 1
-
-#
-# Libvirt assumes the server-key.pem file is unencrypted by default.
-# To use an encrypted server-key.pem file, the password to decrypt
-# the PEM file is required. This can be provided by creating a secret
-# object in libvirt and then to uncomment this setting to set the UUID
-# of the secret.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#default_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
-
-
 # VNC is configured to listen on 127.0.0.1 by default.
 # To make it listen on all public interfaces, uncomment
 # this next option.
@@ -60,9 +16,9 @@
 # unix socket. This prevents unprivileged access from users on the
 # host machine, though most VNC clients do not support it.
 #
-# This will only be enabled for VNC configurations that have listen
-# type=address but without any address specified. This setting takes
-# preference over vnc_listen.
+# This will only be enabled for VNC configurations that do not have
+# a hardcoded 'listen' or 'socket' value. This setting takes preference
+# over vnc_listen.
 #
 #vnc_auto_unix_socket = 1
 
@@ -77,10 +33,15 @@
 #vnc_tls = 1
 
 
-# In order to override the default TLS certificate location for
-# vnc certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist then the default_tls_x509_cert_dir
-# path will be used.
+# Use of TLS requires that x509 certificates be issued. The
+# default is to keep them in /etc/pki/libvirt-vnc. This directory
+# must contain
+#
+#  ca-cert.pem - the CA master certificate
+#  server-cert.pem - the server certificate signed with ca-cert.pem
+#  server-key.pem  - the server private key
+#
+# This option allows the certificate directory to be changed
 #
 #vnc_tls_x509_cert_dir = "/etc/pki/libvirt-vnc"
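The three PEM files listed above can be generated for testing with a throwaway CA. A minimal sketch, assuming `openssl` is available; the scratch directory and the `/CN=Test CA` and `/CN=cmp001` subjects are hypothetical stand-ins (a real deployment would place properly issued certificates in /etc/pki/libvirt-vnc):

```shell
# Create a scratch CA and a server certificate signed by it (test-only).
dir=$(mktemp -d)
cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca-key.pem -out ca-cert.pem -subj "/CN=Test CA" 2>/dev/null
openssl req -newkey rsa:2048 -nodes \
    -keyout server-key.pem -out server.csr -subj "/CN=cmp001" 2>/dev/null
openssl x509 -req -in server.csr -CA ca-cert.pem -CAkey ca-key.pem \
    -CAcreateserial -days 365 -out server-cert.pem 2>/dev/null
# The directory now holds ca-cert.pem, server-cert.pem and server-key.pem,
# i.e. the layout vnc_tls_x509_cert_dir expects.
openssl verify -CAfile ca-cert.pem server-cert.pem
```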
 
@@ -90,15 +51,10 @@
 # an encrypted channel.
 #
 # It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the vnc_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
+# issuing an x509 certificate to every client who needs to connect.
+#
+# Enabling this option will reject any client who does not have a
+# certificate signed by the CA in /etc/pki/libvirt-vnc/ca-cert.pem
 #
 #vnc_tls_x509_verify = 1
 
@@ -162,23 +118,17 @@
 #spice_tls = 1
 
 
-# In order to override the default TLS certificate location for
-# spice certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist then the default_tls_x509_cert_dir
-# path will be used.
+# Use of TLS requires that x509 certificates be issued. The
+# default is to keep them in /etc/pki/libvirt-spice. This directory
+# must contain
+#
+#  ca-cert.pem - the CA master certificate
+#  server-cert.pem - the server certificate signed with ca-cert.pem
+#  server-key.pem  - the server private key
+#
+# This option allows the certificate directory to be changed.
 #
 #spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
-
-
-# Enable this option to have SPICE served over an automatically created
-# unix socket. This prevents unprivileged access from users on the
-# host machine.
-#
-# This will only be enabled for SPICE configurations that have listen
-# type=address but without any address specified. This setting takes
-# preference over spice_listen.
-#
-#spice_auto_unix_socket = 1
 
 
 # The default SPICE password. This parameter is only used if the
@@ -205,86 +155,6 @@
 # point to the directory, and create a qemu.conf in that location
 #
 #spice_sasl_dir = "/some/directory/sasl2"
-
-# Enable use of TLS encryption on the chardev TCP transports.
-#
-# It is necessary to setup CA and issue a server certificate
-# before enabling this.
-#
-#chardev_tls = 1
-
-
-# In order to override the default TLS certificate location for character
-# device TCP certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist then the default_tls_x509_cert_dir
-# path will be used.
-#
-#chardev_tls_x509_cert_dir = "/etc/pki/libvirt-chardev"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the chardev_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
-#
-#chardev_tls_x509_verify = 1
-
-
-# Uncomment and use the following option to override the default secret
-# UUID provided in the default_tls_x509_secret_uuid parameter.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#chardev_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
-
-
-# In order to override the default TLS certificate location for migration
-# certificates, supply a valid path to the certificate directory. If the
-# provided path does not exist then the default_tls_x509_cert_dir path
-# will be used. Once/if a default certificate is enabled/defined, migration
-# will then be able to use the certificate via migration API flags.
-#
-#migrate_tls_x509_cert_dir = "/etc/pki/libvirt-migrate"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the migrate_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
-#
-#migrate_tls_x509_verify = 1
-
-
-# Uncomment and use the following option to override the default secret
-# UUID provided in the default_tls_x509_secret_uuid parameter.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#migrate_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
 
 
 # By default, if no graphical front end is configured, libvirt will disable
@@ -368,7 +238,6 @@
 # Set to 0 to disable file ownership changes.
 #dynamic_ownership = 1
 
-
 # What cgroup controllers to make use of with QEMU guests
 #
 #  - 'cpu' - use for scheduler tunables
@@ -403,19 +272,11 @@
 #    "/dev/null", "/dev/full", "/dev/zero",
 #    "/dev/random", "/dev/urandom",
 #    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
-#    "/dev/rtc","/dev/hpet"
+#    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio"
 #]
-#
-# RDMA migration requires the following extra files to be added to the list:
-#   "/dev/infiniband/rdma_cm",
-#   "/dev/infiniband/issm0",
-#   "/dev/infiniband/issm1",
-#   "/dev/infiniband/umad0",
-#   "/dev/infiniband/umad1",
-#   "/dev/infiniband/uverbs0"
-
-
-# The default format for QEMU/KVM guest save images is raw; that is, the
+
+
+# The default format for QEMU/KVM guest save images is raw; that is, the
 # memory from the domain is dumped out directly to a file.  If you have
 # guests with a large amount of memory, however, this can take up quite
 # a bit of space.  If you would like to compress the images while they
@@ -469,20 +330,15 @@
 # unspecified here, determination of a host mount point in /proc/mounts
 # will be attempted.  Specifying an explicit mount overrides detection
 # of the same in /proc/mounts.  Setting the mount point to "" will
-# disable guest hugepage backing. If desired, multiple mount points can
-# be specified at once, separated by comma and enclosed in square
-# brackets, for example:
-#
-#     hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]
-#
-# The size of huge page served by specific mount point is determined by
-# libvirt at the daemon startup.
-#
-# NB, within these mount points, guests will create memory backing
-# files in a location of $MOUNTPOINT/libvirt/qemu
+# disable guest hugepage backing.
+#
+# NB, within this mount point, guests will create memory backing files
+# in a location of $MOUNTPOINT/libvirt/qemu
 #
 #hugetlbfs_mount = "/dev/hugepages"
-
+#hugetlbfs_mount = ["/run/hugepages/kvm", "/mnt/hugepages_1GB"]
+hugetlbfs_mount = ["/mnt/hugepages_2M"]
+security_driver="none"
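The memory actually reserved behind a hugetlbfs mount can be checked from /proc/meminfo (reserved memory = HugePages_Total * Hugepagesize). A minimal sketch using a hypothetical excerpt; on a real compute node, run `grep Huge /proc/meminfo` directly:

```shell
# Hypothetical /proc/meminfo excerpt for a host with 1024 x 2 MB hugepages.
meminfo='HugePages_Total:    1024
Hugepagesize:       2048 kB'
# Reserved hugepage memory = HugePages_Total * Hugepagesize (kB -> MB).
total_mb=$(echo "$meminfo" | \
    awk '/HugePages_Total/ {n=$2} /Hugepagesize/ {sz=$2} END {print n * sz / 1024}')
echo "${total_mb} MB reserved for hugepages"   # → 2048 MB reserved for hugepages
```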
 
 # Path to the setuid helper for creating tap devices.  This executable
 # is used to create <source type='bridge'> interfaces when libvirtd is
@@ -518,42 +374,6 @@
 # The same applies to max_files which sets the limit on the maximum
 # number of opened files.
 #
-#max_processes = 0
-#max_files = 0
-
-# If max_core is set to a non-zero integer, then QEMU will be
-# permitted to create core dumps when it crashes, provided its
-# RAM size is smaller than the limit set.
-#
-# Be warned that the core dump will include a full copy of the
-# guest RAM, if the 'dump_guest_core' setting has been enabled,
-# or if the guest XML contains
-#
-#   <memory dumpcore="on">...guest ram...</memory>
-#
-# If guest RAM is to be included, ensure the max_core limit
-# is set to at least the size of the largest expected guest
-# plus another 1GB for any QEMU host side memory mappings.
-#
-# As a special case it can be set to the string "unlimited" to
-# to allow arbitrarily sized core dumps.
-#
-# By default the core dump size is set to 0 disabling all dumps
-#
-# Size is a positive integer specifying bytes or the
-# string "unlimited"
-#
-#max_core = "unlimited"
-
-# Determine if guest RAM is included in QEMU core dumps. By
-# default guest RAM will be excluded if a new enough QEMU is
-# present. Setting this to '1' will force guest RAM to always
-# be included in QEMU core dumps.
-#
-# This setting will be ignored if the guest XML has set the
-# dumpcore attribute on the <memory> element.
-#
-#dump_guest_core = 1
 
 # mac_filter enables MAC addressed based filtering on bridge ports.
 # This currently requires ebtables to be installed.
@@ -580,13 +400,11 @@
 #allow_disk_format_probing = 1
 
 
-# In order to prevent accidentally starting two domains that
-# share one writable disk, libvirt offers two approaches for
-# locking files. The first one is sanlock, the other one,
-# virtlockd, is then our own implementation. Accepted values
-# are "sanlock" and "lockd".
-#
-#lock_manager = "lockd"
+# To enable 'Sanlock' project based locking of the file
+# content (to prevent two VMs writing to the same
+# disk), uncomment this
+#
+#lock_manager = "sanlock"
 
 
 
@@ -628,17 +446,10 @@
 #seccomp_sandbox = 1
 
 
+
 # Override the listen address for all incoming migrations. Defaults to
 # 0.0.0.0, or :: if both host and qemu are capable of IPv6.
-#migration_address = "0.0.0.0"
-
-
-# The default hostname or IP address which will be used by a migration
-# source for transferring migration data to this host.  The migration
-# source has to be able to resolve this hostname and connect to it so
-# setting "localhost" will not work.  By default, the host's configured
-# hostname is used.
-#migration_host = "host.example.com"
+#migration_address = "127.0.0.1"
 
 
 # Override the port range used for incoming migrations.
@@ -650,80 +461,9 @@
 #
 #migration_port_min = 49152
 #migration_port_max = 49215
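The default range above permits 64 concurrent incoming migrations. The arithmetic, plus a hypothetical firewall rule (the iptables line is an illustration, not part of this configuration):

```shell
migration_port_min=49152
migration_port_max=49215
# Number of usable migration ports in the default range.
echo $(( migration_port_max - migration_port_min + 1 ))   # → 64
# The range must also be reachable from migration sources, e.g. (hypothetical):
#   iptables -A INPUT -p tcp --dport 49152:49215 -j ACCEPT
```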
-
-
-
-# Timestamp QEMU's log messages (if QEMU supports it)
-#
-# Defaults to 1.
-#
-#log_timestamp = 0
-
-
-# Location of master nvram file
-#
-# When a domain is configured to use UEFI instead of standard
-# BIOS it may use a separate storage for UEFI variables. If
-# that's the case libvirt creates the variable store per domain
-# using this master file as image. Each UEFI firmware can,
-# however, have different variables store. Therefore the nvram is
-# a list of strings when a single item is in form of:
-#   ${PATH_TO_UEFI_FW}:${PATH_TO_UEFI_VARS}.
-# Later, when libvirt creates per domain variable store, this list is
-# searched for the master image. The UEFI firmware can be called
-# differently for different guest architectures. For instance, it's OVMF
-# for x86_64 and i686, but it's AAVMF for aarch64. The libvirt default
-# follows this scheme.
-#nvram = [
-#   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
-#   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
-#   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
-#   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
-#]
-
-# The backend to use for handling stdout/stderr output from
-# QEMU processes.
-#
-#  'file': QEMU writes directly to a plain file. This is the
-#          historical default, but allows QEMU to inflict a
-#          denial of service attack on the host by exhausting
-#          filesystem space
-#
-#  'logd': QEMU writes to a pipe provided by virtlogd daemon.
-#          This is the current default, providing protection
-#          against denial of service by performing log file
-#          rollover when a size limit is hit.
-#
-#stdio_handler = "logd"
-
-# QEMU gluster libgfapi log level, debug levels are 0-9, with 9 being the
-# most verbose, and 0 representing no debugging output.
-#
-# The current logging levels defined in the gluster GFAPI are:
-#
-#    0 - None
-#    1 - Emergency
-#    2 - Alert
-#    3 - Critical
-#    4 - Error
-#    5 - Warning
-#    6 - Notice
-#    7 - Info
-#    8 - Debug
-#    9 - Trace
-#
-# Defaults to 4
-#
-#gluster_debug_level = 9
-
-# To enhance security, QEMU driver is capable of creating private namespaces
-# for each domain started. Well, so far only "mount" namespace is supported. If
-# enabled it means qemu process is unable to see all the devices on the system,
-# only those configured for the domain in question. Libvirt then manages
-# devices entries throughout the domain lifetime. This namespace is turned on
-# by default.
-#namespaces = [ "mount" ]
-
-# This directory is used for memoryBacking source if configured as file.
-# NOTE: big files will be stored here
-#memory_backing_dir = "/var/lib/libvirt/qemu/ram"
+cgroup_device_acl = [
+    "/dev/null", "/dev/full", "/dev/zero",
+    "/dev/random", "/dev/urandom",
+    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
+    "/dev/rtc", "/dev/hpet","/dev/net/tun",
+]

2018-03-30 06:49:25,011 [salt.state       ][INFO    ][1516] Completed state [/etc/libvirt/qemu.conf] at time 06:49:25.011941 duration_in_ms=156.898
2018-03-30 06:49:25,012 [salt.state       ][INFO    ][1516] Running state [/etc/libvirt/libvirtd.conf] at time 06:49:25.012328
2018-03-30 06:49:25,012 [salt.state       ][INFO    ][1516] Executing state file.managed for /etc/libvirt/libvirtd.conf
2018-03-30 06:49:25,035 [salt.fileclient  ][INFO    ][1516] Fetching file from saltenv 'base', ** done ** 'nova/files/pike/libvirtd.conf.Debian'
2018-03-30 06:49:25,123 [salt.state       ][INFO    ][1516] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 # Master libvirt daemon configuration file
 #
 # For further information consult http://libvirt.org/format.html
@@ -21,6 +22,12 @@
 # This is enabled by default, uncomment this to disable it
 #listen_tls = 0
 
+
+listen_tls = 0
+listen_tcp = 1
+auth_tcp = "none"
+
+
 # Listen for unencrypted TCP connections on the public TCP/IP port.
 # NB, must pass the --listen flag to the libvirtd process for this to
 # have any effect.
@@ -48,10 +55,6 @@
 # Override the default configuration which binds to all network
 # interfaces. This can be a numeric IPv4/6 address, or hostname
 #
-# If the libvirtd service is started in parallel with network
-# startup (e.g. with systemd), binding to addresses other than
-# the wildcards (0.0.0.0/::) might not be available yet.
-#
 #listen_addr = "192.168.0.1"
 
 
@@ -67,7 +70,7 @@
 # unique on the immediate broadcast network.
 #
 # The default is "Virtualization Host HOSTNAME", where HOSTNAME
-# is substituted for the short hostname of the machine (without domain)
+# is substituted for the short hostname of the machine (without domain)
 #
 #mdns_name = "Virtualization Host Joe Demo"
 
@@ -82,14 +85,14 @@
 # without becoming root.
 #
 # This is restricted to 'root' by default.
-unix_sock_group = "libvirt"
+unix_sock_group = "libvirtd"
 
 # Set the UNIX socket permissions for the R/O socket. This is used
 # for monitoring VM status only
 #
-# Default allows any user. If setting group ownership, you may want to
-# restrict this too.
-unix_sock_ro_perms = "0777"
+# Default allows any user. If setting group ownership, you may want to
+# restrict this to:
+#unix_sock_ro_perms = "0777"
 
 # Set the UNIX socket permissions for the R/W socket. This is used
 # for full management of VMs
@@ -98,19 +101,11 @@
 # the default will change to allow everyone (eg, 0777)
 #
 # If not using PolicyKit and setting group ownership for access
-# control, then you may want to relax this too.
+# control, then you may want to relax this to:
 unix_sock_rw_perms = "0770"
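The effect of a mode such as 0770 can be verified on any file. A minimal sketch using a temporary file as a stand-in for the libvirt R/W socket (assumes GNU coreutils `stat`):

```shell
sock=$(mktemp)                 # stand-in for /var/run/libvirt/libvirt-sock
chmod 0770 "$sock"             # owner+group rwx, others denied
mode=$(stat -c '%a' "$sock")   # GNU stat; BSD would use: stat -f '%Lp'
echo "$mode"                   # → 770
rm -f "$sock"
```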
-
-# Set the UNIX socket permissions for the admin interface socket.
-#
-# Default allows only owner (root), do not change it unless you are
-# sure to whom you are exposing the access to.
-#unix_sock_admin_perms = "0700"
 
 # Set the name of the directory in which sockets will be found/created.
 #unix_sock_dir = "/var/run/libvirt"
-
-
 
 #################################################################
 #
@@ -125,7 +120,7 @@
 #  - sasl: use SASL infrastructure. The actual auth scheme is then
 #          controlled from /etc/sasl2/libvirt.conf. For the TCP
 #          socket only GSSAPI & DIGEST-MD5 mechanisms will be used.
-#          For non-TCP or TLS sockets, any scheme is allowed.
+#          For non-TCP or TLS sockets, any scheme is allowed.
 #
 #  - polkit: use PolicyKit to authenticate. This is only suitable
 #            for use on the UNIX sockets. The default policy will
@@ -156,6 +151,7 @@
 # use, always enable SASL and use the GSSAPI or DIGEST-MD5
 # mechanism in /etc/sasl2/libvirt.conf
 #auth_tcp = "sasl"
+#auth_tcp = "none"
 
 # Change the authentication scheme for TLS sockets.
 #
@@ -167,15 +163,6 @@
 #auth_tls = "none"
 
 
-# Change the API access control scheme
-#
-# By default an authenticated user is allowed access
-# to all APIs. Access drivers can place restrictions
-# on this. By default the 'nop' driver is enabled,
-# meaning no access control checks are done once a
-# client has authenticated with libvirtd
-#
-#access_drivers = [ "polkit" ]
 
 #################################################################
 #
@@ -228,7 +215,7 @@
 #tls_no_verify_certificate = 1
 
 
-# A whitelist of allowed x509 Distinguished Names
+# A whitelist of allowed x509 Distinguished Names
 # This list may contain wildcards such as
 #
 #    "C=GB,ST=London,L=London,O=Red Hat,CN=*"
@@ -242,7 +229,7 @@
 #tls_allowed_dn_list = ["DN1", "DN2"]
 
 
-# A whitelist of allowed SASL usernames. The format for username
+# A whitelist of allowed SASL usernames. The format for usernames
 # depends on the SASL authentication mechanism. Kerberos usernames
 # look like username@REALM
 #
@@ -259,13 +246,6 @@
 #sasl_allowed_username_list = ["joe@EXAMPLE.COM", "fred@EXAMPLE.COM" ]
 
 
-# Override the compile time default TLS priority string. The
-# default is usually "NORMAL" unless overridden at build time.
-# Only set this is it is desired for libvirt to deviate from
-# the global default settings.
-#
-#tls_priority="NORMAL"
-
 
 #################################################################
 #
@@ -274,22 +254,12 @@
 
 # The maximum number of concurrent client connections to allow
 # over all sockets combined.
-#max_clients = 5000
-
-# The maximum length of queue of connections waiting to be
-# accepted by the daemon. Note, that some protocols supporting
-# retransmission may obey this so that a later reattempt at
-# connection succeeds.
-#max_queued_clients = 1000
-
-# The maximum length of queue of accepted but not yet
-# authenticated clients. The default value is 20. Set this to
-# zero to turn this feature off.
-#max_anonymous_clients = 20
+#max_clients = 20
+
 
 # The minimum limit sets the number of workers to start up
 # initially. If the number of active clients exceeds this,
-# then more threads are spawned, up to max_workers limit.
+# then more threads are spawned, up to max_workers limit.
 # Typically you'd want max_workers to equal maximum number
 # of clients allowed
 #min_workers = 5
@@ -297,15 +267,15 @@
 
 
 # The number of priority workers. If all workers from above
-# pool are stuck, some calls marked as high priority
+# pool are stuck, some calls marked as high priority
 # (notably domainDestroy) can be executed in this pool.
 #prio_workers = 5
 
 # Total global limit on concurrent RPC calls. Should be
 # at least as large as max_workers. Beyond this, RPC requests
-# will be read into memory and queued. This directly impacts
+# will be read into memory and queued. This directly impact
 # memory usage, currently each request requires 256 KB of
-# memory. So by default up to 5 MB of memory is used
+# memory. So by default upto 5 MB of memory is used
 #
 # XXX this isn't actually enforced yet, only the per-client
 # limit is used so far
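With each queued request costing 256 KB, the 5 MB figure follows directly. A sketch assuming the compiled-in default of max_requests = 20 (that value is not visible in this file excerpt, so it is an assumption here):

```shell
max_requests=20          # assumed default; not shown in this excerpt
kb_per_request=256       # memory cost per queued RPC request
echo "$(( max_requests * kb_per_request / 1024 )) MB"   # → 5 MB
```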
@@ -317,16 +287,6 @@
 # and max_workers parameter
 #max_client_requests = 5
 
-# Same processing controls, but this time for the admin interface.
-# For description of each option, be so kind to scroll few lines
-# upwards.
-
-#admin_min_workers = 1
-#admin_max_workers = 5
-#admin_max_clients = 5
-#admin_max_queued_clients = 5
-#admin_max_client_requests = 5
-
 #################################################################
 #
 # Logging controls
@@ -334,10 +294,6 @@
 
 # Logging level: 4 errors, 3 warnings, 2 information, 1 debug
 # basically 1 will log everything possible
-# Note: Journald may employ rate limiting of the messages logged
-# and thus lock up the libvirt daemon. To use the debug level with
-# journald you have to specify it explicitly in 'log_outputs', otherwise
-# only information level messages will be logged.
 #log_level = 3
 
 # Logging filters:
@@ -346,22 +302,16 @@
 # The format for a filter is one of:
 #    x:name
 #    x:+name
-
-#      where name is a string which is matched against the category
-#      given in the VIR_LOG_INIT() at the top of each libvirt source
-#      file, e.g., "remote", "qemu", or "util.json" (the name in the
-#      filter can be a substring of the full category name, in order
-#      to match multiple similar categories), the optional "+" prefix
-#      tells libvirt to log stack trace for each message matching
-#      name, and x is the minimal level where matching messages should
-#      be logged:
-
+#      where name is a string which is matched against source file name,
+#      e.g., "remote", "qemu", or "util/json", the optional "+" prefix
+#      tells libvirt to log stack trace for each message matching name,
+#      and x is the minimal level where matching messages should be logged:
 #    1: DEBUG
 #    2: INFO
 #    3: WARNING
 #    4: ERROR
 #
-# Multiple filters can be defined in a single @filters, they just need to be
+# Multiple filters can be defined in a single @filters, they just need to be
 # separated by spaces.
 #
 # e.g. to only get warning or errors from the remote layer and only errors
@@ -377,24 +327,22 @@
 #      use syslog for the output and use the given name as the ident
 #    x:file:file_path
 #      output to a file, with the given filepath
-#    x:journald
-#      output to journald logging system
 # In all cases the x prefix is the minimal level, acting as a filter
 #    1: DEBUG
 #    2: INFO
 #    3: WARNING
 #    4: ERROR
 #
-# Multiple outputs can be defined, they just need to be separated by spaces.
+# Multiple outputs can be defined, they just need to be separated by spaces.
 # e.g. to log all warnings and errors to syslog under the libvirtd ident:
 #log_outputs="3:syslog:libvirtd"
 #
 
-# Log debug buffer size:
-#
-# This configuration option is no longer used, since the global
-# log buffer functionality has been removed. Please configure
-# suitable log_outputs/log_filters settings to obtain logs.
+# Log debug buffer size: default 64
+# The daemon keeps an internal debug log buffer which will be dumped in case
+# of crash or upon receiving a SIGUSR2 signal. This setting overrides
+# the default buffer size in kilobytes.
+# If the value is 0 or less, the debug log buffer is deactivated.
 #log_buffer_size = 64
 
 
@@ -417,16 +365,10 @@
 
 ###################################################################
 # UUID of the host:
-# Host UUID is read from one of the sources specified in host_uuid_source.
-#
-# - 'smbios': fetch the UUID from 'dmidecode -s system-uuid'
-# - 'machine-id': fetch the UUID from /etc/machine-id
-#
-# The host_uuid_source default is 'smbios'. If 'dmidecode' does not provide
-# a valid UUID a temporary UUID will be generated.
-#
-# Another option is to specify host UUID in host_uuid.
-#
+# Provide the UUID of the host here in case the command
+# 'dmidecode -s system-uuid' does not provide a valid UUID. If
+# 'dmidecode' does not provide a valid UUID and none is provided here, a
+# temporary UUID will be generated.
 # Keep the format of the example UUID below. UUID must not have all digits
 # be the same.
 
@@ -434,12 +376,11 @@
 # it with the output of the 'uuidgen' command and then
 # uncomment this entry
 #host_uuid = "00000000-0000-0000-0000-000000000000"
-#host_uuid_source = "smbios"
 
 ###################################################################
 # Keepalive protocol:
 # This allows libvirtd to detect broken client connections or even
-# dead clients.  A keepalive message is sent to a client after
+# dead clients.  A keepalive message is sent to a client after
 # keepalive_interval seconds of inactivity to check if the client is
 # still responding; keepalive_count is a maximum number of keepalive
 # messages that are allowed to be sent to the client without getting
@@ -448,31 +389,15 @@
 # keepalive_interval * (keepalive_count + 1) seconds since the last
 # message received from the client.  If keepalive_interval is set to
 # -1, libvirtd will never send keepalive requests; however clients
-# can still send them and the daemon will send responses.  When
+# can still send them and the daemon will send responses.  When
 # keepalive_count is set to 0, connections will be automatically
 # closed after keepalive_interval seconds of inactivity without
 # sending any keepalive messages.
 #
 #keepalive_interval = 5
 #keepalive_count = 5
-
-#
-# These configuration options are no longer used.  There is no way to
-# restrict such clients from connecting since they first need to
-# connect in order to ask for keepalive.
-#
-#keepalive_required = 1
-#admin_keepalive_required = 1
-
-# Keepalive settings for the admin interface
-#admin_keepalive_interval = 5
-#admin_keepalive_count = 5
-
-###################################################################
-# Open vSwitch:
-# This allows to specify a timeout for openvswitch calls made by
-# libvirt. The ovs-vsctl utility is used for the configuration and
-# its timeout option is set by default to 5 seconds to avoid
-# potential infinite waits blocking libvirt.
-#
-#ovs_timeout = 5
+#
+# If set to 1, libvirtd will refuse to talk to clients that do not
+# support keepalive protocol.  Defaults to 0.
+#
+#keepalive_required = 1
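With the default keepalive settings above, a broken client connection is closed after keepalive_interval * (keepalive_count + 1) seconds since the last message received. Worked through for the defaults:

```shell
keepalive_interval=5   # seconds of inactivity before each keepalive probe
keepalive_count=5      # unanswered probes tolerated before disconnect
echo "$(( keepalive_interval * (keepalive_count + 1) )) seconds"   # → 30 seconds
```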
2018-03-30 06:49:25,123 [salt.state       ][INFO    ][1516] Completed state [/etc/libvirt/libvirtd.conf] at time 06:49:25.123520 duration_in_ms=111.192
2018-03-30 06:49:25,124 [salt.state       ][INFO    ][1516] Running state [virsh net-destroy default] at time 06:49:25.124742
2018-03-30 06:49:25,124 [salt.state       ][INFO    ][1516] Executing state cmd.run for virsh net-destroy default
2018-03-30 06:49:25,125 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command 'virsh net-list | grep default' in directory '/root'
2018-03-30 06:49:25,154 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command 'virsh net-destroy default' in directory '/root'
2018-03-30 06:49:25,442 [salt.state       ][INFO    ][1516] {'pid': 16098, 'retcode': 0, 'stderr': '', 'stdout': 'Network default destroyed'}
2018-03-30 06:49:25,443 [salt.state       ][INFO    ][1516] Completed state [virsh net-destroy default] at time 06:49:25.443080 duration_in_ms=318.336
2018-03-30 06:49:25,446 [salt.state       ][INFO    ][1516] Running state [libvirtd] at time 06:49:25.446314
2018-03-30 06:49:25,446 [salt.state       ][INFO    ][1516] Executing state service.running for libvirtd
2018-03-30 06:49:25,447 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'status', 'libvirtd.service', '-n', '0'] in directory '/root'
2018-03-30 06:49:25,470 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'is-active', 'libvirtd.service'] in directory '/root'
2018-03-30 06:49:25,488 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'is-enabled', 'libvirtd.service'] in directory '/root'
2018-03-30 06:49:25,507 [salt.state       ][INFO    ][1516] The service libvirtd is already running
2018-03-30 06:49:25,508 [salt.state       ][INFO    ][1516] Completed state [libvirtd] at time 06:49:25.508289 duration_in_ms=61.973
2018-03-30 06:49:25,508 [salt.state       ][INFO    ][1516] Running state [libvirtd] at time 06:49:25.508681
2018-03-30 06:49:25,509 [salt.state       ][INFO    ][1516] Executing state service.mod_watch for libvirtd
2018-03-30 06:49:25,510 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'is-active', 'libvirtd.service'] in directory '/root'
2018-03-30 06:49:25,529 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemctl', 'is-enabled', 'libvirtd.service'] in directory '/root'
2018-03-30 06:49:25,547 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'libvirtd.service'] in directory '/root'
2018-03-30 06:49:25,662 [salt.state       ][INFO    ][1516] {'libvirtd': True}
2018-03-30 06:49:25,663 [salt.state       ][INFO    ][1516] Completed state [libvirtd] at time 06:49:25.663451 duration_in_ms=154.761
2018-03-30 06:49:25,664 [salt.state       ][INFO    ][1516] Running state [/etc/tmpfiles.d/openvswitch-vhost.conf] at time 06:49:25.664730
2018-03-30 06:49:25,665 [salt.state       ][INFO    ][1516] Executing state file.managed for /etc/tmpfiles.d/openvswitch-vhost.conf
2018-03-30 06:49:25,672 [salt.state       ][INFO    ][1516] File changed:
New file
2018-03-30 06:49:25,673 [salt.state       ][INFO    ][1516] Completed state [/etc/tmpfiles.d/openvswitch-vhost.conf] at time 06:49:25.672945 duration_in_ms=8.215
2018-03-30 06:49:25,673 [salt.state       ][INFO    ][1516] Running state [systemd-tmpfiles --create] at time 06:49:25.673259
2018-03-30 06:49:25,673 [salt.state       ][INFO    ][1516] Executing state cmd.run for systemd-tmpfiles --create
2018-03-30 06:49:25,674 [salt.loaded.int.module.cmdmod][INFO    ][1516] Executing command 'systemd-tmpfiles --create' in directory '/root'
2018-03-30 06:49:25,694 [salt.state       ][INFO    ][1516] {'pid': 16185, 'retcode': 0, 'stderr': '', 'stdout': ''}
2018-03-30 06:49:25,695 [salt.state       ][INFO    ][1516] Completed state [systemd-tmpfiles --create] at time 06:49:25.694935 duration_in_ms=21.676
2018-03-30 06:49:25,699 [salt.minion      ][INFO    ][1516] Returning information for job: 20180330064745919678
2018-03-30 07:04:34,303 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command state.sls with jid 20180330070433916489
2018-03-30 07:04:34,320 [salt.minion      ][INFO    ][16604] Starting a new job with PID 16604
2018-03-30 07:04:36,792 [salt.state       ][INFO    ][16604] Loading fresh modules for state activity
2018-03-30 07:04:36,848 [salt.fileclient  ][INFO    ][16604] Fetching file from saltenv 'base', ** done ** 'ceilometer/init.sls'
2018-03-30 07:04:36,876 [salt.fileclient  ][INFO    ][16604] Fetching file from saltenv 'base', ** done ** 'ceilometer/agent.sls'
2018-03-30 07:04:38,009 [salt.state       ][INFO    ][16604] Running state [ceilometer-agent-compute] at time 07:04:38.009047
2018-03-30 07:04:38,009 [salt.state       ][INFO    ][16604] Executing state pkg.installed for ceilometer-agent-compute
2018-03-30 07:04:38,009 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 07:04:38,307 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['apt-cache', '-q', 'policy', 'ceilometer-agent-compute'] in directory '/root'
2018-03-30 07:04:38,395 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2018-03-30 07:04:40,281 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2018-03-30 07:04:40,327 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'ceilometer-agent-compute'] in directory '/root'
2018-03-30 07:04:44,394 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330070444000293
2018-03-30 07:04:44,413 [salt.minion      ][INFO    ][17086] Starting a new job with PID 17086
2018-03-30 07:04:44,435 [salt.minion      ][INFO    ][17086] Returning information for job: 20180330070444000293
2018-03-30 07:04:54,599 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command saltutil.find_job with jid 20180330070454205968
2018-03-30 07:04:54,614 [salt.minion      ][INFO    ][17777] Starting a new job with PID 17777
2018-03-30 07:04:54,642 [salt.minion      ][INFO    ][17777] Returning information for job: 20180330070454205968
2018-03-30 07:04:59,444 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}\n', '-W'] in directory '/root'
2018-03-30 07:04:59,517 [salt.state       ][INFO    ][16604] Made the following changes:
'python-pysnmp4' changed from 'absent' to '4.2.5-1'
'python-setproctitle' changed from 'absent' to '1.1.8-1build2'
'python-pysnmp4-mibs' changed from 'absent' to '0.1.3-1'
'python-pysnmp2' changed from 'absent' to '1'
'python-pysnmp-common' changed from 'absent' to '1'
'ceilometer-agent-compute' changed from 'absent' to '1:9.0.4-0ubuntu1~cloud0'
'python-pam' changed from 'absent' to '0.4.2-13.2ubuntu2'
'python-cotyledon' changed from 'absent' to '1.6.3-0ubuntu1~cloud0'
'ceilometer-common' changed from 'absent' to '1:9.0.4-0ubuntu1~cloud0'
'python2.7-twisted-core' changed from 'absent' to '1'
'python-twisted' changed from 'absent' to '16.0.0-1ubuntu0.2'
'python-ceilometer' changed from 'absent' to '1:9.0.4-0ubuntu1~cloud0'
'python-ipaddr' changed from 'absent' to '2.1.11-2'
'libsmi2ldbl' changed from 'absent' to '0.4.8+dfsg2-11'
'python2.7-twisted' changed from 'absent' to '1'
'python-croniter' changed from 'absent' to '0.3.8-1'
'python-wsme' changed from 'absent' to '0.8.0-2ubuntu2'
'python-twisted-core' changed from 'absent' to '16.0.0-1ubuntu0.2'
'python-jsonpath-rw' changed from 'absent' to '1.4.0-1'
'python-attr' changed from 'absent' to '15.2.0-1'
'python-service-identity' changed from 'absent' to '16.0.0-2'
'python-serial' changed from 'absent' to '3.0.1-1'
'smitools' changed from 'absent' to '0.4.8+dfsg2-11'
'python-jsonpath-rw-ext' changed from 'absent' to '0.1.9-1'
'python2.7-twisted-bin' changed from 'absent' to '1'
'python-pysnmp4-apps' changed from 'absent' to '0.3.2-1'
'python-twisted-bin' changed from 'absent' to '16.0.0-1ubuntu0.2'

2018-03-30 07:04:59,546 [salt.state       ][INFO    ][16604] Loading fresh modules for state activity
2018-03-30 07:04:59,698 [salt.state       ][INFO    ][16604] Completed state [ceilometer-agent-compute] at time 07:04:59.698587 duration_in_ms=21689.54
2018-03-30 07:04:59,701 [salt.state       ][INFO    ][16604] Running state [/etc/ceilometer/ceilometer.conf] at time 07:04:59.701101
2018-03-30 07:04:59,701 [salt.state       ][INFO    ][16604] Executing state file.managed for /etc/ceilometer/ceilometer.conf
2018-03-30 07:04:59,740 [salt.fileclient  ][INFO    ][16604] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/pike/ceilometer-agent.conf.Debian'
2018-03-30 07:04:59,796 [salt.state       ][INFO    ][16604] File changed:
--- 
+++ 
@@ -1,2157 +1,37 @@
 [DEFAULT]
-
-#
-# From ceilometer
-#
-
-# To reduce polling agent load, samples are sent to the notification agent in a
-# batch. To gain higher throughput at the cost of load set this to False.
-# (boolean value)
-#batch_polled_samples = true
-
-# To reduce large requests at same time to Nova or other components from
-# different compute agents, shuffle start time of polling task. (integer value)
-#shuffle_time_before_polling_task = 0
-
-# Configuration file for WSGI definition of API. (string value)
-#api_paste_config = api_paste.ini
-
-# Inspector to use for inspecting the hypervisor layer. Known inspectors are
-# libvirt, hyperv, vsphere and xenapi. (string value)
-#hypervisor_inspector = libvirt
-
-# Libvirt domain type. (string value)
-# Allowed values: kvm, lxc, qemu, uml, xen
-#libvirt_type = kvm
-
-# Override the default libvirt URI (which is dependent on libvirt_type).
-# (string value)
-#libvirt_uri =
-
-# DEPRECATED: Dispatchers to process metering data. (multi valued)
-# Deprecated group/name - [DEFAULT]/dispatcher
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: This option only be used in collector service, the collector service
-# has been deprecated and will be removed in the future, this should also be
-# deprecated for removal with collector service.
-#meter_dispatchers =
-
-# DEPRECATED: Dispatchers to process event data. (multi valued)
-# Deprecated group/name - [DEFAULT]/dispatcher
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: This option only be used in collector service, the collector service
-# has been deprecated and will be removed in the future, this should also be
-# deprecated for removal with collector service.
-#event_dispatchers =
-
-# Exchange name for Ironic notifications. (string value)
-#ironic_exchange = ironic
-
-# DEPRECATED: Allow novaclient's debug log output. (Use default_log_levels
-# instead) (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#nova_http_log_debug = false
-
-# Swift reseller prefix. Must be on par with reseller_prefix in proxy-
-# server.conf. (string value)
-#reseller_prefix = AUTH_
-
-# Configuration file for pipeline definition. (string value)
-#pipeline_cfg_file = pipeline.yaml
-
-# Configuration file for event pipeline definition. (string value)
-#event_pipeline_cfg_file = event_pipeline.yaml
-
-# Source for samples emitted on this instance. (string value)
-#sample_source = openstack
-
-# List of metadata prefixes reserved for metering use. (list value)
-#reserved_metadata_namespace = metering.
-
-# Limit on length of reserved metadata values. (integer value)
-#reserved_metadata_length = 256
-
-# List of metadata keys reserved for metering use. And these keys are
-# additional to the ones included in the namespace. (list value)
-#reserved_metadata_keys =
-
-# Path to the rootwrap configuration file to use for running commands as root
-# (string value)
-#rootwrap_config = /etc/ceilometer/rootwrap.conf
-
-# DEPRECATED: Exchange name for Nova notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#nova_control_exchange = nova
-
-# DEPRECATED: Exchange name for Neutron notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#neutron_control_exchange = neutron
-
-# DEPRECATED: Exchange name for Heat notifications (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#heat_control_exchange = heat
-
-# DEPRECATED: Exchange name for Glance notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#glance_control_exchange = glance
-
-# DEPRECATED: Exchange name for Keystone notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#keystone_control_exchange = keystone
-
-# DEPRECATED: Exchange name for Cinder notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#cinder_control_exchange = cinder
-
-# DEPRECATED: Exchange name for Data Processing notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#sahara_control_exchange = sahara
-
-# DEPRECATED: Exchange name for Swift notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#swift_control_exchange = swift
-
-# DEPRECATED: Exchange name for Magnum notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#magnum_control_exchange = magnum
-
-# DEPRECATED: Exchange name for DBaaS notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#trove_control_exchange = trove
-
-# DEPRECATED: Exchange name for Messaging service notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#zaqar_control_exchange = zaqar
-
-# DEPRECATED: Exchange name for DNS service notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#dns_control_exchange = central
-
-# DEPRECATED: Exchange name for ceilometer notifications. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Use notification_control_exchanges instead
-#ceilometer_control_exchange = ceilometer
-
-# Name of this node, which must be valid in an AMQP key. Can be an opaque
-# identifier. For ZeroMQ only, must be a valid host name, FQDN, or IP address.
-# (unknown value)
-#host = <your_hostname>
-
-# Timeout seconds for HTTP requests. Set it to None to disable timeout.
-# (integer value)
-#http_timeout = 600
-
-# Maximum number of parallel requests for services to handle at the same time.
-# (integer value)
-# Minimum value: 1
-#max_parallel_requests = 64
-
-#
-# From oslo.log
-#
-
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
-# Note: This option can be changed without restarting.
-#debug = false
-
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
-# Note: This option can be changed without restarting.
-# Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
-
-# Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
-#log_date_format = %Y-%m-%d %H:%M:%S
-
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
-# Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
-
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
-# Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
-
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
-# (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
-# Syslog facility to receive log lines. This option is ignored if
-# log_config_append is set. (string value)
-#syslog_log_facility = LOG_USER
-
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
-#use_stderr = false
-
-# Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
-
-# Format string to use for log messages when context is undefined. (string
-# value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
-
-# Defines the format string for %(user_identity)s that is used in
-# logging_context_format_string. (string value)
-#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
-
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
-
-# Enables or disables publication of error events. (boolean value)
-#publish_errors = false
-
-# The format for an instance that is passed with the log message. (string
-# value)
-#instance_format = "[instance: %(uuid)s] "
-
-# The format for an instance UUID that is passed with the log message. (string
-# value)
-#instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
-
-# Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
-
-#
-# From oslo.messaging
-#
-
-# Size of RPC connection pool. (integer value)
-#rpc_conn_pool_size = 30
-
-# The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
-
-# The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Allowed values: redis, sentinel, dummy
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Allowed values: json, msgpack
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-# Size of executor thread pool when executor is threading or eventlet. (integer
-# value)
-# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
-#executor_thread_pool_size = 64
-
-# Seconds to wait for a response from a call. (integer value)
-#rpc_response_timeout = 60
-
-# A URL representing the messaging driver to use and its full configuration.
-# (string value)
-#transport_url = <None>
-
-# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
-# include amqp and zmq. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rpc_backend = rabbit
-
-# The default exchange under which topics are scoped. May be overridden by an
-# exchange name specified in the transport_url option. (string value)
-#control_exchange = openstack
-
-#
-# From oslo.service.service
-#
-
-# Enable eventlet backdoor.  Acceptable values are 0, <port>, and
-# <start>:<end>, where 0 results in listening on a random tcp port number;
-# <port> results in listening on the specified port number (and not enabling
-# backdoor if that port is in use); and <start>:<end> results in listening on
-# the smallest unused port number within the specified range of port numbers.
-# The chosen port is displayed in the service's log file. (string value)
-#backdoor_port = <None>
-
-# Enable eventlet backdoor, using the provided path as a unix socket that can
-# receive connections. This option is mutually exclusive with 'backdoor_port'
-# in that only one should be provided. If both are provided then the existence
-# of this option overrides the usage of that option. (string value)
-#backdoor_socket = <None>
-
-# Enables or disables logging values of all registered options when starting a
-# service (at DEBUG level). (boolean value)
-#log_options = true
-
-# Specify a timeout after which a gracefully shutdown server will exit. Zero
-# value means endless wait. (integer value)
-#graceful_shutdown_timeout = 60
-
-
-[api]
-
-#
-# From ceilometer
-#
-
-# Default maximum number of items returned by API request. (integer value)
-# Minimum value: 1
-#default_api_return_limit = 100
-
-# Set True to disable resource/meter/sample URLs. Default autodetection by
-# querying keystone. (boolean value)
-#gnocchi_is_enabled = <None>
-
-# Set True to redirect alarms URLs to aodh. Default autodetection by querying
-# keystone. (boolean value)
-#aodh_is_enabled = <None>
-
-# The endpoint of Aodh to redirect alarms URLs to Aodh API. Default
-# autodetection by querying keystone. (string value)
-#aodh_url = <None>
-
-# Set True to redirect events URLs to Panko. Default autodetection by querying
-# keystone. (boolean value)
-#panko_is_enabled = <None>
-
-# The endpoint of Panko to redirect events URLs to Panko API. Default
-# autodetection by querying keystone. (string value)
-#panko_url = <None>
-
-
-[collector]
-
-#
-# From ceilometer
-#
-
-# Address to which the UDP socket is bound. Set to an empty string to disable.
-# (unknown value)
-#udp_address = 0.0.0.0
-
-# Port to which the UDP socket is bound. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#udp_port = 4952
-
-# Number of notification messages to wait before dispatching them (integer
-# value)
-#batch_size = 1
-
-# Number of seconds to wait before dispatching samples when batch_size is not
-# reached (None means indefinitely) (integer value)
-#batch_timeout = <None>
-
-# Number of workers for collector service. default value is 1. (integer value)
-# Minimum value: 1
-# Deprecated group/name - [DEFAULT]/collector_workers
-#workers = 1
-
+executor_thread_pool_size = 5
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
 
 [compute]
 
-#
-# From ceilometer
-#
-
-# DEPRECATED: Enable work-load partitioning, allowing multiple compute agents
-# to be run simultaneously. (replaced by instance_discovery_method) (boolean
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#workload_partitioning = false
-
-# Ceilometer offers many methods to discover the instance running on a compute
-# node:
-# * naive: poll nova to get all instances
-# * workload_partitioning: poll nova to get instances of the compute
-# * libvirt_metadata: get instances from libvirt metadata   but without
-# instance metadata (recommended for Gnocchi   backend (string value)
-# Allowed values: naive, workload_partitioning, libvirt_metadata
-#instance_discovery_method = libvirt_metadata
-
-# New instances will be discovered periodically based on this option (in
-# seconds). By default, the agent discovers instances according to pipeline
-# polling interval. If option is greater than 0, the instance list to poll will
-# be updated based on this option's interval. Measurements relating to the
-# instances will match intervals defined in pipeline.  (integer value)
-# Minimum value: 0
-#resource_update_interval = 0
-
-# The expiry to totally refresh the instances resource cache, since the
-# instance may be migrated to another host, we need to clean the legacy
-# instances info in local cache by totally refreshing the local cache. The
-# minimum should be the value of the config option of resource_update_interval.
-# This option is only used for agent polling to Nova API, so it will works only
-# when 'instance_discovery_method' was set to 'naive'. (integer value)
-# Minimum value: 0
-#resource_cache_expiry = 3600
-
-
-[coordination]
-
-#
-# From ceilometer
-#
-
-# The backend URL to use for distributed coordination. If left empty, per-
-# deployment central agent and per-host compute agent won't do workload
-# partitioning and will only function correctly if a single instance of that
-# service is running. (string value)
-#backend_url = <None>
-
-# Number of seconds between checks to see if group membership has changed
-# (floating point value)
-#check_watchers = 10.0
-
-
-[cors]
-
-#
-# From oslo.middleware.cors
-#
-
-# Indicate whether this resource may be shared with the domain received in the
-# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
-# slash. Example: https://horizon.example.com (list value)
-#allowed_origin = <None>
-
-# Indicate that the actual request can include user credentials (boolean value)
-#allow_credentials = true
-
-# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
-# Headers. (list value)
-#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-Openstack-Request-Id
-
-# Maximum cache age of CORS preflight requests. (integer value)
-#max_age = 3600
-
-# Indicate which methods can be used during the actual request. (list value)
-#allow_methods = GET,PUT,POST,DELETE,PATCH
-
-# Indicate which header field names may be used during the actual request.
-# (list value)
-#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-Openstack-Request-Id
-
-
-[database]
-
-#
-# From ceilometer
-#
-
-# Number of seconds that samples are kept in the database for (<= 0 means
-# forever). (integer value)
-# Deprecated group/name - [database]/time_to_live
-#metering_time_to_live = -1
-
-# The connection string used to connect to the metering database. (if unset,
-# connection is used) (string value)
-#metering_connection = <None>
-
-# Indicates if expirer expires only samples. If set true, expired samples will
-# be deleted, but residual resource and meter definition data will remain.
-# (boolean value)
-#sql_expire_samples_only = false
-
-#
-# From oslo.db
-#
-
-# If True, SQLite uses synchronous mode. (boolean value)
-#sqlite_synchronous = true
-
-# The back end to use for the database. (string value)
-# Deprecated group/name - [DEFAULT]/db_backend
-#backend = sqlalchemy
-
-# The SQLAlchemy connection string to use to connect to the database. (string
-# value)
-# Deprecated group/name - [DEFAULT]/sql_connection
-# Deprecated group/name - [DATABASE]/sql_connection
-# Deprecated group/name - [sql]/connection
-#connection = <None>
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
-#slave_connection = <None>
-
-# The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set
-# by the server configuration, set this to no value. Example: mysql_sql_mode=
-# (string value)
-#mysql_sql_mode = TRADITIONAL
-
-# If True, transparently enables support for handling MySQL Cluster (NDB).
-# (boolean value)
-#mysql_enable_ndb = false
-
-# Timeout before idle SQL connections are reaped. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_idle_timeout
-# Deprecated group/name - [DATABASE]/sql_idle_timeout
-# Deprecated group/name - [sql]/idle_timeout
-#idle_timeout = 3600
-
-# Minimum number of SQL connections to keep open in a pool. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_min_pool_size
-# Deprecated group/name - [DATABASE]/sql_min_pool_size
-#min_pool_size = 1
-
-# Maximum number of SQL connections to keep open in a pool. Setting a value of
-# 0 indicates no limit. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_pool_size
-# Deprecated group/name - [DATABASE]/sql_max_pool_size
-#max_pool_size = 5
-
-# Maximum number of database connection retries during startup. Set to -1 to
-# specify an infinite retry count. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_retries
-# Deprecated group/name - [DATABASE]/sql_max_retries
-#max_retries = 10
-
-# Interval between retries of opening a SQL connection. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_retry_interval
-# Deprecated group/name - [DATABASE]/reconnect_interval
-#retry_interval = 10
-
-# If set, use this value for max_overflow with SQLAlchemy. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_overflow
-# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
-#max_overflow = 50
-
-# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
-# value)
-# Minimum value: 0
-# Maximum value: 100
-# Deprecated group/name - [DEFAULT]/sql_connection_debug
-#connection_debug = 0
-
-# Add Python stack traces to SQL as comment strings. (boolean value)
-# Deprecated group/name - [DEFAULT]/sql_connection_trace
-#connection_trace = false
-
-# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
-# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
-#pool_timeout = <None>
-
-# Enable the experimental use of database reconnect on connection lost.
-# (boolean value)
-#use_db_reconnect = false
-
-# Seconds between retries of a database transaction. (integer value)
-#db_retry_interval = 1
-
-# If True, increases the interval between retries of a database operation up to
-# db_max_retry_interval. (boolean value)
-#db_inc_retry_interval = true
-
-# If db_inc_retry_interval is set, the maximum seconds between retries of a
-# database operation. (integer value)
-#db_max_retry_interval = 10
-
-# Maximum retries in case of connection error or deadlock error before error is
-# raised. Set to -1 to specify an infinite retry count. (integer value)
-#db_max_retries = 20
-
-
-[dispatcher_file]
-
-#
-# From ceilometer
-#
-
-# Name and the location of the file to record meters. (string value)
-#file_path = <None>
-
-# The max size of the file. (integer value)
-#max_bytes = 0
-
-# The max number of the files to keep. (integer value)
-#backup_count = 0
-
-
-[dispatcher_gnocchi]
-
-#
-# From ceilometer
-#
-
-# Filter out samples generated by Gnocchi service activity (boolean value)
-#filter_service_activity = true
-
-# Gnocchi project used to filter out samples generated by Gnocchi service
-# activity (string value)
-#filter_project = gnocchi
-
-# The archive policy to use when the dispatcher create a new metric. (string
-# value)
-#archive_policy = <None>
-
-# The Yaml file that defines mapping between samples and gnocchi
-# resources/metrics (string value)
-#resources_definition_file = gnocchi_resources.yaml
-
-# Number of seconds before request to gnocchi times out (floating point value)
-# Minimum value: 0
-#request_timeout = 6.05
-
-
-[dispatcher_http]
-
-#
-# From ceilometer
-#
-
-# The target where the http request will be sent. If this is not set, no data
-# will be posted. For example: target = http://hostname:1234/path (string
-# value)
-#target =
-
-# The target for event data where the http request will be sent to. If this is
-# not set, it will default to same as Sample target. (string value)
-#event_target = <None>
-
-# The max time in seconds to wait for a request to timeout. (integer value)
-#timeout = 5
-
-# The path to a server certificate or directory if the system CAs are not used
-# or if a self-signed certificate is used. Set to False to ignore SSL cert
-# verification. (string value)
-#verify_ssl = <None>
-
-# Indicates whether samples are published in a batch. (boolean value)
-#batch_mode = false
-
-
-[event]
-
-#
-# From ceilometer
-#
-
-# Configuration file for event definitions. (string value)
-#definitions_cfg_file = event_definitions.yaml
-
-# Drop notifications if no event definition matches. (Otherwise, we convert
-# them with just the default traits) (boolean value)
-#drop_unmatched_notifications = false
-
-# Store the raw notification for select priority levels (info and/or error). By
-# default, raw details are not captured. (multi valued)
-#store_raw =
-
-
-[hardware]
-
-#
-# From ceilometer
-#
-
-# URL scheme to use for hardware nodes. (string value)
-#url_scheme = snmp://
-
-# SNMPd user name of all nodes running in the cloud. (string value)
-#readonly_user_name = ro_snmp_user
-
-# SNMPd v3 authentication password of all the nodes running in the cloud.
-# (string value)
-#readonly_user_password = password
-
-# SNMPd v3 authentication algorithm of all the nodes running in the cloud
-# (string value)
-# Allowed values: md5, sha
-#readonly_user_auth_proto = <None>
-
-# SNMPd v3 encryption algorithm of all the nodes running in the cloud (string
-# value)
-# Allowed values: des, aes128, 3des, aes192, aes256
-#readonly_user_priv_proto = <None>
-
-# SNMPd v3 encryption password of all the nodes running in the cloud. (string
-# value)
-#readonly_user_priv_password = <None>
-
-# Name of the control plane Tripleo network (string value)
-#tripleo_network_name = ctlplane
-
-# Configuration file for defining hardware snmp meters. (string value)
-#meter_definitions_file = snmp.yaml
-
-
-[ipmi]
-
-#
-# From ceilometer
-#
-
-# Number of retries upon Intel Node Manager initialization failure (integer
-# value)
-#node_manager_init_retry = 3
-
-# Tolerance of IPMI/NM polling failures before disable this pollster. Negative
-# indicates retrying forever. (integer value)
-#polling_retry = 3
-
+instance_discovery_method = libvirt_metadata
 
 [keystone_authtoken]
 
-#
-# From keystonemiddleware.auth_token
-#
-
-# Complete "public" Identity API endpoint. This endpoint should not be an
-# "admin" endpoint, as it should be accessible by all end users.
-# Unauthenticated clients are redirected to this endpoint to authenticate.
-# Although this endpoint should ideally be unversioned, client support in the
-# wild varies. If you're using a versioned v2 endpoint here, then this should
-# *not* be the same endpoint the service user utilizes for validating tokens,
-# because normal end users may not be able to reach that endpoint. (string
-# value)
-#auth_uri = <None>
-
-# API version of the admin Identity API endpoint. (string value)
-#auth_version = <None>
-
-# Do not handle authorization requests within the middleware, but delegate the
-# authorization decision to downstream WSGI components. (boolean value)
-#delay_auth_decision = false
-
-# Request timeout value for communicating with Identity API server. (integer
-# value)
-#http_connect_timeout = <None>
-
-# How many times are we trying to reconnect when communicating with Identity
-# API Server. (integer value)
-#http_request_max_retries = 3
-
-# Request environment key where the Swift cache object is stored. When
-# auth_token middleware is deployed with a Swift cache, use this option to have
-# the middleware share a caching backend with swift. Otherwise, use the
-# ``memcached_servers`` option instead. (string value)
-#cache = <None>
-
-# Required if identity server requires client certificate (string value)
-#certfile = <None>
-
-# Required if identity server requires client certificate (string value)
-#keyfile = <None>
-
-# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# Defaults to system CAs. (string value)
-#cafile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# The region in which the identity server can be found. (string value)
-#region_name = <None>
-
-# DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P
-# release. (string value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#signing_dir = <None>
-
-# Optionally specify a list of memcached server(s) to use for caching. If left
-# undefined, tokens will instead be cached in-process. (list value)
-# Deprecated group/name - [keystone_authtoken]/memcache_servers
-#memcached_servers = <None>
-
-# In order to prevent excessive effort spent validating tokens, the middleware
-# caches previously-seen tokens for a configurable duration (in seconds). Set
-# to -1 to disable caching completely. (integer value)
-#token_cache_time = 300
-
-# DEPRECATED: Determines the frequency at which the list of revoked tokens is
-# retrieved from the Identity service (in seconds). A high number of revocation
-# events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in
-# the Ocata release and will be removed in the P release. (integer value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#revocation_cache_time = 10
-
-# (Optional) If defined, indicate whether token data should be authenticated or
-# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
-# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
-# cache. If the value is not one of these options or empty, auth_token will
-# raise an exception on initialization. (string value)
-# Allowed values: None, MAC, ENCRYPT
-#memcache_security_strategy = None
-
-# (Optional, mandatory if memcache_security_strategy is defined) This string is
-# used for key derivation. (string value)
-#memcache_secret_key = <None>
-
-# (Optional) Number of seconds memcached server is considered dead before it is
-# tried again. (integer value)
-#memcache_pool_dead_retry = 300
-
-# (Optional) Maximum total number of open connections to every memcached
-# server. (integer value)
-#memcache_pool_maxsize = 10
-
-# (Optional) Socket timeout in seconds for communicating with a memcached
-# server. (integer value)
-#memcache_pool_socket_timeout = 3
-
-# (Optional) Number of seconds a connection to memcached is held unused in the
-# pool before it is closed. (integer value)
-#memcache_pool_unused_timeout = 60
-
-# (Optional) Number of seconds that an operation will wait to get a memcached
-# client connection from the pool. (integer value)
-#memcache_pool_conn_get_timeout = 10
-
-# (Optional) Use the advanced (eventlet safe) memcached client pool. The
-# advanced pool will only work under python 2.x. (boolean value)
-#memcache_use_advanced_pool = false
-
-# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
-# middleware will not ask for service catalog on token validation and will not
-# set the X-Service-Catalog header. (boolean value)
-#include_service_catalog = true
-
-# Used to control the use and type of token binding. Can be set to: "disabled"
-# to not check token binding. "permissive" (default) to validate binding
-# information if the bind type is of a form known to the server and ignore it
-# if not. "strict" like "permissive" but if the bind type is unknown the token
-# will be rejected. "required" any form of token binding is needed to be
-# allowed. Finally the name of a binding method that must be present in tokens.
-# (string value)
-#enforce_token_bind = permissive
-
-# DEPRECATED: If true, the revocation list will be checked for cached tokens.
-# This requires that PKI tokens are configured on the identity server. (boolean
-# value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#check_revocations_for_cached = false
-
-# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
-# single algorithm or multiple. The algorithms are those supported by Python
-# standard hashlib.new(). The hashes will be tried in the order given, so put
-# the preferred one first for performance. The result of the first hash will be
-# stored in the cache. This will typically be set to multiple values only while
-# migrating from a less secure algorithm to a more secure one. Once all the old
-# tokens are expired this option should be set to a single value for better
-# performance. (list value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#hash_algorithms = md5
-
-# A choice of roles that must be present in a service token. Service tokens are
-# allowed to request that an expired token can be used and so this check should
-# tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present.
-# For backwards compatibility reasons this currently only affects the
-# allow_expired check. (list value)
-#service_token_roles = service
-
-# For backwards compatibility reasons we must let valid service tokens pass
-# that don't pass the service_token_roles check as valid. Setting this true
-# will become the default in a future release and should be enabled if
-# possible. (boolean value)
-#service_token_roles_required = false
-
-# Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
-# (string value)
-#auth_admin_prefix =
-
-# Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
-# (string value)
-#auth_host = 127.0.0.1
-
-# Port of the admin Identity API endpoint. Deprecated, use identity_uri.
-# (integer value)
-#auth_port = 35357
-
-# Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
-# (string value)
-# Allowed values: http, https
-#auth_protocol = https
-
-# Complete admin Identity API endpoint. This should specify the unversioned
-# root endpoint e.g. https://localhost:35357/ (string value)
-#identity_uri = <None>
-
-# This option is deprecated and may be removed in a future release. Single
-# shared secret with the Keystone configuration used for bootstrapping a
-# Keystone installation, or otherwise bypassing the normal authentication
-# process. This option should not be used, use `admin_user` and
-# `admin_password` instead. (string value)
-#admin_token = <None>
-
-# Service username. (string value)
-#admin_user = <None>
-
-# Service user password. (string value)
-#admin_password = <None>
-
-# Service tenant name. (string value)
-#admin_tenant_name = admin
-
-# Authentication type to load (string value)
-# Deprecated group/name - [keystone_authtoken]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
-
-
-[matchmaker_redis]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Host to locate redis. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
-
-# DEPRECATED: Use this port to connect to redis host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
-
-# DEPRECATED: Password for Redis server (optional). (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#password =
-
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
-# [host:port, host1:port ... ] (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#sentinel_hosts =
-
-# Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
-
-# Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
-
-# Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
-
-# Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
-
-
-[meter]
-
-#
-# From ceilometer
-#
-
-# DEPRECATED: Configuration file for defining meter notifications. This option
-# is deprecated and use meter_definitions_dirs to configure meter notification
-# file. Meter definitions configuration file will be sought according to the
-# parameter. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#meter_definitions_cfg_file = <None>
-
-# List directory to find files of defining meter notifications. (multi valued)
-#meter_definitions_dirs = /etc/ceilometer/meters.d
-#meter_definitions_dirs = /build/ceilometer-8KmzqX/ceilometer-9.0.4/ceilometer/data/meters.d
-
-
-[notification]
-
-#
-# From ceilometer
-#
-
-# Number of queues to parallelize workload across. This value should be larger
-# than the number of active notification agents for optimal results. WARNING:
-# Once set, lowering this value may result in lost data. (integer value)
-# Minimum value: 1
-#pipeline_processing_queues = 10
-
-# Acknowledge message when event persistence fails. (boolean value)
-#ack_on_event_error = true
-
-# Enable workload partitioning, allowing multiple notification agents to be run
-# simultaneously. (boolean value)
-#workload_partitioning = false
-
-# Messaging URLs to listen for notifications. Example:
-# rabbit://user:pass@host1:port1[,user:pass@hostN:portN]/virtual_host
-# (DEFAULT/transport_url is used if empty). This is useful when you have
-# dedicate messaging nodes for each service, for example, all nova
-# notifications go to rabbit-nova:5672, while all cinder notifications go to
-# rabbit-cinder:5672. (multi valued)
-#messaging_urls =
-
-# Number of notification messages to wait before publishing them. Batching is
-# advised when transformations are applied in pipeline. (integer value)
-# Minimum value: 1
-#batch_size = 100
-
-# Number of seconds to wait before publishing samples when batch_size is not
-# reached (None means indefinitely) (integer value)
-#batch_timeout = 5
-
-# Number of workers for notification service, default value is 1. (integer
-# value)
-# Minimum value: 1
-# Deprecated group/name - [DEFAULT]/notification_workers
-#workers = 1
-
-# Exchanges name to listen for notifications. (multi valued)
-# Deprecated group/name - [DEFAULT]/http_control_exchanges
-#notification_control_exchanges = nova
-#notification_control_exchanges = glance
-#notification_control_exchanges = neutron
-#notification_control_exchanges = cinder
-#notification_control_exchanges = heat
-#notification_control_exchanges = keystone
-#notification_control_exchanges = sahara
-#notification_control_exchanges = trove
-#notification_control_exchanges = zaqar
-#notification_control_exchanges = swift
-#notification_control_exchanges = ceilometer
-#notification_control_exchanges = magnum
-#notification_control_exchanges = dns
-
-
-[oslo_concurrency]
-
-#
-# From oslo.concurrency
-#
-
-# Enables or disables inter-process locks. (boolean value)
-#disable_process_locking = false
-
-# Directory to use for lock files.  For security, the specified directory
-# should only be writable by the user running the processes that need locking.
-# Defaults to environment variable OSLO_LOCK_PATH. If OSLO_LOCK_PATH is not set
-# in the environment, use the Python tempfile.gettempdir function to find a
-# suitable location. If external locks are used, a lock path must be set.
-# (string value)
-#lock_path = /tmp
-
-
-[oslo_messaging_amqp]
-
-#
-# From oslo.messaging
-#
-
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string
-# value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string
-# value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional)
-# (string value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-
-# robin fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# Pool Size for Kafka Consumers (integer value)
-#pool_size = 10
-
-# The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
-
-# The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating
-# point value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = ceilometer
+password = opnfv_secret
+auth_uri = http://10.167.4.35:5000
+auth_url = http://10.167.4.35:35357
+interface = internal
+token_cache_time = -1
 
 [oslo_messaging_notifications]
 
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are
-# messaging, messagingv2, routing, log, test, noop (multi valued)
-# Deprecated group/name - [DEFAULT]/notification_driver
-#driver =
-
-# A URL representing the messaging driver to use for notifications. If not set,
-# we fall back to the same configuration used for RPC. (string value)
-# Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
-
-# AMQP topic used for OpenStack notifications. (list value)
-# Deprecated group/name - [rpc_notifier2]/topics
-# Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
-
-[oslo_messaging_rabbit]
-
-#
-# From oslo.messaging
-#
-
-# Use durable queues in AMQP. (boolean value)
-# Deprecated group/name - [DEFAULT]/amqp_durable_queues
-# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
-#amqp_durable_queues = false
-
-# Auto-delete queues in AMQP. (boolean value)
-#amqp_auto_delete = false
-
-# Enable SSL (boolean value)
-#ssl = <None>
-
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
-
-# How long to wait before reconnecting in response to an AMQP consumer cancel
-# notification. (floating point value)
-#kombu_reconnect_delay = 1.0
-
-# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
-# be used. This option may not be available in future versions. (string value)
-#kombu_compression = <None>
-
-# How long to wait a missing client before abandoning to send it its replies.
-# This value should not be longer than rpc_response_timeout. (integer value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
-#kombu_missing_consumer_retry_timeout = 60
-
-# Determines how the next RabbitMQ node is chosen in case the one we are
-# currently connected to becomes unavailable. Takes effect only if more than
-# one RabbitMQ node is provided in config. (string value)
-# Allowed values: round-robin, shuffle
-#kombu_failover_strategy = round-robin
-
-# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_host = localhost
-
-# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
-# value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_port = 5672
-
-# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_hosts = $rabbit_host:$rabbit_port
-
-# DEPRECATED: The RabbitMQ userid. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_userid = guest
-
-# DEPRECATED: The RabbitMQ password. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_password = guest
-
-# The RabbitMQ login method. (string value)
-# Allowed values: PLAIN, AMQPLAIN, RABBIT-CR-DEMO
-#rabbit_login_method = AMQPLAIN
-
-# DEPRECATED: The RabbitMQ virtual host. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_virtual_host = /
-
-# How frequently to retry connecting with RabbitMQ. (integer value)
-#rabbit_retry_interval = 1
-
-# How long to backoff for between retries when connecting to RabbitMQ. (integer
-# value)
-#rabbit_retry_backoff = 2
-
-# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
-# (integer value)
-#rabbit_interval_max = 30
-
-# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
-# (infinite retry count). (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#rabbit_max_retries = 0
-
-# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
-# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
-# is no longer controlled by the x-ha-policy argument when declaring a queue.
-# If you just want to make sure that all queues (except those with auto-
-# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
-# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
-#rabbit_ha_queues = false
-
-# Positive integer representing duration in seconds for queue TTL (x-expires).
-# Queues which are unused for the duration of the TTL are automatically
-# deleted. The parameter affects only reply and fanout queues. (integer value)
-# Minimum value: 1
-#rabbit_transient_queues_ttl = 1800
-
-# Specifies the number of messages to prefetch. Setting to zero allows
-# unlimited messages. (integer value)
-#rabbit_qos_prefetch_count = 0
-
-# Number of seconds after which the Rabbit broker is considered down if the
-# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
-# (integer value)
-#heartbeat_timeout_threshold = 60
-
-# How many times within the heartbeat_timeout_threshold we check the
-# heartbeat. (integer value)
-#heartbeat_rate = 2
-
-# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
-#fake_rabbit = false
-
-# Maximum number of channels to allow (integer value)
-#channel_max = <None>
-
-# The maximum byte size for an AMQP frame (integer value)
-#frame_max = <None>
-
-# How often to send heartbeats for consumer's connections (integer value)
-#heartbeat_interval = 3
-
-# Arguments passed to ssl.wrap_socket (dict value)
-#ssl_options = <None>
-
-# Set socket timeout in seconds for connection's socket (floating point value)
-#socket_timeout = 0.25
-
-# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
-# value)
-#tcp_user_timeout = 0.25
-
-# Set the delay for reconnecting to a host that has had a connection error.
-# (floating point value)
-#host_connection_reconnect_delay = 0.25
-
-# Connection factory implementation (string value)
-# Allowed values: new, single, read_write
-#connection_factory = single
-
-# Maximum number of connections to keep queued. (integer value)
-#pool_max_size = 30
-
-# Maximum number of connections to create above `pool_max_size`. (integer
-# value)
-#pool_max_overflow = 0
-
-# Default number of seconds to wait for a connection to become available.
-# (integer value)
-#pool_timeout = 30
-
-# Lifetime of a connection (since creation) in seconds or None for no
-# recycling. Expired connections are closed on acquire. (integer value)
-#pool_recycle = 600
-
-# Threshold at which inactive (since release) connections are considered stale
-# in seconds or None for no staleness. Stale connections are closed on acquire.
-# (integer value)
-#pool_stale = 60
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Allowed values: json, msgpack
-#default_serializer_type = json
-
-# Persist notification messages. (boolean value)
-#notification_persistence = false
-
-# Exchange name for sending notifications (string value)
-#default_notification_exchange = ${control_exchange}_notification
-
-# Max number of unacknowledged messages which RabbitMQ can send to the
-# notification listener. (integer value)
-#notification_listener_prefetch_count = 100
-
-# Reconnecting retry count in case of connectivity problem during sending
-# notification, -1 means infinite retry. (integer value)
-#default_notification_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problem during sending
-# notification message (floating point value)
-#notification_retry_delay = 0.25
-
-# Time to live for rpc queues without consumers in seconds. (integer value)
-#rpc_queue_expiration = 60
-
-# Exchange name for sending RPC messages (string value)
-#default_rpc_exchange = ${control_exchange}_rpc
-
-# Exchange name for receiving RPC replies (string value)
-#rpc_reply_exchange = ${control_exchange}_rpc_reply
-
-# Max number of unacknowledged messages which RabbitMQ can send to the rpc
-# listener. (integer value)
-#rpc_listener_prefetch_count = 100
-
-# Max number of unacknowledged messages which RabbitMQ can send to the rpc
-# reply listener. (integer value)
-#rpc_reply_listener_prefetch_count = 100
-
-# Reconnecting retry count in case of connectivity problem during sending
-# reply. -1 means infinite retry during rpc_timeout (integer value)
-#rpc_reply_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problem during sending
-# reply. (floating point value)
-#rpc_reply_retry_delay = 0.25
-
-# Reconnecting retry count in case of connectivity problem during sending RPC
-# message, -1 means infinite retry. If the actual retry attempt count is not
-# 0, the rpc request could be processed more than once (integer value)
-#default_rpc_retry_attempts = -1
-
-# Reconnecting retry delay in case of connectivity problem during sending RPC
-# message (floating point value)
-#rpc_retry_delay = 0.25
-
-
-[oslo_messaging_zmq]
-
-#
-# From oslo.messaging
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Allowed values: redis, sentinel, dummy
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises a timeout
-# exception when the timeout expires. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about an existing
-# target (< 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It takes effect
-# only with use_router_proxy=False, i.e. when direct connections are used for
-# direct message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover
-# reasons. This option applies only in dynamic connections mode. (integer
-# value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Allowed values: json, msgpack
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in the zmq socket. True means no
-# queue is kept when the server side disconnects. False means the queue and
-# messages are kept even if the server is disconnected; once the server
-# reappears, all accumulated messages are sent to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that the
-# remote end is not available. The default value of -1 (or any other negative
-# value and 0) means to skip any overrides and leave it to OS default.
-# (integer value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case any problems occur: a
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative value) means to retry forever. This option is used only
-# if acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority than the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-
-[oslo_middleware]
-
-#
-# From oslo.middleware.http_proxy_to_wsgi
-#
-
-# Whether the application is behind a proxy or not. This determines if the
-# middleware should parse the headers or not. (boolean value)
-#enable_proxy_headers_parsing = false
-
-
-[oslo_policy]
-
-#
-# From oslo.policy
-#
-
-# The file that defines policies. (string value)
-#policy_file = policy.json
-
-# Default rule. Enforced when a requested rule is not found. (string value)
-#policy_default_rule = default
-
-# Directories where policy configuration files are stored. They can be relative
-# to any directory in the search path defined by the config_dir option, or
-# absolute paths. The file defined by policy_file must exist for these
-# directories to be searched.  Missing or empty directories are ignored. (multi
-# valued)
-#policy_dirs = policy.d
-
-
-[polling]
-
-#
-# From ceilometer
-#
-
-# Configuration file for pipeline definition. (string value)
-#cfg_file = polling.yaml
-
-# Work-load partitioning group prefix. Use only if you want to run multiple
-# polling agents with different config files. For each sub-group of the agent
-# pool with the same partitioning_group_prefix a disjoint subset of pollsters
-# should be loaded. (string value)
-#partitioning_group_prefix = <None>
-
-
-[publisher]
-
-#
-# From ceilometer
-#
-
-# Secret value for signing messages. Leave the value empty if signing is not
-# required, to avoid the computational overhead. (string value)
-# Deprecated group/name - [DEFAULT]/metering_secret
-# Deprecated group/name - [publisher_rpc]/metering_secret
-# Deprecated group/name - [publisher]/metering_secret
-#telemetry_secret = change this for valid signing
-
-
-[publisher_notifier]
-
-#
-# From ceilometer
-#
-
-# The topic that ceilometer uses for metering notifications. (string value)
-#metering_topic = metering
-
-# The topic that ceilometer uses for event notifications. (string value)
-#event_topic = event
-
-# The driver that ceilometer uses for metering notifications. (string value)
-# Deprecated group/name - [publisher_notifier]/metering_driver
-#telemetry_driver = messagingv2
-
-
-[rgw_admin_credentials]
-
-#
-# From ceilometer
-#
-
-# Access key for Radosgw Admin. (string value)
-#access_key = <None>
-
-# Secret key for Radosgw Admin. (string value)
-#secret_key = <None>
-
+topics = notifications
 
 [service_credentials]
 
-#
-# From ceilometer-auth
-#
-
-# Authentication type to load (string value)
-# Deprecated group/name - [service_credentials]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
-
-# Authentication URL (string value)
-#auth_url = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-# Deprecated group/name - [service_credentials]/tenant_id
-#project_id = <None>
-
-# Project name to scope to (string value)
-# Deprecated group/name - [service_credentials]/tenant_name
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
-
-# Optional domain ID to use with v3 and v2 parameters. It will be used for both
-# the user and project domain in v3 and ignored in v2 authentication. (string
-# value)
-#default_domain_id = <None>
-
-# Optional domain name to use with v3 API and v2 parameters. It will be used
-# for both the user and project domain in v3 and ignored in v2 authentication.
-# (string value)
-#default_domain_name = <None>
-
-# User id (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [service_credentials]/user_name
-#username = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User's password (string value)
-#password = <None>
-
-# Region name to use for OpenStack service endpoints. (string value)
-# Deprecated group/name - [DEFAULT]/os_region_name
-#region_name = <None>
-
-# Type of endpoint in Identity service catalog to use for communication with
-# OpenStack services. (string value)
-# Allowed values: public, internal, admin, auth, publicURL, internalURL, adminURL
-# Deprecated group/name - [service_credentials]/os_endpoint_type
-#interface = public
-
-
-[service_types]
-
-#
-# From ceilometer
-#
-
-# Glance service type. (string value)
-#glance = image
-
-# Neutron service type. (string value)
-#neutron = network
-
-# Neutron load balancer version. (string value)
-# Allowed values: v1, v2
-#neutron_lbaas_version = v2
-
-# Nova service type. (string value)
-#nova = compute
-
-# Radosgw service type. (string value)
-#radosgw = <None>
-
-# Swift service type. (string value)
-#swift = object-store
-
-# Cinder service type. (string value)
-# Deprecated group/name - [service_types]/cinderv2
-#cinder = volumev3
-
-
-[vmware]
-
-#
-# From ceilometer
-#
-
-# IP address of the VMware vSphere host. (unknown value)
-#host_ip = 127.0.0.1
-
-# Port of the VMware vSphere host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#host_port = 443
-
-# Username of VMware vSphere. (string value)
-#host_username =
-
-# Password of VMware vSphere. (string value)
-#host_password =
-
-# CA bundle file to use in verifying the vCenter server certificate. (string
-# value)
-#ca_file = <None>
-
-# If true, the vCenter server certificate is not verified. If false, then the
-# default CA truststore is used for verification. This option is ignored if
-# "ca_file" is set. (boolean value)
-#insecure = false
-
-# Number of times a VMware vSphere API may be retried. (integer value)
-#api_retry_count = 10
-
-# Sleep time in seconds for polling an ongoing async task. (floating point
-# value)
-#task_poll_interval = 0.5
-
-# Optional vim service WSDL location, e.g. http://<server>/vimService.wsdl.
-# Optional override of the default location for bug workarounds. (string
-# value)
-#wsdl_location = <None>
-
-
-[xenapi]
-
-#
-# From ceilometer
-#
-
-# URL for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_url = <None>
-
-# Username for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_username = root
-
-# Password for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_password = <None>
+auth_type = password
+user_domain_id = default
+project_domain_id = default
+project_name = service
+username = ceilometer
+password = opnfv_secret
+auth_url = http://10.167.4.35:5000
+token_cache_time = -1
+interface = internal
+region_name = RegionOne
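The `[service_credentials]` values applied by the diff above can be sanity-checked with a minimal sketch using Python's stdlib `configparser`. This snippet is illustrative only and not part of the Salt run; it simply re-parses the block that was written to /etc/ceilometer/ceilometer.conf:

```python
import configparser

# The [service_credentials] section exactly as written by the Salt state above.
APPLIED = """
[service_credentials]
auth_type = password
user_domain_id = default
project_domain_id = default
project_name = service
username = ceilometer
password = opnfv_secret
auth_url = http://10.167.4.35:5000
token_cache_time = -1
interface = internal
region_name = RegionOne
"""

cfg = configparser.ConfigParser()
cfg.read_string(APPLIED)
creds = cfg["service_credentials"]

# Confirm the options ceilometer-agent-compute will read for Keystone auth.
print(creds["auth_type"], creds["interface"], creds["region_name"])
# → password internal RegionOne
```

Parsing the fragment this way catches malformed INI output (e.g. a missing section header or a stray diff marker) before the agent is restarted against it.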

2018-03-30 07:04:59,797 [salt.state       ][INFO    ][16604] Completed state [/etc/ceilometer/ceilometer.conf] at time 07:04:59.797849 duration_in_ms=96.748
2018-03-30 07:04:59,798 [salt.state       ][INFO    ][16604] Running state [/etc/default/ceilometer-agent-compute] at time 07:04:59.798124
2018-03-30 07:04:59,798 [salt.state       ][INFO    ][16604] Executing state file.managed for /etc/default/ceilometer-agent-compute
2018-03-30 07:04:59,818 [salt.fileclient  ][INFO    ][16604] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/default'
2018-03-30 07:04:59,820 [salt.state       ][INFO    ][16604] File changed:
New file
2018-03-30 07:04:59,821 [salt.state       ][INFO    ][16604] Completed state [/etc/default/ceilometer-agent-compute] at time 07:04:59.821014 duration_in_ms=22.889
2018-03-30 07:04:59,821 [salt.state       ][INFO    ][16604] Running state [/etc/ceilometer/pipeline.yaml] at time 07:04:59.821257
2018-03-30 07:04:59,821 [salt.state       ][INFO    ][16604] Executing state file.managed for /etc/ceilometer/pipeline.yaml
2018-03-30 07:04:59,838 [salt.fileclient  ][INFO    ][16604] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/pike/pipeline.yaml'
2018-03-30 07:04:59,875 [salt.state       ][INFO    ][16604] File changed:
New file
2018-03-30 07:04:59,875 [salt.state       ][INFO    ][16604] Completed state [/etc/ceilometer/pipeline.yaml] at time 07:04:59.875466 duration_in_ms=54.208
2018-03-30 07:04:59,875 [salt.state       ][INFO    ][16604] Running state [/etc/ceilometer/event_pipeline.yaml] at time 07:04:59.875713
2018-03-30 07:04:59,875 [salt.state       ][INFO    ][16604] Executing state file.managed for /etc/ceilometer/event_pipeline.yaml
2018-03-30 07:04:59,893 [salt.fileclient  ][INFO    ][16604] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/pike/event_pipeline.yaml'
2018-03-30 07:04:59,925 [salt.state       ][INFO    ][16604] File changed:
New file
2018-03-30 07:04:59,925 [salt.state       ][INFO    ][16604] Completed state [/etc/ceilometer/event_pipeline.yaml] at time 07:04:59.925943 duration_in_ms=50.23
2018-03-30 07:05:00,071 [salt.state       ][INFO    ][16604] Running state [ceilometer-agent-compute] at time 07:05:00.071909
2018-03-30 07:05:00,072 [salt.state       ][INFO    ][16604] Executing state service.running for ceilometer-agent-compute
2018-03-30 07:05:00,074 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['systemctl', 'status', 'ceilometer-agent-compute.service', '-n', '0'] in directory '/root'
2018-03-30 07:05:00,089 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['systemctl', 'is-active', 'ceilometer-agent-compute.service'] in directory '/root'
2018-03-30 07:05:00,104 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['systemctl', 'is-enabled', 'ceilometer-agent-compute.service'] in directory '/root'
2018-03-30 07:05:00,120 [salt.state       ][INFO    ][16604] The service ceilometer-agent-compute is already running
2018-03-30 07:05:00,120 [salt.state       ][INFO    ][16604] Completed state [ceilometer-agent-compute] at time 07:05:00.120280 duration_in_ms=48.372
2018-03-30 07:05:00,120 [salt.state       ][INFO    ][16604] Running state [ceilometer-agent-compute] at time 07:05:00.120473
2018-03-30 07:05:00,120 [salt.state       ][INFO    ][16604] Executing state service.mod_watch for ceilometer-agent-compute
2018-03-30 07:05:00,121 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['systemctl', 'is-active', 'ceilometer-agent-compute.service'] in directory '/root'
2018-03-30 07:05:00,130 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['systemctl', 'is-enabled', 'ceilometer-agent-compute.service'] in directory '/root'
2018-03-30 07:05:00,138 [salt.loaded.int.module.cmdmod][INFO    ][16604] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'ceilometer-agent-compute.service'] in directory '/root'
2018-03-30 07:05:00,253 [salt.state       ][INFO    ][16604] {'ceilometer-agent-compute': True}
2018-03-30 07:05:00,254 [salt.state       ][INFO    ][16604] Completed state [ceilometer-agent-compute] at time 07:05:00.254076 duration_in_ms=133.603
2018-03-30 07:05:00,255 [salt.minion      ][INFO    ][16604] Returning information for job: 20180330070433916489
2018-03-30 07:11:07,111 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command pillar.get with jid 20180330071106729851
2018-03-30 07:11:07,128 [salt.minion      ][INFO    ][18066] Starting a new job with PID 18066
2018-03-30 07:11:07,135 [salt.minion      ][INFO    ][18066] Returning information for job: 20180330071106729851
2018-03-30 07:11:07,760 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command pillar.get with jid 20180330071107376179
2018-03-30 07:11:07,776 [salt.minion      ][INFO    ][18071] Starting a new job with PID 18071
2018-03-30 07:11:07,783 [salt.minion      ][INFO    ][18071] Returning information for job: 20180330071107376179
2018-03-30 07:11:08,452 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command pillar.get with jid 20180330071108037788
2018-03-30 07:11:08,468 [salt.minion      ][INFO    ][18076] Starting a new job with PID 18076
2018-03-30 07:11:08,474 [salt.minion      ][INFO    ][18076] Returning information for job: 20180330071108037788
2018-03-30 07:11:09,172 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command pillar.get with jid 20180330071108791488
2018-03-30 07:11:09,188 [salt.minion      ][INFO    ][18081] Starting a new job with PID 18081
2018-03-30 07:11:09,194 [salt.minion      ][INFO    ][18081] Returning information for job: 20180330071108791488
2018-03-30 07:11:33,452 [salt.minion      ][INFO    ][2771] User sudo_ubuntu Executing command cp.push_dir with jid 20180330071133067681
2018-03-30 07:11:33,470 [salt.minion      ][INFO    ][18094] Starting a new job with PID 18094
