2019-04-30 21:31:48,410 [salt.utils.decorators:613 ][WARNING ][3656] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:06,069 [salt.utils.decorators:613 ][WARNING ][4478] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:19,045 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:35,113 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /boot/grub/grub.cfg - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:44,998 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,018 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,034 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,050 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,066 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,081 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,097 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,113 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,129 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,145 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,161 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,177 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,193 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,209 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,225 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,241 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,755 [salt.utils.decorators:613 ][WARNING ][5997] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:45,827 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /etc/shadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:45,828 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /etc/gshadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:45,829 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /etc/group- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:45,829 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /etc/group - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:45,830 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /etc/passwd- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:45,831 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /etc/passwd - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:45,837 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /etc/gshadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:45,838 [salt.loaded.int.states.file:2298][WARNING ][5997] State for file: /etc/shadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:55,474 [salt.loaded.int.module.cmdmod:730 ][ERROR   ][5997] Command '['ovs-vsctl', 'br-exists', 'br-floating']' failed with return code: 2
2019-04-30 21:32:55,474 [salt.loaded.int.module.cmdmod:736 ][ERROR   ][5997] retcode: 2
2019-04-30 21:32:58,570 [salt.loaded.int.module.cmdmod:730 ][ERROR   ][5997] Command '['umount', '/dev/shm']' failed with return code: 32
2019-04-30 21:32:58,570 [salt.loaded.int.module.cmdmod:734 ][ERROR   ][5997] stderr: umount: /dev/shm: target is busy
        (In some cases useful info about processes that
         use the device is found by lsof(8) or fuser(1).)
2019-04-30 21:32:58,570 [salt.loaded.int.module.cmdmod:736 ][ERROR   ][5997] retcode: 32
2019-04-30 21:33:00,173 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13528] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2019-04-30 21:33:00,189 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13528] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'salt-minion.service'] in directory '/root'
2019-04-30 21:33:00,200 [salt.utils.parsers:1051][WARNING ][2090] Minion received a SIGTERM. Exiting.
2019-04-30 21:33:01,000 [salt.cli.daemons :293 ][INFO    ][13585] Setting up the Salt Minion "cmp001.mcp-ovs-ha.local"
2019-04-30 21:33:01,067 [salt.cli.daemons :82  ][INFO    ][13585] Starting up the Salt Minion
2019-04-30 21:33:01,067 [salt.utils.event :1017][INFO    ][13585] Starting pull socket on /var/run/salt/minion/minion_event_abfab5186f_pull.ipc
2019-04-30 21:33:01,635 [salt.minion      :976 ][INFO    ][13585] Creating minion process manager
2019-04-30 21:33:02,755 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][13585] Executing command ['date', '+%z'] in directory '/root'
2019-04-30 21:33:02,779 [salt.utils.schedule:568 ][INFO    ][13585] Updating job settings for scheduled job: __mine_interval
2019-04-30 21:33:02,848 [salt.minion      :1108][INFO    ][13585] Added mine.update to scheduler
2019-04-30 21:33:02,861 [salt.minion      :1975][INFO    ][13585] Minion is starting as user 'root'
2019-04-30 21:33:02,875 [salt.minion      :2336][INFO    ][13585] Minion is ready to receive requests!
2019-04-30 21:33:55,133 [salt.minion      :1308][INFO    ][13585] User sudo_ubuntu Executing command test.ping with jid 20190430213355120660
2019-04-30 21:33:55,144 [salt.minion      :1432][INFO    ][13710] Starting a new job with PID 13710
2019-04-30 21:33:55,155 [salt.minion      :1711][INFO    ][13710] Returning information for job: 20190430213355120660
2019-04-30 21:33:55,792 [salt.minion      :1308][INFO    ][13585] User sudo_ubuntu Executing command cmd.run with jid 20190430213355779912
2019-04-30 21:33:55,801 [salt.minion      :1432][INFO    ][13715] Starting a new job with PID 13715
2019-04-30 21:33:55,803 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][13715] Executing command 'reboot' in directory '/root'
2019-04-30 21:33:56,044 [salt.utils.parsers:1051][WARNING ][13585] Minion received a SIGTERM. Exiting.
2019-04-30 21:33:56,044 [salt.cli.daemons :82  ][INFO    ][13585] Shutting down the Salt Minion
2019-04-30 21:33:57,628 [salt.minion      :1711][INFO    ][13715] Returning information for job: 20190430213355779912
2019-04-30 21:36:11,024 [salt.cli.daemons :293 ][INFO    ][3184] Setting up the Salt Minion "cmp001.mcp-ovs-ha.local"
2019-04-30 21:36:11,225 [salt.cli.daemons :82  ][INFO    ][3184] Starting up the Salt Minion
2019-04-30 21:36:11,226 [salt.utils.event :1017][INFO    ][3184] Starting pull socket on /var/run/salt/minion/minion_event_abfab5186f_pull.ipc
2019-04-30 21:36:11,812 [salt.minion      :976 ][INFO    ][3184] Creating minion process manager
2019-04-30 21:36:12,956 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][3184] Executing command ['date', '+%z'] in directory '/root'
2019-04-30 21:36:12,968 [salt.utils.schedule:568 ][INFO    ][3184] Updating job settings for scheduled job: __mine_interval
2019-04-30 21:36:12,975 [salt.minion      :1108][INFO    ][3184] Added mine.update to scheduler
2019-04-30 21:36:12,988 [salt.minion      :1975][INFO    ][3184] Minion is starting as user 'root'
2019-04-30 21:36:13,000 [salt.minion      :2336][INFO    ][3184] Minion is ready to receive requests!
2019-04-30 21:36:32,583 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command test.ping with jid 20190430213632566499
2019-04-30 21:36:32,593 [salt.minion      :1432][INFO    ][3527] Starting a new job with PID 3527
2019-04-30 21:36:32,604 [salt.minion      :1711][INFO    ][3527] Returning information for job: 20190430213632566499
2019-04-30 21:36:33,231 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.apply with jid 20190430213633214368
2019-04-30 21:36:33,240 [salt.minion      :1432][INFO    ][3532] Starting a new job with PID 3532
2019-04-30 21:36:37,936 [salt.state       :915 ][INFO    ][3532] Loading fresh modules for state activity
2019-04-30 21:36:40,590 [salt.state       :1780][INFO    ][3532] Running state [/etc/environment] at time 21:36:40.590437
2019-04-30 21:36:40,590 [salt.state       :1813][INFO    ][3532] Executing state file.blockreplace for [/etc/environment]
2019-04-30 21:36:40,591 [salt.state       :300 ][INFO    ][3532] No changes needed to be made
2019-04-30 21:36:40,591 [salt.state       :1951][INFO    ][3532] Completed state [/etc/environment] at time 21:36:40.591186 duration_in_ms=0.749
2019-04-30 21:36:40,591 [salt.state       :1780][INFO    ][3532] Running state [/etc/profile.d] at time 21:36:40.591326
2019-04-30 21:36:40,591 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/profile.d]
2019-04-30 21:36:40,596 [salt.state       :300 ][INFO    ][3532] Directory /etc/profile.d is in the correct state
Directory /etc/profile.d updated
2019-04-30 21:36:40,596 [salt.state       :1951][INFO    ][3532] Completed state [/etc/profile.d] at time 21:36:40.596608 duration_in_ms=5.281
2019-04-30 21:36:40,596 [salt.state       :1780][INFO    ][3532] Running state [/etc/bash.bashrc] at time 21:36:40.596748
2019-04-30 21:36:40,596 [salt.state       :1813][INFO    ][3532] Executing state file.blockreplace for [/etc/bash.bashrc]
2019-04-30 21:36:41,316 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['git', '--version'] in directory '/root'
2019-04-30 21:36:41,492 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'test -f /etc/bash.bashrc' in directory '/root'
2019-04-30 21:36:41,504 [salt.state       :300 ][INFO    ][3532] No changes needed to be made
2019-04-30 21:36:41,505 [salt.state       :1951][INFO    ][3532] Completed state [/etc/bash.bashrc] at time 21:36:41.504985 duration_in_ms=908.235
2019-04-30 21:36:41,505 [salt.state       :1780][INFO    ][3532] Running state [/etc/profile] at time 21:36:41.505241
2019-04-30 21:36:41,505 [salt.state       :1813][INFO    ][3532] Executing state file.blockreplace for [/etc/profile]
2019-04-30 21:36:41,509 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'test -f /etc/profile' in directory '/root'
2019-04-30 21:36:41,517 [salt.state       :300 ][INFO    ][3532] No changes needed to be made
2019-04-30 21:36:41,517 [salt.state       :1951][INFO    ][3532] Completed state [/etc/profile] at time 21:36:41.517554 duration_in_ms=12.311
2019-04-30 21:36:41,517 [salt.state       :1780][INFO    ][3532] Running state [/etc/login.defs] at time 21:36:41.517742
2019-04-30 21:36:41,517 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/login.defs]
2019-04-30 21:36:41,596 [salt.state       :300 ][INFO    ][3532] File /etc/login.defs is in the correct state
2019-04-30 21:36:41,596 [salt.state       :1951][INFO    ][3532] Completed state [/etc/login.defs] at time 21:36:41.596400 duration_in_ms=78.657
2019-04-30 21:36:41,600 [salt.state       :1780][INFO    ][3532] Running state [at] at time 21:36:41.600917
2019-04-30 21:36:41,601 [salt.state       :1813][INFO    ][3532] Executing state pkg.installed for [at]
2019-04-30 21:36:41,601 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:36:42,006 [salt.state       :300 ][INFO    ][3532] All specified packages are already installed
2019-04-30 21:36:42,006 [salt.state       :1951][INFO    ][3532] Completed state [at] at time 21:36:42.006326 duration_in_ms=405.397
2019-04-30 21:36:42,007 [salt.state       :1780][INFO    ][3532] Running state [atd] at time 21:36:42.007835
2019-04-30 21:36:42,008 [salt.state       :1813][INFO    ][3532] Executing state service.running for [atd]
2019-04-30 21:36:42,008 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'status', 'atd.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:42,025 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-active', 'atd.service'] in directory '/root'
2019-04-30 21:36:42,032 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-enabled', 'atd.service'] in directory '/root'
2019-04-30 21:36:42,039 [salt.state       :300 ][INFO    ][3532] The service atd is already running
2019-04-30 21:36:42,039 [salt.state       :1951][INFO    ][3532] Completed state [atd] at time 21:36:42.039885 duration_in_ms=32.049
2019-04-30 21:36:42,041 [salt.state       :1780][INFO    ][3532] Running state [cron] at time 21:36:42.041484
2019-04-30 21:36:42,041 [salt.state       :1813][INFO    ][3532] Executing state pkg.installed for [cron]
2019-04-30 21:36:42,046 [salt.state       :300 ][INFO    ][3532] All specified packages are already installed
2019-04-30 21:36:42,046 [salt.state       :1951][INFO    ][3532] Completed state [cron] at time 21:36:42.046957 duration_in_ms=5.472
2019-04-30 21:36:42,047 [salt.state       :1780][INFO    ][3532] Running state [/etc/at.allow] at time 21:36:42.047662
2019-04-30 21:36:42,047 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/at.allow]
2019-04-30 21:36:42,063 [salt.state       :300 ][INFO    ][3532] File /etc/at.allow is in the correct state
2019-04-30 21:36:42,063 [salt.state       :1951][INFO    ][3532] Completed state [/etc/at.allow] at time 21:36:42.063341 duration_in_ms=15.679
2019-04-30 21:36:42,063 [salt.state       :1780][INFO    ][3532] Running state [/etc/at.deny] at time 21:36:42.063485
2019-04-30 21:36:42,063 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/at.deny]
2019-04-30 21:36:42,063 [salt.state       :300 ][INFO    ][3532] File /etc/at.deny is not present
2019-04-30 21:36:42,063 [salt.state       :1951][INFO    ][3532] Completed state [/etc/at.deny] at time 21:36:42.063965 duration_in_ms=0.479
2019-04-30 21:36:42,064 [salt.state       :1780][INFO    ][3532] Running state [cron] at time 21:36:42.064831
2019-04-30 21:36:42,064 [salt.state       :1813][INFO    ][3532] Executing state service.running for [cron]
2019-04-30 21:36:42,065 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'status', 'cron.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:42,072 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-active', 'cron.service'] in directory '/root'
2019-04-30 21:36:42,077 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-enabled', 'cron.service'] in directory '/root'
2019-04-30 21:36:42,082 [salt.state       :300 ][INFO    ][3532] The service cron is already running
2019-04-30 21:36:42,082 [salt.state       :1951][INFO    ][3532] Completed state [cron] at time 21:36:42.082688 duration_in_ms=17.856
2019-04-30 21:36:42,083 [salt.state       :1780][INFO    ][3532] Running state [/etc/cron.allow] at time 21:36:42.083599
2019-04-30 21:36:42,083 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/cron.allow]
2019-04-30 21:36:42,106 [salt.state       :300 ][INFO    ][3532] File /etc/cron.allow is in the correct state
2019-04-30 21:36:42,106 [salt.state       :1951][INFO    ][3532] Completed state [/etc/cron.allow] at time 21:36:42.106755 duration_in_ms=23.156
2019-04-30 21:36:42,106 [salt.state       :1780][INFO    ][3532] Running state [/etc/cron.deny] at time 21:36:42.106897
2019-04-30 21:36:42,107 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/cron.deny]
2019-04-30 21:36:42,107 [salt.state       :300 ][INFO    ][3532] File /etc/cron.deny is not present
2019-04-30 21:36:42,107 [salt.state       :1951][INFO    ][3532] Completed state [/etc/cron.deny] at time 21:36:42.107344 duration_in_ms=0.447
2019-04-30 21:36:42,108 [salt.state       :1780][INFO    ][3532] Running state [/etc/crontab] at time 21:36:42.108055
2019-04-30 21:36:42,108 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/crontab]
2019-04-30 21:36:42,108 [salt.state       :300 ][INFO    ][3532] File /etc/crontab exists with proper permissions. No changes made.
2019-04-30 21:36:42,108 [salt.state       :1951][INFO    ][3532] Completed state [/etc/crontab] at time 21:36:42.108724 duration_in_ms=0.669
2019-04-30 21:36:42,109 [salt.state       :1780][INFO    ][3532] Running state [/etc/cron.d] at time 21:36:42.109440
2019-04-30 21:36:42,109 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/cron.d]
2019-04-30 21:36:42,110 [salt.state       :300 ][INFO    ][3532] Directory /etc/cron.d is in the correct state
Directory /etc/cron.d updated
2019-04-30 21:36:42,110 [salt.state       :1951][INFO    ][3532] Completed state [/etc/cron.d] at time 21:36:42.110206 duration_in_ms=0.765
2019-04-30 21:36:42,110 [salt.state       :1780][INFO    ][3532] Running state [/etc/cron.daily] at time 21:36:42.110906
2019-04-30 21:36:42,111 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/cron.daily]
2019-04-30 21:36:42,116 [salt.state       :300 ][INFO    ][3532] Directory /etc/cron.daily is in the correct state
Directory /etc/cron.daily updated
2019-04-30 21:36:42,116 [salt.state       :1951][INFO    ][3532] Completed state [/etc/cron.daily] at time 21:36:42.116744 duration_in_ms=5.838
2019-04-30 21:36:42,117 [salt.state       :1780][INFO    ][3532] Running state [/etc/cron.hourly] at time 21:36:42.117456
2019-04-30 21:36:42,117 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/cron.hourly]
2019-04-30 21:36:42,118 [salt.state       :300 ][INFO    ][3532] Directory /etc/cron.hourly is in the correct state
Directory /etc/cron.hourly updated
2019-04-30 21:36:42,118 [salt.state       :1951][INFO    ][3532] Completed state [/etc/cron.hourly] at time 21:36:42.118281 duration_in_ms=0.825
2019-04-30 21:36:42,119 [salt.state       :1780][INFO    ][3532] Running state [/etc/cron.monthly] at time 21:36:42.118989
2019-04-30 21:36:42,119 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/cron.monthly]
2019-04-30 21:36:42,119 [salt.state       :300 ][INFO    ][3532] Directory /etc/cron.monthly is in the correct state
Directory /etc/cron.monthly updated
2019-04-30 21:36:42,119 [salt.state       :1951][INFO    ][3532] Completed state [/etc/cron.monthly] at time 21:36:42.119790 duration_in_ms=0.801
2019-04-30 21:36:42,120 [salt.state       :1780][INFO    ][3532] Running state [/etc/cron.weekly] at time 21:36:42.120526
2019-04-30 21:36:42,120 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/cron.weekly]
2019-04-30 21:36:42,121 [salt.state       :300 ][INFO    ][3532] Directory /etc/cron.weekly is in the correct state
Directory /etc/cron.weekly updated
2019-04-30 21:36:42,121 [salt.state       :1951][INFO    ][3532] Completed state [/etc/cron.weekly] at time 21:36:42.121346 duration_in_ms=0.82
2019-04-30 21:36:42,123 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/apt.conf.d/99prefer_ipv4-salt] at time 21:36:42.123823
2019-04-30 21:36:42,123 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/apt/apt.conf.d/99prefer_ipv4-salt]
2019-04-30 21:36:42,138 [salt.state       :300 ][INFO    ][3532] File /etc/apt/apt.conf.d/99prefer_ipv4-salt is in the correct state
2019-04-30 21:36:42,138 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/apt.conf.d/99prefer_ipv4-salt] at time 21:36:42.138410 duration_in_ms=14.587
2019-04-30 21:36:42,138 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/apt.conf.d/99allow_downgrades-salt] at time 21:36:42.138552
2019-04-30 21:36:42,138 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/apt/apt.conf.d/99allow_downgrades-salt]
2019-04-30 21:36:42,151 [salt.state       :300 ][INFO    ][3532] File /etc/apt/apt.conf.d/99allow_downgrades-salt is in the correct state
2019-04-30 21:36:42,152 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/apt.conf.d/99allow_downgrades-salt] at time 21:36:42.152039 duration_in_ms=13.486
2019-04-30 21:36:42,153 [salt.state       :1780][INFO    ][3532] Running state [linux_repo_prereq_pkgs] at time 21:36:42.153324
2019-04-30 21:36:42,153 [salt.state       :1813][INFO    ][3532] Executing state pkg.installed for [linux_repo_prereq_pkgs]
2019-04-30 21:36:42,158 [salt.state       :300 ][INFO    ][3532] All specified packages are already installed
2019-04-30 21:36:42,158 [salt.state       :1951][INFO    ][3532] Completed state [linux_repo_prereq_pkgs] at time 21:36:42.158806 duration_in_ms=5.481
2019-04-30 21:36:42,158 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/apt.conf.d/99proxies-salt] at time 21:36:42.158948
2019-04-30 21:36:42,159 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt]
2019-04-30 21:36:42,159 [salt.state       :300 ][INFO    ][3532] File /etc/apt/apt.conf.d/99proxies-salt is not present
2019-04-30 21:36:42,159 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/apt.conf.d/99proxies-salt] at time 21:36:42.159368 duration_in_ms=0.42
2019-04-30 21:36:42,159 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack] at time 21:36:42.159501
2019-04-30 21:36:42,159 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack]
2019-04-30 21:36:42,159 [salt.state       :300 ][INFO    ][3532] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack is not present
2019-04-30 21:36:42,159 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack] at time 21:36:42.159917 duration_in_ms=0.415
2019-04-30 21:36:42,160 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/preferences.d/mirantis_openstack] at time 21:36:42.160045
2019-04-30 21:36:42,160 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/apt/preferences.d/mirantis_openstack]
2019-04-30 21:36:42,160 [salt.state       :300 ][INFO    ][3532] File /etc/apt/preferences.d/mirantis_openstack is not present
2019-04-30 21:36:42,160 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/preferences.d/mirantis_openstack] at time 21:36:42.160465 duration_in_ms=0.42
2019-04-30 21:36:42,160 [salt.state       :1780][INFO    ][3532] Running state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -] at time 21:36:42.160613
2019-04-30 21:36:42,160 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -]
2019-04-30 21:36:42,161 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -' in directory '/root'
2019-04-30 21:36:42,312 [salt.state       :300 ][INFO    ][3532] {'pid': 3643, 'retcode': 0, 'stderr': '', 'stdout': 'OK'}
2019-04-30 21:36:42,312 [salt.state       :1951][INFO    ][3532] Completed state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -] at time 21:36:42.312657 duration_in_ms=152.042
2019-04-30 21:36:42,315 [salt.state       :1780][INFO    ][3532] Running state [deb http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial main] at time 21:36:42.315667
2019-04-30 21:36:42,315 [salt.state       :1813][INFO    ][3532] Executing state pkgrepo.managed for [deb http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial main]
2019-04-30 21:36:42,466 [salt.state       :300 ][INFO    ][3532] Configured package repo 'deb http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial main'
2019-04-30 21:36:42,466 [salt.state       :1951][INFO    ][3532] Completed state [deb http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial main] at time 21:36:42.466565 duration_in_ms=150.897
2019-04-30 21:36:42,466 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_backports] at time 21:36:42.466723
2019-04-30 21:36:42,466 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_backports]
2019-04-30 21:36:42,467 [salt.state       :300 ][INFO    ][3532] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_backports is not present
2019-04-30 21:36:42,467 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_backports] at time 21:36:42.467233 duration_in_ms=0.51
2019-04-30 21:36:42,467 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/preferences.d/mirantis_openstack_backports] at time 21:36:42.467363
2019-04-30 21:36:42,467 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/apt/preferences.d/mirantis_openstack_backports]
2019-04-30 21:36:42,467 [salt.state       :300 ][INFO    ][3532] File /etc/apt/preferences.d/mirantis_openstack_backports is not present
2019-04-30 21:36:42,467 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/preferences.d/mirantis_openstack_backports] at time 21:36:42.467783 duration_in_ms=0.42
2019-04-30 21:36:42,467 [salt.state       :1780][INFO    ][3532] Running state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -] at time 21:36:42.467922
2019-04-30 21:36:42,468 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -]
2019-04-30 21:36:42,468 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -' in directory '/root'
2019-04-30 21:36:42,556 [salt.state       :300 ][INFO    ][3532] {'pid': 3757, 'retcode': 0, 'stderr': '', 'stdout': 'OK'}
2019-04-30 21:36:42,556 [salt.state       :1951][INFO    ][3532] Completed state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -] at time 21:36:42.556591 duration_in_ms=88.668
2019-04-30 21:36:42,559 [salt.state       :1780][INFO    ][3532] Running state [deb http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial main] at time 21:36:42.559827
2019-04-30 21:36:42,560 [salt.state       :1813][INFO    ][3532] Executing state pkgrepo.managed for [deb http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial main]
2019-04-30 21:36:42,621 [salt.state       :300 ][INFO    ][3532] Configured package repo 'deb http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial main'
2019-04-30 21:36:42,622 [salt.state       :1951][INFO    ][3532] Completed state [deb http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial main] at time 21:36:42.622009 duration_in_ms=62.182
2019-04-30 21:36:42,622 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/apt.conf.d/99proxies-salt-mcp_glusterfs] at time 21:36:42.622192
2019-04-30 21:36:42,622 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-mcp_glusterfs]
2019-04-30 21:36:42,622 [salt.state       :300 ][INFO    ][3532] File /etc/apt/apt.conf.d/99proxies-salt-mcp_glusterfs is not present
2019-04-30 21:36:42,622 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mcp_glusterfs] at time 21:36:42.622783 duration_in_ms=0.591
2019-04-30 21:36:42,622 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/preferences.d/mcp_glusterfs] at time 21:36:42.622936
2019-04-30 21:36:42,623 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/apt/preferences.d/mcp_glusterfs]
2019-04-30 21:36:42,696 [salt.state       :300 ][INFO    ][3532] File /etc/apt/preferences.d/mcp_glusterfs is in the correct state
2019-04-30 21:36:42,697 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/preferences.d/mcp_glusterfs] at time 21:36:42.697085 duration_in_ms=74.148
2019-04-30 21:36:42,697 [salt.state       :1780][INFO    ][3532] Running state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUlOQkZQdFlGY0JFQURjUU1aOWFTUjFwdGJhRWVxLzhCenU3a2lwYXhWR2gzV2NtYTRMeitRUGUwb2Z4UmYrCm9ZUjIyVVZHbUpjUG5WY0dGYlhKNTB0OEJBeHd0US9UU21HZFE5M2JsNkxPUkFRQlovdWQxTFRyMkhLcGFhMEYKMWJ3cGkvVEFnQldxUDY0SHUwTEJHSVNjMEc1bTMvaG4vYmk2WHhJSU96Si9ML3ZxTGgxZGVWYURyWVlXeTVDbQplOEF1UHRxT0FSS3NlZnZWZ3dscG5iQ3RrK1FhRTY1dmdsOE1YaVlDYU9lblQwN0dEQ3ExeGI3aGtvVmxKUzRiCmY2RjNVTUpWTVZ5NG9FeVlrUnc0U1A3VUxlVDFzNHlyQmVEemJ4aEZhWlJKRnZHcHZNVzNBWnhmcmhYLzVPcFoKU2tRaUZuNS8yajRlSmxpNC9NbXB0QUFIcEdyNHRMQStzNm1IbUE5RTljN3dNZnlGWmUrd01odmFuZ1NEcDA5ZwpTU1pzMDBicUtTbllJSi9vR1JqYXhDbGxrdzRTTWZUT3F2OGwvR094UnMxMnlJY1pEMDhTU21ScG95TGZmcmwxCnpFbHlhaXh0QUpSZW5waFRaeXE3ZVJMUHlRbDZxRURBMVh0THMzVGhLNS80ZmdoTWJlN01PSGlNQjhNd0wxUnoKTFFrbC9QVTA4dnhmdW05a2kvbS9MUDV4cEpvcE5IWnMyTDQ3UmxYMit0cTZGSldiRHZRd09Hb0ZUVG54bWREZgo0RWtNaGxCNE4rdWpadzY0cFNNdDNjMDhOU2h4dHkyVVdwYlNiYzgvZTdQczRCN0x4NmVxNkFtcXJjVUNoZzhjCjkrUEkyTFVxajZtRGJjOGp4cFVzbHZqc0xVMDV4bnE2T0x2NFUvL3BVVFV6NmVJOEZnRmFkVlpjb1FBUkFRQUIKdEJsTVlYVnVZMmh3WVdRZ1VGQkJJR1p2Y2lCSGJIVnpkR1Z5aVFJNEJCTUJBZ0FpQlFKVDdXQlhBaHNEQmdzSgpDQWNEQWdZVkNBSUpDZ3NFRmdJREFRSWVBUUlYZ0FBS0NSQVQ0QnQ3UCtocHFaM0xFQUNZWUM0VWp4d1NIb3VWCjI5NUN4Znd0OVAzMkdjV0piRm1MWXRMSFdWVHQydmROL005WGIwMllnVkxKbS9uVnkydkpocWNNb3dTVzJqTzUKMDNtTHE2NzJnNW1IaXRuSXExbGg0elhjSEV2UDc5YURSUXV2a2dzTEVIamxrMk56WXFkQXNkUmszVGdPTGNLMApTUk03Q3dnd2QvYi9nVlV0UFlyWDFodlFLcmpHSk05VlpGY0NNWDJSbUdBUzBmdDNRSHpFQVBaQ2d5YW1rMHFCCjJlbzh0TFpZbTQyaU12cStaU3hHdWxoemk3Z0prcHYvd05kYVA0RTZvOG83S1kzSklXTW14Qm44UVpVS1lNb2IKemU0UFNCZzRHNGlHMnVlOUlyR0NiOE0xbys0NmFPU3lFSWM5OWJ6bkY4SnJ3N2E4c0J1ZlZSalNaSUU5QS9vTQpFdEIxcFRSRG45bHd4L0R5WWJDVjE2RE9zazZkNXg0UDhjcXZnZGFHemw3Vk5Mdmt3bU1hQ0gwZ1JGSUJyOTM3CnJFVWJlU0pIVHFyVkcwelh6U2FVSEV3WFBaRTBMdDJDOWRFbU1uVDZueEM3RmJKQjFBVFBETng4a0w3TXZCNGoKbDVIa2pyRDFXOVh1MnkwZHp3QUtsZzVqdnp3UDQ2TUpndm0rQVlLODA4WGhPaE1aald6enQ1UE9lRGNEaEdocApSU2ZRdEFoU25Sa090S1MxZHJNQ3QyN2hMWkRFWmZDcC8vYWo3anZWTDhGamFtR0VNZm05MUZMUWE1TFk3T29KCmFZb1psWVV0dGhyWFY2dzVLSEZqRllBS2dBOHRKemViVHZjMVE5YXZDbzJHNXFXTlpxNlRTTHhIRU1vL2c0Z3UKMmFHUlBSckt1OXcySWJvc2c0T3FaL1liWEM4U2pBPT0KPStRbmEKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQ==' | base64 -d | apt-key add -] at time 21:36:42.697250
2019-04-30 21:36:42,697 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUlOQkZQdFlGY0JFQURjUU1aOWFTUjFwdGJhRWVxLzhCenU3a2lwYXhWR2gzV2NtYTRMeitRUGUwb2Z4UmYrCm9ZUjIyVVZHbUpjUG5WY0dGYlhKNTB0OEJBeHd0US9UU21HZFE5M2JsNkxPUkFRQlovdWQxTFRyMkhLcGFhMEYKMWJ3cGkvVEFnQldxUDY0SHUwTEJHSVNjMEc1bTMvaG4vYmk2WHhJSU96Si9ML3ZxTGgxZGVWYURyWVlXeTVDbQplOEF1UHRxT0FSS3NlZnZWZ3dscG5iQ3RrK1FhRTY1dmdsOE1YaVlDYU9lblQwN0dEQ3ExeGI3aGtvVmxKUzRiCmY2RjNVTUpWTVZ5NG9FeVlrUnc0U1A3VUxlVDFzNHlyQmVEemJ4aEZhWlJKRnZHcHZNVzNBWnhmcmhYLzVPcFoKU2tRaUZuNS8yajRlSmxpNC9NbXB0QUFIcEdyNHRMQStzNm1IbUE5RTljN3dNZnlGWmUrd01odmFuZ1NEcDA5ZwpTU1pzMDBicUtTbllJSi9vR1JqYXhDbGxrdzRTTWZUT3F2OGwvR094UnMxMnlJY1pEMDhTU21ScG95TGZmcmwxCnpFbHlhaXh0QUpSZW5waFRaeXE3ZVJMUHlRbDZxRURBMVh0THMzVGhLNS80ZmdoTWJlN01PSGlNQjhNd0wxUnoKTFFrbC9QVTA4dnhmdW05a2kvbS9MUDV4cEpvcE5IWnMyTDQ3UmxYMit0cTZGSldiRHZRd09Hb0ZUVG54bWREZgo0RWtNaGxCNE4rdWpadzY0cFNNdDNjMDhOU2h4dHkyVVdwYlNiYzgvZTdQczRCN0x4NmVxNkFtcXJjVUNoZzhjCjkrUEkyTFVxajZtRGJjOGp4cFVzbHZqc0xVMDV4bnE2T0x2NFUvL3BVVFV6NmVJOEZnRmFkVlpjb1FBUkFRQUIKdEJsTVlYVnVZMmh3WVdRZ1VGQkJJR1p2Y2lCSGJIVnpkR1Z5aVFJNEJCTUJBZ0FpQlFKVDdXQlhBaHNEQmdzSgpDQWNEQWdZVkNBSUpDZ3NFRmdJREFRSWVBUUlYZ0FBS0NSQVQ0QnQ3UCtocHFaM0xFQUNZWUM0VWp4d1NIb3VWCjI5NUN4Znd0OVAzMkdjV0piRm1MWXRMSFdWVHQydmROL005WGIwMllnVkxKbS9uVnkydkpocWNNb3dTVzJqTzUKMDNtTHE2NzJnNW1IaXRuSXExbGg0elhjSEV2UDc5YURSUXV2a2dzTEVIamxrMk56WXFkQXNkUmszVGdPTGNLMApTUk03Q3dnd2QvYi9nVlV0UFlyWDFodlFLcmpHSk05VlpGY0NNWDJSbUdBUzBmdDNRSHpFQVBaQ2d5YW1rMHFCCjJlbzh0TFpZbTQyaU12cStaU3hHdWxoemk3Z0prcHYvd05kYVA0RTZvOG83S1kzSklXTW14Qm44UVpVS1lNb2IKemU0UFNCZzRHNGlHMnVlOUlyR0NiOE0xbys0NmFPU3lFSWM5OWJ6bkY4SnJ3N2E4c0J1ZlZSalNaSUU5QS9vTQpFdEIxcFRSRG45bHd4L0R5WWJDVjE2RE9zazZkNXg0UDhjcXZnZGFHemw3Vk5Mdmt3bU1hQ0gwZ1JGSUJyOTM3CnJFVWJlU0pIVHFyVkcwelh6U2FVSEV3WFBaRTBMdDJDOWRFbU1uVDZueEM3RmJKQjFBVFBETng4a0w3TXZCNGoKbDVIa2pyRDFXOVh1MnkwZHp3QUtsZzVqdnp3UDQ2TUpndm0rQVlLODA4WGhPaE1aald6enQ1UE9lRGNEaEdocApSU2ZRdEFoU25Sa090S1MxZHJNQ3QyN2hMWkRFWmZDcC8vYWo3anZWTDhGamFtR0VNZm05MUZMUWE1TFk3T29KCmFZb1psWVV0dGhyWFY2dzVLSEZqRllBS2dBOHRKemViVHZjMVE5YXZDbzJHNXFXTlpxNlRTTHhIRU1vL2c0Z3UKMmFHUlBSckt1OXcySWJvc2c0T3FaL1liWEM4U2pBPT0KPStRbmEKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQ==' | base64 -d | apt-key add -]
2019-04-30 21:36:42,697 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUlOQkZQdFlGY0JFQURjUU1aOWFTUjFwdGJhRWVxLzhCenU3a2lwYXhWR2gzV2NtYTRMeitRUGUwb2Z4UmYrCm9ZUjIyVVZHbUpjUG5WY0dGYlhKNTB0OEJBeHd0US9UU21HZFE5M2JsNkxPUkFRQlovdWQxTFRyMkhLcGFhMEYKMWJ3cGkvVEFnQldxUDY0SHUwTEJHSVNjMEc1bTMvaG4vYmk2WHhJSU96Si9ML3ZxTGgxZGVWYURyWVlXeTVDbQplOEF1UHRxT0FSS3NlZnZWZ3dscG5iQ3RrK1FhRTY1dmdsOE1YaVlDYU9lblQwN0dEQ3ExeGI3aGtvVmxKUzRiCmY2RjNVTUpWTVZ5NG9FeVlrUnc0U1A3VUxlVDFzNHlyQmVEemJ4aEZhWlJKRnZHcHZNVzNBWnhmcmhYLzVPcFoKU2tRaUZuNS8yajRlSmxpNC9NbXB0QUFIcEdyNHRMQStzNm1IbUE5RTljN3dNZnlGWmUrd01odmFuZ1NEcDA5ZwpTU1pzMDBicUtTbllJSi9vR1JqYXhDbGxrdzRTTWZUT3F2OGwvR094UnMxMnlJY1pEMDhTU21ScG95TGZmcmwxCnpFbHlhaXh0QUpSZW5waFRaeXE3ZVJMUHlRbDZxRURBMVh0THMzVGhLNS80ZmdoTWJlN01PSGlNQjhNd0wxUnoKTFFrbC9QVTA4dnhmdW05a2kvbS9MUDV4cEpvcE5IWnMyTDQ3UmxYMit0cTZGSldiRHZRd09Hb0ZUVG54bWREZgo0RWtNaGxCNE4rdWpadzY0cFNNdDNjMDhOU2h4dHkyVVdwYlNiYzgvZTdQczRCN0x4NmVxNkFtcXJjVUNoZzhjCjkrUEkyTFVxajZtRGJjOGp4cFVzbHZqc0xVMDV4bnE2T0x2NFUvL3BVVFV6NmVJOEZnRmFkVlpjb1FBUkFRQUIKdEJsTVlYVnVZMmh3WVdRZ1VGQkJJR1p2Y2lCSGJIVnpkR1Z5aVFJNEJCTUJBZ0FpQlFKVDdXQlhBaHNEQmdzSgpDQWNEQWdZVkNBSUpDZ3NFRmdJREFRSWVBUUlYZ0FBS0NSQVQ0QnQ3UCtocHFaM0xFQUNZWUM0VWp4d1NIb3VWCjI5NUN4Znd0OVAzMkdjV0piRm1MWXRMSFdWVHQydmROL005WGIwMllnVkxKbS9uVnkydkpocWNNb3dTVzJqTzUKMDNtTHE2NzJnNW1IaXRuSXExbGg0elhjSEV2UDc5YURSUXV2a2dzTEVIamxrMk56WXFkQXNkUmszVGdPTGNLMApTUk03Q3dnd2QvYi9nVlV0UFlyWDFodlFLcmpHSk05VlpGY0NNWDJSbUdBUzBmdDNRSHpFQVBaQ2d5YW1rMHFCCjJlbzh0TFpZbTQyaU12cStaU3hHdWxoemk3Z0prcHYvd05kYVA0RTZvOG83S1kzSklXTW14Qm44UVpVS1lNb2IKemU0UFNCZzRHNGlHMnVlOUlyR0NiOE0xbys0NmFPU3lFSWM5OWJ6bkY4SnJ3N2E4c0J1ZlZSalNaSUU5QS9vTQpFdEIxcFRSRG45bHd4L0R5WWJDVjE2RE9zazZkNXg0UDhjcXZnZGFHemw3Vk5Mdmt3bU1hQ0gwZ1JGSUJyOTM3CnJFVWJlU0pIVHFyVkcwelh6U2FVSEV3WFBaRTBMdDJDOWRFbU1uVDZueEM3RmJKQjFBVFBETng4a0w3TXZCNGoKbDVIa2pyRDFXOVh1MnkwZHp3QUtsZzVqdnp3UDQ2TUpndm0rQVlLODA4WGhPaE1aald6enQ1UE9lRGNEaEdocApSU2ZRdEFoU25Sa090S1MxZHJNQ3QyN2hMWkRFWmZDcC8vYWo3anZWTDhGamFtR0VNZm05MUZMUWE1TFk3T29KCmFZb1psWVV0dGhyWFY2dzVLSEZqRllBS2dBOHRKemViVHZjMVE5YXZDbzJHNXFXTlpxNlRTTHhIRU1vL2c0Z3UKMmFHUlBSckt1OXcySWJvc2c0T3FaL1liWEM4U2pBPT0KPStRbmEKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQ==' | base64 -d | apt-key add -' in directory '/root'
2019-04-30 21:36:42,786 [salt.state       :300 ][INFO    ][3532] {'pid': 3873, 'retcode': 0, 'stderr': '', 'stdout': 'OK'}
2019-04-30 21:36:42,787 [salt.state       :1951][INFO    ][3532] Completed state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUlOQkZQdFlGY0JFQURjUU1aOWFTUjFwdGJhRWVxLzhCenU3a2lwYXhWR2gzV2NtYTRMeitRUGUwb2Z4UmYrCm9ZUjIyVVZHbUpjUG5WY0dGYlhKNTB0OEJBeHd0US9UU21HZFE5M2JsNkxPUkFRQlovdWQxTFRyMkhLcGFhMEYKMWJ3cGkvVEFnQldxUDY0SHUwTEJHSVNjMEc1bTMvaG4vYmk2WHhJSU96Si9ML3ZxTGgxZGVWYURyWVlXeTVDbQplOEF1UHRxT0FSS3NlZnZWZ3dscG5iQ3RrK1FhRTY1dmdsOE1YaVlDYU9lblQwN0dEQ3ExeGI3aGtvVmxKUzRiCmY2RjNVTUpWTVZ5NG9FeVlrUnc0U1A3VUxlVDFzNHlyQmVEemJ4aEZhWlJKRnZHcHZNVzNBWnhmcmhYLzVPcFoKU2tRaUZuNS8yajRlSmxpNC9NbXB0QUFIcEdyNHRMQStzNm1IbUE5RTljN3dNZnlGWmUrd01odmFuZ1NEcDA5ZwpTU1pzMDBicUtTbllJSi9vR1JqYXhDbGxrdzRTTWZUT3F2OGwvR094UnMxMnlJY1pEMDhTU21ScG95TGZmcmwxCnpFbHlhaXh0QUpSZW5waFRaeXE3ZVJMUHlRbDZxRURBMVh0THMzVGhLNS80ZmdoTWJlN01PSGlNQjhNd0wxUnoKTFFrbC9QVTA4dnhmdW05a2kvbS9MUDV4cEpvcE5IWnMyTDQ3UmxYMit0cTZGSldiRHZRd09Hb0ZUVG54bWREZgo0RWtNaGxCNE4rdWpadzY0cFNNdDNjMDhOU2h4dHkyVVdwYlNiYzgvZTdQczRCN0x4NmVxNkFtcXJjVUNoZzhjCjkrUEkyTFVxajZtRGJjOGp4cFVzbHZqc0xVMDV4bnE2T0x2NFUvL3BVVFV6NmVJOEZnRmFkVlpjb1FBUkFRQUIKdEJsTVlYVnVZMmh3WVdRZ1VGQkJJR1p2Y2lCSGJIVnpkR1Z5aVFJNEJCTUJBZ0FpQlFKVDdXQlhBaHNEQmdzSgpDQWNEQWdZVkNBSUpDZ3NFRmdJREFRSWVBUUlYZ0FBS0NSQVQ0QnQ3UCtocHFaM0xFQUNZWUM0VWp4d1NIb3VWCjI5NUN4Znd0OVAzMkdjV0piRm1MWXRMSFdWVHQydmROL005WGIwMllnVkxKbS9uVnkydkpocWNNb3dTVzJqTzUKMDNtTHE2NzJnNW1IaXRuSXExbGg0elhjSEV2UDc5YURSUXV2a2dzTEVIamxrMk56WXFkQXNkUmszVGdPTGNLMApTUk03Q3dnd2QvYi9nVlV0UFlyWDFodlFLcmpHSk05VlpGY0NNWDJSbUdBUzBmdDNRSHpFQVBaQ2d5YW1rMHFCCjJlbzh0TFpZbTQyaU12cStaU3hHdWxoemk3Z0prcHYvd05kYVA0RTZvOG83S1kzSklXTW14Qm44UVpVS1lNb2IKemU0UFNCZzRHNGlHMnVlOUlyR0NiOE0xbys0NmFPU3lFSWM5OWJ6bkY4SnJ3N2E4c0J1ZlZSalNaSUU5QS9vTQpFdEIxcFRSRG45bHd4L0R5WWJDVjE2RE9zazZkNXg0UDhjcXZnZGFHemw3Vk5Mdmt3bU1hQ0gwZ1JGSUJyOTM3CnJFVWJlU0pIVHFyVkcwelh6U2FVSEV3WFBaRTBMdDJDOWRFbU1uVDZueEM3RmJKQjFBVFBETng4a0w3TXZCNGoKbDVIa2pyRDFXOVh1MnkwZHp3QUtsZzVqdnp3UDQ2TUpndm0rQVlLODA4WGhPaE1aald6enQ1UE9lRGNEaEdocApSU2ZRdEFoU25Sa090S1MxZHJNQ3QyN2hMWkRFWmZDcC8vYWo3anZWTDhGamFtR0VNZm05MUZMUWE1TFk3T29KCmFZb1psWVV0dGhyWFY2dzVLSEZqRllBS2dBOHRKemViVHZjMVE5YXZDbzJHNXFXTlpxNlRTTHhIRU1vL2c0Z3UKMmFHUlBSckt1OXcySWJvc2c0T3FaL1liWEM4U2pBPT0KPStRbmEKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQ==' | base64 -d | apt-key add -] at time 21:36:42.786951 duration_in_ms=89.7
2019-04-30 21:36:42,790 [salt.state       :1780][INFO    ][3532] Running state [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main] at time 21:36:42.790197
2019-04-30 21:36:42,790 [salt.state       :1813][INFO    ][3532] Executing state pkgrepo.managed for [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main]
2019-04-30 21:36:42,812 [salt.state       :300 ][INFO    ][3532] Package repo 'deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main' already configured
2019-04-30 21:36:42,812 [salt.state       :1951][INFO    ][3532] Completed state [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main] at time 21:36:42.812830 duration_in_ms=22.633
2019-04-30 21:36:42,813 [salt.state       :1780][INFO    ][3532] Running state [pkg.refresh_db] at time 21:36:42.813030
2019-04-30 21:36:42,813 [salt.state       :1813][INFO    ][3532] Executing state module.run for [pkg.refresh_db]
2019-04-30 21:36:42,813 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:42,813 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 21:36:46,052 [salt.state       :300 ][INFO    ][3532] {'ret': {'http://archive.ubuntu.com/ubuntu xenial-backports InRelease': None, 'http://archive.ubuntu.com/ubuntu xenial-updates InRelease': None, 'http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial InRelease': None, 'http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial InRelease': None, 'http://archive.ubuntu.com/ubuntu xenial-security InRelease': None, 'http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2017.7 xenial InRelease': None, 'http://archive.ubuntu.com/ubuntu xenial InRelease': None, 'http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial InRelease': None}}
2019-04-30 21:36:46,052 [salt.state       :1951][INFO    ][3532] Completed state [pkg.refresh_db] at time 21:36:46.052920 duration_in_ms=3239.888
2019-04-30 21:36:46,053 [salt.state       :1780][INFO    ][3532] Running state [linux_extra_packages_removed] at time 21:36:46.053131
2019-04-30 21:36:46,053 [salt.state       :1813][INFO    ][3532] Executing state pkg.removed for [linux_extra_packages_removed]
2019-04-30 21:36:46,064 [salt.state       :300 ][INFO    ][3532] All specified packages are already absent
2019-04-30 21:36:46,064 [salt.state       :1951][INFO    ][3532] Completed state [linux_extra_packages_removed] at time 21:36:46.064355 duration_in_ms=11.225
2019-04-30 21:36:46,064 [salt.state       :1780][INFO    ][3532] Running state [linux_extra_packages_latest] at time 21:36:46.064515
2019-04-30 21:36:46,064 [salt.state       :1813][INFO    ][3532] Executing state pkg.latest for [linux_extra_packages_latest]
2019-04-30 21:36:46,073 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['apt-cache', '-q', 'policy', 'python-tornado'] in directory '/root'
2019-04-30 21:36:46,116 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['apt-cache', '-q', 'policy', 'smartmontools'] in directory '/root'
2019-04-30 21:36:46,145 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['apt-cache', '-q', 'policy', 'python-pymysql'] in directory '/root'
2019-04-30 21:36:46,174 [salt.state       :300 ][INFO    ][3532] All packages are up-to-date (python-pymysql, python-tornado, smartmontools).
2019-04-30 21:36:46,174 [salt.state       :1951][INFO    ][3532] Completed state [linux_extra_packages_latest] at time 21:36:46.174315 duration_in_ms=109.799
2019-04-30 21:36:46,174 [salt.state       :1780][INFO    ][3532] Running state [UTC] at time 21:36:46.174527
2019-04-30 21:36:46,174 [salt.state       :1813][INFO    ][3532] Executing state timezone.system for [UTC]
2019-04-30 21:36:46,175 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['timedatectl'] in directory '/root'
2019-04-30 21:36:46,231 [salt.state       :300 ][INFO    ][3532] Timezone UTC already set, UTC already set to UTC
2019-04-30 21:36:46,231 [salt.state       :1951][INFO    ][3532] Completed state [UTC] at time 21:36:46.231304 duration_in_ms=56.775
2019-04-30 21:36:46,231 [salt.state       :1780][INFO    ][3532] Running state [/etc/default/grub.d] at time 21:36:46.231561
2019-04-30 21:36:46,231 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/default/grub.d]
2019-04-30 21:36:46,232 [salt.state       :300 ][INFO    ][3532] Directory /etc/default/grub.d is in the correct state
Directory /etc/default/grub.d updated
2019-04-30 21:36:46,232 [salt.state       :1951][INFO    ][3532] Completed state [/etc/default/grub.d] at time 21:36:46.232624 duration_in_ms=1.063
2019-04-30 21:36:46,236 [salt.state       :1780][INFO    ][3532] Running state [/etc/default/grub.d/99-custom-settings.cfg] at time 21:36:46.236359
2019-04-30 21:36:46,236 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/default/grub.d/99-custom-settings.cfg]
2019-04-30 21:36:46,262 [salt.state       :300 ][INFO    ][3532] File /etc/default/grub.d/99-custom-settings.cfg is in the correct state
2019-04-30 21:36:46,262 [salt.state       :1951][INFO    ][3532] Completed state [/etc/default/grub.d/99-custom-settings.cfg] at time 21:36:46.262861 duration_in_ms=26.502
2019-04-30 21:36:46,263 [salt.state       :1780][INFO    ][3532] Running state [/etc/default/grub.d/90-hugepages.cfg] at time 21:36:46.263829
2019-04-30 21:36:46,264 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/default/grub.d/90-hugepages.cfg]
2019-04-30 21:36:46,338 [salt.state       :300 ][INFO    ][3532] File /etc/default/grub.d/90-hugepages.cfg is in the correct state
2019-04-30 21:36:46,338 [salt.state       :1951][INFO    ][3532] Completed state [/etc/default/grub.d/90-hugepages.cfg] at time 21:36:46.338906 duration_in_ms=75.077
2019-04-30 21:36:46,340 [salt.state       :1780][INFO    ][3532] Running state [update-grub] at time 21:36:46.340172
2019-04-30 21:36:46,340 [salt.state       :1813][INFO    ][3532] Executing state cmd.wait for [update-grub]
2019-04-30 21:36:46,340 [salt.state       :300 ][INFO    ][3532] No changes made for update-grub
2019-04-30 21:36:46,340 [salt.state       :1951][INFO    ][3532] Completed state [update-grub] at time 21:36:46.340618 duration_in_ms=0.446
2019-04-30 21:36:46,341 [salt.state       :1780][INFO    ][3532] Running state [/boot/grub/grub.cfg] at time 21:36:46.341547
2019-04-30 21:36:46,341 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/boot/grub/grub.cfg]
2019-04-30 21:36:46,344 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'test -f /boot/grub/grub.cfg' in directory '/root'
2019-04-30 21:36:46,349 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /boot/grub/grub.cfg - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:46,350 [salt.state       :300 ][INFO    ][3532] File /boot/grub/grub.cfg exists with proper permissions. No changes made.
2019-04-30 21:36:46,350 [salt.state       :1951][INFO    ][3532] Completed state [/boot/grub/grub.cfg] at time 21:36:46.350306 duration_in_ms=8.758
2019-04-30 21:36:46,350 [salt.state       :1780][INFO    ][3532] Running state [nf_conntrack] at time 21:36:46.350480
2019-04-30 21:36:46,350 [salt.state       :1813][INFO    ][3532] Executing state kmod.present for [nf_conntrack]
2019-04-30 21:36:46,350 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'lsmod' in directory '/root'
2019-04-30 21:36:46,358 [salt.state       :300 ][INFO    ][3532] Kernel module nf_conntrack is already present
2019-04-30 21:36:46,358 [salt.state       :1951][INFO    ][3532] Completed state [nf_conntrack] at time 21:36:46.358802 duration_in_ms=8.321
2019-04-30 21:36:46,359 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d] at time 21:36:46.359035
2019-04-30 21:36:46,359 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/modprobe.d]
2019-04-30 21:36:46,359 [salt.state       :300 ][INFO    ][3532] Directory /etc/modprobe.d is in the correct state
Directory /etc/modprobe.d updated
2019-04-30 21:36:46,360 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d] at time 21:36:46.360073 duration_in_ms=1.038
2019-04-30 21:36:46,361 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/hfsplus.conf] at time 21:36:46.361432
2019-04-30 21:36:46,361 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/hfsplus.conf]
2019-04-30 21:36:46,449 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/hfsplus.conf is in the correct state
2019-04-30 21:36:46,450 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/hfsplus.conf] at time 21:36:46.449978 duration_in_ms=88.546
2019-04-30 21:36:46,450 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/rds.conf] at time 21:36:46.450908
2019-04-30 21:36:46,451 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/rds.conf]
2019-04-30 21:36:46,534 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/rds.conf is in the correct state
2019-04-30 21:36:46,535 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/rds.conf] at time 21:36:46.535008 duration_in_ms=84.1
2019-04-30 21:36:46,535 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/cramfs.conf] at time 21:36:46.535958
2019-04-30 21:36:46,536 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/cramfs.conf]
2019-04-30 21:36:46,616 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/cramfs.conf is in the correct state
2019-04-30 21:36:46,616 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/cramfs.conf] at time 21:36:46.616813 duration_in_ms=80.854
2019-04-30 21:36:46,617 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/freevxfs.conf] at time 21:36:46.617694
2019-04-30 21:36:46,617 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/freevxfs.conf]
2019-04-30 21:36:46,701 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/freevxfs.conf is in the correct state
2019-04-30 21:36:46,701 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/freevxfs.conf] at time 21:36:46.701546 duration_in_ms=83.851
2019-04-30 21:36:46,702 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/hfs.conf] at time 21:36:46.702629
2019-04-30 21:36:46,702 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/hfs.conf]
2019-04-30 21:36:46,783 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/hfs.conf is in the correct state
2019-04-30 21:36:46,783 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/hfs.conf] at time 21:36:46.783888 duration_in_ms=81.258
2019-04-30 21:36:46,784 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/squashfs.conf] at time 21:36:46.784782
2019-04-30 21:36:46,784 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/squashfs.conf]
2019-04-30 21:36:46,864 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/squashfs.conf is in the correct state
2019-04-30 21:36:46,865 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/squashfs.conf] at time 21:36:46.865111 duration_in_ms=80.329
2019-04-30 21:36:46,866 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/udf.conf] at time 21:36:46.865990
2019-04-30 21:36:46,866 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/udf.conf]
2019-04-30 21:36:46,946 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/udf.conf is in the correct state
2019-04-30 21:36:46,946 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/udf.conf] at time 21:36:46.946742 duration_in_ms=80.752
2019-04-30 21:36:46,947 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/vfat.conf] at time 21:36:46.947619
2019-04-30 21:36:46,947 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/vfat.conf]
2019-04-30 21:36:47,025 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/vfat.conf is in the correct state
2019-04-30 21:36:47,025 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/vfat.conf] at time 21:36:47.025748 duration_in_ms=78.129
2019-04-30 21:36:47,026 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/sctp.conf] at time 21:36:47.026614
2019-04-30 21:36:47,026 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/sctp.conf]
2019-04-30 21:36:47,106 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/sctp.conf is in the correct state
2019-04-30 21:36:47,107 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/sctp.conf] at time 21:36:47.107094 duration_in_ms=80.479
2019-04-30 21:36:47,107 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/jffs2.conf] at time 21:36:47.107969
2019-04-30 21:36:47,108 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/jffs2.conf]
2019-04-30 21:36:47,187 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/jffs2.conf is in the correct state
2019-04-30 21:36:47,187 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/jffs2.conf] at time 21:36:47.187895 duration_in_ms=79.925
2019-04-30 21:36:47,188 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/tipc.conf] at time 21:36:47.188792
2019-04-30 21:36:47,188 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/tipc.conf]
2019-04-30 21:36:47,269 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/tipc.conf is in the correct state
2019-04-30 21:36:47,270 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/tipc.conf] at time 21:36:47.270003 duration_in_ms=81.21
2019-04-30 21:36:47,270 [salt.state       :1780][INFO    ][3532] Running state [/etc/modprobe.d/dccp.conf] at time 21:36:47.270884
2019-04-30 21:36:47,271 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/modprobe.d/dccp.conf]
2019-04-30 21:36:47,350 [salt.state       :300 ][INFO    ][3532] File /etc/modprobe.d/dccp.conf is in the correct state
2019-04-30 21:36:47,351 [salt.state       :1951][INFO    ][3532] Completed state [/etc/modprobe.d/dccp.conf] at time 21:36:47.351013 duration_in_ms=80.129
2019-04-30 21:36:47,351 [salt.state       :1780][INFO    ][3532] Running state [vm.dirty_background_ratio] at time 21:36:47.351159
2019-04-30 21:36:47,351 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [vm.dirty_background_ratio]
2019-04-30 21:36:47,351 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n vm.dirty_background_ratio' in directory '/root'
2019-04-30 21:36:47,357 [salt.state       :300 ][INFO    ][3532] Sysctl value vm.dirty_background_ratio = 5 is already set
2019-04-30 21:36:47,357 [salt.state       :1951][INFO    ][3532] Completed state [vm.dirty_background_ratio] at time 21:36:47.357395 duration_in_ms=6.236
2019-04-30 21:36:47,357 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_congestion_control] at time 21:36:47.357578
2019-04-30 21:36:47,357 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_congestion_control]
2019-04-30 21:36:47,358 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_congestion_control' in directory '/root'
2019-04-30 21:36:47,364 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_congestion_control = yeah is already set
2019-04-30 21:36:47,364 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_congestion_control] at time 21:36:47.364620 duration_in_ms=7.04
2019-04-30 21:36:47,364 [salt.state       :1780][INFO    ][3532] Running state [net.core.netdev_budget] at time 21:36:47.364877
2019-04-30 21:36:47,365 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.core.netdev_budget]
2019-04-30 21:36:47,365 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.core.netdev_budget' in directory '/root'
2019-04-30 21:36:47,372 [salt.state       :300 ][INFO    ][3532] Sysctl value net.core.netdev_budget = 600 is already set
2019-04-30 21:36:47,372 [salt.state       :1951][INFO    ][3532] Completed state [net.core.netdev_budget] at time 21:36:47.372458 duration_in_ms=7.58
2019-04-30 21:36:47,372 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_slow_start_after_idle] at time 21:36:47.372711
2019-04-30 21:36:47,372 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_slow_start_after_idle]
2019-04-30 21:36:47,373 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_slow_start_after_idle' in directory '/root'
2019-04-30 21:36:47,379 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_slow_start_after_idle = 0 is already set
2019-04-30 21:36:47,379 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_slow_start_after_idle] at time 21:36:47.379310 duration_in_ms=6.599
2019-04-30 21:36:47,379 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.icmp_echo_ignore_broadcasts] at time 21:36:47.379499
2019-04-30 21:36:47,379 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.icmp_echo_ignore_broadcasts]
2019-04-30 21:36:47,380 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.icmp_echo_ignore_broadcasts' in directory '/root'
2019-04-30 21:36:47,385 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.icmp_echo_ignore_broadcasts = 1 is already set
2019-04-30 21:36:47,386 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.icmp_echo_ignore_broadcasts] at time 21:36:47.386109 duration_in_ms=6.609
2019-04-30 21:36:47,386 [salt.state       :1780][INFO    ][3532] Running state [net.nf_conntrack_max] at time 21:36:47.386362
2019-04-30 21:36:47,386 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.nf_conntrack_max]
2019-04-30 21:36:47,387 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.nf_conntrack_max' in directory '/root'
2019-04-30 21:36:47,392 [salt.state       :300 ][INFO    ][3532] Sysctl value net.nf_conntrack_max = 1048576 is already set
2019-04-30 21:36:47,393 [salt.state       :1951][INFO    ][3532] Completed state [net.nf_conntrack_max] at time 21:36:47.393152 duration_in_ms=6.788
2019-04-30 21:36:47,393 [salt.state       :1780][INFO    ][3532] Running state [fs.inotify.max_user_instances] at time 21:36:47.393342
2019-04-30 21:36:47,393 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [fs.inotify.max_user_instances]
2019-04-30 21:36:47,393 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n fs.inotify.max_user_instances' in directory '/root'
2019-04-30 21:36:47,399 [salt.state       :300 ][INFO    ][3532] Sysctl value fs.inotify.max_user_instances = 4096 is already set
2019-04-30 21:36:47,399 [salt.state       :1951][INFO    ][3532] Completed state [fs.inotify.max_user_instances] at time 21:36:47.399554 duration_in_ms=6.211
2019-04-30 21:36:47,399 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.icmp_ignore_bogus_error_responses] at time 21:36:47.399814
2019-04-30 21:36:47,400 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.icmp_ignore_bogus_error_responses]
2019-04-30 21:36:47,400 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.icmp_ignore_bogus_error_responses' in directory '/root'
2019-04-30 21:36:47,406 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.icmp_ignore_bogus_error_responses = 1 is already set
2019-04-30 21:36:47,406 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.icmp_ignore_bogus_error_responses] at time 21:36:47.406294 duration_in_ms=6.478
2019-04-30 21:36:47,406 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.default.log_martians] at time 21:36:47.406491
2019-04-30 21:36:47,406 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.default.log_martians]
2019-04-30 21:36:47,407 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.default.log_martians' in directory '/root'
2019-04-30 21:36:47,412 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.default.log_martians = 1 is already set
2019-04-30 21:36:47,412 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.default.log_martians] at time 21:36:47.412598 duration_in_ms=6.105
2019-04-30 21:36:47,412 [salt.state       :1780][INFO    ][3532] Running state [net.core.somaxconn] at time 21:36:47.412861
2019-04-30 21:36:47,413 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.core.somaxconn]
2019-04-30 21:36:47,413 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.core.somaxconn' in directory '/root'
2019-04-30 21:36:47,418 [salt.state       :300 ][INFO    ][3532] Sysctl value net.core.somaxconn = 4096 is already set
2019-04-30 21:36:47,419 [salt.state       :1951][INFO    ][3532] Completed state [net.core.somaxconn] at time 21:36:47.419092 duration_in_ms=6.23
2019-04-30 21:36:47,419 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.all.send_redirects] at time 21:36:47.419298
2019-04-30 21:36:47,419 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.all.send_redirects]
2019-04-30 21:36:47,419 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.all.send_redirects' in directory '/root'
2019-04-30 21:36:47,425 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.all.send_redirects = 0 is already set
2019-04-30 21:36:47,425 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.all.send_redirects] at time 21:36:47.425899 duration_in_ms=6.6
2019-04-30 21:36:47,426 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_tw_reuse] at time 21:36:47.426161
2019-04-30 21:36:47,426 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_tw_reuse]
2019-04-30 21:36:47,426 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_tw_reuse' in directory '/root'
2019-04-30 21:36:47,432 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_tw_reuse = 1 is already set
2019-04-30 21:36:47,432 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_tw_reuse] at time 21:36:47.432361 duration_in_ms=6.2
2019-04-30 21:36:47,432 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.default.rp_filter] at time 21:36:47.432576
2019-04-30 21:36:47,432 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.default.rp_filter]
2019-04-30 21:36:47,433 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.default.rp_filter' in directory '/root'
2019-04-30 21:36:47,438 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.default.rp_filter = 1 is already set
2019-04-30 21:36:47,439 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.default.rp_filter] at time 21:36:47.439002 duration_in_ms=6.425
2019-04-30 21:36:47,439 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_fin_timeout] at time 21:36:47.439263
2019-04-30 21:36:47,439 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_fin_timeout]
2019-04-30 21:36:47,440 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_fin_timeout' in directory '/root'
2019-04-30 21:36:47,445 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_fin_timeout = 30 is already set
2019-04-30 21:36:47,445 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_fin_timeout] at time 21:36:47.445950 duration_in_ms=6.687
2019-04-30 21:36:47,446 [salt.state       :1780][INFO    ][3532] Running state [net.core.netdev_budget_usecs] at time 21:36:47.446144
2019-04-30 21:36:47,446 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.core.netdev_budget_usecs]
2019-04-30 21:36:47,446 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.core.netdev_budget_usecs' in directory '/root'
2019-04-30 21:36:47,452 [salt.state       :300 ][INFO    ][3532] Sysctl value net.core.netdev_budget_usecs = 5000 is already set
2019-04-30 21:36:47,453 [salt.state       :1951][INFO    ][3532] Completed state [net.core.netdev_budget_usecs] at time 21:36:47.452992 duration_in_ms=6.847
2019-04-30 21:36:47,453 [salt.state       :1780][INFO    ][3532] Running state [kernel.panic] at time 21:36:47.453251
2019-04-30 21:36:47,453 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [kernel.panic]
2019-04-30 21:36:47,454 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n kernel.panic' in directory '/root'
2019-04-30 21:36:47,459 [salt.state       :300 ][INFO    ][3532] Sysctl value kernel.panic = 60 is already set
2019-04-30 21:36:47,459 [salt.state       :1951][INFO    ][3532] Completed state [kernel.panic] at time 21:36:47.459895 duration_in_ms=6.644
2019-04-30 21:36:47,460 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_keepalive_probes] at time 21:36:47.460093
2019-04-30 21:36:47,460 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_keepalive_probes]
2019-04-30 21:36:47,460 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_keepalive_probes' in directory '/root'
2019-04-30 21:36:47,466 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_keepalive_probes = 8 is already set
2019-04-30 21:36:47,466 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_keepalive_probes] at time 21:36:47.466715 duration_in_ms=6.621
2019-04-30 21:36:47,467 [salt.state       :1780][INFO    ][3532] Running state [vm.dirty_ratio] at time 21:36:47.466979
2019-04-30 21:36:47,467 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [vm.dirty_ratio]
2019-04-30 21:36:47,467 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n vm.dirty_ratio' in directory '/root'
2019-04-30 21:36:47,473 [salt.state       :300 ][INFO    ][3532] Sysctl value vm.dirty_ratio = 10 is already set
2019-04-30 21:36:47,473 [salt.state       :1951][INFO    ][3532] Completed state [vm.dirty_ratio] at time 21:36:47.473758 duration_in_ms=6.779
2019-04-30 21:36:47,473 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.all.log_martians] at time 21:36:47.473949
2019-04-30 21:36:47,474 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.all.log_martians]
2019-04-30 21:36:47,474 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.all.log_martians' in directory '/root'
2019-04-30 21:36:47,480 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.all.log_martians = 1 is already set
2019-04-30 21:36:47,480 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.all.log_martians] at time 21:36:47.480479 duration_in_ms=6.529
2019-04-30 21:36:47,480 [salt.state       :1780][INFO    ][3532] Running state [fs.suid_dumpable] at time 21:36:47.480741
2019-04-30 21:36:47,480 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [fs.suid_dumpable]
2019-04-30 21:36:47,481 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n fs.suid_dumpable' in directory '/root'
2019-04-30 21:36:47,486 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -w fs.suid_dumpable="0"' in directory '/root'
2019-04-30 21:36:47,493 [salt.state       :300 ][INFO    ][3532] {'fs.suid_dumpable': 0}
2019-04-30 21:36:47,493 [salt.state       :1951][INFO    ][3532] Completed state [fs.suid_dumpable] at time 21:36:47.493233 duration_in_ms=12.492
2019-04-30 21:36:47,493 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.default.accept_redirects] at time 21:36:47.493424
2019-04-30 21:36:47,493 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.default.accept_redirects]
2019-04-30 21:36:47,494 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.default.accept_redirects' in directory '/root'
2019-04-30 21:36:47,499 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.default.accept_redirects = 0 is already set
2019-04-30 21:36:47,500 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.default.accept_redirects] at time 21:36:47.500211 duration_in_ms=6.786
2019-04-30 21:36:47,500 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.default.secure_redirects] at time 21:36:47.500480
2019-04-30 21:36:47,500 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.default.secure_redirects]
2019-04-30 21:36:47,501 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.default.secure_redirects' in directory '/root'
2019-04-30 21:36:47,506 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.default.secure_redirects = 0 is already set
2019-04-30 21:36:47,506 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.default.secure_redirects] at time 21:36:47.506911 duration_in_ms=6.431
2019-04-30 21:36:47,507 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.default.accept_source_route] at time 21:36:47.507102
2019-04-30 21:36:47,507 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.default.accept_source_route]
2019-04-30 21:36:47,507 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.default.accept_source_route' in directory '/root'
2019-04-30 21:36:47,513 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.default.accept_source_route = 0 is already set
2019-04-30 21:36:47,513 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.default.accept_source_route] at time 21:36:47.513747 duration_in_ms=6.644
2019-04-30 21:36:47,514 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_keepalive_intvl] at time 21:36:47.514021
2019-04-30 21:36:47,514 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_keepalive_intvl]
2019-04-30 21:36:47,514 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_keepalive_intvl' in directory '/root'
2019-04-30 21:36:47,520 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_keepalive_intvl = 3 is already set
2019-04-30 21:36:47,520 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_keepalive_intvl] at time 21:36:47.520302 duration_in_ms=6.281
2019-04-30 21:36:47,520 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_keepalive_time] at time 21:36:47.520519
2019-04-30 21:36:47,520 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_keepalive_time]
2019-04-30 21:36:47,521 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_keepalive_time' in directory '/root'
2019-04-30 21:36:47,526 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_keepalive_time = 30 is already set
2019-04-30 21:36:47,526 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_keepalive_time] at time 21:36:47.526935 duration_in_ms=6.415
2019-04-30 21:36:47,527 [salt.state       :1780][INFO    ][3532] Running state [kernel.randomize_va_space] at time 21:36:47.527181
2019-04-30 21:36:47,527 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [kernel.randomize_va_space]
2019-04-30 21:36:47,527 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n kernel.randomize_va_space' in directory '/root'
2019-04-30 21:36:47,533 [salt.state       :300 ][INFO    ][3532] Sysctl value kernel.randomize_va_space = 2 is already set
2019-04-30 21:36:47,533 [salt.state       :1951][INFO    ][3532] Completed state [kernel.randomize_va_space] at time 21:36:47.533800 duration_in_ms=6.619
2019-04-30 21:36:47,534 [salt.state       :1780][INFO    ][3532] Running state [fs.file-max] at time 21:36:47.533992
2019-04-30 21:36:47,534 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [fs.file-max]
2019-04-30 21:36:47,534 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n fs.file-max' in directory '/root'
2019-04-30 21:36:47,539 [salt.state       :300 ][INFO    ][3532] Sysctl value fs.file-max = 124165 is already set
2019-04-30 21:36:47,540 [salt.state       :1951][INFO    ][3532] Completed state [fs.file-max] at time 21:36:47.540111 duration_in_ms=6.118
2019-04-30 21:36:47,540 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_syncookies] at time 21:36:47.540357
2019-04-30 21:36:47,540 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_syncookies]
2019-04-30 21:36:47,541 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_syncookies' in directory '/root'
2019-04-30 21:36:47,546 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_syncookies = 1 is already set
2019-04-30 21:36:47,546 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_syncookies] at time 21:36:47.546799 duration_in_ms=6.442
2019-04-30 21:36:47,547 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_max_syn_backlog] at time 21:36:47.547006
2019-04-30 21:36:47,547 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_max_syn_backlog]
2019-04-30 21:36:47,547 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_max_syn_backlog' in directory '/root'
2019-04-30 21:36:47,553 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_max_syn_backlog = 8192 is already set
2019-04-30 21:36:47,553 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_max_syn_backlog] at time 21:36:47.553558 duration_in_ms=6.551
2019-04-30 21:36:47,553 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.all.rp_filter] at time 21:36:47.553802
2019-04-30 21:36:47,554 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.all.rp_filter]
2019-04-30 21:36:47,554 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.all.rp_filter' in directory '/root'
2019-04-30 21:36:47,559 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.all.rp_filter = 1 is already set
2019-04-30 21:36:47,560 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.all.rp_filter] at time 21:36:47.560035 duration_in_ms=6.232
2019-04-30 21:36:47,560 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.all.accept_source_route] at time 21:36:47.560224
2019-04-30 21:36:47,560 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.all.accept_source_route]
2019-04-30 21:36:47,560 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.all.accept_source_route' in directory '/root'
2019-04-30 21:36:47,566 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.all.accept_source_route = 0 is already set
2019-04-30 21:36:47,566 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.all.accept_source_route] at time 21:36:47.566743 duration_in_ms=6.518
2019-04-30 21:36:47,567 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.tcp_retries2] at time 21:36:47.566987
2019-04-30 21:36:47,567 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.tcp_retries2]
2019-04-30 21:36:47,567 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.tcp_retries2' in directory '/root'
2019-04-30 21:36:47,573 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.tcp_retries2 = 5 is already set
2019-04-30 21:36:47,573 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.tcp_retries2] at time 21:36:47.573720 duration_in_ms=6.733
2019-04-30 21:36:47,573 [salt.state       :1780][INFO    ][3532] Running state [net.core.netdev_max_backlog] at time 21:36:47.573912
2019-04-30 21:36:47,574 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.core.netdev_max_backlog]
2019-04-30 21:36:47,574 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.core.netdev_max_backlog' in directory '/root'
2019-04-30 21:36:47,580 [salt.state       :300 ][INFO    ][3532] Sysctl value net.core.netdev_max_backlog = 261144 is already set
2019-04-30 21:36:47,580 [salt.state       :1951][INFO    ][3532] Completed state [net.core.netdev_max_backlog] at time 21:36:47.580639 duration_in_ms=6.726
2019-04-30 21:36:47,580 [salt.state       :1780][INFO    ][3532] Running state [vm.swappiness] at time 21:36:47.580885
2019-04-30 21:36:47,581 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [vm.swappiness]
2019-04-30 21:36:47,581 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n vm.swappiness' in directory '/root'
2019-04-30 21:36:47,587 [salt.state       :300 ][INFO    ][3532] Sysctl value vm.swappiness = 10 is already set
2019-04-30 21:36:47,587 [salt.state       :1951][INFO    ][3532] Completed state [vm.swappiness] at time 21:36:47.587178 duration_in_ms=6.293
2019-04-30 21:36:47,587 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.all.secure_redirects] at time 21:36:47.587367
2019-04-30 21:36:47,587 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.all.secure_redirects]
2019-04-30 21:36:47,587 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.all.secure_redirects' in directory '/root'
2019-04-30 21:36:47,593 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.all.secure_redirects = 0 is already set
2019-04-30 21:36:47,594 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.all.secure_redirects] at time 21:36:47.594111 duration_in_ms=6.742
2019-04-30 21:36:47,594 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.neigh.default.gc_thresh1] at time 21:36:47.594353
2019-04-30 21:36:47,594 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh1]
2019-04-30 21:36:47,595 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh1' in directory '/root'
2019-04-30 21:36:47,600 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.neigh.default.gc_thresh1 = 4096 is already set
2019-04-30 21:36:47,601 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.neigh.default.gc_thresh1] at time 21:36:47.601140 duration_in_ms=6.786
2019-04-30 21:36:47,601 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.neigh.default.gc_thresh2] at time 21:36:47.601331
2019-04-30 21:36:47,601 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh2]
2019-04-30 21:36:47,601 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh2' in directory '/root'
2019-04-30 21:36:47,607 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.neigh.default.gc_thresh2 = 8192 is already set
2019-04-30 21:36:47,608 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.neigh.default.gc_thresh2] at time 21:36:47.608042 duration_in_ms=6.711
2019-04-30 21:36:47,608 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.neigh.default.gc_thresh3] at time 21:36:47.608289
2019-04-30 21:36:47,608 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh3]
2019-04-30 21:36:47,609 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh3' in directory '/root'
2019-04-30 21:36:47,614 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.neigh.default.gc_thresh3 = 16384 is already set
2019-04-30 21:36:47,614 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.neigh.default.gc_thresh3] at time 21:36:47.614721 duration_in_ms=6.431
2019-04-30 21:36:47,614 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.default.send_redirects] at time 21:36:47.614911
2019-04-30 21:36:47,615 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.default.send_redirects]
2019-04-30 21:36:47,615 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.default.send_redirects' in directory '/root'
2019-04-30 21:36:47,621 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.default.send_redirects = 0 is already set
2019-04-30 21:36:47,621 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.default.send_redirects] at time 21:36:47.621599 duration_in_ms=6.686
2019-04-30 21:36:47,621 [salt.state       :1780][INFO    ][3532] Running state [net.ipv4.conf.all.accept_redirects] at time 21:36:47.621845
2019-04-30 21:36:47,622 [salt.state       :1813][INFO    ][3532] Executing state sysctl.present for [net.ipv4.conf.all.accept_redirects]
2019-04-30 21:36:47,622 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl -n net.ipv4.conf.all.accept_redirects' in directory '/root'
2019-04-30 21:36:47,627 [salt.state       :300 ][INFO    ][3532] Sysctl value net.ipv4.conf.all.accept_redirects = 0 is already set
2019-04-30 21:36:47,628 [salt.state       :1951][INFO    ][3532] Completed state [net.ipv4.conf.all.accept_redirects] at time 21:36:47.628156 duration_in_ms=6.31
2019-04-30 21:36:47,628 [salt.state       :1780][INFO    ][3532] Running state [/mnt/hugepages_1G] at time 21:36:47.628357
2019-04-30 21:36:47,628 [salt.state       :1813][INFO    ][3532] Executing state mount.mounted for [/mnt/hugepages_1G]
2019-04-30 21:36:47,628 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:47,636 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'blkid' in directory '/root'
2019-04-30 21:36:47,671 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:47,680 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -o mode=775,pagesize=1G,remount -t hugetlbfs Hugetlbfs-kvm-1g /mnt/hugepages_1G' in directory '/root'
2019-04-30 21:36:47,686 [salt.state       :300 ][INFO    ][3532] {'umount': 'Forced remount because options (pagesize=1G) changed'}
2019-04-30 21:36:47,687 [salt.state       :1951][INFO    ][3532] Completed state [/mnt/hugepages_1G] at time 21:36:47.687065 duration_in_ms=58.707
2019-04-30 21:36:47,687 [salt.state       :1780][INFO    ][3532] Running state [sysctl vm.nr_hugepages=16] at time 21:36:47.687260
2019-04-30 21:36:47,687 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [sysctl vm.nr_hugepages=16]
2019-04-30 21:36:47,687 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'sysctl vm.nr_hugepages | grep -qE '16'' in directory '/root'
2019-04-30 21:36:47,695 [salt.state       :300 ][INFO    ][3532] unless execution succeeded
2019-04-30 21:36:47,695 [salt.state       :1951][INFO    ][3532] Completed state [sysctl vm.nr_hugepages=16] at time 21:36:47.695247 duration_in_ms=7.986
2019-04-30 21:36:47,695 [salt.state       :1780][INFO    ][3532] Running state [systemctl mask dev-hugepages.mount] at time 21:36:47.695451
2019-04-30 21:36:47,695 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [systemctl mask dev-hugepages.mount]
2019-04-30 21:36:47,696 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'systemctl mask dev-hugepages.mount' in directory '/root'
2019-04-30 21:36:47,774 [salt.state       :300 ][INFO    ][3532] {'pid': 4468, 'retcode': 0, 'stderr': '', 'stdout': ''}
2019-04-30 21:36:47,775 [salt.state       :1951][INFO    ][3532] Completed state [systemctl mask dev-hugepages.mount] at time 21:36:47.774993 duration_in_ms=79.54
2019-04-30 21:36:47,775 [salt.state       :1780][INFO    ][3532] Running state [linux_sysfs_package] at time 21:36:47.775317
2019-04-30 21:36:47,775 [salt.state       :1813][INFO    ][3532] Executing state pkg.installed for [linux_sysfs_package]
2019-04-30 21:36:47,781 [salt.state       :300 ][INFO    ][3532] All specified packages are already installed
2019-04-30 21:36:47,781 [salt.state       :1951][INFO    ][3532] Completed state [linux_sysfs_package] at time 21:36:47.781248 duration_in_ms=5.932
2019-04-30 21:36:47,782 [salt.state       :1780][INFO    ][3532] Running state [/etc/sysfs.d] at time 21:36:47.782578
2019-04-30 21:36:47,782 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/etc/sysfs.d]
2019-04-30 21:36:47,783 [salt.state       :300 ][INFO    ][3532] Directory /etc/sysfs.d is in the correct state
Directory /etc/sysfs.d updated
2019-04-30 21:36:47,783 [salt.state       :1951][INFO    ][3532] Completed state [/etc/sysfs.d] at time 21:36:47.783247 duration_in_ms=0.668
2019-04-30 21:36:47,783 [salt.state       :1780][INFO    ][3532] Running state [ondemand] at time 21:36:47.783384
2019-04-30 21:36:47,783 [salt.state       :1813][INFO    ][3532] Executing state service.dead for [ondemand]
2019-04-30 21:36:47,783 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'status', 'ondemand.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:47,792 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-active', 'ondemand.service'] in directory '/root'
2019-04-30 21:36:47,800 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-enabled', 'ondemand.service'] in directory '/root'
2019-04-30 21:36:47,811 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'runlevel' in directory '/root'
2019-04-30 21:36:47,818 [salt.state       :300 ][INFO    ][3532] The service ondemand is already dead
2019-04-30 21:36:47,819 [salt.state       :1951][INFO    ][3532] Completed state [ondemand] at time 21:36:47.819191 duration_in_ms=35.806
2019-04-30 21:36:47,820 [salt.state       :1780][INFO    ][3532] Running state [/etc/sysfs.d/governor.conf] at time 21:36:47.820701
2019-04-30 21:36:47,820 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/sysfs.d/governor.conf]
2019-04-30 21:36:47,837 [salt.state       :300 ][INFO    ][3532] File /etc/sysfs.d/governor.conf is in the correct state
2019-04-30 21:36:47,837 [salt.state       :1951][INFO    ][3532] Completed state [/etc/sysfs.d/governor.conf] at time 21:36:47.837736 duration_in_ms=17.046
2019-04-30 21:36:47,837 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.837891
2019-04-30 21:36:47,838 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,838 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,838 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,838 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.838718 duration_in_ms=0.826
2019-04-30 21:36:47,838 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.838853
2019-04-30 21:36:47,838 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,839 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,839 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,839 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.839555 duration_in_ms=0.702
2019-04-30 21:36:47,839 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.839689
2019-04-30 21:36:47,839 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,840 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,840 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,840 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.840392 duration_in_ms=0.703
2019-04-30 21:36:47,840 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.840547
2019-04-30 21:36:47,840 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,840 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,841 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,841 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.841251 duration_in_ms=0.705
2019-04-30 21:36:47,841 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.841383
2019-04-30 21:36:47,841 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,841 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,841 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,842 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.842066 duration_in_ms=0.684
2019-04-30 21:36:47,842 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.842193
2019-04-30 21:36:47,842 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,842 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,842 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,842 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.842867 duration_in_ms=0.673
2019-04-30 21:36:47,843 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.842993
2019-04-30 21:36:47,843 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,843 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,843 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,843 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.843667 duration_in_ms=0.673
2019-04-30 21:36:47,843 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.843798
2019-04-30 21:36:47,843 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,844 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,844 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,844 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.844519 duration_in_ms=0.721
2019-04-30 21:36:47,844 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.844652
2019-04-30 21:36:47,844 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,844 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,845 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,845 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.845349 duration_in_ms=0.696
2019-04-30 21:36:47,845 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.845478
2019-04-30 21:36:47,845 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,845 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,846 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,846 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.846165 duration_in_ms=0.687
2019-04-30 21:36:47,846 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.846292
2019-04-30 21:36:47,846 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,846 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,846 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,846 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.846973 duration_in_ms=0.681
2019-04-30 21:36:47,847 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.847104
2019-04-30 21:36:47,847 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,847 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,847 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,847 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.847781 duration_in_ms=0.676
2019-04-30 21:36:47,847 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.847913
2019-04-30 21:36:47,848 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,848 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,848 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,848 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.848602 duration_in_ms=0.689
2019-04-30 21:36:47,848 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.848744
2019-04-30 21:36:47,848 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,849 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,849 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,849 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.849427 duration_in_ms=0.683
2019-04-30 21:36:47,849 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.849553
2019-04-30 21:36:47,849 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,849 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,850 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,850 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.850208 duration_in_ms=0.655
2019-04-30 21:36:47,850 [salt.state       :1780][INFO    ][3532] Running state [sysfs.write] at time 21:36:47.850334
2019-04-30 21:36:47,850 [salt.state       :1813][INFO    ][3532] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,850 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,850 [salt.state       :300 ][INFO    ][3532] {'ret': True}
2019-04-30 21:36:47,851 [salt.state       :1951][INFO    ][3532] Completed state [sysfs.write] at time 21:36:47.851030 duration_in_ms=0.695
2019-04-30 21:36:47,851 [salt.state       :1780][INFO    ][3532] Running state [en_US.UTF-8] at time 21:36:47.851161
2019-04-30 21:36:47,851 [salt.state       :1813][INFO    ][3532] Executing state locale.present for [en_US.UTF-8]
2019-04-30 21:36:47,851 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'locale -a' in directory '/root'
2019-04-30 21:36:47,857 [salt.state       :300 ][INFO    ][3532] Locale en_US.UTF-8 is already present
2019-04-30 21:36:47,857 [salt.state       :1951][INFO    ][3532] Completed state [en_US.UTF-8] at time 21:36:47.857598 duration_in_ms=6.436
2019-04-30 21:36:47,859 [salt.state       :1780][INFO    ][3532] Running state [en_US.UTF-8] at time 21:36:47.859026
2019-04-30 21:36:47,859 [salt.state       :1813][INFO    ][3532] Executing state locale.system for [en_US.UTF-8]
2019-04-30 21:36:47,859 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'localectl' in directory '/root'
2019-04-30 21:36:47,910 [salt.state       :300 ][INFO    ][3532] System locale en_US.UTF-8 already set
2019-04-30 21:36:47,911 [salt.state       :1951][INFO    ][3532] Completed state [en_US.UTF-8] at time 21:36:47.911008 duration_in_ms=51.981
2019-04-30 21:36:47,911 [salt.state       :1780][INFO    ][3532] Running state [root] at time 21:36:47.911227
2019-04-30 21:36:47,911 [salt.state       :1813][INFO    ][3532] Executing state group.present for [root]
2019-04-30 21:36:47,911 [salt.state       :300 ][INFO    ][3532] Group root is present and up to date
2019-04-30 21:36:47,911 [salt.state       :1951][INFO    ][3532] Completed state [root] at time 21:36:47.911830 duration_in_ms=0.603
2019-04-30 21:36:47,913 [salt.state       :1780][INFO    ][3532] Running state [root] at time 21:36:47.913076
2019-04-30 21:36:47,913 [salt.state       :1813][INFO    ][3532] Executing state user.present for [root]
2019-04-30 21:36:47,914 [salt.state       :300 ][INFO    ][3532] User root is present and up to date
2019-04-30 21:36:47,914 [salt.state       :1951][INFO    ][3532] Completed state [root] at time 21:36:47.914188 duration_in_ms=1.112
2019-04-30 21:36:47,915 [salt.state       :1780][INFO    ][3532] Running state [/root] at time 21:36:47.915217
2019-04-30 21:36:47,915 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/root]
2019-04-30 21:36:47,915 [salt.state       :300 ][INFO    ][3532] Directory /root is in the correct state
Directory /root updated
2019-04-30 21:36:47,915 [salt.state       :1951][INFO    ][3532] Completed state [/root] at time 21:36:47.915917 duration_in_ms=0.7
2019-04-30 21:36:47,916 [salt.state       :1780][INFO    ][3532] Running state [/etc/sudoers.d/90-salt-user-root] at time 21:36:47.916052
2019-04-30 21:36:47,916 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/sudoers.d/90-salt-user-root]
2019-04-30 21:36:47,924 [salt.state       :300 ][INFO    ][3532] File /etc/sudoers.d/90-salt-user-root is not present
2019-04-30 21:36:47,924 [salt.state       :1951][INFO    ][3532] Completed state [/etc/sudoers.d/90-salt-user-root] at time 21:36:47.924742 duration_in_ms=8.689
2019-04-30 21:36:47,924 [salt.state       :1780][INFO    ][3532] Running state [ubuntu] at time 21:36:47.924875
2019-04-30 21:36:47,925 [salt.state       :1813][INFO    ][3532] Executing state group.present for [ubuntu]
2019-04-30 21:36:47,925 [salt.state       :300 ][INFO    ][3532] Group ubuntu is present and up to date
2019-04-30 21:36:47,925 [salt.state       :1951][INFO    ][3532] Completed state [ubuntu] at time 21:36:47.925322 duration_in_ms=0.447
2019-04-30 21:36:47,926 [salt.state       :1780][INFO    ][3532] Running state [ubuntu] at time 21:36:47.926205
2019-04-30 21:36:47,926 [salt.state       :1813][INFO    ][3532] Executing state user.present for [ubuntu]
2019-04-30 21:36:47,929 [salt.state       :300 ][INFO    ][3532] User ubuntu is present and up to date
2019-04-30 21:36:47,930 [salt.state       :1951][INFO    ][3532] Completed state [ubuntu] at time 21:36:47.930042 duration_in_ms=3.836
2019-04-30 21:36:47,931 [salt.state       :1780][INFO    ][3532] Running state [/home/ubuntu] at time 21:36:47.931067
2019-04-30 21:36:47,931 [salt.state       :1813][INFO    ][3532] Executing state file.directory for [/home/ubuntu]
2019-04-30 21:36:47,931 [salt.state       :300 ][INFO    ][3532] Directory /home/ubuntu is in the correct state
Directory /home/ubuntu updated
2019-04-30 21:36:47,931 [salt.state       :1951][INFO    ][3532] Completed state [/home/ubuntu] at time 21:36:47.931754 duration_in_ms=0.688
2019-04-30 21:36:47,932 [salt.state       :1780][INFO    ][3532] Running state [/etc/sudoers.d/90-salt-user-ubuntu] at time 21:36:47.932729
2019-04-30 21:36:47,932 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/sudoers.d/90-salt-user-ubuntu]
2019-04-30 21:36:47,962 [salt.state       :300 ][INFO    ][3532] File /etc/sudoers.d/90-salt-user-ubuntu is in the correct state
2019-04-30 21:36:47,962 [salt.state       :1951][INFO    ][3532] Completed state [/etc/sudoers.d/90-salt-user-ubuntu] at time 21:36:47.962235 duration_in_ms=29.507
2019-04-30 21:36:47,962 [salt.state       :1780][INFO    ][3532] Running state [/etc/security/limits.d/90-salt-cis.conf] at time 21:36:47.962380
2019-04-30 21:36:47,962 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/security/limits.d/90-salt-cis.conf]
2019-04-30 21:36:48,037 [salt.state       :300 ][INFO    ][3532] File /etc/security/limits.d/90-salt-cis.conf is in the correct state
2019-04-30 21:36:48,037 [salt.state       :1951][INFO    ][3532] Completed state [/etc/security/limits.d/90-salt-cis.conf] at time 21:36:48.037677 duration_in_ms=75.297
2019-04-30 21:36:48,037 [salt.state       :1780][INFO    ][3532] Running state [/etc/security/limits.d/90-salt-default.conf] at time 21:36:48.037814
2019-04-30 21:36:48,037 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/security/limits.d/90-salt-default.conf]
2019-04-30 21:36:48,104 [salt.state       :300 ][INFO    ][3532] File /etc/security/limits.d/90-salt-default.conf is in the correct state
2019-04-30 21:36:48,104 [salt.state       :1951][INFO    ][3532] Completed state [/etc/security/limits.d/90-salt-default.conf] at time 21:36:48.104458 duration_in_ms=66.643
2019-04-30 21:36:48,104 [salt.state       :1780][INFO    ][3532] Running state [autofs] at time 21:36:48.104604
2019-04-30 21:36:48,104 [salt.state       :1813][INFO    ][3532] Executing state service.disabled for [autofs]
2019-04-30 21:36:48,105 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'status', 'autofs.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:48,113 [salt.state       :300 ][INFO    ][3532] The named service autofs is not available
2019-04-30 21:36:48,114 [salt.state       :1951][INFO    ][3532] Completed state [autofs] at time 21:36:48.113996 duration_in_ms=9.391
2019-04-30 21:36:48,114 [salt.state       :1780][INFO    ][3532] Running state [/etc/systemd/system.conf.d/90-salt.conf] at time 21:36:48.114184
2019-04-30 21:36:48,114 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/systemd/system.conf.d/90-salt.conf]
2019-04-30 21:36:48,190 [salt.state       :300 ][INFO    ][3532] File /etc/systemd/system.conf.d/90-salt.conf is in the correct state
2019-04-30 21:36:48,191 [salt.state       :1951][INFO    ][3532] Completed state [/etc/systemd/system.conf.d/90-salt.conf] at time 21:36:48.191039 duration_in_ms=76.853
2019-04-30 21:36:48,192 [salt.state       :1780][INFO    ][3532] Running state [service.systemctl_reload] at time 21:36:48.192511
2019-04-30 21:36:48,192 [salt.state       :1813][INFO    ][3532] Executing state module.wait for [service.systemctl_reload]
2019-04-30 21:36:48,193 [salt.state       :300 ][INFO    ][3532] No changes made for service.systemctl_reload
2019-04-30 21:36:48,193 [salt.state       :1951][INFO    ][3532] Completed state [service.systemctl_reload] at time 21:36:48.193242 duration_in_ms=0.731
2019-04-30 21:36:48,193 [salt.state       :1780][INFO    ][3532] Running state [/etc/shadow] at time 21:36:48.193439
2019-04-30 21:36:48,193 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/shadow]
2019-04-30 21:36:48,193 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /etc/shadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,194 [salt.state       :300 ][INFO    ][3532] File /etc/shadow exists with proper permissions. No changes made.
2019-04-30 21:36:48,194 [salt.state       :1951][INFO    ][3532] Completed state [/etc/shadow] at time 21:36:48.194610 duration_in_ms=1.17
2019-04-30 21:36:48,194 [salt.state       :1780][INFO    ][3532] Running state [/etc/gshadow] at time 21:36:48.194807
2019-04-30 21:36:48,195 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/gshadow]
2019-04-30 21:36:48,195 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /etc/gshadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,195 [salt.state       :300 ][INFO    ][3532] File /etc/gshadow exists with proper permissions. No changes made.
2019-04-30 21:36:48,195 [salt.state       :1951][INFO    ][3532] Completed state [/etc/gshadow] at time 21:36:48.195888 duration_in_ms=1.081
2019-04-30 21:36:48,196 [salt.state       :1780][INFO    ][3532] Running state [/etc/group-] at time 21:36:48.196080
2019-04-30 21:36:48,196 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/group-]
2019-04-30 21:36:48,196 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /etc/group- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,196 [salt.state       :300 ][INFO    ][3532] File /etc/group- exists with proper permissions. No changes made.
2019-04-30 21:36:48,197 [salt.state       :1951][INFO    ][3532] Completed state [/etc/group-] at time 21:36:48.197136 duration_in_ms=1.056
2019-04-30 21:36:48,197 [salt.state       :1780][INFO    ][3532] Running state [/etc/group] at time 21:36:48.197330
2019-04-30 21:36:48,197 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/group]
2019-04-30 21:36:48,197 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /etc/group - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,198 [salt.state       :300 ][INFO    ][3532] File /etc/group exists with proper permissions. No changes made.
2019-04-30 21:36:48,198 [salt.state       :1951][INFO    ][3532] Completed state [/etc/group] at time 21:36:48.198397 duration_in_ms=1.066
2019-04-30 21:36:48,198 [salt.state       :1780][INFO    ][3532] Running state [/etc/passwd-] at time 21:36:48.198583
2019-04-30 21:36:48,198 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/passwd-]
2019-04-30 21:36:48,199 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /etc/passwd- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,199 [salt.state       :300 ][INFO    ][3532] File /etc/passwd- exists with proper permissions. No changes made.
2019-04-30 21:36:48,199 [salt.state       :1951][INFO    ][3532] Completed state [/etc/passwd-] at time 21:36:48.199632 duration_in_ms=1.048
2019-04-30 21:36:48,199 [salt.state       :1780][INFO    ][3532] Running state [/etc/passwd] at time 21:36:48.199821
2019-04-30 21:36:48,200 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/passwd]
2019-04-30 21:36:48,200 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /etc/passwd - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,200 [salt.state       :300 ][INFO    ][3532] File /etc/passwd exists with proper permissions. No changes made.
2019-04-30 21:36:48,200 [salt.state       :1951][INFO    ][3532] Completed state [/etc/passwd] at time 21:36:48.200859 duration_in_ms=1.038
2019-04-30 21:36:48,201 [salt.state       :1780][INFO    ][3532] Running state [/var/tmp/dhcp_agent.patch] at time 21:36:48.201059
2019-04-30 21:36:48,201 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/var/tmp/dhcp_agent.patch]
2019-04-30 21:36:48,207 [salt.state       :300 ][INFO    ][3532] File /var/tmp/dhcp_agent.patch is in the correct state
2019-04-30 21:36:48,207 [salt.state       :1951][INFO    ][3532] Completed state [/var/tmp/dhcp_agent.patch] at time 21:36:48.207385 duration_in_ms=6.326
2019-04-30 21:36:48,207 [salt.state       :1780][INFO    ][3532] Running state [/etc/gshadow-] at time 21:36:48.207581
2019-04-30 21:36:48,207 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/gshadow-]
2019-04-30 21:36:48,208 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /etc/gshadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,208 [salt.state       :300 ][INFO    ][3532] File /etc/gshadow- exists with proper permissions. No changes made.
2019-04-30 21:36:48,208 [salt.state       :1951][INFO    ][3532] Completed state [/etc/gshadow-] at time 21:36:48.208642 duration_in_ms=1.061
2019-04-30 21:36:48,208 [salt.state       :1780][INFO    ][3532] Running state [/etc/shadow-] at time 21:36:48.208830
2019-04-30 21:36:48,209 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/shadow-]
2019-04-30 21:36:48,209 [salt.loaded.int.states.file:2298][WARNING ][3532] State for file: /etc/shadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,209 [salt.state       :300 ][INFO    ][3532] File /etc/shadow- exists with proper permissions. No changes made.
2019-04-30 21:36:48,209 [salt.state       :1951][INFO    ][3532] Completed state [/etc/shadow-] at time 21:36:48.209882 duration_in_ms=1.052
2019-04-30 21:36:48,210 [salt.state       :1780][INFO    ][3532] Running state [/etc/issue] at time 21:36:48.210075
2019-04-30 21:36:48,210 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/issue]
2019-04-30 21:36:48,213 [salt.state       :300 ][INFO    ][3532] File /etc/issue is in the correct state
2019-04-30 21:36:48,213 [salt.state       :1951][INFO    ][3532] Completed state [/etc/issue] at time 21:36:48.213263 duration_in_ms=3.187
2019-04-30 21:36:48,213 [salt.state       :1780][INFO    ][3532] Running state [/etc/hostname] at time 21:36:48.213460
2019-04-30 21:36:48,213 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/hostname]
2019-04-30 21:36:48,242 [salt.state       :300 ][INFO    ][3532] File /etc/hostname is in the correct state
2019-04-30 21:36:48,242 [salt.state       :1951][INFO    ][3532] Completed state [/etc/hostname] at time 21:36:48.242308 duration_in_ms=28.847
2019-04-30 21:36:48,243 [salt.state       :1780][INFO    ][3532] Running state [hostname cmp001] at time 21:36:48.243406
2019-04-30 21:36:48,243 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [hostname cmp001]
2019-04-30 21:36:48,243 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'test "$(hostname)" = "cmp001"' in directory '/root'
2019-04-30 21:36:48,251 [salt.state       :300 ][INFO    ][3532] unless execution succeeded
2019-04-30 21:36:48,251 [salt.state       :1951][INFO    ][3532] Completed state [hostname cmp001] at time 21:36:48.251482 duration_in_ms=8.075
2019-04-30 21:36:48,252 [salt.state       :1780][INFO    ][3532] Running state [mdb02] at time 21:36:48.251977
2019-04-30 21:36:48,252 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mdb02]
2019-04-30 21:36:48,252 [salt.state       :300 ][INFO    ][3532] Host mdb02 (10.167.4.33) already present
2019-04-30 21:36:48,253 [salt.state       :1951][INFO    ][3532] Completed state [mdb02] at time 21:36:48.253033 duration_in_ms=1.056
2019-04-30 21:36:48,253 [salt.state       :1780][INFO    ][3532] Running state [mdb02.mcp-ovs-ha.local] at time 21:36:48.253351
2019-04-30 21:36:48,253 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mdb02.mcp-ovs-ha.local]
2019-04-30 21:36:48,253 [salt.state       :300 ][INFO    ][3532] Host mdb02.mcp-ovs-ha.local (10.167.4.33) already present
2019-04-30 21:36:48,254 [salt.state       :1951][INFO    ][3532] Completed state [mdb02.mcp-ovs-ha.local] at time 21:36:48.254151 duration_in_ms=0.8
2019-04-30 21:36:48,254 [salt.state       :1780][INFO    ][3532] Running state [mdb03] at time 21:36:48.254474
2019-04-30 21:36:48,254 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mdb03]
2019-04-30 21:36:48,255 [salt.state       :300 ][INFO    ][3532] Host mdb03 (10.167.4.34) already present
2019-04-30 21:36:48,255 [salt.state       :1951][INFO    ][3532] Completed state [mdb03] at time 21:36:48.255260 duration_in_ms=0.786
2019-04-30 21:36:48,255 [salt.state       :1780][INFO    ][3532] Running state [mdb03.mcp-ovs-ha.local] at time 21:36:48.255569
2019-04-30 21:36:48,255 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mdb03.mcp-ovs-ha.local]
2019-04-30 21:36:48,256 [salt.state       :300 ][INFO    ][3532] Host mdb03.mcp-ovs-ha.local (10.167.4.34) already present
2019-04-30 21:36:48,256 [salt.state       :1951][INFO    ][3532] Completed state [mdb03.mcp-ovs-ha.local] at time 21:36:48.256354 duration_in_ms=0.785
2019-04-30 21:36:48,256 [salt.state       :1780][INFO    ][3532] Running state [mdb01] at time 21:36:48.256689
2019-04-30 21:36:48,256 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mdb01]
2019-04-30 21:36:48,257 [salt.state       :300 ][INFO    ][3532] Host mdb01 (10.167.4.32) already present
2019-04-30 21:36:48,257 [salt.state       :1951][INFO    ][3532] Completed state [mdb01] at time 21:36:48.257477 duration_in_ms=0.788
2019-04-30 21:36:48,257 [salt.state       :1780][INFO    ][3532] Running state [mdb01.mcp-ovs-ha.local] at time 21:36:48.257786
2019-04-30 21:36:48,258 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mdb01.mcp-ovs-ha.local]
2019-04-30 21:36:48,258 [salt.state       :300 ][INFO    ][3532] Host mdb01.mcp-ovs-ha.local (10.167.4.32) already present
2019-04-30 21:36:48,258 [salt.state       :1951][INFO    ][3532] Completed state [mdb01.mcp-ovs-ha.local] at time 21:36:48.258575 duration_in_ms=0.79
2019-04-30 21:36:48,258 [salt.state       :1780][INFO    ][3532] Running state [mdb] at time 21:36:48.258885
2019-04-30 21:36:48,259 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mdb]
2019-04-30 21:36:48,259 [salt.state       :300 ][INFO    ][3532] Host mdb (10.167.4.31) already present
2019-04-30 21:36:48,259 [salt.state       :1951][INFO    ][3532] Completed state [mdb] at time 21:36:48.259671 duration_in_ms=0.786
2019-04-30 21:36:48,260 [salt.state       :1780][INFO    ][3532] Running state [mdb.mcp-ovs-ha.local] at time 21:36:48.259980
2019-04-30 21:36:48,260 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mdb.mcp-ovs-ha.local]
2019-04-30 21:36:48,261 [salt.state       :300 ][INFO    ][3532] Host mdb.mcp-ovs-ha.local (10.167.4.31) already present
2019-04-30 21:36:48,261 [salt.state       :1951][INFO    ][3532] Completed state [mdb.mcp-ovs-ha.local] at time 21:36:48.261550 duration_in_ms=1.57
2019-04-30 21:36:48,261 [salt.state       :1780][INFO    ][3532] Running state [cfg01] at time 21:36:48.261860
2019-04-30 21:36:48,262 [salt.state       :1813][INFO    ][3532] Executing state host.present for [cfg01]
2019-04-30 21:36:48,262 [salt.state       :300 ][INFO    ][3532] Host cfg01 (10.167.4.11) already present
2019-04-30 21:36:48,262 [salt.state       :1951][INFO    ][3532] Completed state [cfg01] at time 21:36:48.262625 duration_in_ms=0.764
2019-04-30 21:36:48,262 [salt.state       :1780][INFO    ][3532] Running state [cfg01.mcp-ovs-ha.local] at time 21:36:48.262928
2019-04-30 21:36:48,263 [salt.state       :1813][INFO    ][3532] Executing state host.present for [cfg01.mcp-ovs-ha.local]
2019-04-30 21:36:48,263 [salt.state       :300 ][INFO    ][3532] Host cfg01.mcp-ovs-ha.local (10.167.4.11) already present
2019-04-30 21:36:48,263 [salt.state       :1951][INFO    ][3532] Completed state [cfg01.mcp-ovs-ha.local] at time 21:36:48.263679 duration_in_ms=0.751
2019-04-30 21:36:48,264 [salt.state       :1780][INFO    ][3532] Running state [prx01] at time 21:36:48.263986
2019-04-30 21:36:48,264 [salt.state       :1813][INFO    ][3532] Executing state host.present for [prx01]
2019-04-30 21:36:48,264 [salt.state       :300 ][INFO    ][3532] Host prx01 (10.167.4.14) already present
2019-04-30 21:36:48,264 [salt.state       :1951][INFO    ][3532] Completed state [prx01] at time 21:36:48.264769 duration_in_ms=0.783
2019-04-30 21:36:48,265 [salt.state       :1780][INFO    ][3532] Running state [prx01.mcp-ovs-ha.local] at time 21:36:48.265076
2019-04-30 21:36:48,265 [salt.state       :1813][INFO    ][3532] Executing state host.present for [prx01.mcp-ovs-ha.local]
2019-04-30 21:36:48,265 [salt.state       :300 ][INFO    ][3532] Host prx01.mcp-ovs-ha.local (10.167.4.14) already present
2019-04-30 21:36:48,265 [salt.state       :1951][INFO    ][3532] Completed state [prx01.mcp-ovs-ha.local] at time 21:36:48.265828 duration_in_ms=0.752
2019-04-30 21:36:48,266 [salt.state       :1780][INFO    ][3532] Running state [kvm01] at time 21:36:48.266140
2019-04-30 21:36:48,266 [salt.state       :1813][INFO    ][3532] Executing state host.present for [kvm01]
2019-04-30 21:36:48,266 [salt.state       :300 ][INFO    ][3532] Host kvm01 (10.167.4.20) already present
2019-04-30 21:36:48,266 [salt.state       :1951][INFO    ][3532] Completed state [kvm01] at time 21:36:48.266898 duration_in_ms=0.758
2019-04-30 21:36:48,267 [salt.state       :1780][INFO    ][3532] Running state [kvm01.mcp-ovs-ha.local] at time 21:36:48.267205
2019-04-30 21:36:48,267 [salt.state       :1813][INFO    ][3532] Executing state host.present for [kvm01.mcp-ovs-ha.local]
2019-04-30 21:36:48,267 [salt.state       :300 ][INFO    ][3532] Host kvm01.mcp-ovs-ha.local (10.167.4.20) already present
2019-04-30 21:36:48,267 [salt.state       :1951][INFO    ][3532] Completed state [kvm01.mcp-ovs-ha.local] at time 21:36:48.267954 duration_in_ms=0.749
2019-04-30 21:36:48,268 [salt.state       :1780][INFO    ][3532] Running state [kvm03] at time 21:36:48.268277
2019-04-30 21:36:48,268 [salt.state       :1813][INFO    ][3532] Executing state host.present for [kvm03]
2019-04-30 21:36:48,268 [salt.state       :300 ][INFO    ][3532] Host kvm03 (10.167.4.22) already present
2019-04-30 21:36:48,269 [salt.state       :1951][INFO    ][3532] Completed state [kvm03] at time 21:36:48.269033 duration_in_ms=0.755
2019-04-30 21:36:48,269 [salt.state       :1780][INFO    ][3532] Running state [kvm03.mcp-ovs-ha.local] at time 21:36:48.269341
2019-04-30 21:36:48,269 [salt.state       :1813][INFO    ][3532] Executing state host.present for [kvm03.mcp-ovs-ha.local]
2019-04-30 21:36:48,269 [salt.state       :300 ][INFO    ][3532] Host kvm03.mcp-ovs-ha.local (10.167.4.22) already present
2019-04-30 21:36:48,270 [salt.state       :1951][INFO    ][3532] Completed state [kvm03.mcp-ovs-ha.local] at time 21:36:48.270067 duration_in_ms=0.726
2019-04-30 21:36:48,270 [salt.state       :1780][INFO    ][3532] Running state [kvm02] at time 21:36:48.270371
2019-04-30 21:36:48,270 [salt.state       :1813][INFO    ][3532] Executing state host.present for [kvm02]
2019-04-30 21:36:48,270 [salt.state       :300 ][INFO    ][3532] Host kvm02 (10.167.4.21) already present
2019-04-30 21:36:48,271 [salt.state       :1951][INFO    ][3532] Completed state [kvm02] at time 21:36:48.271098 duration_in_ms=0.727
2019-04-30 21:36:48,271 [salt.state       :1780][INFO    ][3532] Running state [kvm02.mcp-ovs-ha.local] at time 21:36:48.271398
2019-04-30 21:36:48,271 [salt.state       :1813][INFO    ][3532] Executing state host.present for [kvm02.mcp-ovs-ha.local]
2019-04-30 21:36:48,271 [salt.state       :300 ][INFO    ][3532] Host kvm02.mcp-ovs-ha.local (10.167.4.21) already present
2019-04-30 21:36:48,272 [salt.state       :1951][INFO    ][3532] Completed state [kvm02.mcp-ovs-ha.local] at time 21:36:48.272129 duration_in_ms=0.731
2019-04-30 21:36:48,272 [salt.state       :1780][INFO    ][3532] Running state [dbs] at time 21:36:48.272445
2019-04-30 21:36:48,272 [salt.state       :1813][INFO    ][3532] Executing state host.present for [dbs]
2019-04-30 21:36:48,273 [salt.state       :300 ][INFO    ][3532] Host dbs (10.167.4.23) already present
2019-04-30 21:36:48,273 [salt.state       :1951][INFO    ][3532] Completed state [dbs] at time 21:36:48.273154 duration_in_ms=0.709
2019-04-30 21:36:48,273 [salt.state       :1780][INFO    ][3532] Running state [dbs.mcp-ovs-ha.local] at time 21:36:48.273456
2019-04-30 21:36:48,273 [salt.state       :1813][INFO    ][3532] Executing state host.present for [dbs.mcp-ovs-ha.local]
2019-04-30 21:36:48,274 [salt.state       :300 ][INFO    ][3532] Host dbs.mcp-ovs-ha.local (10.167.4.23) already present
2019-04-30 21:36:48,274 [salt.state       :1951][INFO    ][3532] Completed state [dbs.mcp-ovs-ha.local] at time 21:36:48.274183 duration_in_ms=0.728
2019-04-30 21:36:48,274 [salt.state       :1780][INFO    ][3532] Running state [prx] at time 21:36:48.274491
2019-04-30 21:36:48,274 [salt.state       :1813][INFO    ][3532] Executing state host.present for [prx]
2019-04-30 21:36:48,275 [salt.state       :300 ][INFO    ][3532] Host prx (10.167.4.13) already present
2019-04-30 21:36:48,275 [salt.state       :1951][INFO    ][3532] Completed state [prx] at time 21:36:48.275222 duration_in_ms=0.731
2019-04-30 21:36:48,275 [salt.state       :1780][INFO    ][3532] Running state [prx.mcp-ovs-ha.local] at time 21:36:48.275525
2019-04-30 21:36:48,275 [salt.state       :1813][INFO    ][3532] Executing state host.present for [prx.mcp-ovs-ha.local]
2019-04-30 21:36:48,276 [salt.state       :300 ][INFO    ][3532] Host prx.mcp-ovs-ha.local (10.167.4.13) already present
2019-04-30 21:36:48,276 [salt.state       :1951][INFO    ][3532] Completed state [prx.mcp-ovs-ha.local] at time 21:36:48.276253 duration_in_ms=0.727
2019-04-30 21:36:48,276 [salt.state       :1780][INFO    ][3532] Running state [prx02] at time 21:36:48.276573
2019-04-30 21:36:48,276 [salt.state       :1813][INFO    ][3532] Executing state host.present for [prx02]
2019-04-30 21:36:48,277 [salt.state       :300 ][INFO    ][3532] Host prx02 (10.167.4.15) already present
2019-04-30 21:36:48,277 [salt.state       :1951][INFO    ][3532] Completed state [prx02] at time 21:36:48.277284 duration_in_ms=0.711
2019-04-30 21:36:48,277 [salt.state       :1780][INFO    ][3532] Running state [prx02.mcp-ovs-ha.local] at time 21:36:48.277589
2019-04-30 21:36:48,277 [salt.state       :1813][INFO    ][3532] Executing state host.present for [prx02.mcp-ovs-ha.local]
2019-04-30 21:36:48,278 [salt.state       :300 ][INFO    ][3532] Host prx02.mcp-ovs-ha.local (10.167.4.15) already present
2019-04-30 21:36:48,278 [salt.state       :1951][INFO    ][3532] Completed state [prx02.mcp-ovs-ha.local] at time 21:36:48.278340 duration_in_ms=0.75
2019-04-30 21:36:48,278 [salt.state       :1780][INFO    ][3532] Running state [msg02] at time 21:36:48.278639
2019-04-30 21:36:48,278 [salt.state       :1813][INFO    ][3532] Executing state host.present for [msg02]
2019-04-30 21:36:48,279 [salt.state       :300 ][INFO    ][3532] Host msg02 (10.167.4.29) already present
2019-04-30 21:36:48,279 [salt.state       :1951][INFO    ][3532] Completed state [msg02] at time 21:36:48.279355 duration_in_ms=0.716
2019-04-30 21:36:48,279 [salt.state       :1780][INFO    ][3532] Running state [msg02.mcp-ovs-ha.local] at time 21:36:48.279669
2019-04-30 21:36:48,279 [salt.state       :1813][INFO    ][3532] Executing state host.present for [msg02.mcp-ovs-ha.local]
2019-04-30 21:36:48,280 [salt.state       :300 ][INFO    ][3532] Host msg02.mcp-ovs-ha.local (10.167.4.29) already present
2019-04-30 21:36:48,280 [salt.state       :1951][INFO    ][3532] Completed state [msg02.mcp-ovs-ha.local] at time 21:36:48.280384 duration_in_ms=0.715
2019-04-30 21:36:48,280 [salt.state       :1780][INFO    ][3532] Running state [msg03] at time 21:36:48.280701
2019-04-30 21:36:48,280 [salt.state       :1813][INFO    ][3532] Executing state host.present for [msg03]
2019-04-30 21:36:48,281 [salt.state       :300 ][INFO    ][3532] Host msg03 (10.167.4.30) already present
2019-04-30 21:36:48,281 [salt.state       :1951][INFO    ][3532] Completed state [msg03] at time 21:36:48.281412 duration_in_ms=0.712
2019-04-30 21:36:48,281 [salt.state       :1780][INFO    ][3532] Running state [msg03.mcp-ovs-ha.local] at time 21:36:48.281714
2019-04-30 21:36:48,281 [salt.state       :1813][INFO    ][3532] Executing state host.present for [msg03.mcp-ovs-ha.local]
2019-04-30 21:36:48,282 [salt.state       :300 ][INFO    ][3532] Host msg03.mcp-ovs-ha.local (10.167.4.30) already present
2019-04-30 21:36:48,282 [salt.state       :1951][INFO    ][3532] Completed state [msg03.mcp-ovs-ha.local] at time 21:36:48.282426 duration_in_ms=0.711
2019-04-30 21:36:48,282 [salt.state       :1780][INFO    ][3532] Running state [msg01] at time 21:36:48.282728
2019-04-30 21:36:48,282 [salt.state       :1813][INFO    ][3532] Executing state host.present for [msg01]
2019-04-30 21:36:48,283 [salt.state       :300 ][INFO    ][3532] Host msg01 (10.167.4.28) already present
2019-04-30 21:36:48,283 [salt.state       :1951][INFO    ][3532] Completed state [msg01] at time 21:36:48.283543 duration_in_ms=0.815
2019-04-30 21:36:48,283 [salt.state       :1780][INFO    ][3532] Running state [msg01.mcp-ovs-ha.local] at time 21:36:48.283850
2019-04-30 21:36:48,284 [salt.state       :1813][INFO    ][3532] Executing state host.present for [msg01.mcp-ovs-ha.local]
2019-04-30 21:36:48,284 [salt.state       :300 ][INFO    ][3532] Host msg01.mcp-ovs-ha.local (10.167.4.28) already present
2019-04-30 21:36:48,284 [salt.state       :1951][INFO    ][3532] Completed state [msg01.mcp-ovs-ha.local] at time 21:36:48.284542 duration_in_ms=0.693
2019-04-30 21:36:48,284 [salt.state       :1780][INFO    ][3532] Running state [msg] at time 21:36:48.284857
2019-04-30 21:36:48,285 [salt.state       :1813][INFO    ][3532] Executing state host.present for [msg]
2019-04-30 21:36:48,285 [salt.state       :300 ][INFO    ][3532] Host msg (10.167.4.27) already present
2019-04-30 21:36:48,285 [salt.state       :1951][INFO    ][3532] Completed state [msg] at time 21:36:48.285563 duration_in_ms=0.706
2019-04-30 21:36:48,285 [salt.state       :1780][INFO    ][3532] Running state [msg.mcp-ovs-ha.local] at time 21:36:48.285868
2019-04-30 21:36:48,286 [salt.state       :1813][INFO    ][3532] Executing state host.present for [msg.mcp-ovs-ha.local]
2019-04-30 21:36:48,286 [salt.state       :300 ][INFO    ][3532] Host msg.mcp-ovs-ha.local (10.167.4.27) already present
2019-04-30 21:36:48,286 [salt.state       :1951][INFO    ][3532] Completed state [msg.mcp-ovs-ha.local] at time 21:36:48.286579 duration_in_ms=0.711
2019-04-30 21:36:48,286 [salt.state       :1780][INFO    ][3532] Running state [cfg01] at time 21:36:48.286890
2019-04-30 21:36:48,287 [salt.state       :1813][INFO    ][3532] Executing state host.present for [cfg01]
2019-04-30 21:36:48,287 [salt.state       :300 ][INFO    ][3532] Host cfg01 (10.167.4.11) already present
2019-04-30 21:36:48,287 [salt.state       :1951][INFO    ][3532] Completed state [cfg01] at time 21:36:48.287606 duration_in_ms=0.716
2019-04-30 21:36:48,287 [salt.state       :1780][INFO    ][3532] Running state [cfg01.mcp-ovs-ha.local] at time 21:36:48.287911
2019-04-30 21:36:48,288 [salt.state       :1813][INFO    ][3532] Executing state host.present for [cfg01.mcp-ovs-ha.local]
2019-04-30 21:36:48,288 [salt.state       :300 ][INFO    ][3532] Host cfg01.mcp-ovs-ha.local (10.167.4.11) already present
2019-04-30 21:36:48,288 [salt.state       :1951][INFO    ][3532] Completed state [cfg01.mcp-ovs-ha.local] at time 21:36:48.288613 duration_in_ms=0.702
2019-04-30 21:36:48,288 [salt.state       :1780][INFO    ][3532] Running state [cmp002] at time 21:36:48.288915
2019-04-30 21:36:48,289 [salt.state       :1813][INFO    ][3532] Executing state host.present for [cmp002]
2019-04-30 21:36:48,289 [salt.state       :300 ][INFO    ][3532] Host cmp002 (10.167.4.56) already present
2019-04-30 21:36:48,289 [salt.state       :1951][INFO    ][3532] Completed state [cmp002] at time 21:36:48.289612 duration_in_ms=0.697
2019-04-30 21:36:48,289 [salt.state       :1780][INFO    ][3532] Running state [cmp002.mcp-ovs-ha.local] at time 21:36:48.289912
2019-04-30 21:36:48,290 [salt.state       :1813][INFO    ][3532] Executing state host.present for [cmp002.mcp-ovs-ha.local]
2019-04-30 21:36:48,290 [salt.state       :300 ][INFO    ][3532] Host cmp002.mcp-ovs-ha.local (10.167.4.56) already present
2019-04-30 21:36:48,290 [salt.state       :1951][INFO    ][3532] Completed state [cmp002.mcp-ovs-ha.local] at time 21:36:48.290602 duration_in_ms=0.689
2019-04-30 21:36:48,290 [salt.state       :1780][INFO    ][3532] Running state [cmp001] at time 21:36:48.290907
2019-04-30 21:36:48,291 [salt.state       :1813][INFO    ][3532] Executing state host.present for [cmp001]
2019-04-30 21:36:48,291 [salt.state       :300 ][INFO    ][3532] Host cmp001 (10.167.4.55) already present
2019-04-30 21:36:48,291 [salt.state       :1951][INFO    ][3532] Completed state [cmp001] at time 21:36:48.291599 duration_in_ms=0.691
2019-04-30 21:36:48,291 [salt.state       :1780][INFO    ][3532] Running state [cmp001.mcp-ovs-ha.local] at time 21:36:48.291901
2019-04-30 21:36:48,292 [salt.state       :1813][INFO    ][3532] Executing state host.present for [cmp001.mcp-ovs-ha.local]
2019-04-30 21:36:48,292 [salt.state       :300 ][INFO    ][3532] Host cmp001.mcp-ovs-ha.local (10.167.4.55) already present
2019-04-30 21:36:48,292 [salt.state       :1951][INFO    ][3532] Completed state [cmp001.mcp-ovs-ha.local] at time 21:36:48.292583 duration_in_ms=0.683
2019-04-30 21:36:48,293 [salt.state       :1780][INFO    ][3532] Running state [file.replace] at time 21:36:48.293922
2019-04-30 21:36:48,294 [salt.state       :1813][INFO    ][3532] Executing state module.run for [file.replace]
2019-04-30 21:36:48,297 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'grep -q "cmp001 cmp001.mcp-ovs-ha.local" /etc/hosts' in directory '/root'
2019-04-30 21:36:48,304 [salt.utils.decorators:613 ][WARNING ][3532] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:48,306 [salt.state       :300 ][INFO    ][3532] {'ret': '--- \n+++ \n@@ -23,7 +23,7 @@\n 10.167.4.28\t\tmsg01 msg01.mcp-ovs-ha.local\n 10.167.4.27\t\tmsg msg.mcp-ovs-ha.local\n 10.167.4.56\t\tcmp002 cmp002.mcp-ovs-ha.local\n-10.167.4.55\t\tcmp001 cmp001.mcp-ovs-ha.local\n+10.167.4.55\t\tcmp001.mcp-ovs-ha.local cmp001\n 10.167.4.24\t\tdbs01 dbs01.mcp-ovs-ha.local\n 10.167.4.25\t\tdbs02 dbs02.mcp-ovs-ha.local\n 10.167.4.26\t\tdbs03 dbs03.mcp-ovs-ha.local\n'}
2019-04-30 21:36:48,306 [salt.state       :1951][INFO    ][3532] Completed state [file.replace] at time 21:36:48.306338 duration_in_ms=12.415
2019-04-30 21:36:48,306 [salt.state       :1780][INFO    ][3532] Running state [dbs01] at time 21:36:48.306723
2019-04-30 21:36:48,306 [salt.state       :1813][INFO    ][3532] Executing state host.present for [dbs01]
2019-04-30 21:36:48,307 [salt.state       :300 ][INFO    ][3532] Host dbs01 (10.167.4.24) already present
2019-04-30 21:36:48,307 [salt.state       :1951][INFO    ][3532] Completed state [dbs01] at time 21:36:48.307577 duration_in_ms=0.854
2019-04-30 21:36:48,307 [salt.state       :1780][INFO    ][3532] Running state [dbs01.mcp-ovs-ha.local] at time 21:36:48.307847
2019-04-30 21:36:48,308 [salt.state       :1813][INFO    ][3532] Executing state host.present for [dbs01.mcp-ovs-ha.local]
2019-04-30 21:36:48,308 [salt.state       :300 ][INFO    ][3532] Host dbs01.mcp-ovs-ha.local (10.167.4.24) already present
2019-04-30 21:36:48,308 [salt.state       :1951][INFO    ][3532] Completed state [dbs01.mcp-ovs-ha.local] at time 21:36:48.308389 duration_in_ms=0.543
2019-04-30 21:36:48,308 [salt.state       :1780][INFO    ][3532] Running state [dbs02] at time 21:36:48.308696
2019-04-30 21:36:48,308 [salt.state       :1813][INFO    ][3532] Executing state host.present for [dbs02]
2019-04-30 21:36:48,309 [salt.state       :300 ][INFO    ][3532] Host dbs02 (10.167.4.25) already present
2019-04-30 21:36:48,309 [salt.state       :1951][INFO    ][3532] Completed state [dbs02] at time 21:36:48.309247 duration_in_ms=0.552
2019-04-30 21:36:48,309 [salt.state       :1780][INFO    ][3532] Running state [dbs02.mcp-ovs-ha.local] at time 21:36:48.309534
2019-04-30 21:36:48,309 [salt.state       :1813][INFO    ][3532] Executing state host.present for [dbs02.mcp-ovs-ha.local]
2019-04-30 21:36:48,309 [salt.state       :300 ][INFO    ][3532] Host dbs02.mcp-ovs-ha.local (10.167.4.25) already present
2019-04-30 21:36:48,310 [salt.state       :1951][INFO    ][3532] Completed state [dbs02.mcp-ovs-ha.local] at time 21:36:48.310073 duration_in_ms=0.54
2019-04-30 21:36:48,310 [salt.state       :1780][INFO    ][3532] Running state [dbs03] at time 21:36:48.310338
2019-04-30 21:36:48,310 [salt.state       :1813][INFO    ][3532] Executing state host.present for [dbs03]
2019-04-30 21:36:48,310 [salt.state       :300 ][INFO    ][3532] Host dbs03 (10.167.4.26) already present
2019-04-30 21:36:48,310 [salt.state       :1951][INFO    ][3532] Completed state [dbs03] at time 21:36:48.310871 duration_in_ms=0.532
2019-04-30 21:36:48,311 [salt.state       :1780][INFO    ][3532] Running state [dbs03.mcp-ovs-ha.local] at time 21:36:48.311138
2019-04-30 21:36:48,311 [salt.state       :1813][INFO    ][3532] Executing state host.present for [dbs03.mcp-ovs-ha.local]
2019-04-30 21:36:48,311 [salt.state       :300 ][INFO    ][3532] Host dbs03.mcp-ovs-ha.local (10.167.4.26) already present
2019-04-30 21:36:48,311 [salt.state       :1951][INFO    ][3532] Completed state [dbs03.mcp-ovs-ha.local] at time 21:36:48.311674 duration_in_ms=0.536
2019-04-30 21:36:48,311 [salt.state       :1780][INFO    ][3532] Running state [mas01] at time 21:36:48.311940
2019-04-30 21:36:48,312 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mas01]
2019-04-30 21:36:48,312 [salt.state       :300 ][INFO    ][3532] Host mas01 (10.167.4.12) already present
2019-04-30 21:36:48,312 [salt.state       :1951][INFO    ][3532] Completed state [mas01] at time 21:36:48.312485 duration_in_ms=0.545
2019-04-30 21:36:48,312 [salt.state       :1780][INFO    ][3532] Running state [mas01.mcp-ovs-ha.local] at time 21:36:48.312768
2019-04-30 21:36:48,312 [salt.state       :1813][INFO    ][3532] Executing state host.present for [mas01.mcp-ovs-ha.local]
2019-04-30 21:36:48,313 [salt.state       :300 ][INFO    ][3532] Host mas01.mcp-ovs-ha.local (10.167.4.12) already present
2019-04-30 21:36:48,313 [salt.state       :1951][INFO    ][3532] Completed state [mas01.mcp-ovs-ha.local] at time 21:36:48.313305 duration_in_ms=0.536
2019-04-30 21:36:48,313 [salt.state       :1780][INFO    ][3532] Running state [ctl02] at time 21:36:48.313577
2019-04-30 21:36:48,313 [salt.state       :1813][INFO    ][3532] Executing state host.present for [ctl02]
2019-04-30 21:36:48,314 [salt.state       :300 ][INFO    ][3532] Host ctl02 (10.167.4.37) already present
2019-04-30 21:36:48,314 [salt.state       :1951][INFO    ][3532] Completed state [ctl02] at time 21:36:48.314112 duration_in_ms=0.534
2019-04-30 21:36:48,314 [salt.state       :1780][INFO    ][3532] Running state [ctl02.mcp-ovs-ha.local] at time 21:36:48.314381
2019-04-30 21:36:48,314 [salt.state       :1813][INFO    ][3532] Executing state host.present for [ctl02.mcp-ovs-ha.local]
2019-04-30 21:36:48,314 [salt.state       :300 ][INFO    ][3532] Host ctl02.mcp-ovs-ha.local (10.167.4.37) already present
2019-04-30 21:36:48,314 [salt.state       :1951][INFO    ][3532] Completed state [ctl02.mcp-ovs-ha.local] at time 21:36:48.314916 duration_in_ms=0.535
2019-04-30 21:36:48,315 [salt.state       :1780][INFO    ][3532] Running state [ctl03] at time 21:36:48.315189
2019-04-30 21:36:48,315 [salt.state       :1813][INFO    ][3532] Executing state host.present for [ctl03]
2019-04-30 21:36:48,315 [salt.state       :300 ][INFO    ][3532] Host ctl03 (10.167.4.38) already present
2019-04-30 21:36:48,315 [salt.state       :1951][INFO    ][3532] Completed state [ctl03] at time 21:36:48.315718 duration_in_ms=0.529
2019-04-30 21:36:48,316 [salt.state       :1780][INFO    ][3532] Running state [ctl03.mcp-ovs-ha.local] at time 21:36:48.315987
2019-04-30 21:36:48,316 [salt.state       :1813][INFO    ][3532] Executing state host.present for [ctl03.mcp-ovs-ha.local]
2019-04-30 21:36:48,316 [salt.state       :300 ][INFO    ][3532] Host ctl03.mcp-ovs-ha.local (10.167.4.38) already present
2019-04-30 21:36:48,316 [salt.state       :1951][INFO    ][3532] Completed state [ctl03.mcp-ovs-ha.local] at time 21:36:48.316541 duration_in_ms=0.554
2019-04-30 21:36:48,316 [salt.state       :1780][INFO    ][3532] Running state [ctl01] at time 21:36:48.316826
2019-04-30 21:36:48,316 [salt.state       :1813][INFO    ][3532] Executing state host.present for [ctl01]
2019-04-30 21:36:48,317 [salt.state       :300 ][INFO    ][3532] Host ctl01 (10.167.4.36) already present
2019-04-30 21:36:48,317 [salt.state       :1951][INFO    ][3532] Completed state [ctl01] at time 21:36:48.317361 duration_in_ms=0.535
2019-04-30 21:36:48,317 [salt.state       :1780][INFO    ][3532] Running state [ctl01.mcp-ovs-ha.local] at time 21:36:48.317635
2019-04-30 21:36:48,317 [salt.state       :1813][INFO    ][3532] Executing state host.present for [ctl01.mcp-ovs-ha.local]
2019-04-30 21:36:48,318 [salt.state       :300 ][INFO    ][3532] Host ctl01.mcp-ovs-ha.local (10.167.4.36) already present
2019-04-30 21:36:48,318 [salt.state       :1951][INFO    ][3532] Completed state [ctl01.mcp-ovs-ha.local] at time 21:36:48.318163 duration_in_ms=0.529
2019-04-30 21:36:48,318 [salt.state       :1780][INFO    ][3532] Running state [ctl] at time 21:36:48.318437
2019-04-30 21:36:48,318 [salt.state       :1813][INFO    ][3532] Executing state host.present for [ctl]
2019-04-30 21:36:48,318 [salt.state       :300 ][INFO    ][3532] Host ctl (10.167.4.35) already present
2019-04-30 21:36:48,319 [salt.state       :1951][INFO    ][3532] Completed state [ctl] at time 21:36:48.318976 duration_in_ms=0.538
2019-04-30 21:36:48,319 [salt.state       :1780][INFO    ][3532] Running state [ctl.mcp-ovs-ha.local] at time 21:36:48.319255
2019-04-30 21:36:48,319 [salt.state       :1813][INFO    ][3532] Executing state host.present for [ctl.mcp-ovs-ha.local]
2019-04-30 21:36:48,319 [salt.state       :300 ][INFO    ][3532] Host ctl.mcp-ovs-ha.local (10.167.4.35) already present
2019-04-30 21:36:48,319 [salt.state       :1951][INFO    ][3532] Completed state [ctl.mcp-ovs-ha.local] at time 21:36:48.319798 duration_in_ms=0.543
2019-04-30 21:36:48,319 [salt.state       :1780][INFO    ][3532] Running state [linux_network_bridge_pkgs] at time 21:36:48.319934
2019-04-30 21:36:48,320 [salt.state       :1813][INFO    ][3532] Executing state pkg.installed for [linux_network_bridge_pkgs]
2019-04-30 21:36:48,326 [salt.state       :300 ][INFO    ][3532] All specified packages are already installed
2019-04-30 21:36:48,326 [salt.state       :1951][INFO    ][3532] Completed state [linux_network_bridge_pkgs] at time 21:36:48.326316 duration_in_ms=6.382
2019-04-30 21:36:48,327 [salt.state       :1780][INFO    ][3532] Running state [/etc/systemd/system/ovsdb-server.service.d/override.conf] at time 21:36:48.327544
2019-04-30 21:36:48,327 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/systemd/system/ovsdb-server.service.d/override.conf]
2019-04-30 21:36:48,333 [salt.state       :300 ][INFO    ][3532] File /etc/systemd/system/ovsdb-server.service.d/override.conf is in the correct state
2019-04-30 21:36:48,333 [salt.state       :1951][INFO    ][3532] Completed state [/etc/systemd/system/ovsdb-server.service.d/override.conf] at time 21:36:48.333681 duration_in_ms=6.137
2019-04-30 21:36:48,334 [salt.state       :1780][INFO    ][3532] Running state [/etc/systemd/system/ovs-vswitchd.service.d/override.conf] at time 21:36:48.334642
2019-04-30 21:36:48,334 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/systemd/system/ovs-vswitchd.service.d/override.conf]
2019-04-30 21:36:48,337 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430213648320376
2019-04-30 21:36:48,339 [salt.state       :300 ][INFO    ][3532] File /etc/systemd/system/ovs-vswitchd.service.d/override.conf is in the correct state
2019-04-30 21:36:48,339 [salt.state       :1951][INFO    ][3532] Completed state [/etc/systemd/system/ovs-vswitchd.service.d/override.conf] at time 21:36:48.339657 duration_in_ms=5.016
2019-04-30 21:36:48,340 [salt.state       :1780][INFO    ][3532] Running state [/etc/systemd/system/networking.service.d/ovs_workaround.conf] at time 21:36:48.340626
2019-04-30 21:36:48,340 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/systemd/system/networking.service.d/ovs_workaround.conf]
2019-04-30 21:36:48,345 [salt.state       :300 ][INFO    ][3532] File /etc/systemd/system/networking.service.d/ovs_workaround.conf is in the correct state
2019-04-30 21:36:48,345 [salt.state       :1951][INFO    ][3532] Completed state [/etc/systemd/system/networking.service.d/ovs_workaround.conf] at time 21:36:48.345682 duration_in_ms=5.056
2019-04-30 21:36:48,345 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 21:36:48.345823
2019-04-30 21:36:48,345 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/network/interfaces.d/50-cloud-init.cfg]
2019-04-30 21:36:48,346 [salt.state       :300 ][INFO    ][3532] File /etc/network/interfaces.d/50-cloud-init.cfg is not present
2019-04-30 21:36:48,346 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 21:36:48.346263 duration_in_ms=0.439
2019-04-30 21:36:48,345 [salt.minion      :1432][INFO    ][4534] Starting a new job with PID 4534
2019-04-30 21:36:48,347 [salt.state       :1780][INFO    ][3532] Running state [enp6s0.300] at time 21:36:48.347876
2019-04-30 21:36:48,348 [salt.state       :1813][INFO    ][3532] Executing state network.managed for [enp6s0.300]
2019-04-30 21:36:48,357 [salt.minion      :1711][INFO    ][4534] Returning information for job: 20190430213648320376
2019-04-30 21:36:48,426 [salt.state       :300 ][INFO    ][3532] Interface enp6s0.300 is up to date.
2019-04-30 21:36:48,426 [salt.state       :1951][INFO    ][3532] Completed state [enp6s0.300] at time 21:36:48.426958 duration_in_ms=79.08
2019-04-30 21:36:48,428 [salt.state       :1780][INFO    ][3532] Running state [br-ctl] at time 21:36:48.428166
2019-04-30 21:36:48,428 [salt.state       :1813][INFO    ][3532] Executing state network.managed for [br-ctl]
2019-04-30 21:36:48,434 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 21:36:48,446 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'bridge-utils'] in directory '/root'
2019-04-30 21:36:48,703 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:36:48,741 [salt.state       :300 ][INFO    ][3532] Interface br-ctl is up to date.
2019-04-30 21:36:48,741 [salt.state       :1951][INFO    ][3532] Completed state [br-ctl] at time 21:36:48.741320 duration_in_ms=313.152
2019-04-30 21:36:48,741 [salt.state       :1780][INFO    ][3532] Running state [enp6s0] at time 21:36:48.741544
2019-04-30 21:36:48,741 [salt.state       :1813][INFO    ][3532] Executing state network.managed for [enp6s0]
2019-04-30 21:36:48,757 [salt.state       :300 ][INFO    ][3532] Interface enp6s0 is up to date.
2019-04-30 21:36:48,757 [salt.state       :1951][INFO    ][3532] Completed state [enp6s0] at time 21:36:48.757707 duration_in_ms=16.161
2019-04-30 21:36:48,757 [salt.state       :1780][INFO    ][3532] Running state [br-floating] at time 21:36:48.757914
2019-04-30 21:36:48,758 [salt.state       :1813][INFO    ][3532] Executing state openvswitch_bridge.present for [br-floating]
2019-04-30 21:36:48,758 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'ovs-vsctl br-exists br-floating' in directory '/root'
2019-04-30 21:36:48,764 [salt.state       :300 ][INFO    ][3532] Bridge br-floating already exists.
2019-04-30 21:36:48,764 [salt.state       :1951][INFO    ][3532] Completed state [br-floating] at time 21:36:48.764494 duration_in_ms=6.578
2019-04-30 21:36:48,764 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces.u/ifcfg-br-floating] at time 21:36:48.764679
2019-04-30 21:36:48,764 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/network/interfaces.u/ifcfg-br-floating]
2019-04-30 21:36:48,781 [salt.state       :300 ][INFO    ][3532] File /etc/network/interfaces.u/ifcfg-br-floating is in the correct state
2019-04-30 21:36:48,782 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces.u/ifcfg-br-floating] at time 21:36:48.781991 duration_in_ms=17.312
2019-04-30 21:36:48,782 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces] at time 21:36:48.782141
2019-04-30 21:36:48,782 [salt.state       :1813][INFO    ][3532] Executing state file.prepend for [/etc/network/interfaces]
2019-04-30 21:36:48,783 [salt.state       :300 ][INFO    ][3532] File changed:
--- 
+++ 
@@ -1,3 +1,6 @@
+source /etc/network/interfaces.d/*
+# Workaround for Upstream-Bug: https://github.com/saltstack/salt/issues/40262
+source /etc/network/interfaces.u/*
 auto lo
 iface lo inet loopback
 auto enp6s0.300

2019-04-30 21:36:48,784 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces] at time 21:36:48.784032 duration_in_ms=1.891
2019-04-30 21:36:48,788 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces] at time 21:36:48.788272
2019-04-30 21:36:48,788 [salt.state       :1813][INFO    ][3532] Executing state file.prepend for [/etc/network/interfaces]
2019-04-30 21:36:48,789 [salt.state       :300 ][INFO    ][3532] File /etc/network/interfaces is in correct state
2019-04-30 21:36:48,789 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces] at time 21:36:48.789393 duration_in_ms=1.121
2019-04-30 21:36:48,791 [salt.state       :1780][INFO    ][3532] Running state [ifup --ignore-errors br-floating] at time 21:36:48.791678
2019-04-30 21:36:48,791 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [ifup --ignore-errors br-floating]
2019-04-30 21:36:48,792 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command '/bin/false' in directory '/root'
2019-04-30 21:36:48,797 [salt.state       :300 ][INFO    ][3532] onlyif execution failed
2019-04-30 21:36:48,797 [salt.state       :1951][INFO    ][3532] Completed state [ifup --ignore-errors br-floating] at time 21:36:48.797370 duration_in_ms=5.691
2019-04-30 21:36:48,797 [salt.state       :1780][INFO    ][3532] Running state [ovs-vsctl add-port br-floating enp8s0] at time 21:36:48.797564
2019-04-30 21:36:48,797 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [ovs-vsctl add-port br-floating enp8s0]
2019-04-30 21:36:48,798 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'ovs-vsctl list-ports br-floating | grep -qFx enp8s0' in directory '/root'
2019-04-30 21:36:48,804 [salt.state       :300 ][INFO    ][3532] unless execution succeeded
2019-04-30 21:36:48,804 [salt.state       :1951][INFO    ][3532] Completed state [ovs-vsctl add-port br-floating enp8s0] at time 21:36:48.804594 duration_in_ms=7.029
2019-04-30 21:36:48,806 [salt.state       :1780][INFO    ][3532] Running state [enp7s0.1000] at time 21:36:48.806533
2019-04-30 21:36:48,806 [salt.state       :1813][INFO    ][3532] Executing state network.managed for [enp7s0.1000]
2019-04-30 21:36:48,821 [salt.state       :300 ][INFO    ][3532] Interface enp7s0.1000 is up to date.
2019-04-30 21:36:48,821 [salt.state       :1951][INFO    ][3532] Completed state [enp7s0.1000] at time 21:36:48.821716 duration_in_ms=15.183
2019-04-30 21:36:48,822 [salt.state       :1780][INFO    ][3532] Running state [br-mesh] at time 21:36:48.822940
2019-04-30 21:36:48,823 [salt.state       :1813][INFO    ][3532] Executing state network.managed for [br-mesh]
2019-04-30 21:36:48,829 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 21:36:48,839 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'bridge-utils'] in directory '/root'
2019-04-30 21:36:49,058 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:36:49,100 [salt.state       :300 ][INFO    ][3532] Interface br-mesh is up to date.
2019-04-30 21:36:49,101 [salt.state       :1951][INFO    ][3532] Completed state [br-mesh] at time 21:36:49.100970 duration_in_ms=278.028
2019-04-30 21:36:49,101 [salt.state       :1780][INFO    ][3532] Running state [enp7s0] at time 21:36:49.101239
2019-04-30 21:36:49,101 [salt.state       :1813][INFO    ][3532] Executing state network.managed for [enp7s0]
2019-04-30 21:36:49,119 [salt.state       :300 ][INFO    ][3532] Interface enp7s0 is up to date.
2019-04-30 21:36:49,120 [salt.state       :1951][INFO    ][3532] Completed state [enp7s0] at time 21:36:49.120173 duration_in_ms=18.933
2019-04-30 21:36:49,120 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces] at time 21:36:49.120398
2019-04-30 21:36:49,120 [salt.state       :1813][INFO    ][3532] Executing state file.prepend for [/etc/network/interfaces]
2019-04-30 21:36:49,121 [salt.state       :300 ][INFO    ][3532] File changed:
--- 
+++ 
@@ -1,3 +1,6 @@
+source /etc/network/interfaces.d/*
+# Workaround for Upstream-Bug: https://github.com/saltstack/salt/issues/40262
+source /etc/network/interfaces.u/*
 auto lo
 iface lo inet loopback
 auto enp6s0.300

2019-04-30 21:36:49,122 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces] at time 21:36:49.122065 duration_in_ms=1.668
2019-04-30 21:36:49,122 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 21:36:49.122237
2019-04-30 21:36:49,122 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/network/interfaces.u/ifcfg-enp8s0]
2019-04-30 21:36:49,141 [salt.state       :300 ][INFO    ][3532] File /etc/network/interfaces.u/ifcfg-enp8s0 is in the correct state
2019-04-30 21:36:49,141 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 21:36:49.141816 duration_in_ms=19.578
2019-04-30 21:36:49,142 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces] at time 21:36:49.141988
2019-04-30 21:36:49,142 [salt.state       :1813][INFO    ][3532] Executing state file.replace for [/etc/network/interfaces]
2019-04-30 21:36:49,142 [salt.state       :300 ][INFO    ][3532] No changes needed to be made
2019-04-30 21:36:49,143 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces] at time 21:36:49.143105 duration_in_ms=1.117
2019-04-30 21:36:49,143 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces] at time 21:36:49.143263
2019-04-30 21:36:49,143 [salt.state       :1813][INFO    ][3532] Executing state file.replace for [/etc/network/interfaces]
2019-04-30 21:36:49,144 [salt.state       :300 ][INFO    ][3532] No changes needed to be made
2019-04-30 21:36:49,144 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces] at time 21:36:49.144350 duration_in_ms=1.087
2019-04-30 21:36:49,148 [salt.state       :1780][INFO    ][3532] Running state [ifup enp8s0] at time 21:36:49.148844
2019-04-30 21:36:49,149 [salt.state       :1813][INFO    ][3532] Executing state cmd.run for [ifup enp8s0]
2019-04-30 21:36:49,149 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'ifup enp8s0' in directory '/root'
2019-04-30 21:36:49,155 [salt.state       :300 ][INFO    ][3532] {'pid': 4615, 'retcode': 0, 'stderr': 'ifup: interface enp8s0 already configured', 'stdout': ''}
2019-04-30 21:36:49,155 [salt.state       :1951][INFO    ][3532] Completed state [ifup enp8s0] at time 21:36:49.155269 duration_in_ms=6.426
2019-04-30 21:36:49,155 [salt.state       :1780][INFO    ][3532] Running state [/etc/network/interfaces] at time 21:36:49.155481
2019-04-30 21:36:49,155 [salt.state       :1813][INFO    ][3532] Executing state file.prepend for [/etc/network/interfaces]
2019-04-30 21:36:49,156 [salt.state       :300 ][INFO    ][3532] File /etc/network/interfaces is in correct state
2019-04-30 21:36:49,156 [salt.state       :1951][INFO    ][3532] Completed state [/etc/network/interfaces] at time 21:36:49.156884 duration_in_ms=1.402
2019-04-30 21:36:49,157 [salt.state       :1780][INFO    ][3532] Running state [/etc/udev/rules.d/60-net-txqueue.rules] at time 21:36:49.157046
2019-04-30 21:36:49,157 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/udev/rules.d/60-net-txqueue.rules]
2019-04-30 21:36:49,170 [salt.state       :300 ][INFO    ][3532] File /etc/udev/rules.d/60-net-txqueue.rules is in the correct state
2019-04-30 21:36:49,171 [salt.state       :1951][INFO    ][3532] Completed state [/etc/udev/rules.d/60-net-txqueue.rules] at time 21:36:49.171072 duration_in_ms=14.026
2019-04-30 21:36:49,173 [salt.state       :1780][INFO    ][3532] Running state [/etc/profile.d/proxy.sh] at time 21:36:49.173432
2019-04-30 21:36:49,173 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/profile.d/proxy.sh]
2019-04-30 21:36:49,173 [salt.state       :300 ][INFO    ][3532] File /etc/profile.d/proxy.sh is not present
2019-04-30 21:36:49,173 [salt.state       :1951][INFO    ][3532] Completed state [/etc/profile.d/proxy.sh] at time 21:36:49.173954 duration_in_ms=0.522
2019-04-30 21:36:49,174 [salt.state       :1780][INFO    ][3532] Running state [/etc/apt/apt.conf.d/95proxies] at time 21:36:49.174103
2019-04-30 21:36:49,174 [salt.state       :1813][INFO    ][3532] Executing state file.absent for [/etc/apt/apt.conf.d/95proxies]
2019-04-30 21:36:49,174 [salt.state       :300 ][INFO    ][3532] File /etc/apt/apt.conf.d/95proxies is not present
2019-04-30 21:36:49,174 [salt.state       :1951][INFO    ][3532] Completed state [/etc/apt/apt.conf.d/95proxies] at time 21:36:49.174583 duration_in_ms=0.481
2019-04-30 21:36:49,174 [salt.state       :1780][INFO    ][3532] Running state [linux_lvm_pkgs] at time 21:36:49.174733
2019-04-30 21:36:49,174 [salt.state       :1813][INFO    ][3532] Executing state pkg.installed for [linux_lvm_pkgs]
2019-04-30 21:36:49,180 [salt.state       :300 ][INFO    ][3532] All specified packages are already installed
2019-04-30 21:36:49,180 [salt.state       :1951][INFO    ][3532] Completed state [linux_lvm_pkgs] at time 21:36:49.180573 duration_in_ms=5.84
2019-04-30 21:36:49,181 [salt.state       :1780][INFO    ][3532] Running state [/etc/lvm/lvm.conf] at time 21:36:49.181709
2019-04-30 21:36:49,181 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/lvm/lvm.conf]
2019-04-30 21:36:49,271 [salt.state       :300 ][INFO    ][3532] File /etc/lvm/lvm.conf is in the correct state
2019-04-30 21:36:49,272 [salt.state       :1951][INFO    ][3532] Completed state [/etc/lvm/lvm.conf] at time 21:36:49.272055 duration_in_ms=90.346
2019-04-30 21:36:49,273 [salt.state       :1780][INFO    ][3532] Running state [lvm2-lvmetad] at time 21:36:49.273867
2019-04-30 21:36:49,274 [salt.state       :1813][INFO    ][3532] Executing state service.running for [lvm2-lvmetad]
2019-04-30 21:36:49,274 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'status', 'lvm2-lvmetad.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:49,283 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-active', 'lvm2-lvmetad.service'] in directory '/root'
2019-04-30 21:36:49,291 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2019-04-30 21:36:49,298 [salt.state       :300 ][INFO    ][3532] The service lvm2-lvmetad is already running
2019-04-30 21:36:49,298 [salt.state       :1951][INFO    ][3532] Completed state [lvm2-lvmetad] at time 21:36:49.298847 duration_in_ms=24.98
2019-04-30 21:36:49,300 [salt.state       :1780][INFO    ][3532] Running state [lvm2-lvmpolld] at time 21:36:49.300796
2019-04-30 21:36:49,301 [salt.state       :1813][INFO    ][3532] Executing state service.running for [lvm2-lvmpolld]
2019-04-30 21:36:49,301 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'status', 'lvm2-lvmpolld.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:49,309 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-active', 'lvm2-lvmpolld.service'] in directory '/root'
2019-04-30 21:36:49,316 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2019-04-30 21:36:49,324 [salt.state       :300 ][INFO    ][3532] The service lvm2-lvmpolld is already running
2019-04-30 21:36:49,324 [salt.state       :1951][INFO    ][3532] Completed state [lvm2-lvmpolld] at time 21:36:49.324743 duration_in_ms=23.946
2019-04-30 21:36:49,326 [salt.state       :1780][INFO    ][3532] Running state [lvm2-monitor] at time 21:36:49.326622
2019-04-30 21:36:49,326 [salt.state       :1813][INFO    ][3532] Executing state service.running for [lvm2-monitor]
2019-04-30 21:36:49,327 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'status', 'lvm2-monitor.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:49,335 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-active', 'lvm2-monitor.service'] in directory '/root'
2019-04-30 21:36:49,343 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-enabled', 'lvm2-monitor.service'] in directory '/root'
2019-04-30 21:36:49,350 [salt.state       :300 ][INFO    ][3532] The service lvm2-monitor is already running
2019-04-30 21:36:49,350 [salt.state       :1951][INFO    ][3532] Completed state [lvm2-monitor] at time 21:36:49.350875 duration_in_ms=24.252
2019-04-30 21:36:49,353 [salt.state       :1780][INFO    ][3532] Running state [/dev/sda2] at time 21:36:49.353615
2019-04-30 21:36:49,353 [salt.state       :1813][INFO    ][3532] Executing state lvm.pv_present for [/dev/sda2]
2019-04-30 21:36:49,354 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['pvdisplay', '-c', '/dev/sda2'] in directory '/root'
2019-04-30 21:36:49,386 [salt.state       :300 ][INFO    ][3532] Physical Volume /dev/sda2 already present
2019-04-30 21:36:49,386 [salt.state       :1951][INFO    ][3532] Completed state [/dev/sda2] at time 21:36:49.386638 duration_in_ms=33.023
2019-04-30 21:36:49,387 [salt.state       :1780][INFO    ][3532] Running state [vgroot] at time 21:36:49.387909
2019-04-30 21:36:49,388 [salt.state       :1813][INFO    ][3532] Executing state lvm.vg_present for [vgroot]
2019-04-30 21:36:49,388 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['vgdisplay', '-c', 'vgroot'] in directory '/root'
2019-04-30 21:36:49,398 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['pvdisplay', '-c', '/dev/sda2'] in directory '/root'
2019-04-30 21:36:49,408 [salt.state       :300 ][INFO    ][3532] Volume Group vgroot already present
/dev/sda2 is part of Volume Group
2019-04-30 21:36:49,408 [salt.state       :1951][INFO    ][3532] Completed state [vgroot] at time 21:36:49.408863 duration_in_ms=20.953
2019-04-30 21:36:49,409 [salt.state       :1780][INFO    ][3532] Running state [/dev/shm] at time 21:36:49.409063
2019-04-30 21:36:49,409 [salt.state       :1813][INFO    ][3532] Executing state mount.mounted for [/dev/shm]
2019-04-30 21:36:49,409 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,417 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'blkid' in directory '/root'
2019-04-30 21:36:49,425 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,433 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -o rw,nosuid,nodev,noexec,relatime,remount -t tmpfs shm /dev/shm' in directory '/root'
2019-04-30 21:36:49,439 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,446 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -o rw,nosuid,nodev,noexec,relatime,remount -t tmpfs shm /dev/shm' in directory '/root'
2019-04-30 21:36:49,451 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,458 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'umount /dev/shm' in directory '/root'
2019-04-30 21:36:49,464 [salt.loaded.int.module.cmdmod:730 ][ERROR   ][3532] Command '['umount', '/dev/shm']' failed with return code: 32
2019-04-30 21:36:49,465 [salt.loaded.int.module.cmdmod:734 ][ERROR   ][3532] stderr: umount: /dev/shm: target is busy
        (In some cases useful info about processes that
         use the device is found by lsof(8) or fuser(1).)
2019-04-30 21:36:49,465 [salt.loaded.int.module.cmdmod:736 ][ERROR   ][3532] retcode: 32
2019-04-30 21:36:49,465 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,472 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command 'blkid' in directory '/root'
2019-04-30 21:36:49,480 [salt.state       :300 ][INFO    ][3532] {'umount': "Forced unmount because devices don't match. Wanted: shm, current: tmpfs, /tmpfs"}
2019-04-30 21:36:49,480 [salt.state       :1951][INFO    ][3532] Completed state [/dev/shm] at time 21:36:49.480494 duration_in_ms=71.43
2019-04-30 21:36:49,480 [salt.state       :1780][INFO    ][3532] Running state [ntp] at time 21:36:49.480709
2019-04-30 21:36:49,480 [salt.state       :1813][INFO    ][3532] Executing state pkg.installed for [ntp]
2019-04-30 21:36:49,485 [salt.state       :300 ][INFO    ][3532] All specified packages are already installed
2019-04-30 21:36:49,486 [salt.state       :1951][INFO    ][3532] Completed state [ntp] at time 21:36:49.486072 duration_in_ms=5.363
2019-04-30 21:36:49,487 [salt.state       :1780][INFO    ][3532] Running state [/etc/ntp.conf] at time 21:36:49.487510
2019-04-30 21:36:49,487 [salt.state       :1813][INFO    ][3532] Executing state file.managed for [/etc/ntp.conf]
2019-04-30 21:36:49,533 [salt.state       :300 ][INFO    ][3532] File /etc/ntp.conf is in the correct state
2019-04-30 21:36:49,533 [salt.state       :1951][INFO    ][3532] Completed state [/etc/ntp.conf] at time 21:36:49.533756 duration_in_ms=46.245
2019-04-30 21:36:49,535 [salt.state       :1780][INFO    ][3532] Running state [ntp] at time 21:36:49.535062
2019-04-30 21:36:49,535 [salt.state       :1813][INFO    ][3532] Executing state service.running for [ntp]
2019-04-30 21:36:49,535 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'status', 'ntp.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:49,542 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-active', 'ntp.service'] in directory '/root'
2019-04-30 21:36:49,550 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3532] Executing command ['systemctl', 'is-enabled', 'ntp.service'] in directory '/root'
2019-04-30 21:36:49,560 [salt.state       :300 ][INFO    ][3532] The service ntp is already running
2019-04-30 21:36:49,560 [salt.state       :1951][INFO    ][3532] Completed state [ntp] at time 21:36:49.560866 duration_in_ms=25.803
2019-04-30 21:36:49,565 [salt.minion      :1711][INFO    ][3532] Returning information for job: 20190430213633214368
2019-04-30 21:36:50,304 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command pkg.upgrade with jid 20190430213650288277
2019-04-30 21:36:50,314 [salt.minion      :1432][INFO    ][4659] Starting a new job with PID 4659
2019-04-30 21:36:50,328 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4659] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:36:50,599 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4659] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'dist-upgrade'] in directory '/root'
2019-04-30 21:37:05,368 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430213705350778
2019-04-30 21:37:05,378 [salt.minion      :1432][INFO    ][7250] Starting a new job with PID 7250
2019-04-30 21:37:05,390 [salt.minion      :1711][INFO    ][7250] Returning information for job: 20190430213705350778
2019-04-30 21:37:35,509 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430213735492672
2019-04-30 21:37:35,519 [salt.minion      :1432][INFO    ][9036] Starting a new job with PID 9036
2019-04-30 21:37:35,537 [salt.minion      :1711][INFO    ][9036] Returning information for job: 20190430213735492672
2019-04-30 21:37:41,943 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4659] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:37:41,963 [salt.minion      :1711][INFO    ][4659] Returning information for job: 20190430213650288277
2019-04-30 21:38:59,086 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.apply with jid 20190430213859072859
2019-04-30 21:38:59,097 [salt.minion      :1432][INFO    ][9448] Starting a new job with PID 9448
2019-04-30 21:39:03,777 [salt.state       :915 ][INFO    ][9448] Loading fresh modules for state activity
2019-04-30 21:39:03,808 [salt.fileclient  :1219][INFO    ][9448] Fetching file from saltenv 'base', ** done ** 'salt/init.sls'
2019-04-30 21:39:04,062 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,076 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,239 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,252 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,265 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,275 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,287 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,299 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,310 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,322 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,332 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,398 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,410 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,420 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,432 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,443 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,454 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,465 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,476 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,487 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,570 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,583 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,595 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,606 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,617 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,628 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,639 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,651 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,663 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,673 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,689 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,701 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,711 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,722 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,734 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,744 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,755 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,765 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,775 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,786 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,819 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'salt-minion --version' in directory '/root'
2019-04-30 21:39:04,985 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'salt-minion --version' in directory '/root'
2019-04-30 21:39:05,585 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,600 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,762 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,774 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,784 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,795 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,806 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,817 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,827 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,837 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,847 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,909 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,920 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,931 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,942 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,953 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,963 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,974 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,984 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,995 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,075 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,087 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,097 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,107 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,118 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,129 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,139 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,151 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,163 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,173 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,187 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,199 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,210 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,221 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,231 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,242 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,252 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,263 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,275 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,285 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:06,318 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'salt-minion --version' in directory '/root'
2019-04-30 21:39:06,485 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'salt-minion --version' in directory '/root'
2019-04-30 21:39:07,390 [salt.state       :1780][INFO    ][9448] Running state [salt-minion] at time 21:39:07.390310
2019-04-30 21:39:07,390 [salt.state       :1813][INFO    ][9448] Executing state pkg.installed for [salt-minion]
2019-04-30 21:39:07,390 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:39:07,625 [salt.state       :300 ][INFO    ][9448] All specified packages are already installed
2019-04-30 21:39:07,625 [salt.state       :1951][INFO    ][9448] Completed state [salt-minion] at time 21:39:07.625791 duration_in_ms=235.481
2019-04-30 21:39:07,626 [salt.state       :1780][INFO    ][9448] Running state [salt_minion_dependency_packages] at time 21:39:07.625994
2019-04-30 21:39:07,626 [salt.state       :1813][INFO    ][9448] Executing state pkg.installed for [salt_minion_dependency_packages]
2019-04-30 21:39:07,630 [salt.state       :300 ][INFO    ][9448] All specified packages are already installed
2019-04-30 21:39:07,630 [salt.state       :1951][INFO    ][9448] Completed state [salt_minion_dependency_packages] at time 21:39:07.630739 duration_in_ms=4.745
2019-04-30 21:39:07,632 [salt.state       :1780][INFO    ][9448] Running state [/etc/salt/minion.d/minion.conf] at time 21:39:07.632557
2019-04-30 21:39:07,632 [salt.state       :1813][INFO    ][9448] Executing state file.managed for [/etc/salt/minion.d/minion.conf]
2019-04-30 21:39:07,788 [salt.state       :300 ][INFO    ][9448] File /etc/salt/minion.d/minion.conf is in the correct state
2019-04-30 21:39:07,788 [salt.state       :1951][INFO    ][9448] Completed state [/etc/salt/minion.d/minion.conf] at time 21:39:07.788145 duration_in_ms=155.588
2019-04-30 21:39:07,788 [salt.state       :1780][INFO    ][9448] Running state [python-netaddr] at time 21:39:07.788288
2019-04-30 21:39:07,788 [salt.state       :1813][INFO    ][9448] Executing state pkg.installed for [python-netaddr]
2019-04-30 21:39:07,792 [salt.state       :300 ][INFO    ][9448] All specified packages are already installed
2019-04-30 21:39:07,792 [salt.state       :1951][INFO    ][9448] Completed state [python-netaddr] at time 21:39:07.792773 duration_in_ms=4.485
2019-04-30 21:39:07,794 [salt.state       :1780][INFO    ][9448] Running state [/etc/systemd/system/salt-minion.service.d/50-restarts.conf] at time 21:39:07.794324
2019-04-30 21:39:07,794 [salt.state       :1813][INFO    ][9448] Executing state file.managed for [/etc/systemd/system/salt-minion.service.d/50-restarts.conf]
2019-04-30 21:39:07,803 [salt.state       :300 ][INFO    ][9448] File /etc/systemd/system/salt-minion.service.d/50-restarts.conf is in the correct state
2019-04-30 21:39:07,803 [salt.state       :1951][INFO    ][9448] Completed state [/etc/systemd/system/salt-minion.service.d/50-restarts.conf] at time 21:39:07.803408 duration_in_ms=9.085
2019-04-30 21:39:07,803 [salt.state       :1780][INFO    ][9448] Running state [salt-minion] at time 21:39:07.803886
2019-04-30 21:39:07,804 [salt.state       :1813][INFO    ][9448] Executing state service.running for [salt-minion]
2019-04-30 21:39:07,804 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2019-04-30 21:39:07,816 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command ['systemctl', 'is-active', 'salt-minion.service'] in directory '/root'
2019-04-30 21:39:07,821 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command ['systemctl', 'is-enabled', 'salt-minion.service'] in directory '/root'
2019-04-30 21:39:07,828 [salt.state       :300 ][INFO    ][9448] The service salt-minion is already running
2019-04-30 21:39:07,828 [salt.state       :1951][INFO    ][9448] Completed state [salt-minion] at time 21:39:07.828190 duration_in_ms=24.303
2019-04-30 21:39:07,828 [salt.state       :1780][INFO    ][9448] Running state [/etc/salt/grains.d] at time 21:39:07.828975
2019-04-30 21:39:07,829 [salt.state       :1813][INFO    ][9448] Executing state file.directory for [/etc/salt/grains.d]
2019-04-30 21:39:07,829 [salt.state       :300 ][INFO    ][9448] Directory /etc/salt/grains.d is in the correct state
Directory /etc/salt/grains.d updated
2019-04-30 21:39:07,829 [salt.state       :1951][INFO    ][9448] Completed state [/etc/salt/grains.d] at time 21:39:07.829801 duration_in_ms=0.826
2019-04-30 21:39:07,830 [salt.state       :1780][INFO    ][9448] Running state [/etc/salt/grains] at time 21:39:07.830150
2019-04-30 21:39:07,830 [salt.state       :1813][INFO    ][9448] Executing state file.managed for [/etc/salt/grains]
2019-04-30 21:39:07,830 [salt.state       :300 ][INFO    ][9448] File /etc/salt/grains exists with proper permissions. No changes made.
2019-04-30 21:39:07,830 [salt.state       :1951][INFO    ][9448] Completed state [/etc/salt/grains] at time 21:39:07.830715 duration_in_ms=0.564
2019-04-30 21:39:07,830 [salt.state       :1780][INFO    ][9448] Running state [/etc/salt/grains.d/placeholder] at time 21:39:07.830942
2019-04-30 21:39:07,831 [salt.state       :1813][INFO    ][9448] Executing state file.managed for [/etc/salt/grains.d/placeholder]
2019-04-30 21:39:07,836 [salt.state       :300 ][INFO    ][9448] File /etc/salt/grains.d/placeholder exists with proper permissions. No changes made.
2019-04-30 21:39:07,836 [salt.state       :1951][INFO    ][9448] Completed state [/etc/salt/grains.d/placeholder] at time 21:39:07.836488 duration_in_ms=5.545
2019-04-30 21:39:07,836 [salt.state       :1780][INFO    ][9448] Running state [/etc/salt/grains.d/sphinx] at time 21:39:07.836716
2019-04-30 21:39:07,836 [salt.state       :1813][INFO    ][9448] Executing state file.managed for [/etc/salt/grains.d/sphinx]
2019-04-30 21:39:07,844 [salt.state       :300 ][INFO    ][9448] File changed:
--- 
+++ 
@@ -41,13 +41,13 @@
 
                 * lvm2: 2.02.133-1ubuntu10
 
-                * sysfsutils: dpkg-query: no packages found matching sysfsutils
+                * sysfsutils: 2.1.0+repack-4
 
                 * sg3-utils: dpkg-query: no packages found matching sg3-utils
 
                 * python-cinder: dpkg-query: no packages found matching python-cinder
 
-                * python-mysqldb: dpkg-query: no packages found matching python-mysqldb
+                * python-mysqldb: <none>
 
                 * p7zip: dpkg-query: no packages found matching p7zip
 
@@ -86,8 +86,11 @@
             ip:
               name: IP Addresses
               value:
+              - 10.1.0.5
+              - 10.167.4.55
               - 127.0.0.1
-              - 192.168.11.38
+              - 172.30.10.111
+              - 192.168.11.36
         system:
           name: System
           param:
@@ -135,7 +138,7 @@
 
                 * pm-utils: dpkg-query: no packages found matching pm-utils
 
-                * sysfsutils: dpkg-query: no packages found matching sysfsutils
+                * sysfsutils: 2.1.0+repack-4
 
                 * sg3-utils: dpkg-query: no packages found matching sg3-utils
 

2019-04-30 21:39:07,844 [salt.state       :1951][INFO    ][9448] Completed state [/etc/salt/grains.d/sphinx] at time 21:39:07.844848 duration_in_ms=8.132
2019-04-30 21:39:07,846 [salt.state       :1780][INFO    ][9448] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.845999
2019-04-30 21:39:07,846 [salt.state       :1813][INFO    ][9448] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"]
2019-04-30 21:39:07,846 [salt.state       :300 ][INFO    ][9448] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"
2019-04-30 21:39:07,846 [salt.state       :1951][INFO    ][9448] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.846456 duration_in_ms=0.457
2019-04-30 21:39:07,846 [salt.state       :1780][INFO    ][9448] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.846570
2019-04-30 21:39:07,846 [salt.state       :1813][INFO    ][9448] Executing state cmd.mod_watch for [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"]
2019-04-30 21:39:07,847 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"' in directory '/root'
2019-04-30 21:39:07,929 [salt.state       :300 ][INFO    ][9448] {'pid': 9818, 'retcode': 0, 'stderr': '', 'stdout': ''}
2019-04-30 21:39:07,929 [salt.state       :1951][INFO    ][9448] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.929521 duration_in_ms=82.95
2019-04-30 21:39:07,929 [salt.state       :1780][INFO    ][9448] Running state [/etc/salt/grains.d/dns_records] at time 21:39:07.929891
2019-04-30 21:39:07,930 [salt.state       :1813][INFO    ][9448] Executing state file.managed for [/etc/salt/grains.d/dns_records]
2019-04-30 21:39:07,938 [salt.state       :300 ][INFO    ][9448] File /etc/salt/grains.d/dns_records is in the correct state
2019-04-30 21:39:07,939 [salt.state       :1951][INFO    ][9448] Completed state [/etc/salt/grains.d/dns_records] at time 21:39:07.939083 duration_in_ms=9.192
2019-04-30 21:39:07,939 [salt.state       :1780][INFO    ][9448] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.939633
2019-04-30 21:39:07,939 [salt.state       :1813][INFO    ][9448] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"]
2019-04-30 21:39:07,940 [salt.state       :300 ][INFO    ][9448] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"
2019-04-30 21:39:07,940 [salt.state       :1951][INFO    ][9448] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.940187 duration_in_ms=0.553
2019-04-30 21:39:07,940 [salt.state       :1780][INFO    ][9448] Running state [/etc/salt/grains.d/salt] at time 21:39:07.940455
2019-04-30 21:39:07,940 [salt.state       :1813][INFO    ][9448] Executing state file.managed for [/etc/salt/grains.d/salt]
2019-04-30 21:39:07,944 [salt.state       :300 ][INFO    ][9448] File /etc/salt/grains.d/salt is in the correct state
2019-04-30 21:39:07,944 [salt.state       :1951][INFO    ][9448] Completed state [/etc/salt/grains.d/salt] at time 21:39:07.944765 duration_in_ms=4.31
2019-04-30 21:39:07,945 [salt.state       :1780][INFO    ][9448] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.945258
2019-04-30 21:39:07,945 [salt.state       :1813][INFO    ][9448] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"]
2019-04-30 21:39:07,945 [salt.state       :300 ][INFO    ][9448] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"
2019-04-30 21:39:07,945 [salt.state       :1951][INFO    ][9448] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.945796 duration_in_ms=0.538
2019-04-30 21:39:07,946 [salt.state       :1780][INFO    ][9448] Running state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 21:39:07.946812
2019-04-30 21:39:07,947 [salt.state       :1813][INFO    ][9448] Executing state cmd.wait for [cat /etc/salt/grains.d/* > /etc/salt/grains]
2019-04-30 21:39:07,947 [salt.state       :300 ][INFO    ][9448] No changes made for cat /etc/salt/grains.d/* > /etc/salt/grains
2019-04-30 21:39:07,947 [salt.state       :1951][INFO    ][9448] Completed state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 21:39:07.947359 duration_in_ms=0.547
2019-04-30 21:39:07,947 [salt.state       :1780][INFO    ][9448] Running state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 21:39:07.947500
2019-04-30 21:39:07,947 [salt.state       :1813][INFO    ][9448] Executing state cmd.mod_watch for [cat /etc/salt/grains.d/* > /etc/salt/grains]
2019-04-30 21:39:07,948 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9448] Executing command 'cat /etc/salt/grains.d/* > /etc/salt/grains' in directory '/root'
2019-04-30 21:39:07,954 [salt.state       :300 ][INFO    ][9448] {'pid': 9820, 'retcode': 0, 'stderr': '', 'stdout': ''}
2019-04-30 21:39:07,954 [salt.state       :1951][INFO    ][9448] Completed state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 21:39:07.954742 duration_in_ms=7.241
2019-04-30 21:39:07,955 [salt.state       :1780][INFO    ][9448] Running state [mine.update] at time 21:39:07.955301
2019-04-30 21:39:07,955 [salt.state       :1813][INFO    ][9448] Executing state module.wait for [mine.update]
2019-04-30 21:39:07,955 [salt.state       :300 ][INFO    ][9448] No changes made for mine.update
2019-04-30 21:39:07,955 [salt.state       :1951][INFO    ][9448] Completed state [mine.update] at time 21:39:07.955818 duration_in_ms=0.517
2019-04-30 21:39:07,955 [salt.state       :1780][INFO    ][9448] Running state [mine.update] at time 21:39:07.955934
2019-04-30 21:39:07,956 [salt.state       :1813][INFO    ][9448] Executing state module.mod_watch for [mine.update]
2019-04-30 21:39:07,956 [salt.utils.decorators:613 ][WARNING ][9448] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:39:08,473 [salt.state       :300 ][INFO    ][9448] {'ret': True}
2019-04-30 21:39:08,473 [salt.state       :1951][INFO    ][9448] Completed state [mine.update] at time 21:39:08.473545 duration_in_ms=517.61
2019-04-30 21:39:08,473 [salt.state       :1780][INFO    ][9448] Running state [ca-certificates] at time 21:39:08.473814
2019-04-30 21:39:08,474 [salt.state       :1813][INFO    ][9448] Executing state pkg.installed for [ca-certificates]
2019-04-30 21:39:08,480 [salt.state       :300 ][INFO    ][9448] All specified packages are already installed
2019-04-30 21:39:08,481 [salt.state       :1951][INFO    ][9448] Completed state [ca-certificates] at time 21:39:08.481149 duration_in_ms=7.335
2019-04-30 21:39:08,481 [salt.state       :1780][INFO    ][9448] Running state [update-ca-certificates] at time 21:39:08.481683
2019-04-30 21:39:08,481 [salt.state       :1813][INFO    ][9448] Executing state cmd.wait for [update-ca-certificates]
2019-04-30 21:39:08,482 [salt.state       :300 ][INFO    ][9448] No changes made for update-ca-certificates
2019-04-30 21:39:08,482 [salt.state       :1951][INFO    ][9448] Completed state [update-ca-certificates] at time 21:39:08.482403 duration_in_ms=0.72
2019-04-30 21:39:08,484 [salt.minion      :1711][INFO    ][9448] Returning information for job: 20190430213859072859
2019-04-30 21:42:53,458 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.sync_all with jid 20190430214253441001
2019-04-30 21:42:53,472 [salt.minion      :1432][INFO    ][9832] Starting a new job with PID 9832
2019-04-30 21:42:56,710 [salt.state       :915 ][INFO    ][9832] Loading fresh modules for state activity
2019-04-30 21:42:56,960 [salt.utils.extmods:71  ][INFO    ][9832] Creating module dir '/var/cache/salt/minion/extmods/clouds'
2019-04-30 21:42:56,963 [salt.utils.extmods:82  ][INFO    ][9832] Syncing clouds for environment 'base'
2019-04-30 21:42:56,964 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_clouds, for base)
2019-04-30 21:42:56,964 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_clouds/' for environment 'base'
2019-04-30 21:42:57,003 [salt.utils.extmods:71  ][INFO    ][9832] Creating module dir '/var/cache/salt/minion/extmods/beacons'
2019-04-30 21:42:57,006 [salt.utils.extmods:82  ][INFO    ][9832] Syncing beacons for environment 'base'
2019-04-30 21:42:57,006 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_beacons, for base)
2019-04-30 21:42:57,006 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_beacons/' for environment 'base'
2019-04-30 21:42:57,049 [salt.utils.extmods:82  ][INFO    ][9832] Syncing modules for environment 'base'
2019-04-30 21:42:57,050 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_modules, for base)
2019-04-30 21:42:57,050 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_modules/' for environment 'base'
2019-04-30 21:42:57,626 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/__init__.py'
2019-04-30 21:42:57,626 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/archive_policy.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/archive_policy.py'
2019-04-30 21:42:57,627 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/common.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/common.py'
2019-04-30 21:42:57,627 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/architect.py' to '/var/cache/salt/minion/extmods/modules/architect.py'
2019-04-30 21:42:57,627 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/artifactory.py' to '/var/cache/salt/minion/extmods/modules/artifactory.py'
2019-04-30 21:42:57,627 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/auditd.py' to '/var/cache/salt/minion/extmods/modules/auditd.py'
2019-04-30 21:42:57,628 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/avinetworks.py' to '/var/cache/salt/minion/extmods/modules/avinetworks.py'
2019-04-30 21:42:57,628 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/__init__.py'
2019-04-30 21:42:57,628 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/acl.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/acl.py'
2019-04-30 21:42:57,629 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/common.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/common.py'
2019-04-30 21:42:57,629 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/secrets.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/secrets.py'
2019-04-30 21:42:57,629 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ceph_ng.py' to '/var/cache/salt/minion/extmods/modules/ceph_ng.py'
2019-04-30 21:42:57,629 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cfgdrive.py' to '/var/cache/salt/minion/extmods/modules/cfgdrive.py'
2019-04-30 21:42:57,629 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderng.py' to '/var/cache/salt/minion/extmods/modules/cinderng.py'
2019-04-30 21:42:57,630 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/__init__.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/__init__.py'
2019-04-30 21:42:57,630 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/arg_converter.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/arg_converter.py'
2019-04-30 21:42:57,630 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/common.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/common.py'
2019-04-30 21:42:57,631 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/lists.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/lists.py'
2019-04-30 21:42:57,631 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/services.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/services.py'
2019-04-30 21:42:57,631 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volume.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volume.py'
2019-04-30 21:42:57,631 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volume_actions.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volume_actions.py'
2019-04-30 21:42:57,632 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volume_types.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volume_types.py'
2019-04-30 21:42:57,632 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volumes.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volumes.py'
2019-04-30 21:42:57,632 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/configdrive.py' to '/var/cache/salt/minion/extmods/modules/configdrive.py'
2019-04-30 21:42:57,632 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/contrail.py' to '/var/cache/salt/minion/extmods/modules/contrail.py'
2019-04-30 21:42:57,633 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/contrail_health.py' to '/var/cache/salt/minion/extmods/modules/contrail_health.py'
2019-04-30 21:42:57,633 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/creds.py' to '/var/cache/salt/minion/extmods/modules/creds.py'
2019-04-30 21:42:57,634 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/devops_utils.py' to '/var/cache/salt/minion/extmods/modules/devops_utils.py'
2019-04-30 21:42:57,634 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/dockerng_service.py' to '/var/cache/salt/minion/extmods/modules/dockerng_service.py'
2019-04-30 21:42:57,634 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/encode_json.py' to '/var/cache/salt/minion/extmods/modules/encode_json.py'
2019-04-30 21:42:57,634 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/gerrit.py' to '/var/cache/salt/minion/extmods/modules/gerrit.py'
2019-04-30 21:42:57,635 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/gitlab.py' to '/var/cache/salt/minion/extmods/modules/gitlab.py'
2019-04-30 21:42:57,635 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/glanceng.py' to '/var/cache/salt/minion/extmods/modules/glanceng.py'
2019-04-30 21:42:57,635 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/__init__.py' to '/var/cache/salt/minion/extmods/modules/glancev2/__init__.py'
2019-04-30 21:42:57,635 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/common.py' to '/var/cache/salt/minion/extmods/modules/glancev2/common.py'
2019-04-30 21:42:57,636 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/image.py' to '/var/cache/salt/minion/extmods/modules/glancev2/image.py'
2019-04-30 21:42:57,636 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/task.py' to '/var/cache/salt/minion/extmods/modules/glancev2/task.py'
2019-04-30 21:42:57,636 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/__init__.py'
2019-04-30 21:42:57,636 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/archive_policy.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/archive_policy.py'
2019-04-30 21:42:57,637 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/common.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/common.py'
2019-04-30 21:42:57,637 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/gnocchiv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/gnocchiv1/__init__.py'
2019-04-30 21:42:57,637 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/gnocchiv1/archive_policy.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/gnocchiv1/archive_policy.py'
2019-04-30 21:42:57,637 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/gnocchiv1/common.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/gnocchiv1/common.py'
2019-04-30 21:42:57,638 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/heat.py' to '/var/cache/salt/minion/extmods/modules/heat.py'
2019-04-30 21:42:57,638 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/heatv1/__init__.py'
2019-04-30 21:42:57,638 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/common.py' to '/var/cache/salt/minion/extmods/modules/heatv1/common.py'
2019-04-30 21:42:57,638 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/services.py' to '/var/cache/salt/minion/extmods/modules/heatv1/services.py'
2019-04-30 21:42:57,639 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/stack.py' to '/var/cache/salt/minion/extmods/modules/heatv1/stack.py'
2019-04-30 21:42:57,639 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/heka_alarming.py' to '/var/cache/salt/minion/extmods/modules/heka_alarming.py'
2019-04-30 21:42:57,639 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/helm.py' to '/var/cache/salt/minion/extmods/modules/helm.py'
2019-04-30 21:42:57,639 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/iptables_extra.py' to '/var/cache/salt/minion/extmods/modules/iptables_extra.py'
2019-04-30 21:42:57,640 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ironicng.py' to '/var/cache/salt/minion/extmods/modules/ironicng.py'
2019-04-30 21:42:57,640 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/__init__.py'
2019-04-30 21:42:57,640 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/chassis.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/chassis.py'
2019-04-30 21:42:57,646 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/common.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/common.py'
2019-04-30 21:42:57,646 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/drivers.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/drivers.py'
2019-04-30 21:42:57,646 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/nodes.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/nodes.py'
2019-04-30 21:42:57,647 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/ports.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/ports.py'
2019-04-30 21:42:57,647 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/volumes.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/volumes.py'
2019-04-30 21:42:57,647 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/jenkins_common.py' to '/var/cache/salt/minion/extmods/modules/jenkins_common.py'
2019-04-30 21:42:57,647 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystone_policy.py' to '/var/cache/salt/minion/extmods/modules/keystone_policy.py'
2019-04-30 21:42:57,648 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystoneng.py' to '/var/cache/salt/minion/extmods/modules/keystoneng.py'
2019-04-30 21:42:57,648 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/__init__.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/__init__.py'
2019-04-30 21:42:57,648 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/arg_converter.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/arg_converter.py'
2019-04-30 21:42:57,649 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/common.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/common.py'
2019-04-30 21:42:57,649 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/domains.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/domains.py'
2019-04-30 21:42:57,649 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/endpoints.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/endpoints.py'
2019-04-30 21:42:57,650 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/groups.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/groups.py'
2019-04-30 21:42:57,650 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/lists.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/lists.py'
2019-04-30 21:42:57,650 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/projects.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/projects.py'
2019-04-30 21:42:57,651 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/regions.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/regions.py'
2019-04-30 21:42:57,651 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/roles.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/roles.py'
2019-04-30 21:42:57,651 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/services.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/services.py'
2019-04-30 21:42:57,652 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/users.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/users.py'
2019-04-30 21:42:57,652 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/linux_hosts.py' to '/var/cache/salt/minion/extmods/modules/linux_hosts.py'
2019-04-30 21:42:57,652 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/linux_netlink.py' to '/var/cache/salt/minion/extmods/modules/linux_netlink.py'
2019-04-30 21:42:57,653 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/maas.py' to '/var/cache/salt/minion/extmods/modules/maas.py'
2019-04-30 21:42:57,653 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/maas_client.py' to '/var/cache/salt/minion/extmods/modules/maas_client.py'
2019-04-30 21:42:57,653 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/maasng.py' to '/var/cache/salt/minion/extmods/modules/maasng.py'
2019-04-30 21:42:57,654 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/manilang/__init__.py' to '/var/cache/salt/minion/extmods/modules/manilang/__init__.py'
2019-04-30 21:42:57,654 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/manilang/common.py' to '/var/cache/salt/minion/extmods/modules/manilang/common.py'
2019-04-30 21:42:57,654 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/manilang/share_types.py' to '/var/cache/salt/minion/extmods/modules/manilang/share_types.py'
2019-04-30 21:42:57,654 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/manilang/shares.py' to '/var/cache/salt/minion/extmods/modules/manilang/shares.py'
2019-04-30 21:42:57,655 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/modelschema.py' to '/var/cache/salt/minion/extmods/modules/modelschema.py'
2019-04-30 21:42:57,655 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/modelutils.py' to '/var/cache/salt/minion/extmods/modules/modelutils.py'
2019-04-30 21:42:57,655 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/multipart.py' to '/var/cache/salt/minion/extmods/modules/multipart.py'
2019-04-30 21:42:57,655 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/nagios_alarming.py' to '/var/cache/salt/minion/extmods/modules/nagios_alarming.py'
2019-04-30 21:42:57,656 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/net_checks.py' to '/var/cache/salt/minion/extmods/modules/net_checks.py'
2019-04-30 21:42:57,656 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/netutils.py' to '/var/cache/salt/minion/extmods/modules/netutils.py'
2019-04-30 21:42:57,656 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronng.py' to '/var/cache/salt/minion/extmods/modules/neutronng.py'
2019-04-30 21:42:57,656 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/__init__.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/__init__.py'
2019-04-30 21:42:57,657 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/agents.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/agents.py'
2019-04-30 21:42:57,657 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/arg_converter.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/arg_converter.py'
2019-04-30 21:42:57,657 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/auto_alloc.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/auto_alloc.py'
2019-04-30 21:42:57,657 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/common.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/common.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/lists.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/lists.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/networks.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/networks.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/ports.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/ports.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/routers.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/routers.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/subnetpools.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/subnetpools.py'
2019-04-30 21:42:57,659 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/subnets.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/subnets.py'
2019-04-30 21:42:57,659 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novang.py' to '/var/cache/salt/minion/extmods/modules/novang.py'
2019-04-30 21:42:57,659 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novav21/__init__.py' to '/var/cache/salt/minion/extmods/modules/novav21/__init__.py'
2019-04-30 21:42:57,659 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novav21/aggregates.py' to '/var/cache/salt/minion/extmods/modules/novav21/aggregates.py'
2019-04-30 21:42:57,660 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novav21/common.py' to '/var/cache/salt/minion/extmods/modules/novav21/common.py'
2019-04-30 21:42:57,660 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novav21/flavors.py' to '/var/cache/salt/minion/extmods/modules/novav21/flavors.py'
2019-04-30 21:42:57,660 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novav21/keypairs.py' to '/var/cache/salt/minion/extmods/modules/novav21/keypairs.py'
2019-04-30 21:42:57,660 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novav21/quotas.py' to '/var/cache/salt/minion/extmods/modules/novav21/quotas.py'
2019-04-30 21:42:57,661 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novav21/servers.py' to '/var/cache/salt/minion/extmods/modules/novav21/servers.py'
2019-04-30 21:42:57,661 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/novav21/services.py' to '/var/cache/salt/minion/extmods/modules/novav21/services.py'
2019-04-30 21:42:57,661 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/octaviav2/__init__.py' to '/var/cache/salt/minion/extmods/modules/octaviav2/__init__.py'
2019-04-30 21:42:57,661 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/octaviav2/common.py' to '/var/cache/salt/minion/extmods/modules/octaviav2/common.py'
2019-04-30 21:42:57,661 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/octaviav2/loadbalancers.py' to '/var/cache/salt/minion/extmods/modules/octaviav2/loadbalancers.py'
2019-04-30 21:42:57,662 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/ovs_config.py' to '/var/cache/salt/minion/extmods/modules/ovs_config.py'
2019-04-30 21:42:57,662 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/reclass.py' to '/var/cache/salt/minion/extmods/modules/reclass.py'
2019-04-30 21:42:57,662 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/rsyslog_util.py' to '/var/cache/salt/minion/extmods/modules/rsyslog_util.py'
2019-04-30 21:42:57,662 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/rundeck.py' to '/var/cache/salt/minion/extmods/modules/rundeck.py'
2019-04-30 21:42:57,663 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/saltkey.py' to '/var/cache/salt/minion/extmods/modules/saltkey.py'
2019-04-30 21:42:57,663 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/saltresource.py' to '/var/cache/salt/minion/extmods/modules/saltresource.py'
2019-04-30 21:42:57,663 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/seedng.py' to '/var/cache/salt/minion/extmods/modules/seedng.py'
2019-04-30 21:42:57,663 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/testing/__init__.py' to '/var/cache/salt/minion/extmods/modules/testing/__init__.py'
2019-04-30 21:42:57,664 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/testing/credentials.py' to '/var/cache/salt/minion/extmods/modules/testing/credentials.py'
2019-04-30 21:42:57,665 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/testing/django.py' to '/var/cache/salt/minion/extmods/modules/testing/django.py'
2019-04-30 21:42:57,666 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/testing/django_client_proxy.py' to '/var/cache/salt/minion/extmods/modules/testing/django_client_proxy.py'
2019-04-30 21:42:57,667 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/utils.py' to '/var/cache/salt/minion/extmods/modules/utils.py'
2019-04-30 21:42:57,667 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_modules/virtng.py' to '/var/cache/salt/minion/extmods/modules/virtng.py'
2019-04-30 21:42:57,674 [salt.utils.extmods:82  ][INFO    ][9832] Syncing states for environment 'base'
2019-04-30 21:42:57,674 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_states, for base)
2019-04-30 21:42:57,675 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_states/' for environment 'base'
2019-04-30 21:42:57,905 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/_states/gnocchiv1.py' to '/var/cache/salt/minion/extmods/states/_states/gnocchiv1.py'
2019-04-30 21:42:57,905 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/artifactory.py' to '/var/cache/salt/minion/extmods/states/artifactory.py'
2019-04-30 21:42:57,906 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/avinetworks.py' to '/var/cache/salt/minion/extmods/states/avinetworks.py'
2019-04-30 21:42:57,906 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/barbicanv1.py' to '/var/cache/salt/minion/extmods/states/barbicanv1.py'
2019-04-30 21:42:57,906 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/cinderng.py' to '/var/cache/salt/minion/extmods/states/cinderng.py'
2019-04-30 21:42:57,907 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/cinderv3.py' to '/var/cache/salt/minion/extmods/states/cinderv3.py'
2019-04-30 21:42:57,907 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/contrail.py' to '/var/cache/salt/minion/extmods/states/contrail.py'
2019-04-30 21:42:57,907 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/contrail_health.py' to '/var/cache/salt/minion/extmods/states/contrail_health.py'
2019-04-30 21:42:57,908 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/debmirror.py' to '/var/cache/salt/minion/extmods/states/debmirror.py'
2019-04-30 21:42:57,908 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/dockerng_service.py' to '/var/cache/salt/minion/extmods/states/dockerng_service.py'
2019-04-30 21:42:57,908 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/gerrit.py' to '/var/cache/salt/minion/extmods/states/gerrit.py'
2019-04-30 21:42:57,909 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/gitlab.py' to '/var/cache/salt/minion/extmods/states/gitlab.py'
2019-04-30 21:42:57,909 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/glanceng.py' to '/var/cache/salt/minion/extmods/states/glanceng.py'
2019-04-30 21:42:57,910 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/glancev2.py' to '/var/cache/salt/minion/extmods/states/glancev2.py'
2019-04-30 21:42:57,910 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/gnocchiv1.py' to '/var/cache/salt/minion/extmods/states/gnocchiv1.py'
2019-04-30 21:42:57,910 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/grafana3_dashboard.py' to '/var/cache/salt/minion/extmods/states/grafana3_dashboard.py'
2019-04-30 21:42:57,911 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/grafana3_datasource.py' to '/var/cache/salt/minion/extmods/states/grafana3_datasource.py'
2019-04-30 21:42:57,911 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/heat.py' to '/var/cache/salt/minion/extmods/states/heat.py'
2019-04-30 21:42:57,912 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/heatv1.py' to '/var/cache/salt/minion/extmods/states/heatv1.py'
2019-04-30 21:42:57,912 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/helm_release.py' to '/var/cache/salt/minion/extmods/states/helm_release.py'
2019-04-30 21:42:57,912 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/helm_repos.py' to '/var/cache/salt/minion/extmods/states/helm_repos.py'
2019-04-30 21:42:57,913 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/httpng.py' to '/var/cache/salt/minion/extmods/states/httpng.py'
2019-04-30 21:42:57,913 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/ironicng.py' to '/var/cache/salt/minion/extmods/states/ironicng.py'
2019-04-30 21:42:57,913 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/ironicv1.py' to '/var/cache/salt/minion/extmods/states/ironicv1.py'
2019-04-30 21:42:57,914 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_approval.py' to '/var/cache/salt/minion/extmods/states/jenkins_approval.py'
2019-04-30 21:42:57,914 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_artifactory.py' to '/var/cache/salt/minion/extmods/states/jenkins_artifactory.py'
2019-04-30 21:42:57,915 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_credential.py' to '/var/cache/salt/minion/extmods/states/jenkins_credential.py'
2019-04-30 21:42:57,915 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_gerrit.py' to '/var/cache/salt/minion/extmods/states/jenkins_gerrit.py'
2019-04-30 21:42:57,915 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_globalenvprop.py' to '/var/cache/salt/minion/extmods/states/jenkins_globalenvprop.py'
2019-04-30 21:42:57,916 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_jira.py' to '/var/cache/salt/minion/extmods/states/jenkins_jira.py'
2019-04-30 21:42:57,916 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_job.py' to '/var/cache/salt/minion/extmods/states/jenkins_job.py'
2019-04-30 21:42:57,916 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_lib.py' to '/var/cache/salt/minion/extmods/states/jenkins_lib.py'
2019-04-30 21:42:57,917 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_node.py' to '/var/cache/salt/minion/extmods/states/jenkins_node.py'
2019-04-30 21:42:57,917 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_plugin.py' to '/var/cache/salt/minion/extmods/states/jenkins_plugin.py'
2019-04-30 21:42:57,917 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_security.py' to '/var/cache/salt/minion/extmods/states/jenkins_security.py'
2019-04-30 21:42:57,918 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_slack.py' to '/var/cache/salt/minion/extmods/states/jenkins_slack.py'
2019-04-30 21:42:57,918 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_smtp.py' to '/var/cache/salt/minion/extmods/states/jenkins_smtp.py'
2019-04-30 21:42:57,919 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_theme.py' to '/var/cache/salt/minion/extmods/states/jenkins_theme.py'
2019-04-30 21:42:57,919 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_throttle_category.py' to '/var/cache/salt/minion/extmods/states/jenkins_throttle_category.py'
2019-04-30 21:42:57,919 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_user.py' to '/var/cache/salt/minion/extmods/states/jenkins_user.py'
2019-04-30 21:42:57,920 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/jenkins_view.py' to '/var/cache/salt/minion/extmods/states/jenkins_view.py'
2019-04-30 21:42:57,920 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/keystone_policy.py' to '/var/cache/salt/minion/extmods/states/keystone_policy.py'
2019-04-30 21:42:57,920 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/keystoneng.py' to '/var/cache/salt/minion/extmods/states/keystoneng.py'
2019-04-30 21:42:57,921 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/keystonev3.py' to '/var/cache/salt/minion/extmods/states/keystonev3.py'
2019-04-30 21:42:57,928 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/kibana_object.py' to '/var/cache/salt/minion/extmods/states/kibana_object.py'
2019-04-30 21:42:57,929 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/maasng.py' to '/var/cache/salt/minion/extmods/states/maasng.py'
2019-04-30 21:42:57,929 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/manilang.py' to '/var/cache/salt/minion/extmods/states/manilang.py'
2019-04-30 21:42:57,929 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/neutronng.py' to '/var/cache/salt/minion/extmods/states/neutronng.py'
2019-04-30 21:42:57,930 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/neutronv2.py' to '/var/cache/salt/minion/extmods/states/neutronv2.py'
2019-04-30 21:42:57,930 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/novang.py' to '/var/cache/salt/minion/extmods/states/novang.py'
2019-04-30 21:42:57,931 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/novav21.py' to '/var/cache/salt/minion/extmods/states/novav21.py'
2019-04-30 21:42:57,931 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/octaviav2.py' to '/var/cache/salt/minion/extmods/states/octaviav2.py'
2019-04-30 21:42:57,931 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/ovs_config.py' to '/var/cache/salt/minion/extmods/states/ovs_config.py'
2019-04-30 21:42:57,932 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/powerdns_mysql.py' to '/var/cache/salt/minion/extmods/states/powerdns_mysql.py'
2019-04-30 21:42:57,932 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/reclass.py' to '/var/cache/salt/minion/extmods/states/reclass.py'
2019-04-30 21:42:57,932 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/rundeck_project.py' to '/var/cache/salt/minion/extmods/states/rundeck_project.py'
2019-04-30 21:42:57,933 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/rundeck_scm.py' to '/var/cache/salt/minion/extmods/states/rundeck_scm.py'
2019-04-30 21:42:57,933 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_states/rundeck_secret.py' to '/var/cache/salt/minion/extmods/states/rundeck_secret.py'
2019-04-30 21:42:57,936 [salt.utils.extmods:71  ][INFO    ][9832] Creating module dir '/var/cache/salt/minion/extmods/sdb'
2019-04-30 21:42:57,938 [salt.utils.extmods:82  ][INFO    ][9832] Syncing sdb for environment 'base'
2019-04-30 21:42:57,938 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_sdb, for base)
2019-04-30 21:42:57,938 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_sdb/' for environment 'base'
2019-04-30 21:42:57,980 [salt.utils.extmods:82  ][INFO    ][9832] Syncing grains for environment 'base'
2019-04-30 21:42:57,980 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_grains, for base)
2019-04-30 21:42:57,980 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_grains/' for environment 'base'
2019-04-30 21:42:58,089 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/ceilometer_policy.py' to '/var/cache/salt/minion/extmods/grains/ceilometer_policy.py'
2019-04-30 21:42:58,089 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/ceph.py' to '/var/cache/salt/minion/extmods/grains/ceph.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/cinder_policy.py' to '/var/cache/salt/minion/extmods/grains/cinder_policy.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/docker_swarm.py' to '/var/cache/salt/minion/extmods/grains/docker_swarm.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/glance_policy.py' to '/var/cache/salt/minion/extmods/grains/glance_policy.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/heat_policy.py' to '/var/cache/salt/minion/extmods/grains/heat_policy.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/jenkins_plugins.py' to '/var/cache/salt/minion/extmods/grains/jenkins_plugins.py'
2019-04-30 21:42:58,091 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/keystone_policy.py' to '/var/cache/salt/minion/extmods/grains/keystone_policy.py'
2019-04-30 21:42:58,091 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/kubernetes.py' to '/var/cache/salt/minion/extmods/grains/kubernetes.py'
2019-04-30 21:42:58,091 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/neutron_policy.py' to '/var/cache/salt/minion/extmods/grains/neutron_policy.py'
2019-04-30 21:42:58,091 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/nova_policy.py' to '/var/cache/salt/minion/extmods/grains/nova_policy.py'
2019-04-30 21:42:58,091 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_grains/ssh_fingerprints.py' to '/var/cache/salt/minion/extmods/grains/ssh_fingerprints.py'
2019-04-30 21:42:58,092 [salt.utils.extmods:71  ][INFO    ][9832] Creating module dir '/var/cache/salt/minion/extmods/renderers'
2019-04-30 21:42:58,095 [salt.utils.extmods:82  ][INFO    ][9832] Syncing renderers for environment 'base'
2019-04-30 21:42:58,095 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_renderers, for base)
2019-04-30 21:42:58,095 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_renderers/' for environment 'base'
2019-04-30 21:42:58,134 [salt.utils.extmods:82  ][INFO    ][9832] Syncing returners for environment 'base'
2019-04-30 21:42:58,134 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_returners, for base)
2019-04-30 21:42:58,135 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_returners/' for environment 'base'
2019-04-30 21:42:58,172 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_returners/postgres_graph_db.py' to '/var/cache/salt/minion/extmods/returners/postgres_graph_db.py'
2019-04-30 21:42:58,173 [salt.utils.extmods:71  ][INFO    ][9832] Creating module dir '/var/cache/salt/minion/extmods/output'
2019-04-30 21:42:58,175 [salt.utils.extmods:82  ][INFO    ][9832] Syncing output for environment 'base'
2019-04-30 21:42:58,175 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_output, for base)
2019-04-30 21:42:58,176 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_output/' for environment 'base'
2019-04-30 21:42:58,213 [salt.utils.extmods:71  ][INFO    ][9832] Creating module dir '/var/cache/salt/minion/extmods/utils'
2019-04-30 21:42:58,216 [salt.utils.extmods:82  ][INFO    ][9832] Syncing utils for environment 'base'
2019-04-30 21:42:58,216 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_utils, for base)
2019-04-30 21:42:58,216 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_utils/' for environment 'base'
2019-04-30 21:42:58,256 [salt.utils.extmods:71  ][INFO    ][9832] Creating module dir '/var/cache/salt/minion/extmods/log_handlers'
2019-04-30 21:42:58,259 [salt.utils.extmods:82  ][INFO    ][9832] Syncing log_handlers for environment 'base'
2019-04-30 21:42:58,259 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_log_handlers, for base)
2019-04-30 21:42:58,259 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_log_handlers/' for environment 'base'
2019-04-30 21:42:58,294 [salt.utils.extmods:71  ][INFO    ][9832] Creating module dir '/var/cache/salt/minion/extmods/proxy'
2019-04-30 21:42:58,298 [salt.utils.extmods:82  ][INFO    ][9832] Syncing proxy for environment 'base'
2019-04-30 21:42:58,298 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_proxy, for base)
2019-04-30 21:42:58,298 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_proxy/' for environment 'base'
2019-04-30 21:42:58,338 [salt.utils.extmods:82  ][INFO    ][9832] Syncing engines for environment 'base'
2019-04-30 21:42:58,338 [salt.utils.extmods:86  ][INFO    ][9832] Loading cache from salt://_engines, for base)
2019-04-30 21:42:58,338 [salt.fileclient  :230 ][INFO    ][9832] Caching directory '_engines/' for environment 'base'
2019-04-30 21:42:58,381 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_engines/architect.py' to '/var/cache/salt/minion/extmods/engines/architect.py'
2019-04-30 21:42:58,382 [salt.utils.extmods:111 ][INFO    ][9832] Copying '/var/cache/salt/minion/files/base/_engines/saltgraph.py' to '/var/cache/salt/minion/extmods/engines/saltgraph.py'
2019-04-30 21:42:58,384 [salt.minion      :1711][INFO    ][9832] Returning information for job: 20190430214253441001
2019-04-30 22:01:02,578 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.sls with jid 20190430220102562635
2019-04-30 22:01:02,589 [salt.minion      :1432][INFO    ][9945] Starting a new job with PID 9945
2019-04-30 22:01:03,259 [salt.state       :915 ][INFO    ][9945] Loading fresh modules for state activity
2019-04-30 22:01:03,284 [salt.fileclient  :1219][INFO    ][9945] Fetching file from saltenv 'base', ** done ** 'glusterfs/client.sls'
2019-04-30 22:01:03,309 [salt.fileclient  :1219][INFO    ][9945] Fetching file from saltenv 'base', ** done ** 'glusterfs/map.jinja'
2019-04-30 22:01:03,325 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:01:03,622 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command 'systemd-escape -p --suffix=mount /var/lib/nova/instances' in directory '/root'
2019-04-30 22:01:04,087 [salt.state       :1780][INFO    ][9945] Running state [glusterfs-client] at time 22:01:04.087465
2019-04-30 22:01:04,087 [salt.state       :1813][INFO    ][9945] Executing state pkg.installed for [glusterfs-client]
2019-04-30 22:01:04,102 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['apt-cache', '-q', 'policy', 'glusterfs-client'] in directory '/root'
2019-04-30 22:01:04,145 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:01:06,429 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:01:06,443 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'glusterfs-client'] in directory '/root'
2019-04-30 22:01:12,894 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:01:12,916 [salt.state       :300 ][INFO    ][9945] Made the following changes:
'python-jwt' changed from 'absent' to '1.3.0-1ubuntu0.1'
'glusterfs-client' changed from 'absent' to '3.13.2-ubuntu1~xenial2'
'libaio1' changed from 'absent' to '0.3.110-2'
'attr' changed from 'absent' to '1:2.4.47-2'
'libpython2.7' changed from 'absent' to '2.7.12-1ubuntu0~16.04.4'
'glusterfs-common' changed from 'absent' to '3.13.2-ubuntu1~xenial2'
'librdmacm1' changed from 'absent' to '1.0.21-1'
'liburcu4' changed from 'absent' to '0.9.1-3'
'libibverbs1' changed from 'absent' to '1.1.8-1.1ubuntu2'
'python-prettytable' changed from 'absent' to '0.7.2-3'

2019-04-30 22:01:12,948 [salt.state       :915 ][INFO    ][9945] Loading fresh modules for state activity
2019-04-30 22:01:12,968 [salt.state       :1951][INFO    ][9945] Completed state [glusterfs-client] at time 22:01:12.968922 duration_in_ms=8881.458
2019-04-30 22:01:12,972 [salt.state       :1780][INFO    ][9945] Running state [attr] at time 22:01:12.972254
2019-04-30 22:01:12,972 [salt.state       :1813][INFO    ][9945] Executing state pkg.installed for [attr]
2019-04-30 22:01:13,352 [salt.state       :300 ][INFO    ][9945] All specified packages are already installed
2019-04-30 22:01:13,352 [salt.state       :1951][INFO    ][9945] Completed state [attr] at time 22:01:13.352751 duration_in_ms=380.496
2019-04-30 22:01:13,354 [salt.state       :1780][INFO    ][9945] Running state [/etc/systemd/system/var-lib-nova-instances.mount] at time 22:01:13.354208
2019-04-30 22:01:13,354 [salt.state       :1813][INFO    ][9945] Executing state file.managed for [/etc/systemd/system/var-lib-nova-instances.mount]
2019-04-30 22:01:13,373 [salt.fileclient  :1219][INFO    ][9945] Fetching file from saltenv 'base', ** done ** 'glusterfs/files/glusterfs-client.mount'
2019-04-30 22:01:13,381 [salt.state       :300 ][INFO    ][9945] File changed:
New file
2019-04-30 22:01:13,381 [salt.state       :1951][INFO    ][9945] Completed state [/etc/systemd/system/var-lib-nova-instances.mount] at time 22:01:13.381808 duration_in_ms=27.6
2019-04-30 22:01:13,382 [salt.state       :1780][INFO    ][9945] Running state [var-lib-nova-instances.mount] at time 22:01:13.382517
2019-04-30 22:01:13,382 [salt.state       :1813][INFO    ][9945] Executing state service.running for [var-lib-nova-instances.mount]
2019-04-30 22:01:13,383 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemctl', 'status', 'var-lib-nova-instances.mount', '-n', '0'] in directory '/root'
2019-04-30 22:01:13,393 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemctl', 'is-active', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:13,400 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:13,409 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:13,450 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemctl', 'is-active', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:13,458 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:13,465 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:13,475 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:13,541 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9945] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:13,548 [salt.state       :300 ][INFO    ][9945] {'var-lib-nova-instances.mount': True}
2019-04-30 22:01:13,549 [salt.state       :1951][INFO    ][9945] Completed state [var-lib-nova-instances.mount] at time 22:01:13.548964 duration_in_ms=166.446
2019-04-30 22:01:13,549 [salt.minion      :1711][INFO    ][9945] Returning information for job: 20190430220102562635
2019-04-30 22:24:07,811 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.sls with jid 20190430222407792286
2019-04-30 22:24:07,821 [salt.minion      :1432][INFO    ][11269] Starting a new job with PID 11269
2019-04-30 22:24:11,549 [salt.state       :915 ][INFO    ][11269] Loading fresh modules for state activity
2019-04-30 22:24:11,738 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/init.sls'
2019-04-30 22:24:11,758 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/volume.sls'
2019-04-30 22:24:11,825 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/user.sls'
2019-04-30 22:24:11,869 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/_ssl/volume_mysql.sls'
2019-04-30 22:24:11,946 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/_ssl/rabbitmq.sls'
2019-04-30 22:24:12,392 [salt.state       :1780][INFO    ][11269] Running state [cinder] at time 22:24:12.392674
2019-04-30 22:24:12,392 [salt.state       :1813][INFO    ][11269] Executing state group.present for [cinder]
2019-04-30 22:24:12,393 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['groupadd', '-g 304', '-r', 'cinder'] in directory '/root'
2019-04-30 22:24:12,503 [salt.state       :300 ][INFO    ][11269] {'passwd': 'x', 'gid': 304, 'name': 'cinder', 'members': []}
2019-04-30 22:24:12,504 [salt.state       :1951][INFO    ][11269] Completed state [cinder] at time 22:24:12.504041 duration_in_ms=111.367
2019-04-30 22:24:12,504 [salt.state       :1780][INFO    ][11269] Running state [cinder] at time 22:24:12.504340
2019-04-30 22:24:12,504 [salt.state       :1813][INFO    ][11269] Executing state user.present for [cinder]
2019-04-30 22:24:12,505 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['useradd', '-s', '/bin/false', '-u', '304', '-g', '304', '-m', '-d', '/var/lib/cinder', '-r', 'cinder'] in directory '/root'
2019-04-30 22:24:12,681 [salt.state       :300 ][INFO    ][11269] {'shell': '/bin/false', 'workphone': '', 'uid': 304, 'passwd': 'x', 'roomnumber': '', 'groups': ['cinder'], 'home': '/var/lib/cinder', 'name': 'cinder', 'gid': 304, 'fullname': '', 'homephone': ''}
2019-04-30 22:24:12,681 [salt.state       :1951][INFO    ][11269] Completed state [cinder] at time 22:24:12.681729 duration_in_ms=177.388
2019-04-30 22:24:12,682 [salt.state       :1780][INFO    ][11269] Running state [cinder-volume] at time 22:24:12.682083
2019-04-30 22:24:12,682 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [cinder-volume]
2019-04-30 22:24:12,682 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:24:12,957 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['apt-cache', '-q', 'policy', 'cinder-volume'] in directory '/root'
2019-04-30 22:24:13,003 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:24:14,578 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:24:14,592 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cinder-volume'] in directory '/root'
2019-04-30 22:24:22,859 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222422840207
2019-04-30 22:24:22,869 [salt.minion      :1432][INFO    ][11915] Starting a new job with PID 11915
2019-04-30 22:24:22,886 [salt.minion      :1711][INFO    ][11915] Returning information for job: 20190430222422840207
2019-04-30 22:24:52,908 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222452889620
2019-04-30 22:24:52,922 [salt.minion      :1432][INFO    ][12806] Starting a new job with PID 12806
2019-04-30 22:24:52,932 [salt.minion      :1711][INFO    ][12806] Returning information for job: 20190430222452889620
2019-04-30 22:25:23,007 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222522988328
2019-04-30 22:25:23,016 [salt.minion      :1432][INFO    ][14285] Starting a new job with PID 14285
2019-04-30 22:25:23,025 [salt.minion      :1711][INFO    ][14285] Returning information for job: 20190430222522988328
2019-04-30 22:25:53,024 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222553006475
2019-04-30 22:25:53,032 [salt.minion      :1432][INFO    ][15274] Starting a new job with PID 15274
2019-04-30 22:25:53,042 [salt.minion      :1711][INFO    ][15274] Returning information for job: 20190430222553006475
2019-04-30 22:26:03,703 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:03,726 [salt.state       :300 ][INFO    ][11269] Made the following changes:
'python-routes' changed from 'absent' to '2.4.1-1~u16.04+mcp2'
'python-retrying' changed from 'absent' to '1.3.3-1'
'libiscsi2' changed from 'absent' to '1.12.0-2'
'python-os-service-types' changed from 'absent' to '1.3.0-1~u16.04+mcp'
'python-kombu' changed from 'absent' to '4.1.0-1~u16.04+mcp1'
'python-oslo.concurrency' changed from 'absent' to '3.27.0-2~u16.04+mcp7'
'python-kafka' changed from 'absent' to '1.3.2-1.1~u16.04+mcp2'
'python-sqlparse' changed from 'absent' to '0.2.2-1~u16.04+mcp1'
'python-monotonic' changed from 'absent' to '0.6-2'
'python-psycopg2' changed from 'absent' to '2.7.4-1.0~u16.04+mcp1'
'python-secretstorage' changed from 'absent' to '2.1.3-1'
'python2.7-numpy' changed from 'absent' to '1'
'python-glanceclient' changed from 'absent' to '1:2.13.1-3~u16.04+mcp4'
'python-formencode' changed from 'absent' to '1.3.0-0ubuntu5'
'python-functools32' changed from 'absent' to '3.2.3.2-2'
'python-migrate' changed from 'absent' to '0.11.0-1~u16.04+mcp3'
'python-cachetools' changed from 'absent' to '2.0.0-2.0~u16.04+mcp1'
'libboost-thread1.58.0' changed from 'absent' to '1.58.0+dfsg-5ubuntu3.1'
'libquadmath0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.11'
'python-editor' changed from 'absent' to '0.4-2'
'python-egenix-mxtools' changed from 'absent' to '3.2.9-1'
'python-blinker' changed from 'absent' to '1.3.dfsg2-1build1'
'python-roman' changed from 'absent' to '2.0.0-2'
'python-pastescript' changed from 'absent' to '2.0.2-2~u16.04+mcp'
'sg3-utils' changed from 'absent' to '1.40-0ubuntu1'
'python-tenacity' changed from 'absent' to '4.12.0-1~u16.04+mcp'
'python-oslo.versionedobjects' changed from 'absent' to '1.33.3-1~u16.04+mcp9'
'python-setuptools' changed from 'absent' to '33.1.1-1~u16.04+mcp'
'docutils-doc' changed from 'absent' to '0.12+dfsg-1'
'python-dbus' changed from 'absent' to '1.2.0-3'
'librados2' changed from 'absent' to '10.2.11-0ubuntu0.16.04.1'
'python-traceback2' changed from 'absent' to '1.4.0-3'
'python-fixtures' changed from 'absent' to '3.0.0-1.1~u16.04+mcp2'
'python-pycadf' changed from 'absent' to '2.7.0-1~u16.04+mcp5'
'python-httplib2' changed from 'absent' to '0.9.1+dfsg-1'
'python-testtools' changed from 'absent' to '2.3.0-1.0~u16.04+mcp1'
'libboost-system1.58.0' changed from 'absent' to '1.58.0+dfsg-5ubuntu3.1'
'python-egenix-mxdatetime' changed from 'absent' to '3.2.9-1'
'python-anyjson' changed from 'absent' to '0.3.3-1build1'
'libnss3-nssdb' changed from 'absent' to '2:3.28.4-0ubuntu0.16.04.5'
'python-pymemcache' changed from 'absent' to '1.3.2-2ubuntu1'
'libblas3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-dnspython' changed from 'absent' to '1.14.0-3.1~u16.04+mcp2'
'python-babel' changed from 'absent' to '2.6.0+dfsg.1-1~u16.04+mcp'
'python2.7-paramiko' changed from 'absent' to '1'
'python-jsonschema' changed from 'absent' to '2.6.0-2.0~u16.04+mcp1'
'python-pil' changed from 'absent' to '3.1.2-0ubuntu1.1'
'python-oslo.privsep' changed from 'absent' to '1.29.0-1~u16.04+mcp'
'python2.7-lxml' changed from 'absent' to '1'
'python-suds' changed from 'absent' to '0.7~git20150727.94664dd-3'
'python-oslo.db' changed from 'absent' to '4.40.1-1~u16.04+mcp5'
'python2.7-sqlalchemy-ext' changed from 'absent' to '1'
'libnspr4' changed from 'absent' to '2:4.13.1-0ubuntu0.16.04.1'
'python-os-win' changed from 'absent' to '3.0.0-1.0~u16.04+mcp2'
'python-tz' changed from 'absent' to '2014.10~dfsg1-0ubuntu2'
'python2.7-simplejson' changed from 'absent' to '1'
'libtiff5' changed from 'absent' to '4.0.6-1ubuntu0.6'
'python-funcsigs' changed from 'absent' to '1.0.2-4.0~u16.04+mcp1'
'python-scgi' changed from 'absent' to '1.13-1.1build1'
'python2.7-pil' changed from 'absent' to '1'
'os-brick-common' changed from 'absent' to '2.5.6-1~u16.04+mcp3'
'python-repoze.lru' changed from 'absent' to '0.6-6'
'python-posix-ipc' changed from 'absent' to '0.9.8-2build2'
'formencode-i18n' changed from 'absent' to '1.3.0-0ubuntu5'
'python2.7-testtools' changed from 'absent' to '1'
'python-alembic' changed from 'absent' to '1.0.0-2~u16.04+mcp'
'docutils' changed from 'absent' to '1'
'python2.7-dbus' changed from 'absent' to '1'
'python-oslo.middleware' changed from 'absent' to '3.36.0-1~u16.04+mcp6'
'python-pygments' changed from 'absent' to '2.2.0+dfsg-1~u16.04+mcp2'
'python-pillow' changed from 'absent' to '1'
'libpaperg' changed from 'absent' to '1'
'liblapack.so.3' changed from 'absent' to '1'
'python2.7-netifaces' changed from 'absent' to '1'
'python-numpy-dev' changed from 'absent' to '1'
'liblcms2-2' changed from 'absent' to '2.6-3ubuntu2.1'
'docutils-common' changed from 'absent' to '0.12+dfsg-1'
'python-oslo.context' changed from 'absent' to '1:2.21.0-1~u16.04+mcp4'
'qemu-block-extra' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'sharutils' changed from 'absent' to '1:4.15.2-1ubuntu0.1'
'python-cursive' changed from 'absent' to '0.2.1-1.0~u16.04+mcp1'
'qemu-utils' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'python-oslo.cache' changed from 'absent' to '1.30.2-1~u16.04+mcp3'
'python2.7-pyinotify' changed from 'absent' to '1'
'python-webob' changed from 'absent' to '1:1.8.2-1~u16.04+mcp'
'python-pyparsing' changed from 'absent' to '2.2.0+dfsg1-2~u16.04+mcp1'
'python-babel-localedata' changed from 'absent' to '2.6.0+dfsg.1-1~u16.04+mcp'
'python-mimeparse' changed from 'absent' to '0.1.4-1build1'
'python-barbicanclient' changed from 'absent' to '4.7.2-2~u16.04+mcp4'
'python-castellan' changed from 'absent' to '0.19.0-1~u16.04+mcp4'
'python-cmd2' changed from 'absent' to '0.6.8-1'
'python-oslo.vmware' changed from 'absent' to '2.26.0-2~u16.04+mcp'
'python-distribute' changed from 'absent' to '1'
'libconfig-general-perl' changed from 'absent' to '2.60-1'
'libsgutils2-2' changed from 'absent' to '1.40-0ubuntu1'
'python-iso8601' changed from 'absent' to '0.1.11-1'
'python-jsonpatch' changed from 'absent' to '1.21-1~u16.04+mcp1'
'libwebpmux1' changed from 'absent' to '0.4.4-1'
'tgt' changed from 'absent' to '1:1.0.63-1ubuntu1.1'
'python-testscenarios' changed from 'absent' to '0.4-4'
'python-oslo.policy' changed from 'absent' to '1.38.1-1~u16.04+mcp'
'python-stevedore' changed from 'absent' to '1:1.29.0-1~u16.04+mcp4'
'python-paste' changed from 'absent' to '2.0.3+dfsg-4.1~u16.04+mcp1'
'python-lxml' changed from 'absent' to '3.5.0-1ubuntu0.1'
'python-oslo.config' changed from 'absent' to '1:6.4.0-1~u16.04+mcp'
'libnss3' changed from 'absent' to '2:3.28.4-0ubuntu0.16.04.5'
'python-paramiko' changed from 'absent' to '2.0.0-1.1~u16.04+mcp2'
'python-futurist' changed from 'absent' to '1.6.0-1.0~u16.04+mcp7'
'python-f2py' changed from 'absent' to '1'
'libpaper1' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python-fasteners' changed from 'absent' to '0.12.0-2ubuntu1'
'python2.7-gi' changed from 'absent' to '1'
'python-linecache2' changed from 'absent' to '1.0.0-2'
'python-pastedeploy-tpl' changed from 'absent' to '1.5.2-1'
'python-oauthlib' changed from 'absent' to '1.0.3-1'
'python-oslo-db' changed from 'absent' to '1'
'python2.7-testscenarios' changed from 'absent' to '1'
'libblas-common' changed from 'absent' to '3.6.0-2ubuntu2'
'libgfortran3' changed from 'absent' to '5.4.0-6ubuntu1~16.04.11'
'python-gi' changed from 'absent' to '3.20.0-0ubuntu1'
'libpq5' changed from 'absent' to '9.5.16-0ubuntu0.16.04.1'
'pycadf-common' changed from 'absent' to '2.7.0-1~u16.04+mcp5'
'python-contextlib2' changed from 'absent' to '0.5.1-1'
'libjpeg8' changed from 'absent' to '8c-2ubuntu8'
'python-oslo.serialization' changed from 'absent' to '2.27.0-1~u16.04+mcp5'
'python-oslo.utils' changed from 'absent' to '3.36.4-1~u16.04+mcp'
'python-taskflow' changed from 'absent' to '3.1.0-1.0~u16.04+mcp9'
'python-cinder' changed from 'absent' to '2:13.0.4-0ubuntu3~u16.04+mcp65'
'python-automaton' changed from 'absent' to '1.15.0-1.0~u16.04+mcp5'
'python-warlock' changed from 'absent' to '1.2.0-2.0~u16.04+mcp1'
'python-oslo.rootwrap' changed from 'absent' to '5.14.1-1~u16.04+mcp6'
'python2.7-iso8601' changed from 'absent' to '1'
'python-numpy' changed from 'absent' to '1:1.11.0-1ubuntu1'
'python-simplejson' changed from 'absent' to '3.8.1-1ubuntu2'
'python-wrapt' changed from 'absent' to '1.8.0-5build2'
'python-tooz' changed from 'absent' to '1.60.1-1.0~u16.04+mcp2'
'python-docutils' changed from 'absent' to '0.12+dfsg-1'
'python-openid' changed from 'absent' to '2.2.5-6'
'python-pastedeploy' changed from 'absent' to '1.5.2-1'
'python2.7-cmd2' changed from 'absent' to '1'
'libpaper-utils' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python2.7-zope.interface' changed from 'absent' to '1'
'python-cliff' changed from 'absent' to '2.11.1-1~u16.04+mcp6'
'python-oslo.i18n' changed from 'absent' to '3.21.0-1~u16.04+mcp6'
'python-bs4' changed from 'absent' to '4.6.0-1~u16.04+mcp1'
'cinder-volume' changed from 'absent' to '2:13.0.4-0ubuntu3~u16.04+mcp65'
'python-oslo.reports' changed from 'absent' to '1.28.0-2~u16.04+mcp6'
'python2.7-greenlet' changed from 'absent' to '1'
'python-networkx' changed from 'absent' to '1.11-1ubuntu1'
'python-statsd' changed from 'absent' to '3.2.1-2~u16.04+mcp2'
'libboost-random1.58.0' changed from 'absent' to '1.58.0+dfsg-5ubuntu3.1'
'python-redis' changed from 'absent' to '2.10.5-1ubuntu1'
'python-oslo-utils' changed from 'absent' to '1'
'libblas.so.3' changed from 'absent' to '1'
'python-novaclient' changed from 'absent' to '2:11.0.0-2~u16.04+mcp20'
'python-unicodecsv' changed from 'absent' to '0.14.1-1'
'python-memcache' changed from 'absent' to '1.57+fixed-1~u16.04+mcp1'
'python-mock' changed from 'absent' to '2.0.0-1.1~u16.04+mcp2'
'python-rfc3986' changed from 'absent' to '0.3.1-2.1~u16.04+mcp2'
'python-eventlet' changed from 'absent' to '0.20.0-5~u16.04+mcp0'
'python-unittest2' changed from 'absent' to '1.1.0-6.1'
'python2.7-pyparsing' changed from 'absent' to '1'
'python-oslo.log' changed from 'absent' to '3.39.2-1~u16.04+mcp2'
'python-pyinotify' changed from 'absent' to '0.9.6-1.1~u16.04+mcp2'
'libjpeg-turbo8' changed from 'absent' to '1.4.2-0ubuntu3.1'
'python-amqp' changed from 'absent' to '2.3.2-1~u16.04+mcp2'
'python-pbr' changed from 'absent' to '4.2.0-4~u16.04+mcp1'
'libwebp5' changed from 'absent' to '0.4.4-1'
'python-zope.interface' changed from 'absent' to '4.1.3-1build1'
'python-numpy-abi9' changed from 'absent' to '1'
'python-vine' changed from 'absent' to '1.1.3+dfsg-2~u16.04+mcp3'
'python-defusedxml' changed from 'absent' to '0.5.0-1~u16.04+mcp1'
'python-kazoo' changed from 'absent' to '2.2.1-1ubuntu1'
'python-decorator' changed from 'absent' to '4.3.0-1~u16.04+mcp'
'python-osprofiler' changed from 'absent' to '2.3.0-1~u16.04+mcp'
'python-oslo.messaging' changed from 'absent' to '8.1.2-1~u16.04+mcp12'
'python-os-brick' changed from 'absent' to '2.5.6-1~u16.04+mcp3'
'python-debtcollector' changed from 'absent' to '1.20.0-2~u16.04+mcp'
'python-keyrings.alt' changed from 'absent' to '1.1.1-1'
'python-oslo-log' changed from 'absent' to '1'
'python-json-pointer' changed from 'absent' to '1.9-3'
'python-dogpile.cache' changed from 'absent' to '0.6.2-1.1~u16.04+mcp2'
'python-html5lib' changed from 'absent' to '0.999-4'
'python-swiftclient' changed from 'absent' to '1:3.6.0-2~u16.04+mcp6'
'liblapack3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-testresources' changed from 'absent' to '2.0.0-1.0~u16.04+mcp1'
'python-keystoneclient' changed from 'absent' to '1:3.17.0-1~u16.04+mcp6'
'python-greenlet' changed from 'absent' to '0.4.15-1~u16.04+mcp'
'python-sqlalchemy-ext' changed from 'absent' to '1.2.10+ds1-1~u16.04+mcp'
'python-oslo.service' changed from 'absent' to '1.31.7-1~u16.04+mcp4'
'librbd1' changed from 'absent' to '10.2.11-0ubuntu0.16.04.1'
'python-oslo-context' changed from 'absent' to '1'
'python-keyring' changed from 'absent' to '8.5.1-1.1~u16.04+mcp2'
'python-zake' changed from 'absent' to '0.1.6-1'
'python-zopeinterface' changed from 'absent' to '1'
'libboost-iostreams1.58.0' changed from 'absent' to '1.58.0+dfsg-5ubuntu3.1'
'python-numpy-api10' changed from 'absent' to '1'
'python-keystoneauth1' changed from 'absent' to '3.10.0-1~u16.04+mcp10'
'python-tempita' changed from 'absent' to '0.5.2-1build1'
'python-sqlalchemy' changed from 'absent' to '1.2.10+ds1-1~u16.04+mcp'
'python-keystonemiddleware' changed from 'absent' to '5.2.0-2~u16.04+mcp10'
'python-zope' changed from 'absent' to '1'
'python-voluptuous' changed from 'absent' to '0.9.3-1.1~u16.04+mcp2'
'python-extras' changed from 'absent' to '1.0.0-2.0~u16.04+mcp1'
'cinder-common' changed from 'absent' to '2:13.0.4-0ubuntu3~u16.04+mcp65'
'python-oslo-rootwrap' changed from 'absent' to '1'
'python-netifaces' changed from 'absent' to '0.10.4-0.1build2'
'libjbig0' changed from 'absent' to '2.1-3.1'

2019-04-30 22:26:03,746 [salt.state       :915 ][INFO    ][11269] Loading fresh modules for state activity
2019-04-30 22:26:03,769 [salt.state       :1951][INFO    ][11269] Completed state [cinder-volume] at time 22:26:03.769802 duration_in_ms=111087.718
2019-04-30 22:26:03,773 [salt.state       :1780][INFO    ][11269] Running state [lvm2] at time 22:26:03.773929
2019-04-30 22:26:03,774 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [lvm2]
2019-04-30 22:26:04,618 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:04,618 [salt.state       :1951][INFO    ][11269] Completed state [lvm2] at time 22:26:04.618665 duration_in_ms=844.735
2019-04-30 22:26:04,619 [salt.state       :1780][INFO    ][11269] Running state [sysfsutils] at time 22:26:04.618983
2019-04-30 22:26:04,619 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [sysfsutils]
2019-04-30 22:26:04,623 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:04,623 [salt.state       :1951][INFO    ][11269] Completed state [sysfsutils] at time 22:26:04.623908 duration_in_ms=4.925
2019-04-30 22:26:04,624 [salt.state       :1780][INFO    ][11269] Running state [sg3-utils] at time 22:26:04.624126
2019-04-30 22:26:04,624 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [sg3-utils]
2019-04-30 22:26:04,628 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:04,628 [salt.state       :1951][INFO    ][11269] Completed state [sg3-utils] at time 22:26:04.628940 duration_in_ms=4.815
2019-04-30 22:26:04,629 [salt.state       :1780][INFO    ][11269] Running state [python-cinder] at time 22:26:04.629170
2019-04-30 22:26:04,629 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [python-cinder]
2019-04-30 22:26:04,633 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:04,633 [salt.state       :1951][INFO    ][11269] Completed state [python-cinder] at time 22:26:04.633732 duration_in_ms=4.562
2019-04-30 22:26:04,633 [salt.state       :1780][INFO    ][11269] Running state [python-mysqldb] at time 22:26:04.633949
2019-04-30 22:26:04,634 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [python-mysqldb]
2019-04-30 22:26:04,647 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:26:04,661 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-mysqldb'] in directory '/root'
2019-04-30 22:26:07,640 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:07,666 [salt.state       :300 ][INFO    ][11269] Made the following changes:
'python2.7-mysqldb' changed from 'absent' to '1'
'mysql-common' changed from 'absent' to '5.7.26-0ubuntu0.16.04.1'
'mysql-common-5.6' changed from 'absent' to '1'
'libmysqlclient20' changed from 'absent' to '5.7.26-0ubuntu0.16.04.1'
'python-mysqldb' changed from 'absent' to '1.3.7-1build2'

2019-04-30 22:26:07,677 [salt.state       :915 ][INFO    ][11269] Loading fresh modules for state activity
2019-04-30 22:26:07,697 [salt.state       :1951][INFO    ][11269] Completed state [python-mysqldb] at time 22:26:07.697798 duration_in_ms=3063.848
2019-04-30 22:26:07,701 [salt.state       :1780][INFO    ][11269] Running state [p7zip] at time 22:26:07.701304
2019-04-30 22:26:07,701 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [p7zip]
2019-04-30 22:26:08,106 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:26:08,119 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'p7zip'] in directory '/root'
2019-04-30 22:26:09,898 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:09,922 [salt.state       :300 ][INFO    ][11269] Made the following changes:
'p7zip' changed from 'absent' to '9.20.1~dfsg.1-4.2ubuntu0.1'

2019-04-30 22:26:09,937 [salt.state       :915 ][INFO    ][11269] Loading fresh modules for state activity
2019-04-30 22:26:10,042 [salt.state       :1951][INFO    ][11269] Completed state [p7zip] at time 22:26:10.042161 duration_in_ms=2340.857
2019-04-30 22:26:10,045 [salt.state       :1780][INFO    ][11269] Running state [gettext-base] at time 22:26:10.045634
2019-04-30 22:26:10,045 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [gettext-base]
2019-04-30 22:26:10,430 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:10,431 [salt.state       :1951][INFO    ][11269] Completed state [gettext-base] at time 22:26:10.431185 duration_in_ms=385.55
2019-04-30 22:26:10,431 [salt.state       :1780][INFO    ][11269] Running state [python-memcache] at time 22:26:10.431475
2019-04-30 22:26:10,431 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [python-memcache]
2019-04-30 22:26:10,436 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:10,436 [salt.state       :1951][INFO    ][11269] Completed state [python-memcache] at time 22:26:10.436418 duration_in_ms=4.943
2019-04-30 22:26:10,436 [salt.state       :1780][INFO    ][11269] Running state [python-pycadf] at time 22:26:10.436649
2019-04-30 22:26:10,436 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [python-pycadf]
2019-04-30 22:26:10,441 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:10,441 [salt.state       :1951][INFO    ][11269] Completed state [python-pycadf] at time 22:26:10.441373 duration_in_ms=4.725
2019-04-30 22:26:10,441 [salt.state       :1780][INFO    ][11269] Running state [cinder_volume_ssl_mysql] at time 22:26:10.441933
2019-04-30 22:26:10,442 [salt.state       :1813][INFO    ][11269] Executing state test.show_notification for [cinder_volume_ssl_mysql]
2019-04-30 22:26:10,442 [salt.state       :300 ][INFO    ][11269] Running cinder._ssl.volume_mysql
2019-04-30 22:26:10,442 [salt.state       :1951][INFO    ][11269] Completed state [cinder_volume_ssl_mysql] at time 22:26:10.442381 duration_in_ms=0.448
2019-04-30 22:26:10,442 [salt.state       :1780][INFO    ][11269] Running state [cinder_volume_ssl_rabbitmq] at time 22:26:10.442621
2019-04-30 22:26:10,442 [salt.state       :1813][INFO    ][11269] Executing state test.show_notification for [cinder_volume_ssl_rabbitmq]
2019-04-30 22:26:10,442 [salt.state       :300 ][INFO    ][11269] Running cinder._ssl.rabbitmq
2019-04-30 22:26:10,443 [salt.state       :1951][INFO    ][11269] Completed state [cinder_volume_ssl_rabbitmq] at time 22:26:10.443033 duration_in_ms=0.412
2019-04-30 22:26:10,444 [salt.state       :1780][INFO    ][11269] Running state [/var/lock/cinder] at time 22:26:10.444787
2019-04-30 22:26:10,444 [salt.state       :1813][INFO    ][11269] Executing state file.directory for [/var/lock/cinder]
2019-04-30 22:26:10,445 [salt.state       :300 ][INFO    ][11269] {'/var/lock/cinder': 'New Dir'}
2019-04-30 22:26:10,445 [salt.state       :1951][INFO    ][11269] Completed state [/var/lock/cinder] at time 22:26:10.445874 duration_in_ms=1.087
2019-04-30 22:26:10,446 [salt.state       :1780][INFO    ][11269] Running state [/etc/cinder/cinder.conf] at time 22:26:10.446227
2019-04-30 22:26:10,446 [salt.state       :1813][INFO    ][11269] Executing state file.managed for [/etc/cinder/cinder.conf]
2019-04-30 22:26:10,469 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/files/rocky/cinder.conf.volume.Debian'
2019-04-30 22:26:10,564 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_default.conf'
2019-04-30 22:26:10,589 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_log.conf'
2019-04-30 22:26:10,605 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/files/backend/_lvm.conf'
2019-04-30 22:26:10,618 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/castellan/_barbican.conf'
2019-04-30 22:26:10,631 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/keystoneauth/_type_password.conf'
2019-04-30 22:26:10,658 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/keystonemiddleware/_auth_token.conf'
2019-04-30 22:26:10,680 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_database.conf'
2019-04-30 22:26:10,698 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_notifications.conf'
2019-04-30 22:26:10,711 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_rabbit.conf'
2019-04-30 22:26:10,750 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_middleware.conf'
2019-04-30 22:26:10,759 [salt.state       :300 ][INFO    ][11269] File changed:
--- 
+++ 
@@ -1,15 +1,4401 @@
+
 [DEFAULT]
+
+#
+# From cinder
+#
+
 rootwrap_config = /etc/cinder/rootwrap.conf
 api_paste_confg = /etc/cinder/api-paste.ini
+#
+# From oslo.messaging
+#
+
+# Size of RPC connection pool. (integer value)
+#rpc_conn_pool_size = 30
+
+# The pool size limit for connections expiration policy (integer
+# value)
+#conn_pool_min_size = 2
+
+# The time-to-live in sec of idle connections in the pool (integer
+# value)
+#conn_pool_ttl = 1200
+
+# ZeroMQ bind address. Should be a wildcard (*), an ethernet
+# interface, or IP. The "host" option should point or resolve to this
+# address. (string value)
+#rpc_zmq_bind_address = *
+
+# MatchMaker driver. (string value)
+# Possible values:
+# redis - <No description provided>
+# sentinel - <No description provided>
+# dummy - <No description provided>
+#rpc_zmq_matchmaker = redis
+
+# Number of ZeroMQ contexts, defaults to 1. (integer value)
+#rpc_zmq_contexts = 1
+
+# Maximum number of ingress messages to locally buffer per topic.
+# Default is unlimited. (integer value)
+#rpc_zmq_topic_backlog = <None>
+
+# Directory for holding IPC sockets. (string value)
+#rpc_zmq_ipc_dir = /var/run/openstack
+
+# Name of this node. Must be a valid hostname, FQDN, or IP address.
+# Must match "host" option, if running Nova. (string value)
+#rpc_zmq_host = localhost
+
+# Number of seconds to wait before all pending messages will be sent
+# after closing a socket. The default value of -1 specifies an
+# infinite linger period. The value of 0 specifies no linger period.
+# Pending messages shall be discarded immediately when the socket is
+# closed. Positive values specify an upper bound for the linger
+# period. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
+#zmq_linger = -1
+
+# The default number of seconds that poll should wait. Poll raises
+# timeout exception when timeout expired. (integer value)
+#rpc_poll_timeout = 1
+
+
+# Expiration timeout in seconds of a name service record about
+# existing target ( < 0 means no timeout). (integer value)
+#zmq_target_expire = 300
+
+# Update period in seconds of a name service record about existing
+# target. (integer value)
+#zmq_target_update = 180
+
+# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
+# (boolean value)
+#use_pub_sub = false
+
+# Use ROUTER remote proxy. (boolean value)
+#use_router_proxy = false
+
+# This option makes direct connections dynamic or static. It makes
+# sense only with use_router_proxy=False which means to use direct
+# connections for direct message types (ignored otherwise). (boolean
+# value)
+#use_dynamic_connections = false
+
+# How many additional connections to a host will be made for failover
+# reasons. This option is actual only in dynamic connections mode.
+# (integer value)
+#zmq_failover_connections = 2
+
+# Minimal port number for random ports range. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#rpc_zmq_min_port = 49153
+
+# Maximal port number for random ports range. (integer value)
+# Minimum value: 1
+# Maximum value: 65536
+#rpc_zmq_max_port = 65536
+
+# Number of retries to find free port number before fail with
+# ZMQBindError. (integer value)
+#rpc_zmq_bind_port_retries = 100
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#rpc_zmq_serialization = json
+
+# This option configures round-robin mode in zmq socket. True means
+# not keeping a queue when server side disconnects. False means to
+# keep queue and messages even if server is disconnected, when the
+# server appears we send all accumulated messages to it. (boolean
+# value)
+#zmq_immediate = true
+
+# Enable/disable TCP keepalive (KA) mechanism. The default value of -1
+# (or any other negative value) means to skip any overrides and leave
+# it to OS default; 0 and 1 (or any other positive value) mean to
+# disable and enable the option respectively. (integer value)
+#zmq_tcp_keepalive = -1
+
+# The duration between two keepalive transmissions in idle condition.
+# The unit is platform dependent, for example, seconds in Linux,
+# milliseconds in Windows etc. The default value of -1 (or any other
+# negative value and 0) means to skip any overrides and leave it to OS
+# default. (integer value)
+#zmq_tcp_keepalive_idle = -1
+
+# The number of retransmissions to be carried out before declaring
+# that remote end is not available. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_cnt = -1
+
+# The duration between two successive keepalive retransmissions, if
+# acknowledgement to the previous keepalive transmission is not
+# received. The unit is platform dependent, for example, seconds in
+# Linux, milliseconds in Windows etc. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_intvl = -1
+
+# Maximum number of (green) threads to work concurrently. (integer
+# value)
+#rpc_thread_pool_size = 100
+
+# Expiration timeout in seconds of a sent/received message after which
+# it is not tracked anymore by a client/server. (integer value)
+#rpc_message_ttl = 300
+
+# Wait for message acknowledgements from receivers. This mechanism
+# works only via proxy without PUB/SUB. (boolean value)
+#rpc_use_acks = false
+
+# Number of seconds to wait for an ack from a cast/call. After each
+# retry attempt this timeout is multiplied by some specified
+# multiplier. (integer value)
+#rpc_ack_timeout_base = 15
+
+# Number to multiply base ack timeout by after each retry attempt.
+# (integer value)
+#rpc_ack_timeout_multiplier = 2
+
+# Default number of message sending attempts in case of any problems
+# occurred: positive value N means at most N retries, 0 means no
+# retries, None or -1 (or any other negative values) mean to retry
+# forever. This option is used only if acknowledgments are enabled.
+# (integer value)
+#rpc_retry_attempts = 3
+
+# List of publisher hosts SubConsumer can subscribe on. This option
+# has higher priority then the default publishers list taken from the
+# matchmaker. (list value)
+#subscribe_on =
+
+# Size of executor thread pool when executor is threading or eventlet.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
+#executor_thread_pool_size = 64
+
+# Seconds to wait for a response from a call. (integer value)
+#rpc_response_timeout = 60
+rpc_response_timeout = 3600
+
+# The network address and optional user credentials for connecting to
+# the messaging backend, in URL format. The expected format is:
+#
+# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
+#
+# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
+#
+# For full details on the fields in the URL see the documentation of
+# oslo_messaging.TransportURL at
+# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
+# (string value)
+#transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
+
+# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
+# drivers include amqp and zmq. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rpc_backend = rabbit
+
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the transport_url
+# option. (string value)
+#control_exchange = openstack
+control_exchange = cinder
+
+#
+# From oslo.log
+#
+
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
+# Note: This option can be changed without restarting.
+#debug = false
+
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
+# Note: This option can be changed without restarting.
+# Deprecated group/name - [DEFAULT]/log_config
+
+# Defines the format string for %%(asctime)s in log records. Default:
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
+#log_date_format = %Y-%m-%d %H:%M:%S
+
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logfile
+#log_file = <None>
+
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logdir
+#log_dir = <None>
+
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages.This option is ignored if log_config_append is set.
+# (boolean value)
+#use_journal = false
+
+# Syslog facility to receive log lines. This option is ignored if
+# log_config_append is set. (string value)
+#syslog_log_facility = LOG_USER
+
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_json = false
+
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_stderr = false
+
+# Format string to use for log messages with context. (string value)
+#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
+# value)
+#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+
+# Defines the format string for %(user_identity)s that is used in
+# logging_context_format_string. (string value)
+#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+
+# Enables or disables publication of error events. (boolean value)
+#publish_errors = false
+
+# The format for an instance that is passed with the log message.
+# (string value)
+#instance_format = "[instance: %(uuid)s] "
+
+# The format for an instance UUID that is passed with the log message.
+# (string value)
+#instance_uuid_format = "[instance: %(uuid)s] "
+
+# Interval, number of seconds, of log rate limiting. (integer value)
+#rate_limit_interval = 0
+
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
+#rate_limit_burst = 0
+
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
+#rate_limit_except_level = CRITICAL
+
+# Enables or disables fatal status of deprecations. (boolean value)
+#fatal_deprecations = false
+
+# The maximum number of items that a collection resource returns in a single
+# response (integer value)
+#osapi_max_limit = 1000
+
+# Json file indicating user visible filter parameters for list queries. (string
+# value)
+# Deprecated group/name - [DEFAULT]/query_volume_filters
+#resource_query_filters_file = /etc/cinder/resource_filters.json
+
+# DEPRECATED: Volume filter options which non-admin user could use to query
+# volumes. Default values are: ['name', 'status', 'metadata',
+# 'availability_zone' ,'bootable', 'group_id'] (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#query_volume_filters = name,status,metadata,availability_zone,bootable,group_id
+
+# DEPRECATED: Allow the ability to modify the extra-spec settings of an in-use
+# volume-type. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#allow_inuse_volume_type_modification = false
+
+# Treat X-Forwarded-For as the canonical remote address. Only enable this if
+# you have a sanitizing proxy. (boolean value)
+#use_forwarded_for = false
+
+# Public url to use for versions endpoint. The default is None, which will use
+# the request's host_url attribute to populate the URL base. If Cinder is
+# operating behind a proxy, you will want to change this to represent the
+# proxy's URL. (string value)
+#public_endpoint = <None>
+
+# Backup services use same backend. (boolean value)
+#backup_use_same_host = false
+
+# Compression algorithm (None to disable) (string value)
+# Possible values:
+# none - <No description provided>
+# off - <No description provided>
+# no - <No description provided>
+# zlib - <No description provided>
+# gzip - <No description provided>
+# bz2 - <No description provided>
+# bzip2 - <No description provided>
+#backup_compression_algorithm = zlib
+
+# Backup metadata version to be used when backing up volume metadata. If this
+# number is bumped, make sure the service doing the restore supports the new
+# version. (integer value)
+#backup_metadata_version = 2
+
+# The number of chunks or objects, for which one Ceilometer notification will
+# be sent (integer value)
+#backup_object_number_per_notification = 10
+
+# Interval, in seconds, between two progress notifications reporting the backup
+# status (integer value)
+#backup_timer_interval = 120
+
+# Ceph configuration file to use. (string value)
+#backup_ceph_conf = /etc/ceph/ceph.conf
+
+# The Ceph user to connect with. Default here is to use the same user as for
+# Cinder volumes. If not using cephx this should be set to None. (string value)
+#backup_ceph_user = cinder
+
+# The chunk size, in bytes, that a backup is broken into before transfer to the
+# Ceph object store. (integer value)
+#backup_ceph_chunk_size = 134217728
+
+# The Ceph pool where volume backups are stored. (string value)
+#backup_ceph_pool = backups
+
+# RBD stripe unit to use when creating a backup image. (integer value)
+#backup_ceph_stripe_unit = 0
+
+# RBD stripe count to use when creating a backup image. (integer value)
+#backup_ceph_stripe_count = 0
+
+# If True, apply JOURNALING and EXCLUSIVE_LOCK feature bits to the backup RBD
+# objects to allow mirroring (boolean value)
+#backup_ceph_image_journals = false
+
+# If True, always discard excess bytes when restoring volumes i.e. pad with
+# zeroes. (boolean value)
+#restore_discard_excess_bytes = true
+
+# The GCS bucket to use. (string value)
+#backup_gcs_bucket = <None>
+
+# The size in bytes of GCS backup objects. (integer value)
+#backup_gcs_object_size = 52428800
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_gcs_object_size has to be multiple of backup_gcs_block_size. (integer
+# value)
+#backup_gcs_block_size = 32768
+
+# GCS object will be downloaded in chunks of bytes. (integer value)
+#backup_gcs_reader_chunk_size = 2097152
+
+# GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the
+# file is to be uploaded as a single chunk. (integer value)
+#backup_gcs_writer_chunk_size = 2097152
+
+# Number of times to retry. (integer value)
+#backup_gcs_num_retries = 3
+
+# List of GCS error codes. (list value)
+#backup_gcs_retry_error_codes = 429
+
+# Location of GCS bucket. (string value)
+#backup_gcs_bucket_location = US
+
+# Storage class of GCS bucket. (string value)
+#backup_gcs_storage_class = NEARLINE
+
+# Absolute path of GCS service account credential file. (string value)
+#backup_gcs_credential_file = <None>
+
+# Owner project id for GCS bucket. (string value)
+#backup_gcs_project_id = <None>
+
+# Http user-agent string for gcs api. (string value)
+#backup_gcs_user_agent = gcscinder
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the GCS backend storage. The default
+# value is True to enable the timer. (boolean value)
+#backup_gcs_enable_progress_timer = true
+
+# URL for http proxy access. (uri value)
+#backup_gcs_proxy_url = <None>
+
+# Base dir containing mount point for gluster share. (string value)
+#glusterfs_backup_mount_point = $state_path/backup_mount
+
+# GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format.
+# Eg: 1.2.3.4:backup_vol (string value)
+#glusterfs_backup_share = <None>
+
+# Base dir containing mount point for NFS share. (string value)
+#backup_mount_point_base = $state_path/backup_mount
+
+# NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format.
+# (string value)
+#backup_share = <None>
+
+# Mount options passed to the NFS client. See NFS man page for details. (string
+# value)
+#backup_mount_options = <None>
+
+# The maximum size in bytes of the files used to hold backups. If the volume
+# being backed up exceeds this size, then it will be backed up into multiple
+# files.backup_file_size must be a multiple of backup_sha_block_size_bytes.
+# (integer value)
+#backup_file_size = 1999994880
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_file_size has to be multiple of backup_sha_block_size_bytes. (integer
+# value)
+#backup_sha_block_size_bytes = 32768
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the backend storage. The default
+# value is True to enable the timer. (boolean value)
+#backup_enable_progress_timer = true
+
+# Path specifying where to store backups. (string value)
+#backup_posix_path = $state_path/backup
+
+# Custom directory to use for backups. (string value)
+#backup_container = <None>
+
+# The URL of the Swift endpoint (uri value)
+#backup_swift_url = <None>
+
+# The URL of the Keystone endpoint (uri value)
+#backup_swift_auth_url = <None>
+
+# Info to match when looking for swift in the service catalog. Format is:
+# separated values of the form: <service_type>:<service_name>:<endpoint_type> -
+# Only used if backup_swift_url is unset (string value)
+#swift_catalog_info = object-store:swift:publicURL
+
+# Info to match when looking for keystone in the service catalog. Format is:
+# separated values of the form: <service_type>:<service_name>:<endpoint_type> -
+# Only used if backup_swift_auth_url is unset (string value)
+#keystone_catalog_info = identity:Identity Service:publicURL
+
+# Swift authentication mechanism (per_user or single_user). (string value)
+# Possible values:
+# per_user - <No description provided>
+# single_user - <No description provided>
+#backup_swift_auth = per_user
+
+# Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0
+# or "3" for auth 3.0 (string value)
+#backup_swift_auth_version = 1
+
+# Swift tenant/account name. Required when connecting to an auth 2.0 system
+# (string value)
+#backup_swift_tenant = <None>
+
+# Swift user domain name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_user_domain = <None>
+
+# Swift project domain name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_project_domain = <None>
+
+# Swift project/account name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_project = <None>
+
+# Swift user name (string value)
+#backup_swift_user = <None>
+
+# Swift key for authentication (string value)
+#backup_swift_key = <None>
+
+# The default Swift container to use (string value)
+#backup_swift_container = volumebackups
+
+# The size in bytes of Swift backup objects (integer value)
+#backup_swift_object_size = 52428800
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_swift_object_size has to be multiple of backup_swift_block_size.
+# (integer value)
+#backup_swift_block_size = 32768
+
+# The number of retries to make for Swift operations (integer value)
+#backup_swift_retry_attempts = 3
+
+# The backoff time in seconds between Swift retries (integer value)
+#backup_swift_retry_backoff = 2
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the Swift backend storage. The
+# default value is True to enable the timer. (boolean value)
+#backup_swift_enable_progress_timer = true
+
+# Location of the CA certificate file to use for swift client requests. (string
+# value)
+#backup_swift_ca_cert_file = <None>
+
+# Bypass verification of server certificate when making SSL connection to
+# Swift. (boolean value)
+#backup_swift_auth_insecure = false
+
+# Volume prefix for the backup id when backing up to TSM (string value)
+#backup_tsm_volume_prefix = backup
+
+# TSM password for the running username (string value)
+#backup_tsm_password = password
+
+# Enable or Disable compression for backups (boolean value)
+#backup_tsm_compression = true
+
+# Driver to use for backups. (string value)
+#backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
+
+# Offload pending backup delete during backup service startup. If false, the
+# backup service will remain down until all pending backups are deleted.
+# (boolean value)
+#backup_service_inithost_offload = true
+
+# Size of the native threads pool for the backups.  Most backup drivers rely
+# heavily on this, it can be decreased for specific drivers that don't.
+# (integer value)
+# Minimum value: 20
+#backup_native_threads_pool_size = 60
+
+# Number of backup processes to launch. Improves performance with concurrent
+# backups. (integer value)
+# Minimum value: 1
+# Maximum value: 4
+#backup_workers = 1
+
+# Name of this cluster. Used to group volume hosts that share the same backend
+# configurations to work in HA Active-Active mode.  Active-Active is not yet
+# supported. (string value)
+#cluster = <None>
+
+# Top-level directory for maintaining cinder's state (string value)
+state_path = /var/lib/cinder
+
+# IP address of this host (host address value)
+#my_ip = <HOST_IP_ADDRESS>
+my_ip = 10.167.4.55
+
+# A list of the URLs of glance API servers available to cinder
+# ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to
+# http. (list value)
+glance_api_servers = http://10.167.4.35:9292
+
+# Number retries when downloading an image from glance (integer value)
+# Minimum value: 0
+glance_num_retries = 0
+
+# Allow to perform insecure SSL (https) requests to glance (https will be used
+# but cert validation will not be performed). (boolean value)
+#glance_api_insecure = false
+
+# Enables or disables negotiation of SSL layer compression. In some cases
+# disabling compression can improve data throughput, such as when high network
+# bandwidth is available and you use compressed image formats like qcow2.
+# (boolean value)
+#glance_api_ssl_compression = false
+
+# Location of ca certificates file to use for glance client requests. (string
+# value)
+
+# http/https timeout value for glance operations. If no value (None) is
+# supplied here, the glanceclient default value is used. (integer value)
+#glance_request_timeout = <None>
+
+# DEPRECATED: Deploy v2 of the Cinder API. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#enable_v2_api = true
+
+# Deploy v3 of the Cinder API. (boolean value)
+enable_v3_api = true
+
+# Enables or disables rate limit of the API. (boolean value)
+#api_rate_limit = true
+
+# Specify list of extensions to load when using osapi_volume_extension option
+# with cinder.api.contrib.select_extensions (list value)
+#osapi_volume_ext_list =
+
+# osapi volume extension to load (multi valued)
+osapi_volume_extension = cinder.api.contrib.standard_extensions
+
+# Full class name for the Manager for volume (string value)
+#volume_manager = cinder.volume.manager.VolumeManager
+
+# Full class name for the Manager for volume backup (string value)
+#backup_manager = cinder.backup.manager.BackupManager
+
+# Full class name for the Manager for scheduler (string value)
+#scheduler_manager = cinder.scheduler.manager.SchedulerManager
+
+# Name of this node.  This can be an opaque identifier. It is not necessarily a
+# host name, FQDN, or IP address. (host address value)
+#host = localhost
+
+# Availability zone of this node. Can be overridden per volume backend with the
+# option "backend_availability_zone". (string value)
+#storage_availability_zone = nova
+
+# Default availability zone for new volumes. If not set, the
+# storage_availability_zone option value is used as the default for new
+# volumes. (string value)
+#default_availability_zone = <None>
+
+# If the requested Cinder availability zone is unavailable, fall back to the
+# value of default_availability_zone, then storage_availability_zone, instead
+# of failing. (boolean value)
+allow_availability_zone_fallback = True
+
+# Default volume type to use (string value)
+#default_volume_type = <None>
+
+# Default group type to use (string value)
+#default_group_type = <None>
+
+# Time period for which to generate volume usages. The options are hour, day,
+# month, or year. (string value)
+#volume_usage_audit_period = month
+
+# Path to the rootwrap configuration file to use for running commands as root
+# (string value)
+#rootwrap_config = /etc/cinder/rootwrap.conf
+
+# Enable monkey patching (boolean value)
+#monkey_patch = false
+
+# List of modules/decorators to monkey patch (list value)
+#monkey_patch_modules =
+
+# Maximum time since last check-in for a service to be considered up (integer
+# value)
+#service_down_time = 60
+
+# The full class name of the volume API class to use (string value)
+#volume_api_class = cinder.volume.api.API
+
+# The full class name of the volume backup API class (string value)
+#backup_api_class = cinder.backup.api.API
+
+# The strategy to use for auth. Supports noauth or keystone. (string value)
+# Possible values:
+# noauth - <No description provided>
+# keystone - <No description provided>
+auth_strategy = keystone
+
+# A list of backend names to use. These backend names should be backed by a
+# unique [CONFIG] group with its options (list value)
+#enabled_backends = <None>
+default_volume_type=lvm-driver
+
+enabled_backends=lvm-driver
+
+
+# Whether snapshots count against gigabyte quota (boolean value)
+#no_snapshot_gb_quota = false
+
+# The full class name of the volume transfer API class (string value)
+#transfer_api_class = cinder.transfer.api.API
+
+# The full class name of the consistencygroup API class (string value)
+#consistencygroup_api_class = cinder.consistencygroup.api.API
+
+# The full class name of the group API class (string value)
+#group_api_class = cinder.group.api.API
+
+# The full class name of the compute API class to use (string value)
+#compute_api_class = cinder.compute.nova.API
+
+# ID of the project which will be used as the Cinder internal tenant. (string
+# value)
+#cinder_internal_tenant_project_id = <None>
+
+# ID of the user to be used in volume operations as the Cinder internal tenant.
+# (string value)
+#cinder_internal_tenant_user_id = <None>
+
+# Services to be added to the available pool on create (boolean value)
+#enable_new_services = true
+
+# Template string to be used to generate volume names (string value)
+volume_name_template = volume-%s
+
+# Template string to be used to generate snapshot names (string value)
+#snapshot_name_template = snapshot-%s
+
+# Template string to be used to generate backup names (string value)
+#backup_name_template = backup-%s
+
+# Driver to use for database access (string value)
+#db_driver = cinder.db
+
+# A list of url schemes that can be downloaded directly via the direct_url.
+# Currently supported schemes: [file, cinder]. (list value)
+#allowed_direct_url_schemes =
+
+# Info to match when looking for glance in the service catalog. Format is:
+# separated values of the form: <service_type>:<service_name>:<endpoint_type> -
+# Only used if glance_api_servers are not provided. (string value)
+#glance_catalog_info = image:glance:publicURL
+
+# Default core properties of image (list value)
+#glance_core_properties = checksum,container_format,disk_format,image_name,image_id,min_disk,min_ram,name,size
+
+# Directory used for temporary storage during image conversion (string value)
+#image_conversion_dir = $state_path/conversion
+
+# message minimum life in seconds. (integer value)
+#message_ttl = 2592000
+
+# interval between periodic task runs to clean expired messages in seconds.
+# (integer value)
+#message_reap_interval = 86400
+
+# Number of volumes allowed per project (integer value)
+#quota_volumes = 10
+
+# Number of volume snapshots allowed per project (integer value)
+#quota_snapshots = 10
+
+# Number of consistencygroups allowed per project (integer value)
+#quota_consistencygroups = 10
+
+# Number of groups allowed per project (integer value)
+#quota_groups = 10
+
+# Total amount of storage, in gigabytes, allowed for volumes and snapshots per
+# project (integer value)
+#quota_gigabytes = 1000
+
+# Number of volume backups allowed per project (integer value)
+#quota_backups = 10
+
+# Total amount of storage, in gigabytes, allowed for backups per project
+# (integer value)
+#quota_backup_gigabytes = 1000
+
+# Number of seconds until a reservation expires (integer value)
+#reservation_expire = 86400
+
+# Interval between periodic task runs to clean expired reservations in seconds.
+# (integer value)
+#reservation_clean_interval = $reservation_expire
+
+# Count of reservations until usage is refreshed (integer value)
+#until_refresh = 0
+
+# Number of seconds between subsequent usage refreshes (integer value)
+#max_age = 0
+
+# Default driver to use for quota checks (string value)
+#quota_driver = cinder.quota.DbQuotaDriver
+
+# Enables or disables use of default quota class with default quota. (boolean
+# value)
+#use_default_quota_class = true
+
+# Max size allowed per volume, in gigabytes (integer value)
+#per_volume_size_limit = -1
+
+# The scheduler host manager class to use (string value)
+#scheduler_host_manager = cinder.scheduler.host_manager.HostManager
+
+# Maximum number of attempts to schedule a volume (integer value)
+#scheduler_max_attempts = 3
+
+# Which filter class names to use for filtering hosts when not specified in the
+# request. (list value)
+#scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
+
+# Which weigher class names to use for weighing hosts. (list value)
+#scheduler_default_weighers = CapacityWeigher
+
+# Which handler to use for selecting the host/pool after weighing (string
+# value)
+#scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler
+
+# Default scheduler driver to use (string value)
+#scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
+
+# Absolute path to scheduler configuration JSON file. (string value)
+#scheduler_json_config_location =
+
+# Multiplier used for weighing free capacity. Negative numbers mean to stack vs
+# spread. (floating point value)
+#capacity_weight_multiplier = 1.0
+
+# Multiplier used for weighing allocated capacity. Positive numbers mean to
+# stack vs spread. (floating point value)
+#allocated_capacity_weight_multiplier = -1.0
+
+# Multiplier used for weighing volume number. Negative numbers mean to spread
+# vs stack. (floating point value)
+#volume_number_multiplier = -1.0
+
+# Interval, in seconds, between nodes reporting state to datastore (integer
+# value)
+#report_interval = 10
+
+# Interval, in seconds, between running periodic tasks (integer value)
+#periodic_interval = 60
+
+# Range, in seconds, to randomly delay when starting the periodic task
+# scheduler to reduce stampeding. (Disable by setting to 0) (integer value)
+#periodic_fuzzy_delay = 60
+
+# IP address on which OpenStack Volume API listens (string value)
+osapi_volume_listen = 10.167.4.55
+
+# Port on which OpenStack Volume API listens (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#osapi_volume_listen_port = 8776
+
+# Number of workers for OpenStack Volume API service. The default is equal to
+# the number of CPUs available. (integer value)
+osapi_volume_workers = 4
+
+# Wraps the socket in a SSL context if True is set. A certificate file and key
+# file must be specified. (boolean value)
+#osapi_volume_use_ssl = false
+
+# Option to enable strict host key checking.  When set to "True" Cinder will
+# only connect to systems with a host key present in the configured
+# "ssh_hosts_key_file".  When set to "False" the host key will be saved upon
+# first connection and used for subsequent connections.  Default=False (boolean
+# value)
+#strict_ssh_host_key_policy = false
+
+# File containing SSH host keys for the systems with which Cinder needs to
+# communicate.  OPTIONAL: Default=$state_path/ssh_known_hosts (string value)
+#ssh_hosts_key_file = $state_path/ssh_known_hosts
+
+# The number of characters in the salt. (integer value)
+#volume_transfer_salt_length = 8
+
+# The number of characters in the autogenerated auth key. (integer value)
+#volume_transfer_key_length = 16
+
+# Enables the Force option on upload_to_image. This enables running
+# upload_volume on in-use volumes for backends that support it. (boolean value)
+#enable_force_upload = false
+enable_force_upload = false
+
+# Create volume from snapshot at the host where snapshot resides (boolean
+# value)
+#snapshot_same_host = true
+
+# Ensure that the new volumes are the same AZ as snapshot or source volume
+# (boolean value)
+#cloned_volume_same_az = true
+
+# Cache volume availability zones in memory for the provided duration in
+# seconds (integer value)
+#az_cache_duration = 3600
+
+# Number of times to attempt to run flakey shell commands (integer value)
+#num_shell_tries = 3
+
+# The percentage of backend capacity is reserved (integer value)
+# Minimum value: 0
+# Maximum value: 100
+#reserved_percentage = 0
+
+# Prefix for iSCSI volumes (string value)
+# Deprecated group/name - [DEFAULT]/iscsi_target_prefix
+#target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSCSI daemon is listening on (string value)
+# Deprecated group/name - [DEFAULT]/iscsi_ip_address
+#target_ip_address = $my_ip
+
+# The list of secondary IP addresses of the iSCSI daemon (list value)
+#iscsi_secondary_ip_addresses =
+
+# The port that the iSCSI daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/iscsi_port
+#target_port = 3260
+
+# The maximum number of times to rescan targets to find volume (integer value)
+#num_volume_device_scan_tries = 3
+
+# The backend name for a given driver implementation (string value)
+volume_backend_name = DEFAULT
+
+# Do we attach/detach volumes in cinder using multipath for volume to image and
+# image to volume transfers? (boolean value)
+#use_multipath_for_image_xfer = false
+
+# If this is set to True, attachment of volumes for image transfer will be
+# aborted when multipathd is not running. Otherwise, it will fallback to single
+# path. (boolean value)
+#enforce_multipath_for_image_xfer = false
+
+# Method used to wipe old volumes (string value)
+# Possible values:
+# none - <No description provided>
+# zero - <No description provided>
+#volume_clear = zero
+volume_clear = none
+
+# Size in MiB to wipe at start of old volumes. 1024 MiBat max. 0 => all
+# (integer value)
+# Maximum value: 1024
+#volume_clear_size = 0
+
+# The flag to pass to ionice to alter the i/o priority of the process used to
+# zero a volume after deletion, for example "-c3" for idle only priority.
+# (string value)
+#volume_clear_ionice = <None>
+
+# Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI
+# support, scstadmin for SCST target support, ietadm for iSCSI Enterprise
+# Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, or fake
+# for testing. (string value)
+# Possible values:
+# tgtadm - <No description provided>
+# lioadm - <No description provided>
+# scstadmin - <No description provided>
+# iscsictl - <No description provided>
+# ietadm - <No description provided>
+# nvmet - <No description provided>
+# fake - <No description provided>
+# Deprecated group/name - [DEFAULT]/iscsi_helper
+target_helper = tgtadm
+
+# Volume configuration file storage directory (string value)
+volumes_dir = /var/lib/cinder/volumes
+
+# IET configuration file (string value)
+#iet_conf = /etc/iet/ietd.conf
+
+# Chiscsi (CXT) global defaults configuration file (string value)
+#chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf
+
+# Sets the behavior of the iSCSI target to either perform blockio or fileio
+# optionally, auto can be set and Cinder will autodetect type of backing device
+# (string value)
+# Possible values:
+# blockio - <No description provided>
+# fileio - <No description provided>
+# auto - <No description provided>
+#iscsi_iotype = fileio
+
+# The default block size used when copying/clearing volumes (string value)
+#volume_dd_blocksize = 1M
+
+# The blkio cgroup name to be used to limit bandwidth of volume copy (string
+# value)
+#volume_copy_blkio_cgroup_name = cinder-volume-copy
+
+# The upper limit of bandwidth of volume copy. 0 => unlimited (integer value)
+#volume_copy_bps_limit = 0
+
+# Sets the behavior of the iSCSI target to either perform write-back(on) or
+# write-through(off). This parameter is valid if target_helper is set to
+# tgtadm. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+#iscsi_write_cache = on
+
+# Sets the target-specific flags for the iSCSI target. Only used for tgtadm to
+# specify backing device flags using bsoflags option. The specified string is
+# passed as is to the underlying tool. (string value)
+#iscsi_target_flags =
+
+# Determines the target protocol for new volumes, created with tgtadm, lioadm
+# and nvmet target helpers. In order to enable RDMA, this parameter should be
+# set with the value "iser". The supported iSCSI protocol values are "iscsi"
+# and "iser", in case of nvmet target set to "nvmet_rdma". (string value)
+# Possible values:
+# iscsi - <No description provided>
+# iser - <No description provided>
+# nvmet_rdma - <No description provided>
+# Deprecated group/name - [DEFAULT]/iscsi_protocol
+#target_protocol = iscsi
+
+# The path to the client certificate key for verification, if the driver
+# supports it. (string value)
+#driver_client_cert_key = <None>
+
+# The path to the client certificate for verification, if the driver supports
+# it. (string value)
+#driver_client_cert = <None>
+
+# Tell driver to use SSL for connection to backend storage if the driver
+# supports it. (boolean value)
+#driver_use_ssl = false
+
+# Representation of the over subscription ratio when thin provisioning is
+# enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times
+# of the total physical capacity. If the ratio is 10.5, it means provisioned
+# capacity can be 10.5 times of the total physical capacity. A ratio of 1.0
+# means provisioned capacity cannot exceed the total physical capacity. If
+# ratio is 'auto', Cinder will automatically calculate the ratio based on the
+# provisioned capacity and the used space. If not set to auto, the ratio has to
+# be a minimum of 1.0. (string value)
+#max_over_subscription_ratio = 20.0
+
+# Certain ISCSI targets have predefined target names, SCST target driver uses
+# this name. (string value)
+#scst_target_iqn_name = <None>
+
+# SCST target implementation can choose from multiple SCST target drivers.
+# (string value)
+#scst_target_driver = iscsi
+
+# Option to enable/disable CHAP authentication for targets. (boolean value)
+#use_chap_auth = false
+
+# CHAP user name. (string value)
+#chap_username =
+
+# Password for specified CHAP account name. (string value)
+#chap_password =
+
+# Namespace for driver private data values to be saved in. (string value)
+#driver_data_namespace = <None>
+
+# String representation for an equation that will be used to filter hosts. Only
+# used when the driver filter is set to be used by the Cinder scheduler.
+# (string value)
+#filter_function = <None>
+
+# String representation for an equation that will be used to determine the
+# goodness of a host. Only used when using the goodness weigher is set to be
+# used by the Cinder scheduler. (string value)
+#goodness_function = <None>
+
+# If set to True the http client will validate the SSL certificate of the
+# backend endpoint. (boolean value)
+#driver_ssl_cert_verify = false
+
+# Can be used to specify a non default path to a CA_BUNDLE file or directory
+# with certificates of trusted CAs, which will be used to validate the backend
+# (string value)
+#driver_ssl_cert_path = <None>
+
+# List of options that control which trace info is written to the DEBUG log
+# level to assist developers. Valid values are method and api. (list value)
+#trace_flags = <None>
+
+# Multi opt of dictionaries to represent a replication target device.  This
+# option may be specified multiple times in a single config section to specify
+# multiple replication target devices.  Each entry takes the standard dict
+# config form: replication_device =
+# target_device_id:<required>,key1:value1,key2:value2... (dict value)
+#replication_device = <None>
+
+# If set to True, upload-to-image in raw format will create a cloned volume and
+# register its location to the image service, instead of uploading the volume
+# content. The cinder backend and locations support must be enabled in the
+# image service. (boolean value)
+#image_upload_use_cinder_backend = false
+
+# If set to True, the image volume created by upload-to-image will be placed in
+# the internal tenant. Otherwise, the image volume is created in the current
+# context's tenant. (boolean value)
+#image_upload_use_internal_tenant = false
+
+# Enable the image volume cache for this backend. (boolean value)
+#image_volume_cache_enabled = false
+
+# Max size of the image volume cache for this backend in GB. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_size_gb = 0
+
+# Max number of entries allowed in the image volume cache. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_count = 0
+
+# Report to clients of Cinder that the backend supports discard (aka.
+# trim/unmap). This will not actually change the behavior of the backend or the
+# client directly, it will only notify that it can be used. (boolean value)
+#report_discard_supported = false
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#storage_protocol = iscsi
+
+# If this is set to True, a temporary snapshot will be created for performing
+# non-disruptive backups. Otherwise a temporary volume will be cloned in order
+# to perform a backup. (boolean value)
+#backup_use_temp_snapshot = false
+
+# Set this to True when you want to allow an unsupported driver to start.
+# Drivers that haven't maintained a working CI system and testing are marked as
+# unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+# Availability zone for this volume backend. If not set, the
+# storage_availability_zone option value is used as the default for all
+# backends. (string value)
+#backend_availability_zone = <None>
+
+# The maximum number of times to rescan iSER targetto find volume (integer
+# value)
+#num_iser_scan_tries = 3
+
+# Prefix for iSER volumes (string value)
+#iser_target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSER daemon is listening on (string value)
+#iser_ip_address = $my_ip
+
+# The port that the iSER daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#iser_port = 3260
+
+# The name of the iSER target user-land tool to use (string value)
+#iser_helper = tgtadm
+
+# The port that the NVMe target is listening on. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nvmet_port_id = 1
+
+# The namespace id associated with the subsystem that will be created with the
+# path for the LVM volume. (integer value)
+#nvmet_ns_id = 10
+
+# DataCore virtual disk type (single/mirrored). Mirrored virtual disks require
+# two storage servers in the server group. (string value)
+# Possible values:
+# single - <No description provided>
+# mirrored - <No description provided>
+#datacore_disk_type = single
+
+# DataCore virtual disk storage profile. (string value)
+#datacore_storage_profile = <None>
+
+# List of DataCore disk pools that can be used by volume driver. (list value)
+#datacore_disk_pools =
+
+# Seconds to wait for a response from a DataCore API call. (integer value)
+# Minimum value: 1
+#datacore_api_timeout = 300
+
+# Seconds to wait for DataCore virtual disk to come out of the "Failed" state.
+# (integer value)
+# Minimum value: 0
+#datacore_disk_failed_delay = 15
+
+# List of iSCSI targets that cannot be used to attach volume. To prevent the
+# DataCore iSCSI volume driver from using some front-end targets in volume
+# attachment, specify this option and list the iqn and target machine for each
+# target as the value, such as <iqn:target name>, <iqn:target name>,
+# <iqn:target name>. (list value)
+#datacore_iscsi_unallowed_targets =
+
+# Configure CHAP authentication for iSCSI connections. (boolean value)
+#datacore_iscsi_chap_enabled = false
+
+# iSCSI CHAP authentication password storage file. (string value)
+#datacore_iscsi_chap_storage = <None>
+
+# Storage system autoexpand parameter for volumes (True/False) (boolean value)
+#instorage_mcs_vol_autoexpand = true
+
+# Storage system compression option for volumes (boolean value)
+#instorage_mcs_vol_compression = false
+
+# Enable InTier for volumes (boolean value)
+#instorage_mcs_vol_intier = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#instorage_mcs_allow_tenant_qos = false
+
+# Storage system grain size parameter for volumes (32/64/128/256) (integer
+# value)
+# Minimum value: 32
+# Maximum value: 256
+#instorage_mcs_vol_grainsize = 256
+
+# Storage system space-efficiency parameter for volumes (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#instorage_mcs_vol_rsize = 2
+
+# Storage system threshold for volume capacity warnings (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#instorage_mcs_vol_warning = 0
+
+# Maximum number of seconds to wait for LocalCopy to be prepared. (integer
+# value)
+# Minimum value: 1
+# Maximum value: 600
+#instorage_mcs_localcopy_timeout = 120
+
+# Specifies the InStorage LocalCopy copy rate to be used when creating a full
+# volume copy. The default is rate is 50, and the valid rates are 1-100.
+# (integer value)
+# Minimum value: 1
+# Maximum value: 100
+#instorage_mcs_localcopy_rate = 50
+
+# The I/O group in which to allocate volumes. It can be a comma-separated list
+# in which case the driver will select an io_group based on least number of
+# volumes associated with the io_group. (string value)
+#instorage_mcs_vol_iogrp = 0
+
+# Specifies secondary management IP or hostname to be used if san_ip is invalid
+# or becomes inaccessible. (string value)
+#instorage_san_secondary_ip = <None>
+
+# Comma separated list of storage system storage pools for volumes. (list
+# value)
+#instorage_mcs_volpool_name = volpool
+
+# Configure CHAP authentication for iSCSI connections (Default: Enabled)
+# (boolean value)
+#instorage_mcs_iscsi_chap_enabled = true
+
+# The StorPool template for volumes with no type. (string value)
+#storpool_template = <None>
+
+# The default StorPool chain replication value.  Used when creating a volume
+# with no specified type if storpool_template is not set.  Also used for
+# calculating the apparent free space reported in the stats. (integer value)
+#storpool_replication = 3
+
+# Create sparse Lun. (boolean value)
+#vrts_lun_sparse = true
+
+# VA config file. (string value)
+#vrts_target_config = /etc/cinder/vrts_target.xml
+
+# Timeout for creating the volume to migrate to when performing volume
+# migration (seconds) (integer value)
+#migration_create_volume_timeout_secs = 300
+
+# Offload pending volume delete during volume service startup (boolean value)
+#volume_service_inithost_offload = false
+
+# FC Zoning mode configured, only 'fabric' is supported now. (string value)
+#zoning_mode = <None>
+
+# Sets the value of TCP_KEEPALIVE (True/False) for each server socket. (boolean
+# value)
+#tcp_keepalive = true
+
+# Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not
+# supported on OS X. (integer value)
+#tcp_keepalive_interval = <None>
+
+# Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
+# (integer value)
+#tcp_keepalive_count = <None>
+
+
+[backend]
+
+#
+# From cinder
+#
+
+# Backend override of host value. (string value)
+#backend_host = <None>
+[lvm-driver]
+host=cmp001
+volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
+volume_backend_name=lvm-driver
 iscsi_helper = tgtadm
-volume_name_template = volume-%s
-volume_group = cinder-volumes
-verbose = True
-auth_strategy = keystone
-state_path = /var/lib/cinder
-lock_path = /var/lock/cinder
-volumes_dir = /var/lib/cinder/volumes
-enabled_backends = lvm
+volume_group = vgroot
+
+
+[backend_defaults]
+
+#
+# From cinder
+#
+
+# Number of times to attempt to run flakey shell commands (integer value)
+#num_shell_tries = 3
+
+# The percentage of backend capacity is reserved (integer value)
+# Minimum value: 0
+# Maximum value: 100
+#reserved_percentage = 0
+
+# Prefix for iSCSI volumes (string value)
+# Deprecated group/name - [backend_defaults]/iscsi_target_prefix
+#target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSCSI daemon is listening on (string value)
+# Deprecated group/name - [backend_defaults]/iscsi_ip_address
+#target_ip_address = $my_ip
+
+# The list of secondary IP addresses of the iSCSI daemon (list value)
+#iscsi_secondary_ip_addresses =
+
+# The port that the iSCSI daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [backend_defaults]/iscsi_port
+#target_port = 3260
+
+# The maximum number of times to rescan targets to find volume (integer value)
+#num_volume_device_scan_tries = 3
+
+# The backend name for a given driver implementation (string value)
+#volume_backend_name = <None>
+
+# Do we attach/detach volumes in cinder using multipath for volume to image and
+# image to volume transfers? (boolean value)
+#use_multipath_for_image_xfer = false
+
+# If this is set to True, attachment of volumes for image transfer will be
+# aborted when multipathd is not running. Otherwise, it will fallback to single
+# path. (boolean value)
+#enforce_multipath_for_image_xfer = false
+
+# Method used to wipe old volumes (string value)
+# Possible values:
+# none - <No description provided>
+# zero - <No description provided>
+#volume_clear = zero
+
+# Size in MiB to wipe at start of old volumes. 1024 MiBat max. 0 => all
+# (integer value)
+# Maximum value: 1024
+#volume_clear_size = 0
+
+# The flag to pass to ionice to alter the i/o priority of the process used to
+# zero a volume after deletion, for example "-c3" for idle only priority.
+# (string value)
+#volume_clear_ionice = <None>
+
+# Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI
+# support, scstadmin for SCST target support, ietadm for iSCSI Enterprise
+# Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, or fake
+# for testing. (string value)
+# Possible values:
+# tgtadm - <No description provided>
+# lioadm - <No description provided>
+# scstadmin - <No description provided>
+# iscsictl - <No description provided>
+# ietadm - <No description provided>
+# nvmet - <No description provided>
+# fake - <No description provided>
+# Deprecated group/name - [backend_defaults]/iscsi_helper
+#target_helper = tgtadm
+
+# Volume configuration file storage directory (string value)
+#volumes_dir = $state_path/volumes
+
+# IET configuration file (string value)
+#iet_conf = /etc/iet/ietd.conf
+
+# Chiscsi (CXT) global defaults configuration file (string value)
+#chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf
+
+# Sets the behavior of the iSCSI target to either perform blockio or fileio
+# optionally, auto can be set and Cinder will autodetect type of backing device
+# (string value)
+# Possible values:
+# blockio - <No description provided>
+# fileio - <No description provided>
+# auto - <No description provided>
+#iscsi_iotype = fileio
+
+# The default block size used when copying/clearing volumes (string value)
+#volume_dd_blocksize = 1M
+
+# The blkio cgroup name to be used to limit bandwidth of volume copy (string
+# value)
+#volume_copy_blkio_cgroup_name = cinder-volume-copy
+
+# The upper limit of bandwidth of volume copy. 0 => unlimited (integer value)
+#volume_copy_bps_limit = 0
+
+# Sets the behavior of the iSCSI target to either perform write-back(on) or
+# write-through(off). This parameter is valid if target_helper is set to
+# tgtadm. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+#iscsi_write_cache = on
+
+# Sets the target-specific flags for the iSCSI target. Only used for tgtadm to
+# specify backing device flags using bsoflags option. The specified string is
+# passed as is to the underlying tool. (string value)
+#iscsi_target_flags =
+
+# Determines the target protocol for new volumes, created with tgtadm, lioadm
+# and nvmet target helpers. In order to enable RDMA, this parameter should be
+# set with the value "iser". The supported iSCSI protocol values are "iscsi"
+# and "iser", in case of nvmet target set to "nvmet_rdma". (string value)
+# Possible values:
+# iscsi - <No description provided>
+# iser - <No description provided>
+# nvmet_rdma - <No description provided>
+# Deprecated group/name - [backend_defaults]/iscsi_protocol
+#target_protocol = iscsi
+
+# The path to the client certificate key for verification, if the driver
+# supports it. (string value)
+#driver_client_cert_key = <None>
+
+# The path to the client certificate for verification, if the driver supports
+# it. (string value)
+#driver_client_cert = <None>
+
+# Tell driver to use SSL for connection to backend storage if the driver
+# supports it. (boolean value)
+#driver_use_ssl = false
+
+# Representation of the over subscription ratio when thin provisioning is
+# enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times
+# of the total physical capacity. If the ratio is 10.5, it means provisioned
+# capacity can be 10.5 times of the total physical capacity. A ratio of 1.0
+# means provisioned capacity cannot exceed the total physical capacity. If
+# ratio is 'auto', Cinder will automatically calculate the ratio based on the
+# provisioned capacity and the used space. If not set to auto, the ratio has to
+# be a minimum of 1.0. (string value)
+#max_over_subscription_ratio = 20.0
+
+# Certain ISCSI targets have predefined target names, SCST target driver uses
+# this name. (string value)
+#scst_target_iqn_name = <None>
+
+# SCST target implementation can choose from multiple SCST target drivers.
+# (string value)
+#scst_target_driver = iscsi
+
+# Option to enable/disable CHAP authentication for targets. (boolean value)
+#use_chap_auth = false
+
+# CHAP user name. (string value)
+#chap_username =
+
+# Password for specified CHAP account name. (string value)
+#chap_password =
+
+# Namespace for driver private data values to be saved in. (string value)
+#driver_data_namespace = <None>
+
+# String representation for an equation that will be used to filter hosts. Only
+# used when the driver filter is set to be used by the Cinder scheduler.
+# (string value)
+#filter_function = <None>
+
+# String representation for an equation that will be used to determine the
+# goodness of a host. Only used when using the goodness weigher is set to be
+# used by the Cinder scheduler. (string value)
+#goodness_function = <None>
+
+# If set to True the http client will validate the SSL certificate of the
+# backend endpoint. (boolean value)
+#driver_ssl_cert_verify = false
+
+# Can be used to specify a non default path to a CA_BUNDLE file or directory
+# with certificates of trusted CAs, which will be used to validate the backend
+# (string value)
+#driver_ssl_cert_path = <None>
+
+# List of options that control which trace info is written to the DEBUG log
+# level to assist developers. Valid values are method and api. (list value)
+#trace_flags = <None>
+
+# Multi opt of dictionaries to represent a replication target device.  This
+# option may be specified multiple times in a single config section to specify
+# multiple replication target devices.  Each entry takes the standard dict
+# config form: replication_device =
+# target_device_id:<required>,key1:value1,key2:value2... (dict value)
+#replication_device = <None>
+
+# If set to True, upload-to-image in raw format will create a cloned volume and
+# register its location to the image service, instead of uploading the volume
+# content. The cinder backend and locations support must be enabled in the
+# image service. (boolean value)
+#image_upload_use_cinder_backend = false
+
+# If set to True, the image volume created by upload-to-image will be placed in
+# the internal tenant. Otherwise, the image volume is created in the current
+# context's tenant. (boolean value)
+#image_upload_use_internal_tenant = false
+
+# Enable the image volume cache for this backend. (boolean value)
+#image_volume_cache_enabled = false
+
+# Max size of the image volume cache for this backend in GB. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_size_gb = 0
+
+# Max number of entries allowed in the image volume cache. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_count = 0
+
+# Report to clients of Cinder that the backend supports discard (aka.
+# trim/unmap). This will not actually change the behavior of the backend or the
+# client directly, it will only notify that it can be used. (boolean value)
+#report_discard_supported = false
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#storage_protocol = iscsi
+
+# If this is set to True, a temporary snapshot will be created for performing
+# non-disruptive backups. Otherwise a temporary volume will be cloned in order
+# to perform a backup. (boolean value)
+#backup_use_temp_snapshot = false
+
+# Set this to True when you want to allow an unsupported driver to start.
+# Drivers that haven't maintained a working CI system and testing are marked as
+# unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+# Availability zone for this volume backend. If not set, the
+# storage_availability_zone option value is used as the default for all
+# backends. (string value)
+#backend_availability_zone = <None>
+
+# The maximum number of times to rescan iSER targetto find volume (integer
+# value)
+#num_iser_scan_tries = 3
+
+# Prefix for iSER volumes (string value)
+#iser_target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSER daemon is listening on (string value)
+#iser_ip_address = $my_ip
+
+# The port that the iSER daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#iser_port = 3260
+
+# The name of the iSER target user-land tool to use (string value)
+#iser_helper = tgtadm
+
+# The port that the NVMe target is listening on. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nvmet_port_id = 1
+
+# The namespace id associated with the subsystem that will be created with the
+# path for the LVM volume. (integer value)
+#nvmet_ns_id = 10
+
+# Hostname for the CoprHD Instance (string value)
+#coprhd_hostname = <None>
+
+# Port for the CoprHD Instance (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#coprhd_port = 4443
+
+# Username for accessing the CoprHD Instance (string value)
+#coprhd_username = <None>
+
+# Password for accessing the CoprHD Instance (string value)
+#coprhd_password = <None>
+
+# Tenant to utilize within the CoprHD Instance (string value)
+#coprhd_tenant = <None>
+
+# Project to utilize within the CoprHD Instance (string value)
+#coprhd_project = <None>
+
+# Virtual Array to utilize within the CoprHD Instance (string value)
+#coprhd_varray = <None>
+
+# True | False to indicate if the storage array in CoprHD is VMAX or VPLEX
+# (boolean value)
+#coprhd_emulate_snapshot = false
+
+# Rest Gateway IP or FQDN for Scaleio (string value)
+#coprhd_scaleio_rest_gateway_host = None
+
+# Rest Gateway Port for Scaleio (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#coprhd_scaleio_rest_gateway_port = 4984
+
+# Username for Rest Gateway (string value)
+#coprhd_scaleio_rest_server_username = <None>
+
+# Rest Gateway Password (string value)
+#coprhd_scaleio_rest_server_password = <None>
+
+# verify server certificate (boolean value)
+#scaleio_verify_server_certificate = false
+
+# Server certificate path (string value)
+#scaleio_server_certificate_path = <None>
+
+# Datera API port. (string value)
+#datera_api_port = 7717
+
+# DEPRECATED: Datera API version. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#datera_api_version = 2
+
+# Timeout for HTTP 503 retry messages (integer value)
+#datera_503_timeout = 120
+
+# Interval between 503 retries (integer value)
+#datera_503_interval = 5
+
+# True to set function arg and return logging (boolean value)
+#datera_debug = false
+
+# ONLY FOR DEBUG/TESTING PURPOSES
+# True to set replica_count to 1 (boolean value)
+#datera_debug_replica_count_override = false
+
+# If set to 'Map' --> OpenStack project ID will be mapped implicitly to Datera
+# tenant ID
+# If set to 'None' --> Datera tenant ID will not be used during volume
+# provisioning
+# If set to anything else --> Datera tenant ID will be the provided value
+# (string value)
+#datera_tenant_id = <None>
+
+# Set to True to disable profiling in the Datera driver (boolean value)
+#datera_disable_profiler = false
+
+# Group name to use for creating volumes. Defaults to "group-0". (string value)
+#eqlx_group_name = group-0
+
+# Maximum retry count for reconnection. Default is 5. (integer value)
+# Minimum value: 0
+#eqlx_cli_max_retries = 5
+
+# Pool in which volumes will be created. Defaults to "default". (string value)
+#eqlx_pool = default
+
+# Storage Center System Serial Number (integer value)
+#dell_sc_ssn = 64702
+
+# Dell API port (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#dell_sc_api_port = 3033
+
+# Name of the server folder to use on the Storage Center (string value)
+#dell_sc_server_folder = openstack
+
+# Name of the volume folder to use on the Storage Center (string value)
+#dell_sc_volume_folder = openstack
+
+# Enable HTTPS SC certificate verification (boolean value)
+#dell_sc_verify_cert = false
+
+# IP address of secondary DSM volume (string value)
+#secondary_san_ip =
+
+# Secondary DSM user name (string value)
+#secondary_san_login = Admin
+
+# Secondary DSM user password name (string value)
+#secondary_san_password =
+
+# Secondary Dell API port (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#secondary_sc_api_port = 3033
+
+# Dell SC API async call default timeout in seconds. (integer value)
+#dell_api_async_rest_timeout = 15
+
+# Dell SC API sync call default timeout in seconds. (integer value)
+#dell_api_sync_rest_timeout = 30
+
+# Domain IP to be excluded from iSCSI returns. (IP address value)
+#excluded_domain_ip = <None>
+
+# Server OS type to use when creating a new server on the Storage Center.
+# (string value)
+#dell_server_os = Red Hat Linux 6.x
+
+# REST server port. (string value)
+#sio_rest_server_port = 443
+
+# Verify server certificate. (boolean value)
+#sio_verify_server_certificate = false
+
+# Server certificate path. (string value)
+#sio_server_certificate_path = <None>
+
+# Round up volume capacity. (boolean value)
+#sio_round_volume_capacity = true
+
+# Unmap volume before deletion. (boolean value)
+#sio_unmap_volume_before_deletion = false
+
+# Storage Pools. (string value)
+#sio_storage_pools = <None>
+
+# DEPRECATED: Protection Domain ID. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_protection_domain_id = <None>
+
+# DEPRECATED: Protection Domain name. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_protection_domain_name = <None>
+
+# DEPRECATED: Storage Pool name. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_storage_pool_name = <None>
+
+# DEPRECATED: Storage Pool ID. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_storage_pool_id = <None>
+
+# ScaleIO API version. (string value)
+#sio_server_api_version = <None>
+
+# max_over_subscription_ratio setting for the ScaleIO driver. This replaces the
+# general max_over_subscription_ratio which has no effect in this
+# driver.Maximum value allowed for ScaleIO is 10.0. (floating point value)
+#sio_max_over_subscription_ratio = 10.0
+
+# Allow thick volumes to be created in Storage Pools when zero padding is
+# disabled. This option should not be enabled if multiple tenants will utilize
+# thick volumes from a shared Storage Pool. (boolean value)
+#sio_allow_non_padded_thick_volumes = false
+
+# A comma-separated list of storage pool names to be used. (list value)
+#unity_storage_pool_names = <None>
+
+# A comma-separated list of iSCSI or FC ports to be used. Each port can be
+# Unix-style glob expressions. (list value)
+#unity_io_ports = <None>
+
+# To remove the host from Unity when the last LUN is detached from it. By
+# default, it is False. (boolean value)
+#remove_empty_host = false
+
+# DEPRECATED: Use this file for cinder emc plugin config data. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#cinder_dell_emc_config_file = /etc/cinder/cinder_dell_emc_config.xml
+
+# Use this value to specify length of the interval in seconds. (integer value)
+#interval = 3
+
+# Use this value to specify number of retries. (integer value)
+#retries = 200
+
+# Use this value to enable the initiator_check. (boolean value)
+#initiator_check = false
+
+# REST server port number. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_rest_port = 8443
+
+# Serial number of the array to connect to. (string value)
+#vmax_array = <None>
+
+# Storage resource pool on array to use for provisioning. (string value)
+#vmax_srp = <None>
+
+# Service level to use for provisioning storage. (string value)
+#vmax_service_level = <None>
+
+# Workload (string value)
+#vmax_workload = <None>
+
+# List of port groups containing frontend ports configured prior for server
+# connection. (list value)
+#vmax_port_groups = <None>
+
+# VNX authentication scope type. By default, the value is global. (string
+# value)
+#storage_vnx_authentication_type = global
+
+# Directory path that contains the VNX security file. Make sure the security
+# file is generated first. (string value)
+#storage_vnx_security_file_dir = <None>
+
+# Naviseccli Path. (string value)
+#naviseccli_path = <None>
+
+# Comma-separated list of storage pool names to be used. (list value)
+#storage_vnx_pool_names = <None>
+
+# Default timeout for CLI operations in minutes. For example, LUN migration is
+# a typical long running operation, which depends on the LUN size and the load
+# of the array. An upper bound in the specific deployment can be set to avoid
+# unnecessary long wait. By default, it is 365 days long. (integer value)
+#default_timeout = 31536000
+
+# Default max number of LUNs in a storage group. By default, the value is 255.
+# (integer value)
+#max_luns_per_storage_group = 255
+
+# To destroy storage group when the last LUN is removed from it. By default,
+# the value is False. (boolean value)
+#destroy_empty_storage_group = false
+
+# Mapping between hostname and its iSCSI initiator IP addresses. (string value)
+#iscsi_initiators = <None>
+
+# Comma separated iSCSI or FC ports to be used in Nova or Cinder. (list value)
+#io_port_list = <None>
+
+# Automatically register initiators. By default, the value is False. (boolean
+# value)
+#initiator_auto_registration = false
+
+# Automatically deregister initiators after the related storage group is
+# destroyed. By default, the value is False. (boolean value)
+#initiator_auto_deregistration = false
+
+# DEPRECATED: Report free_capacity_gb as 0 when the limit to maximum number of
+# pool LUNs is reached. By default, the value is False. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#check_max_pool_luns_threshold = false
+
+# Delete a LUN even if it is in Storage Groups. By default, the value is False.
+# (boolean value)
+#force_delete_lun_in_storagegroup = false
+
+# Force LUN creation even if the full threshold of pool is reached. By default,
+# the value is False. (boolean value)
+#ignore_pool_full_threshold = false
+
+# XMS cluster id in multi-cluster environment (string value)
+#xtremio_cluster_name =
+
+# Number of retries in case array is busy (integer value)
+#xtremio_array_busy_retry_count = 5
+
+# Interval between retries in case array is busy (integer value)
+#xtremio_array_busy_retry_interval = 5
+
+# Number of volumes created from each cached glance image (integer value)
+#xtremio_volumes_per_glance_cache = 100
+
+# Should the driver remove initiator groups with no volumes after the last
+# connection was terminated. Since the behavior till now was to leave the IG
+# be, we default to False (not deleting IGs without connected volumes); setting
+# this parameter to True will remove any IG after terminating its connection to
+# the last volume. (boolean value)
+#xtremio_clean_unused_ig = false
+
+# The IP of DMS client socket server (IP address value)
+#disco_client = 127.0.0.1
+
+# The port to connect DMS client socket server (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#disco_client_port = 9898
+
+# DEPRECATED: Path to the wsdl file to communicate with DISCO request manager
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#disco_wsdl_path = /etc/cinder/DISCOService.wsdl
+
+# DEPRECATED: The IP address of the REST server (IP address value)
+# Deprecated group/name - [DEFAULT]/rest_ip
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Using san_ip later
+#disco_rest_ip = <None>
+
+# Use soap client or rest client for communicating with DISCO. Possible values
+# are "soap" or "rest". (string value)
+# Possible values:
+# soap - <No description provided>
+# rest - <No description provided>
+# Deprecated group/name - [DEFAULT]/choice_client
+#disco_choice_client = <None>
+
+# DEPRECATED: The port of DISCO source API (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Using san_api_port later
+#disco_src_api_port = 8080
+
+# Prefix before volume name to differentiate DISCO volume created through
+# openstack and the other ones (string value)
+# Deprecated group/name - [backend_defaults]/volume_name_prefix
+#disco_volume_name_prefix = openstack-
+
+# How long we check whether a snapshot is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/snapshot_check_timeout
+#disco_snapshot_check_timeout = 3600
+
+# How long we check whether a restore is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/restore_check_timeout
+#disco_restore_check_timeout = 3600
+
+# How long we check whether a clone is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/clone_check_timeout
+#disco_clone_check_timeout = 3600
+
+# How long we wait before retrying to get an item detail (integer value)
+# Deprecated group/name - [backend_defaults]/retry_interval
+#disco_retry_interval = 1
+
+# Number of nodes that should replicate the data. (integer value)
+#drbdmanage_redundancy = 1
+
+# Resource deployment completion wait policy. (string value)
+#drbdmanage_resource_policy = {"ratio": "0.51", "timeout": "60"}
+
+# Disk options to set on new resources. See http://www.drbd.org/en/doc/users-
+# guide-90/re-drbdconf for all the details. (string value)
+#drbdmanage_disk_options = {"c-min-rate": "4M"}
+
+# Net options to set on new resources. See http://www.drbd.org/en/doc/users-
+# guide-90/re-drbdconf for all the details. (string value)
+#drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"}
+
+# Resource options to set on new resources. See
+# http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
+# (string value)
+#drbdmanage_resource_options = {"auto-promote-timeout": "300"}
+
+# Snapshot completion wait policy. (string value)
+#drbdmanage_snapshot_policy = {"count": "1", "timeout": "60"}
+
+# Volume resize completion wait policy. (string value)
+#drbdmanage_resize_policy = {"timeout": "60"}
+
+# Resource deployment completion wait plugin. (string value)
+#drbdmanage_resource_plugin = drbdmanage.plugins.plugins.wait_for.WaitForResource
+
+# Snapshot completion wait plugin. (string value)
+#drbdmanage_snapshot_plugin = drbdmanage.plugins.plugins.wait_for.WaitForSnapshot
+
+# Volume resize completion wait plugin. (string value)
+#drbdmanage_resize_plugin = drbdmanage.plugins.plugins.wait_for.WaitForVolumeSize
+
+# If set, the c-vol node will receive a useable
+#                 /dev/drbdX device, even if the actual data is stored on
+#                 other nodes only.
+#                 This is useful for debugging, maintenance, and to be
+#                 able to do the iSCSI export from the c-vol node. (boolean
+# value)
+#drbdmanage_devs_on_volume = true
+
+# config file for cinder eternus_dx volume driver (string value)
+#cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml
+
+# The flag of thin storage allocation. (boolean value)
+#dsware_isthin = false
+
+# Fusionstorage manager ip addr for cinder-volume. (string value)
+#dsware_manager =
+
+# Fusionstorage agent ip addr range. (string value)
+#fusionstorageagent =
+
+# Pool type, like sata-2copy. (string value)
+#pool_type = default
+
+# Pool id permit to use. (list value)
+#pool_id_filter =
+
+# Create clone volume timeout. (integer value)
+#clone_volume_timeout = 680
+
+# Space network name to use for data transfer (string value)
+#hgst_net = Net 1 (IPv4)
+
+# Comma separated list of Space storage servers:devices. ex:
+# os1_stor:gbd0,os2_stor:gbd0 (string value)
+#hgst_storage_servers = os:gbd0
+
+# Should spaces be redundantly stored (1/0) (string value)
+#hgst_redundancy = 0
+
+# User to own created spaces (string value)
+#hgst_space_user = root
+
+# Group to own created spaces (string value)
+#hgst_space_group = disk
+
+# UNIX mode for created spaces (string value)
+#hgst_space_mode = 0600
+
+# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1 (string value)
+#hpe3par_api_url =
+
+# 3PAR username with the 'edit' role (string value)
+#hpe3par_username =
+
+# 3PAR password for the user specified in hpe3par_username (string value)
+#hpe3par_password =
+
+# List of the CPG(s) to use for volume creation (list value)
+#hpe3par_cpg = OpenStack
+
+# The CPG to use for Snapshots for volumes. If empty the userCPG will be used.
+# (string value)
+#hpe3par_cpg_snap =
+
+# The time in hours to retain a snapshot.  You can't delete it before this
+# expires. (string value)
+#hpe3par_snapshot_retention =
+
+# The time in hours when a snapshot expires  and is deleted.  This must be
+# larger than expiration (string value)
+#hpe3par_snapshot_expiration =
+
+# Enable HTTP debugging to 3PAR (boolean value)
+#hpe3par_debug = false
+
+# List of target iSCSI addresses to use. (list value)
+#hpe3par_iscsi_ips =
+
+# Enable CHAP authentication for iSCSI connections. (boolean value)
+#hpe3par_iscsi_chap_enabled = false
+
+# HPE LeftHand WSAPI Server Url like https://<LeftHand ip>:8081/lhos (uri
+# value)
+# Deprecated group/name - [backend_defaults]/hplefthand_api_url
+#hpelefthand_api_url = <None>
+
+# HPE LeftHand Super user username (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_username
+#hpelefthand_username = <None>
+
+# HPE LeftHand Super user password (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_password
+#hpelefthand_password = <None>
+
+# HPE LeftHand cluster name (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_clustername
+#hpelefthand_clustername = <None>
+
+# Configure CHAP authentication for iSCSI connections (Default: Disabled)
+# (boolean value)
+# Deprecated group/name - [backend_defaults]/hplefthand_iscsi_chap_enabled
+#hpelefthand_iscsi_chap_enabled = false
+
+# Enable HTTP debugging to LeftHand (boolean value)
+# Deprecated group/name - [backend_defaults]/hplefthand_debug
+#hpelefthand_debug = false
+
+# Port number of SSH service. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#hpelefthand_ssh_port = 16022
+
+# The configuration file for the Cinder Huawei driver. (string value)
+#cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
+
+# The remote device hypermetro will use. (string value)
+#hypermetro_devices = <None>
+
+# The remote metro device san user. (string value)
+#metro_san_user = <None>
+
+# The remote metro device san password. (string value)
+#metro_san_password = <None>
+
+# The remote metro device domain name. (string value)
+#metro_domain_name = <None>
+
+# The remote metro device request url. (string value)
+#metro_san_address = <None>
+
+# The remote metro device pool names. (string value)
+#metro_storage_pools = <None>
+
+# Connection protocol should be FC. (Default is FC.) (string value)
+#flashsystem_connection_protocol = FC
+
+# Allows vdisk to multi host mapping. (Default is True) (boolean value)
+#flashsystem_multihostmap_enabled = true
+
+# DEPRECATED: This option no longer has any affect. It is deprecated and will
+# be removed in the next release. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#flashsystem_multipath_enabled = false
+
+# Default iSCSI Port ID of FlashSystem. (Default port is 0.) (integer value)
+#flashsystem_iscsi_portid = 0
+
+# Specifies the path of the GPFS directory where Block Storage volume and
+# snapshot files are stored. (string value)
+#gpfs_mount_point_base = <None>
+
+# Specifies the path of the Image service repository in GPFS.  Leave undefined
+# if not storing images in GPFS. (string value)
+#gpfs_images_dir = <None>
+
+# Specifies the type of image copy to be used.  Set this when the Image service
+# repository also uses GPFS so that image files can be transferred efficiently
+# from the Image service to the Block Storage service. There are two valid
+# values: "copy" specifies that a full copy of the image is made;
+# "copy_on_write" specifies that copy-on-write optimization strategy is used
+# and unmodified blocks of the image file are shared efficiently. (string
+# value)
+# Possible values:
+# copy - <No description provided>
+# copy_on_write - <No description provided>
+# <None> - <No description provided>
+#gpfs_images_share_mode = <None>
+
+# Specifies an upper limit on the number of indirections required to reach a
+# specific block due to snapshots or clones.  A lengthy chain of copy-on-write
+# snapshots or clones can have a negative impact on performance, but improves
+# space utilization.  0 indicates unlimited clone depth. (integer value)
+#gpfs_max_clone_depth = 0
+
+# Specifies that volumes are created as sparse files which initially consume no
+# space. If set to False, the volume is created as a fully allocated file, in
+# which case, creation may take a significantly longer time. (boolean value)
+#gpfs_sparse_volumes = true
+
+# Specifies the storage pool that volumes are assigned to. By default, the
+# system storage pool is used. (string value)
+#gpfs_storage_pool = system
+
+# Comma-separated list of IP address or hostnames of GPFS nodes. (list value)
+#gpfs_hosts =
+
+# Username for GPFS nodes. (string value)
+#gpfs_user_login = root
+
+# Password for GPFS node user. (string value)
+#gpfs_user_password =
+
+# Filename of private key to use for SSH authentication. (string value)
+#gpfs_private_key =
+
+# SSH port to use. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#gpfs_ssh_port = 22
+
+# File containing SSH host keys for the gpfs nodes with which driver needs to
+# communicate. Default=$state_path/ssh_known_hosts (string value)
+#gpfs_hosts_key_file = $state_path/ssh_known_hosts
+
+# Option to enable strict gpfs host key checking while connecting to gpfs
+# nodes. Default=False (boolean value)
+#gpfs_strict_host_key_policy = false
+
+# Mapping between IODevice address and unit address. (string value)
+#ds8k_devadd_unitadd_mapping =
+
+# Set the first two digits of SSID. (string value)
+#ds8k_ssid_prefix = FF
+
+# Reserve LSSs for consistency group. (string value)
+#lss_range_for_cg =
+
+# Set to zLinux if your OpenStack version is prior to Liberty and you're
+# connecting to zLinux systems. Otherwise set to auto. Valid values for this
+# parameter are: 'auto', 'AMDLinuxRHEL', 'AMDLinuxSuse', 'AppleOSX', 'Fujitsu',
+# 'Hp', 'HpTru64', 'HpVms', 'LinuxDT', 'LinuxRF', 'LinuxRHEL', 'LinuxSuse',
+# 'Novell', 'SGI', 'SVC', 'SanFsAIX', 'SanFsLinux', 'Sun', 'VMWare', 'Win2000',
+# 'Win2003', 'Win2008', 'Win2012', 'iLinux', 'nSeries', 'pLinux', 'pSeries',
+# 'pSeriesPowerswap', 'zLinux', 'iSeries'. (string value)
+#ds8k_host_type = auto
+
+# Proxy driver that connects to the IBM Storage Array (string value)
+#proxy = cinder.volume.drivers.ibm.ibm_storage.proxy.IBMStorageProxy
+
+# Connection type to the IBM Storage Array (string value)
+# Possible values:
+# fibre_channel - <No description provided>
+# iscsi - <No description provided>
+#connection_type = iscsi
+
+# CHAP authentication mode, effective only for iscsi (disabled|enabled) (string
+# value)
+# Possible values:
+# disabled - <No description provided>
+# enabled - <No description provided>
+#chap = disabled
+
+# List of Management IP addresses (separated by commas) (string value)
+#management_ips =
+
+# Comma separated list of storage system storage pools for volumes. (list
+# value)
+#storwize_svc_volpool_name = volpool
+
+# Storage system space-efficiency parameter for volumes (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#storwize_svc_vol_rsize = 2
+
+# Storage system threshold for volume capacity warnings (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#storwize_svc_vol_warning = 0
+
+# Storage system autoexpand parameter for volumes (True/False) (boolean value)
+#storwize_svc_vol_autoexpand = true
+
+# Storage system grain size parameter for volumes (32/64/128/256) (integer
+# value)
+#storwize_svc_vol_grainsize = 256
+
+# Storage system compression option for volumes (boolean value)
+#storwize_svc_vol_compression = false
+
+# Enable Easy Tier for volumes (boolean value)
+#storwize_svc_vol_easytier = true
+
+# The I/O group in which to allocate volumes. It can be a comma-separated list
+# in which case the driver will select an io_group based on least number of
+# volumes associated with the io_group. (string value)
+#storwize_svc_vol_iogrp = 0
+
+# Maximum number of seconds to wait for FlashCopy to be prepared. (integer
+# value)
+# Minimum value: 1
+# Maximum value: 600
+#storwize_svc_flashcopy_timeout = 120
+
+# DEPRECATED: This option no longer has any affect. It is deprecated and will
+# be removed in the next release. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#storwize_svc_multihostmap_enabled = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#storwize_svc_allow_tenant_qos = false
+
+# If operating in stretched cluster mode, specify the name of the pool in which
+# mirrored copies are stored.Example: "pool2" (string value)
+#storwize_svc_stretched_cluster_partner = <None>
+
+# Specifies secondary management IP or hostname to be used if san_ip is invalid
+# or becomes inaccessible. (string value)
+#storwize_san_secondary_ip = <None>
+
+# Specifies that the volume not be formatted during creation. (boolean value)
+#storwize_svc_vol_nofmtdisk = false
+
+# Specifies the Storwize FlashCopy copy rate to be used when creating a full
+# volume copy. The default is rate is 50, and the valid rates are 1-150.
+# (integer value)
+# Minimum value: 1
+# Maximum value: 150
+#storwize_svc_flashcopy_rate = 50
+
+# Specifies the name of the pool in which mirrored copy is stored. Example:
+# "pool2" (string value)
+#storwize_svc_mirror_pool = <None>
+
+# Specifies the name of the peer pool for hyperswap volume, the peer pool must
+# exist on the other site. (string value)
+#storwize_peer_pool = <None>
+
+# Specifies the site information for host. One WWPN or multi WWPNs used in the
+# host can be specified. For example:
+# storwize_preferred_host_site=site1:wwpn1,site2:wwpn2&wwpn3 or
+# storwize_preferred_host_site=site1:iqn1,site2:iqn2 (dict value)
+#storwize_preferred_host_site =
+
+# This defines an optional cycle period that applies to Global Mirror
+# relationships with a cycling mode of multi. A Global Mirror relationship
+# using the multi cycling_mode performs a complete cycle at most once each
+# period. The default is 300 seconds, and the valid seconds are 60-86400.
+# (integer value)
+# Minimum value: 60
+# Maximum value: 86400
+#cycle_period_seconds = 300
+
+# Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
+# (boolean value)
+#storwize_svc_multipath_enabled = false
+
+# Configure CHAP authentication for iSCSI connections (Default: Enabled)
+# (boolean value)
+#storwize_svc_iscsi_chap_enabled = true
+
+# Name of the pool from which volumes are allocated (string value)
+#infinidat_pool_name = <None>
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#infinidat_storage_protocol = fc
+
+# List of names of network spaces to use for iSCSI connectivity (list value)
+#infinidat_iscsi_netspaces =
+
+# Specifies whether to turn on compression for newly created volumes. (boolean
+# value)
+#infinidat_use_compression = false
+
+# K2 driver will calculate max_oversubscription_ratio on setting this option as
+# True. (boolean value)
+#auto_calc_max_oversubscription_ratio = false
+
+# Whether or not our private network has unique FQDN on each initiator or not.
+# For example networks with QA systems usually have multiple servers/VMs with
+# the same FQDN.  When true this will create host entries on K2 using the FQDN,
+# when false it will use the reversed IQN/WWNN. (boolean value)
+#unique_fqdn_network = true
+
+# Disabling iSCSI discovery (sendtargets) for multipath connections on K2
+# driver. (boolean value)
+#disable_discovery = false
+
+# Pool or Vdisk name to use for volume creation. (string value)
+#lenovo_backend_name = A
+
+# linear (for VDisk) or virtual (for Pool). (string value)
+# Possible values:
+# linear - <No description provided>
+# virtual - <No description provided>
+#lenovo_backend_type = virtual
+
+# Lenovo api interface protocol. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#lenovo_api_protocol = https
+
+# Whether to verify Lenovo array SSL certificate. (boolean value)
+#lenovo_verify_certificate = false
+
+# Lenovo array SSL certificate path. (string value)
+#lenovo_verify_certificate_path = <None>
+
+# List of comma-separated target iSCSI IP addresses. (list value)
+#lenovo_iscsi_ips =
+
+# Name for the VG that will contain exported volumes (string value)
+#volume_group = cinder-volumes
+
+# If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors
+# + 2 PVs with available space (integer value)
+#lvm_mirrors = 0
+
+# Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to
+# thin if thin is supported. (string value)
+# Possible values:
+# default - <No description provided>
+# thin - <No description provided>
+# auto - <No description provided>
+#lvm_type = auto
+
+# LVM conf file to use for the LVM driver in Cinder; this setting is ignored if
+# the specified file does not exist (You can also specify 'None' to not use a
+# conf file even if one exists). (string value)
+#lvm_conf_file = /etc/cinder/lvm.conf
+
+# Suppress leaked file descriptor warnings in LVM commands. (boolean value)
+#lvm_suppress_fd_warnings = false
+
+# The storage family type used on the storage system; valid values are
+# ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
+# (string value)
+# Possible values:
+# ontap_cluster - <No description provided>
+# eseries - <No description provided>
+#netapp_storage_family = ontap_cluster
+
+# The storage protocol to be used on the data path with the storage system.
+# (string value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+# nfs - <No description provided>
+#netapp_storage_protocol = <None>
+
+# The hostname (or IP address) for the storage system or proxy server. (string
+# value)
+#netapp_server_hostname = <None>
+
+# The TCP port to use for communication with the storage system or proxy
+# server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for
+# HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. (integer value)
+#netapp_server_port = <None>
+
+# The transport protocol used when communicating with the storage system or
+# proxy server. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#netapp_transport_type = http
+
+# Administrative user account name used to access the storage system or proxy
+# server. (string value)
+#netapp_login = <None>
+
+# Password for the administrative user account specified in the netapp_login
+# option. (string value)
+#netapp_password = <None>
+
+# This option specifies the virtual storage server (Vserver) name on the
+# storage cluster on which provisioning of block storage volumes should occur.
+# (string value)
+#netapp_vserver = <None>
+
+# The quantity to be multiplied by the requested volume size to ensure enough
+# space is available on the virtual storage server (Vserver) to fulfill the
+# volume creation request.  Note: this option is deprecated and will be removed
+# in favor of "reserved_percentage" in the Mitaka release. (floating point
+# value)
+#netapp_size_multiplier = 1.2
+
+# This option determines if storage space is reserved for LUN allocation. If
+# enabled, LUNs are thick provisioned. If space reservation is disabled,
+# storage space is allocated on demand. (string value)
+# Possible values:
+# enabled - <No description provided>
+# disabled - <No description provided>
+#netapp_lun_space_reservation = enabled
+
+# If the percentage of available space for an NFS share has dropped below the
+# value specified by this option, the NFS image cache will be cleaned. (integer
+# value)
+#thres_avl_size_perc_start = 20
+
+# When the percentage of available space on an NFS share has reached the
+# percentage specified by this option, the driver will stop clearing files from
+# the NFS image cache that have not been accessed in the last M minutes, where
+# M is the value of the expiry_thres_minutes configuration option. (integer
+# value)
+#thres_avl_size_perc_stop = 60
+
+# This option specifies the threshold for last access time for images in the
+# NFS image cache. When a cache cleaning cycle begins, images in the cache that
+# have not been accessed in the last M minutes, where M is the value of this
+# parameter, will be deleted from the cache to create free space on the NFS
+# share. (integer value)
+#expiry_thres_minutes = 720
+
+# This option is used to specify the path to the E-Series proxy application on
+# a proxy server. The value is combined with the value of the
+# netapp_transport_type, netapp_server_hostname, and netapp_server_port options
+# to create the URL used by the driver to connect to the proxy application.
+# (string value)
+#netapp_webservice_path = /devmgr/v2
+
+# This option is only utilized when the storage family is configured to
+# eseries. This option is used to restrict provisioning to the specified
+# volumes. Specify the value of this option to be a comma separated list of
+# volume hostnames or IP addresses to be used for provisioning. (string
+# value)
+#netapp_volume_ips = <None>
+
+# Password for the NetApp E-Series storage array. (string value)
+#netapp_sa_password = <None>
+
+# This option specifies whether the driver should allow operations that require
+# multiple attachments to a volume. An example would be live migration of
+# servers that have volumes attached. When enabled, this backend is limited to
+# 256 total volumes in order to guarantee volumes can be accessed by more than
+# one host. (boolean value)
+#netapp_enable_multiattach = false
+
+# This option specifies the path of the NetApp copy offload tool binary. Ensure
+# that the binary has execute permissions set which allow the effective user of
+# the cinder-volume process to execute the file. (string value)
+#netapp_copyoffload_tool_path = <None>
+
+# This option defines the type of operating system that will access a LUN
+# exported from Data ONTAP; it is assigned to the LUN at the time it is
+# created. (string value)
+#netapp_lun_ostype = <None>
+
+# This option defines the type of operating system for all initiators that can
+# access a LUN. This information is used when mapping LUNs to individual hosts
+# or groups of hosts. (string value)
+#netapp_host_type = <None>
+
+# This option is used to restrict provisioning to the specified pools. Specify
+# the value of this option to be a regular expression which will be applied to
+# the names of objects from the storage backend which represent pools in
+# Cinder. This option is only utilized when the storage protocol is configured
+# to use iSCSI or FC. (string value)
+# Deprecated group/name - [backend_defaults]/netapp_volume_list
+# Deprecated group/name - [backend_defaults]/netapp_storage_pools
+#netapp_pool_name_search_pattern = (.+)
+
+# Multi opt of dictionaries to represent the aggregate mapping between source
+# and destination back ends when using whole back end replication. For every
+# source aggregate associated with a cinder pool (NetApp FlexVol), you would
+# need to specify the destination aggregate on the replication target device. A
+# replication target device is configured with the configuration option
+# replication_device. Specify this option as many times as you have replication
+# devices. Each entry takes the standard dict config form:
+# netapp_replication_aggregate_map =
+# backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
+# (dict value)
+#netapp_replication_aggregate_map = <None>
+
+# The maximum time in seconds to wait for existing SnapMirror transfers to
+# complete before aborting during a failover. (integer value)
+# Minimum value: 0
+#netapp_snapmirror_quiesce_timeout = 3600
+
+# IP address of Nexenta SA (string value)
+#nexenta_host =
+
+# HTTP(S) port to connect to Nexenta REST API server. If it is equal zero, 8443
+# for HTTPS and 8080 for HTTP is used (integer value)
+#nexenta_rest_port = 0
+
+# Use http or https for REST connection (default auto) (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+# auto - <No description provided>
+#nexenta_rest_protocol = auto
+
+# Use secure HTTP for REST connection (default True) (boolean value)
+#nexenta_use_https = true
+
+# User name to connect to Nexenta SA (string value)
+#nexenta_user = admin
+
+# Password to connect to Nexenta SA (string value)
+#nexenta_password = nexenta
+
+# Nexenta target portal port (integer value)
+#nexenta_iscsi_target_portal_port = 3260
+
+# SA Pool that holds all volumes (string value)
+#nexenta_volume = cinder
+
+# IQN prefix for iSCSI targets (string value)
+#nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder-
+
+# Prefix for iSCSI target groups on SA (string value)
+#nexenta_target_group_prefix = cinder/
+
+# Volume group for ns5 (string value)
+#nexenta_volume_group = iscsi
+
+# Compression value for new ZFS folders. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+# gzip - <No description provided>
+# gzip-1 - <No description provided>
+# gzip-2 - <No description provided>
+# gzip-3 - <No description provided>
+# gzip-4 - <No description provided>
+# gzip-5 - <No description provided>
+# gzip-6 - <No description provided>
+# gzip-7 - <No description provided>
+# gzip-8 - <No description provided>
+# gzip-9 - <No description provided>
+# lzjb - <No description provided>
+# zle - <No description provided>
+# lz4 - <No description provided>
+#nexenta_dataset_compression = on
+
+# Deduplication value for new ZFS folders. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+# sha256 - <No description provided>
+# verify - <No description provided>
+# sha256, verify - <No description provided>
+#nexenta_dataset_dedup = off
+
+# Human-readable description for the folder. (string value)
+#nexenta_dataset_description =
+
+# Block size for datasets (integer value)
+#nexenta_blocksize = 4096
+
+# Block size for datasets (integer value)
+#nexenta_ns5_blocksize = 32
+
+# Enables or disables the creation of sparse datasets (boolean value)
+#nexenta_sparse = false
+
+# File with the list of available nfs shares (string value)
+#nexenta_shares_config = /etc/cinder/nfs_shares
+
+# Base directory that contains NFS share mount points (string value)
+#nexenta_mount_point_base = $state_path/mnt
+
+# Enables or disables the creation of volumes as sparsed files that take no
+# space. If disabled (False), volume is created as a regular file, which takes
+# a long time. (boolean value)
+#nexenta_sparsed_volumes = true
+
+# If set True cache NexentaStor appliance volroot option value. (boolean value)
+#nexenta_nms_cache_volroot = true
+
+# Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best
+# compression. (integer value)
+#nexenta_rrmgr_compression = 0
+
+# TCP Buffer size in KiloBytes. (integer value)
+#nexenta_rrmgr_tcp_buf_size = 4096
+
+# Number of TCP connections. (integer value)
+#nexenta_rrmgr_connections = 2
+
+# NexentaEdge logical path of directory to store symbolic links to NBDs (string
+# value)
+#nexenta_nbd_symlinks_dir = /dev/disk/by-path
+
+# IP address of NexentaEdge management REST API endpoint (string value)
+#nexenta_rest_address =
+
+# User name to connect to NexentaEdge (string value)
+#nexenta_rest_user = admin
+
+# Password to connect to NexentaEdge (string value)
+#nexenta_rest_password = nexenta
+
+# NexentaEdge logical path of bucket for LUNs (string value)
+#nexenta_lun_container =
+
+# NexentaEdge iSCSI service name (string value)
+#nexenta_iscsi_service =
+
+# NexentaEdge iSCSI Gateway client address for non-VIP service (string value)
+#nexenta_client_address =
+
+# NexentaEdge iSCSI LUN object chunk size (integer value)
+#nexenta_chunksize = 32768
+
+# File with the list of available NFS shares. (string value)
+#nfs_shares_config = /etc/cinder/nfs_shares
+
+# Create volumes as sparsed files which take no space. If set to False volume
+# is created as regular file. In such case volume creation takes a lot of time.
+# (boolean value)
+#nfs_sparsed_volumes = true
+
+# Create volumes as QCOW2 files rather than raw files. (boolean value)
+#nfs_qcow2_volumes = false
+
+# Base dir containing mount points for NFS shares. (string value)
+#nfs_mount_point_base = $state_path/mnt
+
+# Mount options passed to the NFS client. See section of the NFS man page for
+# details. (string value)
+#nfs_mount_options = <None>
+
+# The number of attempts to mount NFS shares before raising an error.  At least
+# one attempt will be made to mount an NFS share, regardless of the value
+# specified. (integer value)
+#nfs_mount_attempts = 3
+
+# Enable support for snapshots on the NFS driver. Platforms using libvirt
+# <1.2.7 will encounter issues with this feature. (boolean value)
+#nfs_snapshot_support = false
+
+# Nimble Controller pool name (string value)
+#nimble_pool_name = default
+
+# Nimble Subnet Label (string value)
+#nimble_subnet_label = *
+
+# Whether to verify Nimble SSL Certificate (boolean value)
+#nimble_verify_certificate = false
+
+# Path to Nimble Array SSL certificate (string value)
+#nimble_verify_cert_path = <None>
+
+# DPL pool uuid in which DPL volumes are stored. (string value)
+#dpl_pool =
+
+# DPL port number. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#dpl_port = 8357
+
+# REST API authorization token. (string value)
+#pure_api_token = <None>
+
+# Automatically determine an oversubscription ratio based on the current total
+# data reduction values. If used this calculated value will override the
+# max_over_subscription_ratio config option. (boolean value)
+#pure_automatic_max_oversubscription_ratio = true
+
+# Snapshot replication interval in seconds. (integer value)
+#pure_replica_interval_default = 3600
+
+# Retain all snapshots on target for this time (in seconds.) (integer value)
+#pure_replica_retention_short_term_default = 14400
+
+# Retain how many snapshots for each day. (integer value)
+#pure_replica_retention_long_term_per_day_default = 3
+
+# Retain snapshots per day on target for this time (in days.) (integer value)
+#pure_replica_retention_long_term_default = 7
+
+# When enabled, all Pure volumes, snapshots, and protection groups will be
+# eradicated at the time of deletion in Cinder. Data will NOT be recoverable
+# after a delete with this set to True! When disabled, volumes and snapshots
+# will go into pending eradication state and can be recovered. (boolean value)
+#pure_eradicate_on_delete = false
+
+# The URL to management QNAP Storage. Driver does not support IPv6 address in
+# URL. (uri value)
+#qnap_management_url = <None>
+
+# The pool name in the QNAP Storage (string value)
+#qnap_poolname = <None>
+
+# Communication protocol to access QNAP storage (string value)
+#qnap_storage_protocol = iscsi
+
+# Quobyte URL to the Quobyte volume using e.g. a DNS SRV record (preferred) or
+# a host list (alternatively) like quobyte://<DIR host1>, <DIR host2>/<volume
+# name> (string value)
+#quobyte_volume_url = <None>
+
+# Path to a Quobyte Client configuration file. (string value)
+#quobyte_client_cfg = <None>
+
+# Create volumes as sparse files which take no space. If set to False, volume
+# is created as regular file. (boolean value)
+#quobyte_sparsed_volumes = true
+
+# Create volumes as QCOW2 files rather than raw files. (boolean value)
+#quobyte_qcow2_volumes = true
+
+# Base dir containing the mount point for the Quobyte volume. (string value)
+#quobyte_mount_point_base = $state_path/mnt
+
+# Create a cache of volumes from merged snapshots to speed up creation of
+# multiple volumes from a single snapshot. (boolean value)
+#quobyte_volume_from_snapshot_cache = false
+
+# The name of ceph cluster (string value)
+#rbd_cluster_name = ceph
+
+# The RADOS pool where rbd volumes are stored (string value)
+#rbd_pool = rbd
+
+# The RADOS client name for accessing rbd volumes - only set when using cephx
+# authentication (string value)
+#rbd_user = <None>
+
+# Path to the ceph configuration file (string value)
+#rbd_ceph_conf =
+
+# Path to the ceph keyring file (string value)
+#rbd_keyring_conf =
+
+# Flatten volumes created from snapshots to remove dependency from volume to
+# snapshot (boolean value)
+#rbd_flatten_volume_from_snapshot = false
+
+# The libvirt uuid of the secret for the rbd_user volumes (string value)
+#rbd_secret_uuid = <None>
+
+# Maximum number of nested volume clones that are taken before a flatten
+# occurs. Set to 0 to disable cloning. (integer value)
+#rbd_max_clone_depth = 5
+
+# Volumes will be chunked into objects of this size (in megabytes). (integer
+# value)
+#rbd_store_chunk_size = 4
+
+# Timeout value (in seconds) used when connecting to ceph cluster. If value <
+# 0, no timeout is set and default librados value is used. (integer value)
+#rados_connect_timeout = -1
+
+# Number of retries if connection to ceph cluster failed. (integer value)
+#rados_connection_retries = 3
+
+# Interval value (in seconds) between connection retries to ceph cluster.
+# (integer value)
+#rados_connection_interval = 5
+
+# Timeout value (in seconds) used when connecting to ceph cluster to do a
+# demotion/promotion of volumes. If value < 0, no timeout is set and default
+# librados value is used. (integer value)
+#replication_connect_timeout = 5
+
+# Set to True for driver to report total capacity as a dynamic value -used +
+# current free- and to False to report a static value -quota max bytes if
+# defined and global size of cluster if not. (boolean value)
+#report_dynamic_total_capacity = true
+
+# Set to True if the pool is used exclusively by Cinder. On exclusive use
+# driver won't query images' provisioned size as they will match the value
+# calculated by the Cinder core code for allocated_capacity_gb. This reduces
+# the load on the Ceph cluster as well as on the volume service. (boolean
+# value)
+#rbd_exclusive_cinder_pool = false
+
+# IP address or Hostname of NAS system. (string value)
+#nas_host =
+
+# User name to connect to NAS system. (string value)
+#nas_login = admin
+
+# Password to connect to NAS system. (string value)
+#nas_password =
+
+# SSH port to use to connect to NAS system. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nas_ssh_port = 22
+
+# Filename of private key to use for SSH authentication. (string value)
+#nas_private_key =
+
+# Allow network-attached storage systems to operate in a secure environment
+# where root level access is not permitted. If set to False, access is as the
+# root user and insecure. If set to True, access is not as root. If set to
+# auto, a check is done to determine if this is a new installation: True is
+# used if so, otherwise False. Default is auto. (string value)
+#nas_secure_file_operations = auto
+
+# Set more secure file permissions on network-attached storage volume files to
+# restrict broad other/world access. If set to False, volumes are created with
+# open permissions. If set to True, volumes are created with permissions for
+# the cinder user and group (660). If set to auto, a check is done to determine
+# if this is a new installation: True is used if so, otherwise False. Default
+# is auto. (string value)
+#nas_secure_file_permissions = auto
+
+# Path to the share to use for storing Cinder volumes. For example:
+# "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 .
+# (string value)
+#nas_share_path =
+
+# Options used to mount the storage backend file system where Cinder volumes
+# are stored. (string value)
+#nas_mount_options = <None>
+
+# Provisioning type that will be used when creating volumes. (string value)
+# Possible values:
+# thin - <No description provided>
+# thick - <No description provided>
+#nas_volume_prov_type = thin
+
+# Pool or Vdisk name to use for volume creation. (string value)
+#hpmsa_backend_name = A
+
+# linear (for Vdisk) or virtual (for Pool). (string value)
+# Possible values:
+# linear - <No description provided>
+# virtual - <No description provided>
+#hpmsa_backend_type = virtual
+
+# HPMSA API interface protocol. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#hpmsa_api_protocol = https
+
+# Whether to verify HPMSA array SSL certificate. (boolean value)
+#hpmsa_verify_certificate = false
+
+# HPMSA array SSL certificate path. (string value)
+#hpmsa_verify_certificate_path = <None>
+
+# List of comma-separated target iSCSI IP addresses. (list value)
+#hpmsa_iscsi_ips =
+
+# Use thin provisioning for SAN volumes? (boolean value)
+#san_thin_provision = true
+
+# IP address of SAN volume (string value)
+#san_ip =
+
+# Username for SAN volume (string value)
+#san_login = admin
+
+# Password for SAN volume (string value)
+#san_password =
+
+# Filename of private key to use for SSH authentication (string value)
+#san_private_key =
+
+# Cluster name to use for creating volumes (string value)
+#san_clustername =
+
+# SSH port to use with SAN (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_ssh_port = 22
+
+# Port to use to access the SAN API (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_api_port = <None>
+
+# Execute commands locally instead of over SSH; use if the volume service is
+# running on the SAN device (boolean value)
+#san_is_local = false
+
+# SSH connection timeout in seconds (integer value)
+#ssh_conn_timeout = 30
+
+# Minimum ssh connections in the pool (integer value)
+#ssh_min_pool_conn = 1
+
+# Maximum ssh connections in the pool (integer value)
+#ssh_max_pool_conn = 5
+
+# IP address of sheep daemon. (string value)
+#sheepdog_store_address = 127.0.0.1
+
+# Port of sheep daemon. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#sheepdog_store_port = 7000
+
+# Set 512 byte emulation on volume creation;  (boolean value)
+#sf_emulate_512 = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#sf_allow_tenant_qos = false
+
+# Create SolidFire accounts with this prefix. Any string can be used here, but
+# the string "hostname" is special and will create a prefix using the cinder
+# node hostname (previous default behavior).  The default is NO prefix. (string
+# value)
+#sf_account_prefix = <None>
+
+# Create SolidFire volumes with this prefix. Volume names are of the form
+# <sf_volume_prefix><cinder-volume-id>.  The default is to use a prefix of
+# 'UUID-'. (string value)
+#sf_volume_prefix = UUID-
+
+# Account name on the SolidFire Cluster to use as owner of template/cache
+# volumes (created if does not exist). (string value)
+#sf_template_account_name = openstack-vtemplate
+
+# DEPRECATED: This option is deprecated and will be removed in the next
+# OpenStack release.  Please use the general cinder image-caching feature
+# instead. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: The Cinder caching feature should be used rather than this driver
+# specific implementation.
+#sf_allow_template_caching = false
+
+# Overrides default cluster SVIP with the one specified. This is required or
+# deployments that have implemented the use of VLANs for iSCSI networks in
+# their cloud. (string value)
+#sf_svip = <None>
+
+# SolidFire API port. Useful if the device api is behind a proxy on a different
+# port. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#sf_api_port = 443
+
+# Utilize volume access groups on a per-tenant basis. (boolean value)
+#sf_enable_vag = false
+
+# Volume on Synology storage to be used for creating lun. (string value)
+#synology_pool_name =
+
+# Management port for Synology storage. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#synology_admin_port = 5000
+
+# Administrator of Synology storage. (string value)
+#synology_username = admin
+
+# Password of administrator for logging in Synology storage. (string value)
+#synology_password =
+
+# Do certificate validation or not if $driver_use_ssl is True (boolean value)
+#synology_ssl_verify = true
+
+# One time password of administrator for logging in Synology storage if OTP is
+# enabled. (string value)
+#synology_one_time_pass = <None>
+
+# Device id for skip one time password check for logging in Synology storage if
+# OTP is enabled. (string value)
+#synology_device_id = <None>
+
+# The hostname (or IP address) for the storage system (string value)
+#tintri_server_hostname = <None>
+
+# User name for the storage system (string value)
+#tintri_server_username = <None>
+
+# Password for the storage system (string value)
+#tintri_server_password = <None>
+
+# API version for the storage system (string value)
+#tintri_api_version = v310
+
+# Delete unused image snapshots older than mentioned days (integer value)
+#tintri_image_cache_expiry_days = 30
+
+# Path to image nfs shares file (string value)
+#tintri_image_shares_config = <None>
+
+# IP address for connecting to VMware vCenter server. (string value)
+#vmware_host_ip = <None>
+
+# Port number for connecting to VMware vCenter server. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#vmware_host_port = 443
+
+# Username for authenticating with VMware vCenter server. (string value)
+#vmware_host_username = <None>
+
+# Password for authenticating with VMware vCenter server. (string value)
+#vmware_host_password = <None>
+
+# Optional VIM service WSDL Location e.g http://<server>/vimService.wsdl.
+# Optional over-ride to default location for bug work-arounds. (string value)
+#vmware_wsdl_location = <None>
+
+# Number of times VMware vCenter server API must be retried upon connection
+# related issues. (integer value)
+#vmware_api_retry_count = 10
+
+# The interval (in seconds) for polling remote tasks invoked on VMware vCenter
+# server. (floating point value)
+#vmware_task_poll_interval = 2.0
+
+# Name of the vCenter inventory folder that will contain Cinder volumes. This
+# folder will be created under "OpenStack/<project_folder>", where
+# project_folder is of format "Project (<volume_project_id>)". (string value)
+#vmware_volume_folder = Volumes
+
+# Timeout in seconds for VMDK volume transfer between Cinder and Glance.
+# (integer value)
+#vmware_image_transfer_timeout_secs = 7200
+
+# Max number of objects to be retrieved per batch. Query results will be
+# obtained in batches from the server and not in one shot. Server may still
+# limit the count to something less than the configured value. (integer value)
+#vmware_max_objects_retrieval = 100
+
+# Optional string specifying the VMware vCenter server version. The driver
+# attempts to retrieve the version from VMware vCenter server. Set this
+# configuration only if you want to override the vCenter server version.
+# (string value)
+#vmware_host_version = <None>
+
+# Directory where virtual disks are stored during volume backup and restore.
+# (string value)
+#vmware_tmp_dir = /tmp
+
+# CA bundle file to use in verifying the vCenter server certificate. (string
+# value)
+#vmware_ca_file = <None>
+
+# If true, the vCenter server certificate is not verified. If false, then the
+# default CA truststore is used for verification. This option is ignored if
+# "vmware_ca_file" is set. (boolean value)
+#vmware_insecure = false
+
+# Name of a vCenter compute cluster where volumes should be created. (multi
+# valued)
+#vmware_cluster_name =
+
+# Maximum number of connections in http connection pool. (integer value)
+#vmware_connection_pool_size = 10
+
+# Default adapter type to be used for attaching volumes. (string value)
+# Possible values:
+# lsiLogic - <No description provided>
+# busLogic - <No description provided>
+# lsiLogicsas - <No description provided>
+# paraVirtual - <No description provided>
+# ide - <No description provided>
+#vmware_adapter_type = lsiLogic
+
+# Volume snapshot format in vCenter server. (string value)
+# Possible values:
+# template - <No description provided>
+# COW - <No description provided>
+#vmware_snapshot_format = template
+
+# If true, the backend volume in vCenter server is created lazily when the
+# volume is created without any source. The backend volume is created when the
+# volume is attached, uploaded to image service or during backup. (boolean
+# value)
+#vmware_lazy_create = true
+
+# Regular expression pattern to match the name of datastores where backend
+# volumes are created. (string value)
+#vmware_datastore_regex = <None>
+
+# File with the list of available vzstorage shares. (string value)
+#vzstorage_shares_config = /etc/cinder/vzstorage_shares
+
+# Create volumes as sparsed files which take no space rather than regular files
+# when using raw format, in which case volume creation takes lot of time.
+# (boolean value)
+#vzstorage_sparsed_volumes = true
+
+# Percent of ACTUAL usage of the underlying volume before no new volumes can be
+# allocated to the volume destination. (floating point value)
+#vzstorage_used_ratio = 0.95
+
+# Base dir containing mount points for vzstorage shares. (string value)
+#vzstorage_mount_point_base = $state_path/mnt
+
+# Mount options passed to the vzstorage client. See section of the pstorage-
+# mount man page for details. (list value)
+#vzstorage_mount_options = <None>
+
+# Default format that will be used when creating volumes if no volume format is
+# specified. (string value)
+#vzstorage_default_volume_format = raw
+
+# Path to store VHD backed volumes (string value)
+#windows_iscsi_lun_path = C:\iSCSIVirtualDisks
+
+# File with the list of available smbfs shares. (string value)
+#smbfs_shares_config = C:\OpenStack\smbfs_shares.txt
+
+# Default format that will be used when creating volumes if no volume format is
+# specified. (string value)
+# Possible values:
+# vhd - <No description provided>
+# vhdx - <No description provided>
+#smbfs_default_volume_format = vhd
+
+# Base dir containing mount points for smbfs shares. (string value)
+#smbfs_mount_point_base = C:\OpenStack\_mnt
+
+# Mappings between share locations and pool names. If not specified, the share
+# names will be used as pool names. Example:
+# //addr/share:pool_name,//addr/share2:pool_name2 (dict value)
+#smbfs_pool_mappings =
+
+# VPSA - Use ISER instead of iSCSI (boolean value)
+#zadara_use_iser = true
+
+# VPSA - Management Host name or IP address (string value)
+#zadara_vpsa_host = <None>
+
+# VPSA - Port number (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#zadara_vpsa_port = <None>
+
+# VPSA - Use SSL connection (boolean value)
+#zadara_vpsa_use_ssl = false
+
+# If set to True the http client will validate the SSL certificate of the VPSA
+# endpoint. (boolean value)
+#zadara_ssl_cert_verify = true
+
+# VPSA - Username (string value)
+#zadara_user = <None>
+
+# VPSA - Password (string value)
+#zadara_password = <None>
+
+# VPSA - Storage Pool assigned for volumes (string value)
+#zadara_vpsa_poolname = <None>
+
+# VPSA - Default encryption policy for volumes (boolean value)
+#zadara_vol_encrypt = false
+
+# VPSA - Default template for VPSA volume names (string value)
+#zadara_vol_name_template = OS_%s
+
+# VPSA - Attach snapshot policy for volumes (boolean value)
+#zadara_default_snap_policy = false
+
+# Storage pool name. (string value)
+#zfssa_pool = <None>
+
+# Project name. (string value)
+#zfssa_project = <None>
+
+# Block size. (string value)
+# Possible values:
+# 512 - <No description provided>
+# 1k - <No description provided>
+# 2k - <No description provided>
+# 4k - <No description provided>
+# 8k - <No description provided>
+# 16k - <No description provided>
+# 32k - <No description provided>
+# 64k - <No description provided>
+# 128k - <No description provided>
+#zfssa_lun_volblocksize = 8k
+
+# Flag to enable sparse (thin-provisioned): True, False. (boolean value)
+#zfssa_lun_sparse = false
+
+# Data compression. (string value)
+# Possible values:
+# off - <No description provided>
+# lzjb - <No description provided>
+# gzip-2 - <No description provided>
+# gzip - <No description provided>
+# gzip-9 - <No description provided>
+#zfssa_lun_compression = off
+
+# Synchronous write bias. (string value)
+# Possible values:
+# latency - <No description provided>
+# throughput - <No description provided>
+#zfssa_lun_logbias = latency
+
+# iSCSI initiator group. (string value)
+#zfssa_initiator_group =
+
+# iSCSI initiator IQNs. (comma separated) (string value)
+#zfssa_initiator =
+
+# iSCSI initiator CHAP user (name). (string value)
+#zfssa_initiator_user =
+
+# Secret of the iSCSI initiator CHAP user. (string value)
+#zfssa_initiator_password =
+
+# iSCSI initiators configuration. (string value)
+#zfssa_initiator_config =
+
+# iSCSI target group name. (string value)
+#zfssa_target_group = tgt-grp
+
+# iSCSI target CHAP user (name). (string value)
+#zfssa_target_user =
+
+# Secret of the iSCSI target CHAP user. (string value)
+#zfssa_target_password =
+
+# iSCSI target portal (Data-IP:Port, w.x.y.z:3260). (string value)
+#zfssa_target_portal = <None>
+
+# Network interfaces of iSCSI targets. (comma separated) (string value)
+#zfssa_target_interfaces = <None>
+
+# REST connection timeout. (seconds) (integer value)
+#zfssa_rest_timeout = <None>
+
+# IP address used for replication data. (maybe the same as data ip) (string
+# value)
+#zfssa_replication_ip =
+
+# Flag to enable local caching: True, False. (boolean value)
+#zfssa_enable_local_cache = true
+
+# Name of ZFSSA project where cache volumes are stored. (string value)
+#zfssa_cache_project = os-cinder-cache
+
+# Driver policy for volume manage. (string value)
+# Possible values:
+# loose - <No description provided>
+# strict - <No description provided>
+#zfssa_manage_policy = loose
+
+# Data path IP address (string value)
+#zfssa_data_ip = <None>
+
+# HTTPS port number (string value)
+#zfssa_https_port = 443
+
+# Options to be passed while mounting share over nfs (string value)
+#zfssa_nfs_mount_options =
+
+# Storage pool name. (string value)
+#zfssa_nfs_pool =
+
+# Project name. (string value)
+#zfssa_nfs_project = NFSProject
+
+# Share name. (string value)
+#zfssa_nfs_share = nfs_share
+
+# Data compression. (string value)
+# Possible values:
+# off - <No description provided>
+# lzjb - <No description provided>
+# gzip-2 - <No description provided>
+# gzip - <No description provided>
+# gzip-9 - <No description provided>
+#zfssa_nfs_share_compression = off
+
+# Synchronous write bias-latency, throughput. (string value)
+# Possible values:
+# latency - <No description provided>
+# throughput - <No description provided>
+#zfssa_nfs_share_logbias = latency
+
+# Name of directory inside zfssa_nfs_share where cache volumes are stored.
+# (string value)
+#zfssa_cache_directory = os-cinder-cache
+
+# Driver to use for volume creation (string value)
+#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
+
+# User defined capabilities, a JSON formatted string specifying key/value
+# pairs. The key/value pairs can be used by the CapabilitiesFilter to select
+# between backends when requests specify volume types. For example, specifying
+# a service level or the geographical location of a backend, then creating a
+# volume type to allow the user to select by these different properties.
+# (string value)
+#extra_capabilities = {}
+
+# Suppress requests library SSL certificate warnings. (boolean value)
+#suppress_requests_ssl_warnings = false
+
+# Size of the native threads pool for the backend.  Increase for backends that
+# heavily rely on this, like the RBD driver. (integer value)
+# Minimum value: 20
+#backend_native_threads_pool_size = 20
+
+
+[coordination]
+
+#
+# From cinder
+#
+
+# The backend URL to use for distributed coordination. (string value)
+#backend_url = file://$state_path
+
+
+[fc-zone-manager]
+
+#
+# From cinder
+#
+
+# South bound connector for zoning operation (string value)
+#brcd_sb_connector = HTTP
+
+# Southbound connector for zoning operation (string value)
+#cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
+
+# FC Zone Driver responsible for zone management (string value)
+#zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
+
+# Zoning policy configured by user; valid values include "initiator-target" or
+# "initiator" (string value)
+#zoning_policy = initiator-target
+
+# Comma separated list of Fibre Channel fabric names. This list of names is
+# used to retrieve other SAN credentials for connecting to each SAN fabric
+# (string value)
+#fc_fabric_names = <None>
+
+# FC SAN Lookup Service (string value)
+#fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
+
+# Set this to True when you want to allow an unsupported zone manager driver to
+# start.  Drivers that haven't maintained a working CI system and testing are
+# marked as unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+
+[nova]
+
+#
+# From cinder
+#
+
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#interface = public
+
+# The authentication URL for the nova connection when using the current users
+# token (string value)
+#token_auth_url = <None>
+
+# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section = <None>
+
+
+[service_user]
+
+#
+# From cinder
+#
+
+#
+# When True, if sending a user token to an REST API, also send a service token.
+#  (boolean value)
+#send_service_user_token = false
+[barbican]
+#
+# From castellan.config
+#
+
+# Use this endpoint to connect to Barbican, for example:
+# "http://localhost:9311/" (string value)
+#barbican_endpoint = <None>
+
+# Version of the Barbican API, for example: "v1" (string value)
+#barbican_api_version = <None>
+
+# Use this endpoint to connect to Keystone (string value)
+# Deprecated group/name - [key_manager]/auth_url
+#auth_endpoint = http://localhost/identity/v3
+auth_endpoint = http://10.167.4.35:35357/v3
+
+# Number of seconds to wait before retrying poll for key creation completion
+# (integer value)
+#retry_delay = 1
+
+# Number of times to retry poll for key creation completion (integer value)
+#number_of_retries = 60
+
+# Specifies if insecure TLS (https) requests. If False, the server's
+# certificate will not be validated (boolean value)
+#verify_ssl = true
+
+# Specifies the type of endpoint.  Allowed values are: public, internal, and
+# admin (string value)
+# Possible values:
+# public - <No description provided>
+# internal - <No description provided>
+# admin - <No description provided>
+#barbican_endpoint_type = public
+barbican_endpoint_type = internal
+
+
+[key_manager]
+#
+# From castellan.config
+#
+
+# Specify the key manager implementation. Options are "barbican" and "vault".
+# Default is  "barbican". Will support the  values earlier set using
+# [key_manager]/api_class for some time. (string value)
+# Deprecated group/name - [key_manager]/api_class
+#backend = barbican
+backend = barbican
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = cinder
+
+
+[keystone_authtoken]
+
+#
+# From keystonemiddleware.auth_token
+#
+
+# Complete "public" Identity API endpoint. This endpoint should not be an
+# "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. (string
+# value)
+# Deprecated group/name - [keystone_authtoken]/auth_uri
+#www_authenticate_uri = <None>
+www_authenticate_uri = http://10.167.4.35:5000
+
+# DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
+# be an "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. This option
+# is deprecated in favor of www_authenticate_uri and will be removed in the S
+# release. (string value)
+# This option is deprecated for removal since Queens.
+# Its value may be silently ignored in the future.
+# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
+# and will be removed in the S  release.
+#auth_uri = <None>
+auth_uri = http://10.167.4.35:5000
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+# Do not handle authorization requests within the middleware, but delegate the
+# authorization decision to downstream WSGI components. (boolean value)
+#delay_auth_decision = false
+
+# Request timeout value for communicating with Identity API server. (integer
+# value)
+#http_connect_timeout = <None>
+
+# How many times are we trying to reconnect when communicating with Identity
+# API Server. (integer value)
+#http_request_max_retries = 3
+
+# Request environment key where the Swift cache object is stored. When
+# auth_token middleware is deployed with a Swift cache, use this option to have
+# the middleware share a caching backend with swift. Otherwise, use the
+# ``memcached_servers`` option instead. (string value)
+#cache = <None>
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# The region in which the identity server can be found. (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# DEPRECATED: Directory used to cache files related to PKI tokens. This option
+# has been deprecated in the Ocata release and will be removed in the P
+# release. (string value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#signing_dir = <None>
+
+# Optionally specify a list of memcached server(s) to use for caching. If left
+# undefined, tokens will instead be cached in-process. (list value)
+# Deprecated group/name - [keystone_authtoken]/memcache_servers
+#memcached_servers = <None>
+memcached_servers=10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
+
+# In order to prevent excessive effort spent validating tokens, the middleware
+# caches previously-seen tokens for a configurable duration (in seconds). Set
+# to -1 to disable caching completely. (integer value)
+#token_cache_time = 300
+
+# DEPRECATED: Determines the frequency at which the list of revoked tokens is
+# retrieved from the Identity service (in seconds). A high number of revocation
+# events combined with a low cache duration may significantly reduce
+# performance. Only valid for PKI tokens. This option has been deprecated in
+# the Ocata release and will be removed in the P release. (integer value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#revocation_cache_time = 10
+
+# (Optional) Number of seconds memcached server is considered dead before it is
+# tried again. (integer value)
+#memcache_pool_dead_retry = 300
+
+# (Optional) Maximum total number of open connections to every memcached
+# server. (integer value)
+#memcache_pool_maxsize = 10
+
+# (Optional) Socket timeout in seconds for communicating with a memcached
+# server. (integer value)
+#memcache_pool_socket_timeout = 3
+
+# (Optional) Number of seconds a connection to memcached is held unused in the
+# pool before it is closed. (integer value)
+#memcache_pool_unused_timeout = 60
+
+# (Optional) Number of seconds that an operation will wait to get a memcached
+# client connection from the pool. (integer value)
+#memcache_pool_conn_get_timeout = 10
+
+# (Optional) Use the advanced (eventlet safe) memcached client pool. The
+# advanced pool will only work under python 2.x. (boolean value)
+#memcache_use_advanced_pool = false
+
+# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
+# middleware will not ask for service catalog on token validation and will not
+# set the X-Service-Catalog header. (boolean value)
+#include_service_catalog = true
+
+# Used to control the use and type of token binding. Can be set to: "disabled"
+# to not check token binding. "permissive" (default) to validate binding
+# information if the bind type is of a form known to the server and ignore it
+# if not. "strict" like "permissive" but if the bind type is unknown the token
+# will be rejected. "required" any form of token binding is needed to be
+# allowed. Finally the name of a binding method that must be present in tokens.
+# (string value)
+#enforce_token_bind = permissive
+
+# DEPRECATED: If true, the revocation list will be checked for cached tokens.
+# This requires that PKI tokens are configured on the identity server. (boolean
+# value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#check_revocations_for_cached = false
+
+# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
+# single algorithm or multiple. The algorithms are those supported by Python
+# standard hashlib.new(). The hashes will be tried in the order given, so put
+# the preferred one first for performance. The result of the first hash will be
+# stored in the cache. This will typically be set to multiple values only while
+# migrating from a less secure algorithm to a more secure one. Once all the old
+# tokens are expired this option should be set to a single value for better
+# performance. (list value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#hash_algorithms = md5
+
+# A choice of roles that must be present in a service token. Service tokens are
+# allowed to request that an expired token can be used and so this check should
+# tightly control that only actual services should be sending this token. Roles
+# here are applied as an ANY check so any role in this list must be present.
+# For backwards compatibility reasons this currently only affects the
+# allow_expired check. (list value)
+#service_token_roles = service
+
+# For backwards compatibility reasons we must let valid service tokens pass
+# that don't pass the service_token_roles check as valid. Setting this true
+# will become the default in a future release and should be enabled if
+# possible. (boolean value)
+#service_token_roles_required = false
+
+# Authentication type to load (string value)
+# Deprecated group/name - [keystone_authtoken]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section = <None>
+
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = cinder
+
+[profiler]
+
+[oslo_concurrency]
 
 [database]
-connection = sqlite:////var/lib/cinder/cinder.sqlite
+#
+# From oslo.db
+#
+
+# If True, SQLite uses synchronous mode. (boolean value)
+#sqlite_synchronous = true
+
+# The back end to use for the database. (string value)
+# Deprecated group/name - [DEFAULT]/db_backend
+#backend = sqlalchemy
+
+# The SQLAlchemy connection string to use to connect to the database.
+# (string value)
+# Deprecated group/name - [DEFAULT]/sql_connection
+# Deprecated group/name - [DATABASE]/sql_connection
+# Deprecated group/name - [sql]/connection
+#connection = <None>
+connection = mysql+pymysql://cinder:opnfv_secret@10.167.4.23/cinder?charset=utf8
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
+#slave_connection = <None>
+
+# The SQL mode to be used for MySQL sessions. This option, including
+# the default, overrides any server-set SQL mode. To use whatever SQL
+# mode is set by the server configuration, set this to no value.
+# Example: mysql_sql_mode= (string value)
+#mysql_sql_mode = TRADITIONAL
+
+# If True, transparently enables support for handling MySQL Cluster
+# (NDB). (boolean value)
+#mysql_enable_ndb = false
+
+# Connections which have been present in the connection pool longer
+# than this number of seconds will be replaced with a new one the next
+# time they are checked out from the pool. (integer value)
+# Deprecated group/name - [DATABASE]/idle_timeout
+# Deprecated group/name - [database]/idle_timeout
+# Deprecated group/name - [DEFAULT]/sql_idle_timeout
+# Deprecated group/name - [DATABASE]/sql_idle_timeout
+# Deprecated group/name - [sql]/idle_timeout
+#connection_recycle_time = 3600
+# (obryndzii) we change default connection_recycle_time to 280 in order to fix numerous
+# DBConnection errors in services until we implement this setting in reclass-system
+connection_recycle_time = 300
+
+# Minimum number of SQL connections to keep open in a pool. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_min_pool_size
+# Deprecated group/name - [DATABASE]/sql_min_pool_size
+#min_pool_size = 1
+
+# Maximum number of SQL connections to keep open in a pool. Setting a
+# value of 0 indicates no limit. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_pool_size
+# Deprecated group/name - [DATABASE]/sql_max_pool_size
+#max_pool_size = 5
+max_pool_size = 10
+
+# Maximum number of database connection retries during startup. Set to
+# -1 to specify an infinite retry count. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_retries
+# Deprecated group/name - [DATABASE]/sql_max_retries
+#max_retries = 10
+max_retries = -1
+
+# Interval between retries of opening a SQL connection. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_retry_interval
+# Deprecated group/name - [DATABASE]/reconnect_interval
+#retry_interval = 10
+
+# If set, use this value for max_overflow with SQLAlchemy. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_max_overflow
+# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
+#max_overflow = 50
+max_overflow = 30
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything.
+# (integer value)
+# Minimum value: 0
+# Maximum value: 100
+# Deprecated group/name - [DEFAULT]/sql_connection_debug
+#connection_debug = 0
+
+# Add Python stack traces to SQL as comment strings. (boolean value)
+# Deprecated group/name - [DEFAULT]/sql_connection_trace
+#connection_trace = false
+
+# If set, use this value for pool_timeout with SQLAlchemy. (integer
+# value)
+# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
+#pool_timeout = <None>
+
+# Enable the experimental use of database reconnect on connection
+# lost. (boolean value)
+#use_db_reconnect = false
+
+# Seconds between retries of a database transaction. (integer value)
+#db_retry_interval = 1
+
+# If True, increases the interval between retries of a database
+# operation up to db_max_retry_interval. (boolean value)
+#db_inc_retry_interval = true
+
+# If db_inc_retry_interval is set, the maximum seconds between retries
+# of a database operation. (integer value)
+#db_max_retry_interval = 10
+
+# Maximum retries in case of connection error or deadlock error before
+# error is raised. Set to -1 to specify an infinite retry count.
+# (integer value)
+#db_max_retries = 20
+
+#
+# From oslo.db.concurrency
+#
+
+# Enable the experimental use of thread pooling for all DB API calls
+# (boolean value)
+# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
+#use_tpool = false
+
+[oslo_messaging_notifications]
+#
+# From oslo.messaging
+#
+
+# The Drivers(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+
+# A URL representing the messaging driver to use for notifications. If
+# not set, we fall back to the same configuration used for RPC.
+# (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+
+# The maximum number of attempts to re-send a notification message
+# which failed to be delivered due to a recoverable error. 0 - No
+# retry, -1 - indefinite (integer value)
+#retry = -1
+[oslo_messaging_rabbit]
+#
+# From oslo.messaging
+#
+
+# Use durable queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/amqp_durable_queues
+# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
+#amqp_durable_queues = false
+
+# Auto-delete queues in AMQP. (boolean value)
+#amqp_auto_delete = false
+
+# Enable SSL (boolean value)
+#ssl = <None>
+
+
+# How long to wait before reconnecting in response to an AMQP consumer
+# cancel notification. (floating point value)
+#kombu_reconnect_delay = 1.0
+
+# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression
+# will not be used. This option may not be available in future
+# versions. (string value)
+#kombu_compression = <None>
+
+# How long to wait a missing client before abandoning to send it its
+# replies. This value should not be longer than rpc_response_timeout.
+# (integer value)
+# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
+#kombu_missing_consumer_retry_timeout = 60
+
+# Determines how the next RabbitMQ node is chosen in case the one we
+# are currently connected to becomes unavailable. Takes effect only if
+# more than one RabbitMQ node is provided in config. (string value)
+# Possible values:
+# round-robin - <No description provided>
+# shuffle - <No description provided>
+#kombu_failover_strategy = round-robin
+
+# DEPRECATED: The RabbitMQ broker address where a single node is used.
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_host = localhost
+
+# DEPRECATED: The RabbitMQ broker port where a single node is used.
+# (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_port = 5672
+
+# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_hosts = $rabbit_host:$rabbit_port
+
+# DEPRECATED: The RabbitMQ userid. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_userid = guest
+
+# DEPRECATED: The RabbitMQ password. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_password = guest
+
+# The RabbitMQ login method. (string value)
+# Possible values:
+# PLAIN - <No description provided>
+# AMQPLAIN - <No description provided>
+# RABBIT-CR-DEMO - <No description provided>
+#rabbit_login_method = AMQPLAIN
+
+# DEPRECATED: The RabbitMQ virtual host. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_virtual_host = /
+
+# How frequently to retry connecting with RabbitMQ. (integer value)
+#rabbit_retry_interval = 1
+
+# How long to backoff for between retries when connecting to RabbitMQ.
+# (integer value)
+#rabbit_retry_backoff = 2
+
+# Maximum interval of RabbitMQ connection retries. Default is 30
+# seconds. (integer value)
+#rabbit_interval_max = 30
+
+# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
+# is 0 (infinite retry count). (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#rabbit_max_retries = 0
+
+# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
+# queue mirroring is no longer controlled by the x-ha-policy argument
+# when declaring a queue. If you just want to make sure that all
+# queues (except those with auto-generated names) are mirrored across
+# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
+# mode": "all"}' " (boolean value)
+#rabbit_ha_queues = false
+
+# Positive integer representing duration in seconds for queue TTL
+# (x-expires). Queues which are unused for the duration of the TTL are
+# automatically deleted. The parameter affects only reply and fanout
+# queues. (integer value)
+# Minimum value: 1
+#rabbit_transient_queues_ttl = 1800
+
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
+
+# NOTE(dmescheryakov) hardcoding to >0 by default
+# Having no prefetch limit makes oslo.messaging consume all available
+# messages from the queue. That can lead to a situation when several
+# server processes hog all the messages leaving others out of business.
+# That leads to artificial high message processing latency and at the
+# extrime to MessagingTimeout errors.
+rabbit_qos_prefetch_count = 64
+
+# Number of seconds after which the Rabbit broker is considered down
+# if heartbeat's keep-alive fails (0 disable the heartbeat).
+# EXPERIMENTAL (integer value)
+#heartbeat_timeout_threshold = 60
+
+# How often times during the heartbeat_timeout_threshold we check the
+# heartbeat. (integer value)
+#heartbeat_rate = 2
+
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
+#fake_rabbit = false
+
+# Maximum number of channels to allow (integer value)
+#channel_max = <None>
+
+
+# The maximum byte size for an AMQP frame (integer value)
+#frame_max = <None>
+
+# How often to send heartbeats for consumer's connections (integer
+# value)
+#heartbeat_interval = 3
+
+# Arguments passed to ssl.wrap_socket (dict value)
+#ssl_options = <None>
+
+# Set socket timeout in seconds for connection's socket (floating
+# point value)
+#socket_timeout = 0.25
+
+# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
+# point value)
+#tcp_user_timeout = 0.25
+
+# Set delay for reconnection to some host which has connection error
+# (floating point value)
+#host_connection_reconnect_delay = 0.25
+
+# Connection factory implementation (string value)
+# Possible values:
+# new - <No description provided>
+# single - <No description provided>
+# read_write - <No description provided>
+#connection_factory = single
+
+# Maximum number of connections to keep queued. (integer value)
+#pool_max_size = 30
+
+# Maximum number of connections to create above `pool_max_size`.
+# (integer value)
+#pool_max_overflow = 0
+
+# Default number of seconds to wait for a connections to available
+# (integer value)
+#pool_timeout = 30
+
+# Lifetime of a connection (since creation) in seconds or None for no
+# recycling. Expired connections are closed on acquire. (integer
+# value)
+#pool_recycle = 600
+
+# Threshold at which inactive (since release) connections are
+# considered stale in seconds or None for no staleness. Stale
+# connections are closed on acquire. (integer value)
+#pool_stale = 60
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#default_serializer_type = json
+
+# Persist notification messages. (boolean value)
+#notification_persistence = false
+
+# Exchange name for sending notifications (string value)
+#default_notification_exchange = ${control_exchange}_notification
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# notification listener. (integer value)
+#notification_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending notification, -1 means infinite retry. (integer value)
+#default_notification_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending notification message (floating point value)
+#notification_retry_delay = 0.25
+
+# Time to live for rpc queues without consumers in seconds. (integer
+# value)
+#rpc_queue_expiration = 60
+
+# Exchange name for sending RPC messages (string value)
+#default_rpc_exchange = ${control_exchange}_rpc
+
+# Exchange name for receiving RPC replies (string value)
+#rpc_reply_exchange = ${control_exchange}_rpc_reply
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# rpc listener. (integer value)
+#rpc_listener_prefetch_count = 100
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# rpc reply listener. (integer value)
+#rpc_reply_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending reply. -1 means infinite retry during rpc_timeout (integer
+# value)
+#rpc_reply_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending reply. (floating point value)
+#rpc_reply_retry_delay = 0.25
+
+# Reconnecting retry count in case of connectivity problem during
+# sending RPC message, -1 means infinite retry. If actual retry
+# attempts in not 0 the rpc request could be processed more than one
+# time (integer value)
+#default_rpc_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending RPC message (floating point value)
+#rpc_retry_delay = 0.25
+
+[oslo_middleware]
+#
+# From oslo.middleware
+#
+
+# The maximum body size for each  request, in bytes. (integer value)
+# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
+# Deprecated group/name - [DEFAULT]/max_request_body_size
+#max_request_body_size = 114688
+
+# DEPRECATED: The HTTP Header that will be used to determine what the
+# original request protocol scheme was, even if it was hidden by a SSL
+# termination proxy. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#secure_proxy_ssl_header = X-Forwarded-Proto
+
+# Whether the application is behind a proxy or not. This determines if
+# the middleware should parse the headers or not. (boolean value)
+enable_proxy_headers_parsing = True
+
+[oslo_policy]
+
+[oslo_reports]

2019-04-30 22:26:10,761 [salt.state       :1951][INFO    ][11269] Completed state [/etc/cinder/cinder.conf] at time 22:26:10.761048 duration_in_ms=314.819
2019-04-30 22:26:10,761 [salt.state       :1780][INFO    ][11269] Running state [/etc/cinder/api-paste.ini] at time 22:26:10.761536
2019-04-30 22:26:10,761 [salt.state       :1813][INFO    ][11269] Executing state file.managed for [/etc/cinder/api-paste.ini]
2019-04-30 22:26:10,775 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/files/rocky/api-paste.ini.volume.Debian'
2019-04-30 22:26:10,821 [salt.state       :300 ][INFO    ][11269] {'mode': '0640'}
2019-04-30 22:26:10,821 [salt.state       :1951][INFO    ][11269] Completed state [/etc/cinder/api-paste.ini] at time 22:26:10.821944 duration_in_ms=60.407
2019-04-30 22:26:10,822 [salt.state       :1780][INFO    ][11269] Running state [/etc/default/cinder-volume] at time 22:26:10.822316
2019-04-30 22:26:10,822 [salt.state       :1813][INFO    ][11269] Executing state file.managed for [/etc/default/cinder-volume]
2019-04-30 22:26:10,836 [salt.fileclient  :1219][INFO    ][11269] Fetching file from saltenv 'base', ** done ** 'cinder/files/default'
2019-04-30 22:26:10,842 [salt.state       :300 ][INFO    ][11269] File changed:
New file
2019-04-30 22:26:10,842 [salt.state       :1951][INFO    ][11269] Completed state [/etc/default/cinder-volume] at time 22:26:10.842205 duration_in_ms=19.888
2019-04-30 22:26:10,844 [salt.state       :1780][INFO    ][11269] Running state [cinder-volume] at time 22:26:10.843983
2019-04-30 22:26:10,844 [salt.state       :1813][INFO    ][11269] Executing state service.running for [cinder-volume]
2019-04-30 22:26:10,844 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'status', 'cinder-volume.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:10,855 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:10,861 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:10,866 [salt.state       :300 ][INFO    ][11269] The service cinder-volume is already running
2019-04-30 22:26:10,866 [salt.state       :1951][INFO    ][11269] Completed state [cinder-volume] at time 22:26:10.866936 duration_in_ms=22.953
2019-04-30 22:26:10,867 [salt.state       :1780][INFO    ][11269] Running state [cinder-volume] at time 22:26:10.867083
2019-04-30 22:26:10,867 [salt.state       :1813][INFO    ][11269] Executing state service.mod_watch for [cinder-volume]
2019-04-30 22:26:10,867 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:10,873 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:10,886 [salt.loaded.int.module.cmdmod:730 ][ERROR   ][11269] Command '['systemd-run', '--scope', 'systemctl', 'restart', 'cinder-volume.service']' failed with return code: 1
2019-04-30 22:26:10,887 [salt.loaded.int.module.cmdmod:734 ][ERROR   ][11269] stderr: Running scope as unit run-r1f8d653a4e354f4fa9890cdfe74bc8d8.scope.
Job for cinder-volume.service failed. See "systemctl status cinder-volume.service" and "journalctl -xe" for details.
2019-04-30 22:26:10,887 [salt.loaded.int.module.cmdmod:736 ][ERROR   ][11269] retcode: 1
2019-04-30 22:26:10,887 [salt.state       :302 ][ERROR   ][11269] Running scope as unit run-r1f8d653a4e354f4fa9890cdfe74bc8d8.scope.
Job for cinder-volume.service failed. See "systemctl status cinder-volume.service" and "journalctl -xe" for details.
2019-04-30 22:26:10,887 [salt.state       :1951][INFO    ][11269] Completed state [cinder-volume] at time 22:26:10.887529 duration_in_ms=20.446
2019-04-30 22:26:10,887 [salt.state       :1780][INFO    ][11269] Running state [open-iscsi] at time 22:26:10.887951
2019-04-30 22:26:10,888 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [open-iscsi]
2019-04-30 22:26:10,895 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:10,895 [salt.state       :1951][INFO    ][11269] Completed state [open-iscsi] at time 22:26:10.895293 duration_in_ms=7.342
2019-04-30 22:26:10,895 [salt.state       :1780][INFO    ][11269] Running state [tgt] at time 22:26:10.895563
2019-04-30 22:26:10,895 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [tgt]
2019-04-30 22:26:10,900 [salt.state       :300 ][INFO    ][11269] All specified packages are already installed
2019-04-30 22:26:10,900 [salt.state       :1951][INFO    ][11269] Completed state [tgt] at time 22:26:10.900184 duration_in_ms=4.621
2019-04-30 22:26:10,900 [salt.state       :1780][INFO    ][11269] Running state [thin-provisioning-tools] at time 22:26:10.900465
2019-04-30 22:26:10,900 [salt.state       :1813][INFO    ][11269] Executing state pkg.installed for [thin-provisioning-tools]
2019-04-30 22:26:10,914 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:26:10,930 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'thin-provisioning-tools'] in directory '/root'
2019-04-30 22:26:12,755 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:12,785 [salt.state       :300 ][INFO    ][11269] Made the following changes:
'thin-provisioning-tools' changed from 'absent' to '0.5.6-1ubuntu1'

2019-04-30 22:26:12,804 [salt.state       :915 ][INFO    ][11269] Loading fresh modules for state activity
2019-04-30 22:26:12,826 [salt.state       :1951][INFO    ][11269] Completed state [thin-provisioning-tools] at time 22:26:12.826244 duration_in_ms=1925.779
2019-04-30 22:26:13,145 [salt.state       :1780][INFO    ][11269] Running state [open-iscsi] at time 22:26:13.145402
2019-04-30 22:26:13,145 [salt.state       :1813][INFO    ][11269] Executing state service.running for [open-iscsi]
2019-04-30 22:26:13,146 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'status', 'open-iscsi.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:13,156 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-active', 'open-iscsi.service'] in directory '/root'
2019-04-30 22:26:13,163 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-enabled', 'open-iscsi.service'] in directory '/root'
2019-04-30 22:26:13,171 [salt.state       :300 ][INFO    ][11269] The service open-iscsi is already running
2019-04-30 22:26:13,171 [salt.state       :1951][INFO    ][11269] Completed state [open-iscsi] at time 22:26:13.171953 duration_in_ms=26.551
2019-04-30 22:26:13,172 [salt.state       :1780][INFO    ][11269] Running state [tgt] at time 22:26:13.172320
2019-04-30 22:26:13,172 [salt.state       :1813][INFO    ][11269] Executing state service.running for [tgt]
2019-04-30 22:26:13,172 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'status', 'tgt.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:13,181 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-active', 'tgt.service'] in directory '/root'
2019-04-30 22:26:13,188 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-enabled', 'tgt.service'] in directory '/root'
2019-04-30 22:26:13,196 [salt.state       :300 ][INFO    ][11269] The service tgt is already running
2019-04-30 22:26:13,196 [salt.state       :1951][INFO    ][11269] Completed state [tgt] at time 22:26:13.196845 duration_in_ms=24.524
2019-04-30 22:26:13,197 [salt.state       :1780][INFO    ][11269] Running state [iscsid] at time 22:26:13.197393
2019-04-30 22:26:13,197 [salt.state       :1813][INFO    ][11269] Executing state service.running for [iscsid]
2019-04-30 22:26:13,198 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'status', 'iscsid.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:13,204 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-active', 'iscsid.service'] in directory '/root'
2019-04-30 22:26:13,212 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11269] Executing command ['systemctl', 'is-enabled', 'iscsid.service'] in directory '/root'
2019-04-30 22:26:13,220 [salt.state       :300 ][INFO    ][11269] The service iscsid is already running
2019-04-30 22:26:13,220 [salt.state       :1951][INFO    ][11269] Completed state [iscsid] at time 22:26:13.220708 duration_in_ms=23.314
2019-04-30 22:26:13,222 [salt.minion      :1711][INFO    ][11269] Returning information for job: 20190430222407792286
2019-04-30 22:26:23,740 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.sls with jid 20190430222623724024
2019-04-30 22:26:23,750 [salt.minion      :1432][INFO    ][16895] Starting a new job with PID 16895
2019-04-30 22:26:28,423 [salt.state       :915 ][INFO    ][16895] Loading fresh modules for state activity
2019-04-30 22:26:29,722 [salt.state       :1780][INFO    ][16895] Running state [cinder-volume] at time 22:26:29.722872
2019-04-30 22:26:29,723 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [cinder-volume]
2019-04-30 22:26:29,723 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:30,058 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,058 [salt.state       :1951][INFO    ][16895] Completed state [cinder-volume] at time 22:26:30.058410 duration_in_ms=335.538
2019-04-30 22:26:30,058 [salt.state       :1780][INFO    ][16895] Running state [lvm2] at time 22:26:30.058605
2019-04-30 22:26:30,058 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [lvm2]
2019-04-30 22:26:30,063 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,063 [salt.state       :1951][INFO    ][16895] Completed state [lvm2] at time 22:26:30.063408 duration_in_ms=4.803
2019-04-30 22:26:30,063 [salt.state       :1780][INFO    ][16895] Running state [sysfsutils] at time 22:26:30.063545
2019-04-30 22:26:30,063 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [sysfsutils]
2019-04-30 22:26:30,068 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,068 [salt.state       :1951][INFO    ][16895] Completed state [sysfsutils] at time 22:26:30.068315 duration_in_ms=4.77
2019-04-30 22:26:30,068 [salt.state       :1780][INFO    ][16895] Running state [sg3-utils] at time 22:26:30.068463
2019-04-30 22:26:30,068 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [sg3-utils]
2019-04-30 22:26:30,072 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,073 [salt.state       :1951][INFO    ][16895] Completed state [sg3-utils] at time 22:26:30.073018 duration_in_ms=4.555
2019-04-30 22:26:30,073 [salt.state       :1780][INFO    ][16895] Running state [python-cinder] at time 22:26:30.073149
2019-04-30 22:26:30,073 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [python-cinder]
2019-04-30 22:26:30,077 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,077 [salt.state       :1951][INFO    ][16895] Completed state [python-cinder] at time 22:26:30.077794 duration_in_ms=4.645
2019-04-30 22:26:30,077 [salt.state       :1780][INFO    ][16895] Running state [python-mysqldb] at time 22:26:30.077929
2019-04-30 22:26:30,078 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [python-mysqldb]
2019-04-30 22:26:30,082 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,082 [salt.state       :1951][INFO    ][16895] Completed state [python-mysqldb] at time 22:26:30.082632 duration_in_ms=4.703
2019-04-30 22:26:30,082 [salt.state       :1780][INFO    ][16895] Running state [p7zip] at time 22:26:30.082768
2019-04-30 22:26:30,082 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [p7zip]
2019-04-30 22:26:30,087 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,087 [salt.state       :1951][INFO    ][16895] Completed state [p7zip] at time 22:26:30.087320 duration_in_ms=4.552
2019-04-30 22:26:30,087 [salt.state       :1780][INFO    ][16895] Running state [gettext-base] at time 22:26:30.087457
2019-04-30 22:26:30,087 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [gettext-base]
2019-04-30 22:26:30,091 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,092 [salt.state       :1951][INFO    ][16895] Completed state [gettext-base] at time 22:26:30.092070 duration_in_ms=4.613
2019-04-30 22:26:30,092 [salt.state       :1780][INFO    ][16895] Running state [python-memcache] at time 22:26:30.092206
2019-04-30 22:26:30,092 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [python-memcache]
2019-04-30 22:26:30,096 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,096 [salt.state       :1951][INFO    ][16895] Completed state [python-memcache] at time 22:26:30.096875 duration_in_ms=4.669
2019-04-30 22:26:30,097 [salt.state       :1780][INFO    ][16895] Running state [python-pycadf] at time 22:26:30.097009
2019-04-30 22:26:30,097 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [python-pycadf]
2019-04-30 22:26:30,101 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,101 [salt.state       :1951][INFO    ][16895] Completed state [python-pycadf] at time 22:26:30.101661 duration_in_ms=4.653
2019-04-30 22:26:30,101 [salt.state       :1780][INFO    ][16895] Running state [cinder_volume_ssl_mysql] at time 22:26:30.101927
2019-04-30 22:26:30,102 [salt.state       :1813][INFO    ][16895] Executing state test.show_notification for [cinder_volume_ssl_mysql]
2019-04-30 22:26:30,102 [salt.state       :300 ][INFO    ][16895] Running cinder._ssl.volume_mysql
2019-04-30 22:26:30,102 [salt.state       :1951][INFO    ][16895] Completed state [cinder_volume_ssl_mysql] at time 22:26:30.102342 duration_in_ms=0.416
2019-04-30 22:26:30,102 [salt.state       :1780][INFO    ][16895] Running state [cinder_volume_ssl_rabbitmq] at time 22:26:30.102578
2019-04-30 22:26:30,102 [salt.state       :1813][INFO    ][16895] Executing state test.show_notification for [cinder_volume_ssl_rabbitmq]
2019-04-30 22:26:30,102 [salt.state       :300 ][INFO    ][16895] Running cinder._ssl.rabbitmq
2019-04-30 22:26:30,103 [salt.state       :1951][INFO    ][16895] Completed state [cinder_volume_ssl_rabbitmq] at time 22:26:30.102989 duration_in_ms=0.41
2019-04-30 22:26:30,104 [salt.state       :1780][INFO    ][16895] Running state [/var/lock/cinder] at time 22:26:30.104717
2019-04-30 22:26:30,104 [salt.state       :1813][INFO    ][16895] Executing state file.directory for [/var/lock/cinder]
2019-04-30 22:26:30,105 [salt.state       :300 ][INFO    ][16895] Directory /var/lock/cinder is in the correct state
Directory /var/lock/cinder updated
2019-04-30 22:26:30,105 [salt.state       :1951][INFO    ][16895] Completed state [/var/lock/cinder] at time 22:26:30.105608 duration_in_ms=0.89
2019-04-30 22:26:30,105 [salt.state       :1780][INFO    ][16895] Running state [/etc/cinder/cinder.conf] at time 22:26:30.105947
2019-04-30 22:26:30,106 [salt.state       :1813][INFO    ][16895] Executing state file.managed for [/etc/cinder/cinder.conf]
2019-04-30 22:26:30,336 [salt.state       :300 ][INFO    ][16895] File /etc/cinder/cinder.conf is in the correct state
2019-04-30 22:26:30,336 [salt.state       :1951][INFO    ][16895] Completed state [/etc/cinder/cinder.conf] at time 22:26:30.336294 duration_in_ms=230.347
2019-04-30 22:26:30,336 [salt.state       :1780][INFO    ][16895] Running state [/etc/cinder/api-paste.ini] at time 22:26:30.336567
2019-04-30 22:26:30,336 [salt.state       :1813][INFO    ][16895] Executing state file.managed for [/etc/cinder/api-paste.ini]
2019-04-30 22:26:30,378 [salt.state       :300 ][INFO    ][16895] File /etc/cinder/api-paste.ini is in the correct state
2019-04-30 22:26:30,378 [salt.state       :1951][INFO    ][16895] Completed state [/etc/cinder/api-paste.ini] at time 22:26:30.378585 duration_in_ms=42.017
2019-04-30 22:26:30,378 [salt.state       :1780][INFO    ][16895] Running state [/etc/default/cinder-volume] at time 22:26:30.378834
2019-04-30 22:26:30,378 [salt.state       :1813][INFO    ][16895] Executing state file.managed for [/etc/default/cinder-volume]
2019-04-30 22:26:30,391 [salt.state       :300 ][INFO    ][16895] File /etc/default/cinder-volume is in the correct state
2019-04-30 22:26:30,391 [salt.state       :1951][INFO    ][16895] Completed state [/etc/default/cinder-volume] at time 22:26:30.391302 duration_in_ms=12.468
2019-04-30 22:26:30,392 [salt.state       :1780][INFO    ][16895] Running state [cinder-volume] at time 22:26:30.392559
2019-04-30 22:26:30,392 [salt.state       :1813][INFO    ][16895] Executing state service.running for [cinder-volume]
2019-04-30 22:26:30,393 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'status', 'cinder-volume.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:30,406 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:30,412 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:30,418 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:30,427 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:30,435 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:30,444 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:30,453 [salt.state       :300 ][INFO    ][16895] {'cinder-volume': True}
2019-04-30 22:26:30,453 [salt.state       :1951][INFO    ][16895] Completed state [cinder-volume] at time 22:26:30.453187 duration_in_ms=60.625
2019-04-30 22:26:30,453 [salt.state       :1780][INFO    ][16895] Running state [open-iscsi] at time 22:26:30.453572
2019-04-30 22:26:30,453 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [open-iscsi]
2019-04-30 22:26:30,459 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,459 [salt.state       :1951][INFO    ][16895] Completed state [open-iscsi] at time 22:26:30.459845 duration_in_ms=6.272
2019-04-30 22:26:30,460 [salt.state       :1780][INFO    ][16895] Running state [tgt] at time 22:26:30.460103
2019-04-30 22:26:30,460 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [tgt]
2019-04-30 22:26:30,464 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,464 [salt.state       :1951][INFO    ][16895] Completed state [tgt] at time 22:26:30.464899 duration_in_ms=4.796
2019-04-30 22:26:30,465 [salt.state       :1780][INFO    ][16895] Running state [thin-provisioning-tools] at time 22:26:30.465152
2019-04-30 22:26:30,465 [salt.state       :1813][INFO    ][16895] Executing state pkg.installed for [thin-provisioning-tools]
2019-04-30 22:26:30,469 [salt.state       :300 ][INFO    ][16895] All specified packages are already installed
2019-04-30 22:26:30,469 [salt.state       :1951][INFO    ][16895] Completed state [thin-provisioning-tools] at time 22:26:30.469801 duration_in_ms=4.649
2019-04-30 22:26:30,470 [salt.state       :1780][INFO    ][16895] Running state [open-iscsi] at time 22:26:30.470227
2019-04-30 22:26:30,470 [salt.state       :1813][INFO    ][16895] Executing state service.running for [open-iscsi]
2019-04-30 22:26:30,470 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'status', 'open-iscsi.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:30,480 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-active', 'open-iscsi.service'] in directory '/root'
2019-04-30 22:26:30,489 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-enabled', 'open-iscsi.service'] in directory '/root'
2019-04-30 22:26:30,496 [salt.state       :300 ][INFO    ][16895] The service open-iscsi is already running
2019-04-30 22:26:30,496 [salt.state       :1951][INFO    ][16895] Completed state [open-iscsi] at time 22:26:30.496337 duration_in_ms=26.109
2019-04-30 22:26:30,496 [salt.state       :1780][INFO    ][16895] Running state [tgt] at time 22:26:30.496707
2019-04-30 22:26:30,496 [salt.state       :1813][INFO    ][16895] Executing state service.running for [tgt]
2019-04-30 22:26:30,497 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'status', 'tgt.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:30,503 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-active', 'tgt.service'] in directory '/root'
2019-04-30 22:26:30,509 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-enabled', 'tgt.service'] in directory '/root'
2019-04-30 22:26:30,516 [salt.state       :300 ][INFO    ][16895] The service tgt is already running
2019-04-30 22:26:30,517 [salt.state       :1951][INFO    ][16895] Completed state [tgt] at time 22:26:30.517050 duration_in_ms=20.342
2019-04-30 22:26:30,517 [salt.state       :1780][INFO    ][16895] Running state [iscsid] at time 22:26:30.517390
2019-04-30 22:26:30,517 [salt.state       :1813][INFO    ][16895] Executing state service.running for [iscsid]
2019-04-30 22:26:30,517 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'status', 'iscsid.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:30,527 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-active', 'iscsid.service'] in directory '/root'
2019-04-30 22:26:30,533 [salt.loaded.int.module.cmdmod:395 ][INFO    ][16895] Executing command ['systemctl', 'is-enabled', 'iscsid.service'] in directory '/root'
2019-04-30 22:26:30,541 [salt.state       :300 ][INFO    ][16895] The service iscsid is already running
2019-04-30 22:26:30,541 [salt.state       :1951][INFO    ][16895] Completed state [iscsid] at time 22:26:30.541434 duration_in_ms=24.043
2019-04-30 22:26:30,542 [salt.minion      :1711][INFO    ][16895] Returning information for job: 20190430222623724024
2019-04-30 22:26:42,044 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command pkg.install with jid 20190430222642027084
2019-04-30 22:26:42,054 [salt.minion      :1432][INFO    ][17242] Starting a new job with PID 17242
2019-04-30 22:26:42,880 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][17242] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:43,151 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][17242] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:26:43,165 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][17242] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-neutron'] in directory '/root'
2019-04-30 22:26:57,153 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222657135320
2019-04-30 22:26:57,165 [salt.minion      :1432][INFO    ][17858] Starting a new job with PID 17858
2019-04-30 22:26:57,178 [salt.minion      :1711][INFO    ][17858] Returning information for job: 20190430222657135320
2019-04-30 22:27:04,271 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][17242] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:27:04,297 [salt.minion      :1711][INFO    ][17242] Returning information for job: 20190430222642027084
2019-04-30 22:27:04,930 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command file.patch with jid 20190430222704913147
2019-04-30 22:27:04,940 [salt.minion      :1432][INFO    ][18076] Starting a new job with PID 18076
2019-04-30 22:27:04,950 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][18076] Executing command ['/usr/bin/patch', '--forward', '--reject-file=-', '-i', '/var/tmp/dhcp_agent.patch', '/usr/lib/python2.7/dist-packages/neutron/agent/dhcp/agent.py'] in directory '/root'
2019-04-30 22:27:04,977 [salt.minion      :1711][INFO    ][18076] Returning information for job: 20190430222704913147
2019-04-30 22:29:16,345 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.sls with jid 20190430222916329301
2019-04-30 22:29:16,356 [salt.minion      :1432][INFO    ][18122] Starting a new job with PID 18122
2019-04-30 22:29:20,078 [salt.state       :915 ][INFO    ][18122] Loading fresh modules for state activity
2019-04-30 22:29:20,107 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/gateway.sls'
2019-04-30 22:29:20,141 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2019-04-30 22:29:20,195 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/agents/_vpp.sls'
2019-04-30 22:29:20,255 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/_ssl/rabbitmq.sls'
2019-04-30 22:29:21,342 [salt.state       :1780][INFO    ][18122] Running state [neutron-dhcp-agent] at time 22:29:21.342137
2019-04-30 22:29:21,342 [salt.state       :1813][INFO    ][18122] Executing state pkg.installed for [neutron-dhcp-agent]
2019-04-30 22:29:21,342 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:21,609 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['apt-cache', '-q', 'policy', 'neutron-dhcp-agent'] in directory '/root'
2019-04-30 22:29:21,654 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:29:23,173 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:23,187 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-dhcp-agent'] in directory '/root'
2019-04-30 22:29:30,469 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:30,495 [salt.state       :300 ][INFO    ][18122] Made the following changes:
'haproxy' changed from 'absent' to '1.6.3-1ubuntu0.2'
'neutron-dhcp-agent' changed from 'absent' to '2:13.0.3-2~u16.04+mcp88'
'neutron-metadata-agent' changed from 'absent' to '2:13.0.3-2~u16.04+mcp88'
'dnsmasq-utils' changed from 'absent' to '2.80-1~u16.04+mcp1'
'liblua5.3-0' changed from 'absent' to '5.3.1-1ubuntu2.1'

2019-04-30 22:29:30,509 [salt.state       :915 ][INFO    ][18122] Loading fresh modules for state activity
2019-04-30 22:29:30,623 [salt.state       :1951][INFO    ][18122] Completed state [neutron-dhcp-agent] at time 22:29:30.623675 duration_in_ms=9281.539
2019-04-30 22:29:30,626 [salt.state       :1780][INFO    ][18122] Running state [neutron-metadata-agent] at time 22:29:30.626975
2019-04-30 22:29:30,627 [salt.state       :1813][INFO    ][18122] Executing state pkg.installed for [neutron-metadata-agent]
2019-04-30 22:29:31,021 [salt.state       :300 ][INFO    ][18122] All specified packages are already installed
2019-04-30 22:29:31,021 [salt.state       :1951][INFO    ][18122] Completed state [neutron-metadata-agent] at time 22:29:31.021486 duration_in_ms=394.511
2019-04-30 22:29:31,021 [salt.state       :1780][INFO    ][18122] Running state [neutron-openvswitch-agent] at time 22:29:31.021691
2019-04-30 22:29:31,021 [salt.state       :1813][INFO    ][18122] Executing state pkg.installed for [neutron-openvswitch-agent]
2019-04-30 22:29:31,035 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:31,049 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-openvswitch-agent'] in directory '/root'
2019-04-30 22:29:31,365 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222931347222
2019-04-30 22:29:31,374 [salt.minion      :1432][INFO    ][19366] Starting a new job with PID 19366
2019-04-30 22:29:31,384 [salt.minion      :1711][INFO    ][19366] Returning information for job: 20190430222931347222
2019-04-30 22:29:35,815 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:35,839 [salt.state       :300 ][INFO    ][18122] Made the following changes:
'neutron-openvswitch-agent' changed from 'absent' to '2:13.0.3-2~u16.04+mcp88'
'conntrack' changed from 'absent' to '1:1.4.3-3'

2019-04-30 22:29:35,852 [salt.state       :915 ][INFO    ][18122] Loading fresh modules for state activity
2019-04-30 22:29:35,874 [salt.state       :1951][INFO    ][18122] Completed state [neutron-openvswitch-agent] at time 22:29:35.874344 duration_in_ms=4852.653
2019-04-30 22:29:35,877 [salt.state       :1780][INFO    ][18122] Running state [neutron-l3-agent] at time 22:29:35.877550
2019-04-30 22:29:35,877 [salt.state       :1813][INFO    ][18122] Executing state pkg.installed for [neutron-l3-agent]
2019-04-30 22:29:36,296 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:36,312 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-l3-agent'] in directory '/root'
2019-04-30 22:29:45,756 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:45,787 [salt.state       :300 ][INFO    ][18122] Made the following changes:
'libsnmp-base' changed from 'absent' to '5.7.3+dfsg-1ubuntu4.2'
'keepalived' changed from 'absent' to '1:1.2.24-1ubuntu0.16.04.1'
'neutron-l3-agent' changed from 'absent' to '2:13.0.3-2~u16.04+mcp88'
'neutron-fwaas-common' changed from 'absent' to '2:13.0.1-2~u16.04+mcp21'
'ipvsadm' changed from 'absent' to '1:1.28-3'
'python-neutron-fwaas' changed from 'absent' to '2:13.0.1-2~u16.04+mcp21'
'libsnmp30' changed from 'absent' to '5.7.3+dfsg-1ubuntu4.2'
'iputils-arping' changed from 'absent' to '3:20121221-5ubuntu2'
'libsensors4' changed from 'absent' to '1:3.4.0-2'
'libnl-route-3-200' changed from 'absent' to '3.2.27-1ubuntu0.16.04.1'
'radvd' changed from 'absent' to '1:2.11-1'

2019-04-30 22:29:45,800 [salt.state       :915 ][INFO    ][18122] Loading fresh modules for state activity
2019-04-30 22:29:45,821 [salt.state       :1951][INFO    ][18122] Completed state [neutron-l3-agent] at time 22:29:45.821743 duration_in_ms=9944.192
2019-04-30 22:29:45,823 [salt.state       :1780][INFO    ][18122] Running state [neutron_gateway_ssl_rabbitmq] at time 22:29:45.823590
2019-04-30 22:29:45,823 [salt.state       :1813][INFO    ][18122] Executing state test.show_notification for [neutron_gateway_ssl_rabbitmq]
2019-04-30 22:29:45,824 [salt.state       :300 ][INFO    ][18122] Running neutron._ssl.rabbitmq
2019-04-30 22:29:45,824 [salt.state       :1951][INFO    ][18122] Completed state [neutron_gateway_ssl_rabbitmq] at time 22:29:45.824173 duration_in_ms=0.582
2019-04-30 22:29:46,204 [salt.state       :1780][INFO    ][18122] Running state [haproxy] at time 22:29:46.204664
2019-04-30 22:29:46,204 [salt.state       :1813][INFO    ][18122] Executing state service.dead for [haproxy]
2019-04-30 22:29:46,205 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'status', 'haproxy.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:46,215 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:46,221 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:46,227 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'systemctl', 'stop', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:46,237 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:46,244 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:46,251 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:46,261 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'systemctl', 'disable', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:46,521 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:46,530 [salt.state       :300 ][INFO    ][18122] {'haproxy': True}
2019-04-30 22:29:46,531 [salt.state       :1951][INFO    ][18122] Completed state [haproxy] at time 22:29:46.531138 duration_in_ms=326.474
2019-04-30 22:29:46,534 [salt.state       :1780][INFO    ][18122] Running state [/etc/neutron/neutron.conf] at time 22:29:46.534717
2019-04-30 22:29:46,535 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/neutron/neutron.conf]
2019-04-30 22:29:46,555 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/neutron-generic.conf'
2019-04-30 22:29:46,647 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_log.conf'
2019-04-30 22:29:46,664 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/messaging/_default.conf'
2019-04-30 22:29:46,688 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/service/_wsgi_default.conf'
2019-04-30 22:29:46,700 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_concurrency.conf'
2019-04-30 22:29:46,712 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/messaging/_rabbit.conf'
2019-04-30 22:29:46,734 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/messaging/_notifications.conf'
2019-04-30 22:29:46,747 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_middleware.conf'
2019-04-30 22:29:46,758 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/service/_ssl.conf'
2019-04-30 22:29:46,767 [salt.state       :300 ][INFO    ][18122] File changed:
--- 
+++ 
@@ -1,5 +1,5 @@
+
 [DEFAULT]
-core_plugin = ml2
 
 #
 # From neutron
@@ -26,12 +26,11 @@
 
 # The type of authentication to use (string value)
 #auth_strategy = keystone
-
 # The core plugin Neutron will use (string value)
-#core_plugin = <None>
+core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
 
 # The service plugins Neutron will use (list value)
-#service_plugins =
+service_plugins = router,metering
 
 # The base MAC address Neutron will use for VIFs. The first 3 octets will
 # remain unchanged. If the 4th octet is not 00, it will also be used. The
@@ -43,7 +42,7 @@
 
 # The maximum number of items returned in a single response, value was
 # 'infinite' or negative integer means no limit (string value)
-#pagination_max_limit = -1
+pagination_max_limit = -1
 
 # Default value of availability zone hints. The availability zone aware
 # schedulers use this when the resources availability_zone_hints is empty.
@@ -69,7 +68,7 @@
 
 # DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite
 # lease times. (integer value)
-#dhcp_lease_duration = 86400
+dhcp_lease_duration = 3600
 
 # Domain to use for building the hostnames (string value)
 #dns_domain = openstacklocal
@@ -83,7 +82,7 @@
 # Allow overlapping IP support in Neutron. Attention: the following parameter
 # MUST be set to False if Neutron is being used in conjunction with Nova
 # security groups. (boolean value)
-#allow_overlapping_ips = false
+allow_overlapping_ips = true
 
 # Hostname to be used by the Neutron server, agents and services running on
 # this machine. All the agents and services running on this machine must use
@@ -96,11 +95,11 @@
 #network_link_prefix = <None>
 
 # Send notification to nova when port status changes (boolean value)
-#notify_nova_on_port_status_changes = true
+notify_nova_on_port_status_changes = true
 
 # Send notification to nova when port data (fixed_ips/floatingip) changes so
 # nova can update its cache. (boolean value)
-#notify_nova_on_port_data_changes = true
+notify_nova_on_port_data_changes = true
 
 # Number of seconds between sending events to nova if there are any events to
 # send. (integer value)
@@ -125,7 +124,7 @@
 # neutron automatically subtracts the overlay protocol overhead from this
 # value. Defaults to 1500, the standard value for Ethernet. (integer value)
 # Deprecated group/name - [ml2]/segment_mtu
-#global_physnet_mtu = 1500
+global_physnet_mtu = 1500
 
 # Number of backlog requests to configure the socket with (integer value)
 #backlog = 4096
@@ -145,11 +144,11 @@
 #api_workers = <None>
 
 # Number of RPC worker processes for service. (integer value)
-#rpc_workers = 1
+rpc_workers = 16
 
 # Number of RPC worker processes dedicated to state reports queue. (integer
 # value)
-#rpc_state_report_workers = 1
+rpc_state_report_workers = 4
 
 # Range of seconds to randomly delay when starting the periodic task scheduler
 # to reduce stampeding. (Disable by setting to 0) (integer value)
@@ -176,10 +175,6 @@
 #
 # From neutron.db
 #
-
-# Seconds to regard the agent is down; should be at least twice
-# report_interval, to be sure the agent is down for good. (integer value)
-#agent_down_time = 75
 
 # Representing the resource type whose load is being reported by the agent.
 # This can be "networks", "subnets" or "ports". When specified (Default is
@@ -222,7 +217,7 @@
 # greater than 1, the scheduler automatically assigns multiple DHCP agents for
 # a given tenant network, providing high availability for DHCP service.
 # (integer value)
-#dhcp_agents_per_network = 1
+dhcp_agents_per_network = 2
 
 # Enable services on an agent with admin_state_up False. If this option is
 # False, when admin_state_up of an agent is turned False, services on it will
@@ -241,28 +236,28 @@
 
 # System-wide flag to determine the type of router that tenants can create.
 # Only admin can override. (boolean value)
-#router_distributed = false
+router_distributed = False
 
 # Determine if setup is configured for DVR. If False, DVR API extension will be
 # disabled. (boolean value)
-#enable_dvr = true
+enable_dvr = False
 
 # Driver to use for scheduling router to a default L3 agent (string value)
-#router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
+router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
 
 # Allow auto scheduling of routers to L3 agent. (boolean value)
 #router_auto_schedule = true
 
 # Automatically reschedule routers from offline L3 agents to online L3 agents.
 # (boolean value)
-#allow_automatic_l3agent_failover = false
+allow_automatic_l3agent_failover = true
 
 # Enable HA mode for virtual routers. (boolean value)
-#l3_ha = false
+l3_ha = false
 
 # Maximum number of L3 agents which a HA router will be scheduled on. If it is
 # set to 0 then the router will be scheduled on every agent. (integer value)
-#max_l3_agents_per_router = 3
+max_l3_agents_per_router = 0
 
 # Subnet used for the l3 HA admin network. (string value)
 #l3_ha_net_cidr = 169.254.192.0/18
@@ -283,7 +278,6 @@
 
 # Maximum number of allowed address pairs (integer value)
 #max_allowed_address_pair = 10
-
 #
 # From oslo.log
 #
@@ -447,6 +441,7 @@
 # exception when timeout expired. (integer value)
 #rpc_poll_timeout = 1
 
+
 # Expiration timeout in seconds of a name service record about existing target
 # ( < 0 means no timeout). (integer value)
 #zmq_target_expire = 300
@@ -574,6 +569,7 @@
 # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
 # (string value)
 #transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -584,8 +580,7 @@
 
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
-#control_exchange = neutron
-
+#control_exchange = keystone
 #
 # From oslo.service.wsgi
 #
@@ -621,7 +616,6 @@
 
 
 [agent]
-root_helper = "sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf"
 
 #
 # From neutron.agent
@@ -631,7 +625,7 @@
 # /etc/neutron/rootwrap.conf' to use the real root filter facility. Change to
 # 'sudo' to skip the filtering and just run the command directly. (string
 # value)
-#root_helper = sudo
+root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
 
 # Use the root helper when listing the namespaces on a system. This may not be
 # required depending on the security configuration. If the root helper is not
@@ -656,7 +650,7 @@
 # Seconds between nodes reporting state to server; should be less than
 # agent_down_time, best if it is half or less than agent_down_time. (floating
 # point value)
-#report_interval = 30
+report_interval = 120
 
 # Log agent heartbeats (boolean value)
 #log_agent_heartbeats = false
@@ -689,509 +683,11 @@
 
 [cors]
 
-#
-# From oslo.middleware.cors
-#
-
-# Indicate whether this resource may be shared with the domain received in the
-# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
-# slash. Example: https://horizon.example.com (list value)
-#allowed_origin = <None>
-
-# Indicate that the actual request can include user credentials (boolean value)
-#allow_credentials = true
-
-# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
-# Headers. (list value)
-#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-Volume-microversion
-
-# Maximum cache age of CORS preflight requests. (integer value)
-#max_age = 3600
-
-# Indicate which methods can be used during the actual request. (list value)
-#allow_methods = GET,PUT,POST,DELETE,PATCH
-
-# Indicate which header field names may be used during the actual request.
-# (list value)
-#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
-
-
-[database]
-connection = sqlite:////var/lib/neutron/neutron.sqlite
-
-#
-# From neutron.db
-#
-
-# Database engine for which script will be generated when using offline
-# migration. (string value)
-#engine =
-
-#
-# From oslo.db
-#
-
-# If True, SQLite uses synchronous mode. (boolean value)
-#sqlite_synchronous = true
-
-# The back end to use for the database. (string value)
-# Deprecated group/name - [DEFAULT]/db_backend
-#backend = sqlalchemy
-
-# The SQLAlchemy connection string to use to connect to the database. (string
-# value)
-# Deprecated group/name - [DEFAULT]/sql_connection
-# Deprecated group/name - [DATABASE]/sql_connection
-# Deprecated group/name - [sql]/connection
-#connection = <None>
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
-#slave_connection = <None>
-
-# The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set
-# by the server configuration, set this to no value. Example: mysql_sql_mode=
-# (string value)
-#mysql_sql_mode = TRADITIONAL
-
-# If True, transparently enables support for handling MySQL Cluster (NDB).
-# (boolean value)
-#mysql_enable_ndb = false
-
-# Connections which have been present in the connection pool longer than this
-# number of seconds will be replaced with a new one the next time they are
-# checked out from the pool. (integer value)
-# Deprecated group/name - [DATABASE]/idle_timeout
-# Deprecated group/name - [database]/idle_timeout
-# Deprecated group/name - [DEFAULT]/sql_idle_timeout
-# Deprecated group/name - [DATABASE]/sql_idle_timeout
-# Deprecated group/name - [sql]/idle_timeout
-#connection_recycle_time = 3600
-
-# DEPRECATED: Minimum number of SQL connections to keep open in a pool.
-# (integer value)
-# Deprecated group/name - [DEFAULT]/sql_min_pool_size
-# Deprecated group/name - [DATABASE]/sql_min_pool_size
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: The option to set the minimum pool size is not supported by
-# sqlalchemy.
-#min_pool_size = 1
-
-# Maximum number of SQL connections to keep open in a pool. Setting a value of
-# 0 indicates no limit. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_pool_size
-# Deprecated group/name - [DATABASE]/sql_max_pool_size
-#max_pool_size = 5
-
-# Maximum number of database connection retries during startup. Set to -1 to
-# specify an infinite retry count. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_retries
-# Deprecated group/name - [DATABASE]/sql_max_retries
-#max_retries = 10
-
-# Interval between retries of opening a SQL connection. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_retry_interval
-# Deprecated group/name - [DATABASE]/reconnect_interval
-#retry_interval = 10
-
-# If set, use this value for max_overflow with SQLAlchemy. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_overflow
-# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
-#max_overflow = 50
-
-# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
-# value)
-# Minimum value: 0
-# Maximum value: 100
-# Deprecated group/name - [DEFAULT]/sql_connection_debug
-#connection_debug = 0
-
-# Add Python stack traces to SQL as comment strings. (boolean value)
-# Deprecated group/name - [DEFAULT]/sql_connection_trace
-#connection_trace = false
-
-# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
-# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
-#pool_timeout = <None>
-
-# Enable the experimental use of database reconnect on connection lost.
-# (boolean value)
-#use_db_reconnect = false
-
-# Seconds between retries of a database transaction. (integer value)
-#db_retry_interval = 1
-
-# If True, increases the interval between retries of a database operation up to
-# db_max_retry_interval. (boolean value)
-#db_inc_retry_interval = true
-
-# If db_inc_retry_interval is set, the maximum seconds between retries of a
-# database operation. (integer value)
-#db_max_retry_interval = 10
-
-# Maximum retries in case of connection error or deadlock error before error is
-# raised. Set to -1 to specify an infinite retry count. (integer value)
-#db_max_retries = 20
-
-# Optional URL parameters to append onto the connection URL at connect time;
-# specify as param1=value1&param2=value2&... (string value)
-#connection_parameters =
-
 
 [keystone_authtoken]
 
-#
-# From keystonemiddleware.auth_token
-#
-
-# Complete "public" Identity API endpoint. This endpoint should not be an
-# "admin" endpoint, as it should be accessible by all end users.
-# Unauthenticated clients are redirected to this endpoint to authenticate.
-# Although this endpoint should ideally be unversioned, client support in the
-# wild varies. If you're using a versioned v2 endpoint here, then this should
-# *not* be the same endpoint the service user utilizes for validating tokens,
-# because normal end users may not be able to reach that endpoint. (string
-# value)
-# Deprecated group/name - [keystone_authtoken]/auth_uri
-#www_authenticate_uri = <None>
-
-# DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
-# be an "admin" endpoint, as it should be accessible by all end users.
-# Unauthenticated clients are redirected to this endpoint to authenticate.
-# Although this endpoint should ideally be unversioned, client support in the
-# wild varies. If you're using a versioned v2 endpoint here, then this should
-# *not* be the same endpoint the service user utilizes for validating tokens,
-# because normal end users may not be able to reach that endpoint. This option
-# is deprecated in favor of www_authenticate_uri and will be removed in the S
-# release. (string value)
-# This option is deprecated for removal since Queens.
-# Its value may be silently ignored in the future.
-# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
-# and will be removed in the S  release.
-#auth_uri = <None>
-
-# API version of the admin Identity API endpoint. (string value)
-#auth_version = <None>
-
-# Do not handle authorization requests within the middleware, but delegate the
-# authorization decision to downstream WSGI components. (boolean value)
-#delay_auth_decision = false
-
-# Request timeout value for communicating with Identity API server. (integer
-# value)
-#http_connect_timeout = <None>
-
-# How many times are we trying to reconnect when communicating with Identity
-# API Server. (integer value)
-#http_request_max_retries = 3
-
-# Request environment key where the Swift cache object is stored. When
-# auth_token middleware is deployed with a Swift cache, use this option to have
-# the middleware share a caching backend with swift. Otherwise, use the
-# ``memcached_servers`` option instead. (string value)
-#cache = <None>
-
-# Required if identity server requires client certificate (string value)
-#certfile = <None>
-
-# Required if identity server requires client certificate (string value)
-#keyfile = <None>
-
-# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# Defaults to system CAs. (string value)
-#cafile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# The region in which the identity server can be found. (string value)
-#region_name = <None>
-
-# DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P
-# release. (string value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#signing_dir = <None>
-
-# Optionally specify a list of memcached server(s) to use for caching. If left
-# undefined, tokens will instead be cached in-process. (list value)
-# Deprecated group/name - [keystone_authtoken]/memcache_servers
-#memcached_servers = <None>
-
-# In order to prevent excessive effort spent validating tokens, the middleware
-# caches previously-seen tokens for a configurable duration (in seconds). Set
-# to -1 to disable caching completely. (integer value)
-#token_cache_time = 300
-
-# DEPRECATED: Determines the frequency at which the list of revoked tokens is
-# retrieved from the Identity service (in seconds). A high number of revocation
-# events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in
-# the Ocata release and will be removed in the P release. (integer value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#revocation_cache_time = 10
-
-# (Optional) If defined, indicate whether token data should be authenticated or
-# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
-# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
-# cache. If the value is not one of these options or empty, auth_token will
-# raise an exception on initialization. (string value)
-# Possible values:
-# None - <No description provided>
-# MAC - <No description provided>
-# ENCRYPT - <No description provided>
-#memcache_security_strategy = None
-
-# (Optional, mandatory if memcache_security_strategy is defined) This string is
-# used for key derivation. (string value)
-#memcache_secret_key = <None>
-
-# (Optional) Number of seconds memcached server is considered dead before it is
-# tried again. (integer value)
-#memcache_pool_dead_retry = 300
-
-# (Optional) Maximum total number of open connections to every memcached
-# server. (integer value)
-#memcache_pool_maxsize = 10
-
-# (Optional) Socket timeout in seconds for communicating with a memcached
-# server. (integer value)
-#memcache_pool_socket_timeout = 3
-
-# (Optional) Number of seconds a connection to memcached is held unused in the
-# pool before it is closed. (integer value)
-#memcache_pool_unused_timeout = 60
-
-# (Optional) Number of seconds that an operation will wait to get a memcached
-# client connection from the pool. (integer value)
-#memcache_pool_conn_get_timeout = 10
-
-# (Optional) Use the advanced (eventlet safe) memcached client pool. The
-# advanced pool will only work under python 2.x. (boolean value)
-#memcache_use_advanced_pool = false
-
-# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
-# middleware will not ask for service catalog on token validation and will not
-# set the X-Service-Catalog header. (boolean value)
-#include_service_catalog = true
-
-# Used to control the use and type of token binding. Can be set to: "disabled"
-# to not check token binding. "permissive" (default) to validate binding
-# information if the bind type is of a form known to the server and ignore it
-# if not. "strict" like "permissive" but if the bind type is unknown the token
-# will be rejected. "required" any form of token binding is needed to be
-# allowed. Finally the name of a binding method that must be present in tokens.
-# (string value)
-#enforce_token_bind = permissive
-
-# DEPRECATED: If true, the revocation list will be checked for cached tokens.
-# This requires that PKI tokens are configured on the identity server. (boolean
-# value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#check_revocations_for_cached = false
-
-# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
-# single algorithm or multiple. The algorithms are those supported by Python
-# standard hashlib.new(). The hashes will be tried in the order given, so put
-# the preferred one first for performance. The result of the first hash will be
-# stored in the cache. This will typically be set to multiple values only while
-# migrating from a less secure algorithm to a more secure one. Once all the old
-# tokens are expired this option should be set to a single value for better
-# performance. (list value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#hash_algorithms = md5
-
-# A choice of roles that must be present in a service token. Service tokens are
-# allowed to request that an expired token can be used and so this check should
-# tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present.
-# For backwards compatibility reasons this currently only affects the
-# allow_expired check. (list value)
-#service_token_roles = service
-
-# For backwards compatibility reasons we must let valid service tokens pass
-# that don't pass the service_token_roles check as valid. Setting this true
-# will become the default in a future release and should be enabled if
-# possible. (boolean value)
-#service_token_roles_required = false
-
-# Authentication type to load (string value)
-# Deprecated group/name - [keystone_authtoken]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
-
-
-[matchmaker_redis]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Host to locate redis. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
-
-# DEPRECATED: Use this port to connect to redis host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
-
-# DEPRECATED: Password for Redis server (optional). (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#password =
-
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
-# [host:port, host1:port ... ] (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#sentinel_hosts =
-
-# Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
-
-# Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
-
-# Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
-
-# Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
-
-
-[nova]
-
-#
-# From neutron
-#
-
-# Name of nova region to use. Useful if keystone manages more than one region.
-# (string value)
-#region_name = <None>
-
-# Type of the nova endpoint to use.  This endpoint will be looked up in the
-# keystone catalog and should be one of public, internal or admin. (string
-# value)
-# Possible values:
-# public - <No description provided>
-# admin - <No description provided>
-# internal - <No description provided>
-#endpoint_type = public
-
-#
-# From nova.auth
-#
-
-# Authentication URL (string value)
-#auth_url = <None>
-
-# Authentication type to load (string value)
-# Deprecated group/name - [nova]/auth_plugin
-#auth_type = <None>
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# Collect per-API call timing information. (boolean value)
-#collect_timing = false
-
-# Optional domain ID to use with v3 and v2 parameters. It will be used for both
-# the user and project domain in v3 and ignored in v2 authentication. (string
-# value)
-#default_domain_id = <None>
-
-# Optional domain name to use with v3 API and v2 parameters. It will be used
-# for both the user and project domain in v3 and ignored in v2 authentication.
-# (string value)
-#default_domain_name = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# User's password (string value)
-#password = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Project ID to scope to (string value)
-# Deprecated group/name - [nova]/tenant_id
-#project_id = <None>
-
-# Project name to scope to (string value)
-# Deprecated group/name - [nova]/tenant_name
-#project_name = <None>
-
-# Log requests to multiple loggers. (boolean value)
-#split_loggers = false
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Tenant ID (string value)
-#tenant_id = <None>
-
-# Tenant Name (string value)
-#tenant_name = <None>
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User id (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [nova]/user_name
-#username = <None>
-
 
 [oslo_concurrency]
-
 #
 # From oslo.concurrency
 #
@@ -1199,326 +695,15 @@
 # Enables or disables inter-process locks. (boolean value)
 #disable_process_locking = false
 
-# Directory to use for lock files.  For security, the specified directory
-# should only be writable by the user running the processes that need locking.
-# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
-# a lock path must be set. (string value)
-#lock_path = <None>
-
-
-[oslo_messaging_amqp]
-
-#
-# From oslo.messaging
-#
-
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string
-# value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string
-# value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional)
-# (string value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-
-# robin fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating
-# point value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-# Enable asynchronous consumer commits (boolean value)
-#enable_auto_commit = false
-
-# The maximum number of records returned in a poll call (integer value)
-#max_poll_records = 500
-
-# Protocol used to communicate with brokers (string value)
-# Possible values:
-# PLAINTEXT - <No description provided>
-# SASL_PLAINTEXT - <No description provided>
-# SSL - <No description provided>
-# SASL_SSL - <No description provided>
-#security_protocol = PLAINTEXT
-
-# Mechanism when security protocol is SASL (string value)
-#sasl_mechanism = PLAIN
-
-# CA certificate PEM file used to verify the server certificate (string value)
-#ssl_cafile =
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are
-# messaging, messagingv2, routing, log, test, noop (multi valued)
-# Deprecated group/name - [DEFAULT]/notification_driver
-#driver =
-
-# A URL representing the messaging driver to use for notifications. If not set,
-# we fall back to the same configuration used for RPC. (string value)
-# Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
-
-# AMQP topic used for OpenStack notifications. (list value)
-# Deprecated group/name - [rpc_notifier2]/topics
-# Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
-
+# Directory to use for lock files.  For security, the specified
+# directory should only be writable by the user running the processes
+# that need locking. Defaults to environment variable OSLO_LOCK_PATH.
+# If OSLO_LOCK_PATH is not set in the environment, use the Python
+# tempfile.gettempdir function to find a suitable location. If
+# external locks are used, a lock path must be set. (string value)
+#lock_path = /tmp
+lock_path = /var/lock/neutron
 [oslo_messaging_rabbit]
-
 #
 # From oslo.messaging
 #
@@ -1534,24 +719,6 @@
 # Connect over SSL. (boolean value)
 # Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl
 #ssl = false
-
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
 
 # How long to wait before reconnecting in response to an AMQP consumer cancel
 # notification. (floating point value)
@@ -1666,213 +833,59 @@
 #heartbeat_rate = 2
 
 
-[oslo_messaging_zmq]
-
+
+[oslo_messaging_notifications]
 #
 # From oslo.messaging
 #
 
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
+# The Drivers(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+driver = messagingv2
+
+# A URL representing the messaging driver to use for notifications. If not set,
+# we fall back to the same configuration used for RPC. (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+
+# The maximum number of attempts to re-send a notification message which failed
+# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
+# (integer value)
+#retry = -1
 
 
 [oslo_middleware]
-
-#
-# From oslo.middleware.http_proxy_to_wsgi
-#
+#
+# From oslo.middleware
+#
+
+# The maximum body size for each  request, in bytes. (integer value)
+# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
+# Deprecated group/name - [DEFAULT]/max_request_body_size
+#max_request_body_size = 114688
+
+# DEPRECATED: The HTTP Header that will be used to determine what the original
+# request protocol scheme was, even if it was hidden by a SSL termination
+# proxy. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#secure_proxy_ssl_header = X-Forwarded-Proto
 
 # Whether the application is behind a proxy or not. This determines if the
 # middleware should parse the headers or not. (boolean value)
 #enable_proxy_headers_parsing = false
+enable_proxy_headers_parsing = True
+
 
 
 [oslo_policy]
-
-#
-# From oslo.policy
-#
-
-# This option controls whether or not to enforce scope when evaluating
-# policies. If ``True``, the scope of the token used in the request is compared
-# to the ``scope_types`` of the policy being enforced. If the scopes do not
-# match, an ``InvalidScope`` exception will be raised. If ``False``, a message
-# will be logged informing operators that policies are being invoked with
-# mismatching scope. (boolean value)
-#enforce_scope = false
-
-# The file that defines policies. (string value)
-#policy_file = policy.json
-
-# Default rule. Enforced when a requested rule is not found. (string value)
-#policy_default_rule = default
-
-# Directories where policy configuration files are stored. They can be relative
-# to any directory in the search path defined by the config_dir option, or
-# absolute paths. The file defined by policy_file must exist for these
-# directories to be searched.  Missing or empty directories are ignored. (multi
-# valued)
-#policy_dirs = policy.d
-
-# Content Type to send and receive data for REST based policy check (string
-# value)
-# Possible values:
-# application/x-www-form-urlencoded - <No description provided>
-# application/json - <No description provided>
-#remote_content_type = application/x-www-form-urlencoded
-
-# server identity verification for REST based policy check (boolean value)
-#remote_ssl_verify_server_crt = false
-
-# Absolute path to ca cert file for REST based policy check (string value)
-#remote_ssl_ca_crt_file = <None>
-
-# Absolute path to client cert for REST based policy check (string value)
-#remote_ssl_client_crt_file = <None>
-
-# Absolute path client key file REST based policy check (string value)
-#remote_ssl_client_key_file = <None>
 
 
 [quotas]
@@ -1927,7 +940,6 @@
 
 
 [ssl]
-
 #
 # From oslo.service.sslutils
 #
@@ -1952,3 +964,6 @@
 # Sets the list of available ciphers. value should be a string in the OpenSSL
 # cipher list format. (string value)
 #ciphers = <None>
+
+
+[ovs]

2019-04-30 22:29:46,767 [salt.state       :1951][INFO    ][18122] Completed state [/etc/neutron/neutron.conf] at time 22:29:46.767894 duration_in_ms=233.176
2019-04-30 22:29:46,768 [salt.state       :1780][INFO    ][18122] Running state [/etc/neutron/l3_agent.ini] at time 22:29:46.768176
2019-04-30 22:29:46,768 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/neutron/l3_agent.ini]
2019-04-30 22:29:46,781 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/l3_agent.ini'
2019-04-30 22:29:46,848 [salt.state       :300 ][INFO    ][18122] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -13,7 +14,7 @@
 #ovs_use_veth = false
 
 # The driver used to manage the virtual interface. (string value)
-#interface_driver = <None>
+interface_driver = openvswitch
 
 #
 # From neutron.l3.agent
@@ -37,12 +38,12 @@
 # dvr_snat - <No description provided>
 # legacy - <No description provided>
 # dvr_no_external - <No description provided>
-#agent_mode = legacy
+agent_mode = legacy
 
 # TCP Port used by Neutron metadata namespace proxy. (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#metadata_port = 9697
+metadata_port = 8775
 
 # Indicates that this L3 agent should also handle routers that do not have an
 # external network gateway configured. This option should be True only for a
@@ -162,7 +163,6 @@
 
 # MaxRtrAdvInterval setting for radvd.conf (integer value)
 #max_rtr_adv_interval = 100
-
 #
 # From oslo.log
 #
@@ -277,6 +277,7 @@
 #fatal_deprecations = false
 
 
+
 [agent]
 
 #
@@ -314,8 +315,8 @@
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native

2019-04-30 22:29:46,848 [salt.state       :1951][INFO    ][18122] Completed state [/etc/neutron/l3_agent.ini] at time 22:29:46.848151 duration_in_ms=79.974
2019-04-30 22:29:46,848 [salt.state       :1780][INFO    ][18122] Running state [/etc/neutron/plugins/ml2/openvswitch_agent.ini] at time 22:29:46.848399
2019-04-30 22:29:46,848 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/neutron/plugins/ml2/openvswitch_agent.ini]
2019-04-30 22:29:46,861 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/openvswitch_agent.ini'
2019-04-30 22:29:46,938 [salt.state       :300 ][INFO    ][18122] File changed:
--- 
+++ 
@@ -1,5 +1,5 @@
+
 [DEFAULT]
-
 #
 # From oslo.log
 #
@@ -114,6 +114,7 @@
 #fatal_deprecations = false
 
 
+
 [agent]
 
 #
@@ -126,38 +127,37 @@
 # The number of seconds to wait before respawning the ovsdb monitor after
 # losing communication with it. (integer value)
 #ovsdb_monitor_respawn_interval = 30
-
 # Network types supported by the agent (gre, vxlan and/or geneve). (list value)
-#tunnel_types =
+tunnel_types = vxlan
 
 # The UDP port to use for VXLAN tunnels. (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#vxlan_udp_port = 4789
+vxlan_udp_port = 4789
 
 # MTU size of veth interfaces (integer value)
 #veth_mtu = 9000
 
 # Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve
 # tunnel scalability. (boolean value)
-#l2_population = false
+l2_population = true
 
 # Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2
 # l2population driver. Allows the switch (when supporting an overlay) to
 # respond to an ARP request locally without performing a costly ARP broadcast
 # into the overlay. (boolean value)
-#arp_responder = false
+arp_responder = true
 
 # Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying
 # GRE/VXLAN tunnel. (boolean value)
 #dont_fragment = true
 
 # Make the l2 agent run in DVR mode. (boolean value)
-#enable_distributed_routing = false
+enable_distributed_routing = False
 
 # Reset flow table on start. Setting this to True will cause brief traffic
 # interruption. (boolean value)
-#drop_flows_on_start = false
+drop_flows_on_start = false
 
 # Set or un-set the tunnel header checksum  on outgoing IP packet carrying
 # GRE/VXLAN tunnel. (boolean value)
@@ -169,7 +169,7 @@
 #agent_type = Open vSwitch agent
 
 # Extensions list to use (list value)
-#extensions =
+extensions = 
 
 
 [network_log]
@@ -218,6 +218,7 @@
 # in the ML2 plug-in configuration file on the neutron server node(s). (IP
 # address value)
 #local_ip = <None>
+local_ip = 10.1.0.5
 
 # Comma-separated list of <physical_network>:<bridge> tuples mapping physical
 # network names to the agent's node-specific Open vSwitch bridge names to be
@@ -227,7 +228,8 @@
 # have mappings to appropriate bridges on each agent. Note: If you remove a
 # bridge from this mapping, make sure to disconnect it from the integration
 # bridge as it won't be managed by the agent anymore. (list value)
-#bridge_mappings =
+
+bridge_mappings = physnet1:br-floating
 
 # Use veths instead of patch ports to interconnect the integration bridge to
 # physical networks. Support kernel without Open vSwitch patch port support so
@@ -265,16 +267,16 @@
 
 # Timeout in seconds to wait for the local switch connecting the controller.
 # Used only for 'native' driver. (integer value)
-#of_connect_timeout = 300
+#of_connect_timeout = 30
 
 # Timeout in seconds to wait for a single OpenFlow request. Used only for
 # 'native' driver. (integer value)
-#of_request_timeout = 300
+#of_request_timeout = 10
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native
@@ -308,12 +310,12 @@
 #
 
 # Driver for security groups firewall in the L2 agent (string value)
-#firewall_driver = <None>
+firewall_driver = openvswitch
 
 # Controls whether the neutron security group API is enabled in the server. It
 # should be false when using no security groups or using the nova security
 # group API. (boolean value)
-#enable_security_group = true
+enable_security_group = True
 
 # Use ipset to speed-up the iptables based security groups. Enabling ipset
 # support requires that ipset is installed on L2 agent node. (boolean value)

2019-04-30 22:29:46,938 [salt.state       :1951][INFO    ][18122] Completed state [/etc/neutron/plugins/ml2/openvswitch_agent.ini] at time 22:29:46.938161 duration_in_ms=89.762
2019-04-30 22:29:46,938 [salt.state       :1780][INFO    ][18122] Running state [/etc/neutron/dhcp_agent.ini] at time 22:29:46.938406
2019-04-30 22:29:46,938 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/neutron/dhcp_agent.ini]
2019-04-30 22:29:46,951 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/dhcp_agent.ini'
2019-04-30 22:29:47,023 [salt.state       :300 ][INFO    ][18122] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -13,7 +14,7 @@
 #ovs_use_veth = false
 
 # The driver used to manage the virtual interface. (string value)
-#interface_driver = <None>
+interface_driver = openvswitch
 
 #
 # From neutron.dhcp.agent
@@ -22,7 +23,7 @@
 # The DHCP agent will resync its state with Neutron to recover from any
 # transient notification or RPC errors. The interval is number of seconds
 # between attempts. (integer value)
-#resync_interval = 5
+resync_interval = 30
 
 # The driver used to manage the DHCP server. (string value)
 #dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
@@ -34,7 +35,7 @@
 # instance must be configured to request host routes via DHCP (Option 121).
 # This option doesn't have any effect when force_metadata is set to True.
 # (boolean value)
-#enable_isolated_metadata = false
+enable_isolated_metadata = true
 
 # In some cases the Neutron router is not present to provide the metadata IP
 # but the DHCP server can be used to provide this info. Setting this value will
@@ -49,7 +50,7 @@
 # DHCP Option 121 will not be injected in VMs, as they will be able to reach
 # 169.254.169.254 through a router. This option requires
 # enable_isolated_metadata = True. (boolean value)
-#enable_metadata_network = false
+enable_metadata_network = false
 
 # Number of threads to use during sync process. Should not exceed connection
 # pool size configured on server. (integer value)
@@ -90,7 +91,6 @@
 # DHCP rebinding time T2 (in seconds). If set to 0, it will default to 7/8 of
 # the lease time. (integer value)
 #dhcp_rebinding_time = 0
-
 #
 # From oslo.log
 #
@@ -205,6 +205,7 @@
 #fatal_deprecations = false
 
 
+
 [agent]
 
 #
@@ -235,8 +236,8 @@
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native

2019-04-30 22:29:47,024 [salt.state       :1951][INFO    ][18122] Completed state [/etc/neutron/dhcp_agent.ini] at time 22:29:47.024085 duration_in_ms=85.679
2019-04-30 22:29:47,024 [salt.state       :1780][INFO    ][18122] Running state [/etc/neutron/metadata_agent.ini] at time 22:29:47.024326
2019-04-30 22:29:47,024 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/neutron/metadata_agent.ini]
2019-04-30 22:29:47,037 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/metadata_agent.ini'
2019-04-30 22:29:47,103 [salt.state       :300 ][INFO    ][18122] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -19,7 +20,7 @@
 #auth_ca_cert = <None>
 
 # IP address or DNS name of Nova metadata server. (host address value)
-#nova_metadata_host = 127.0.0.1
+nova_metadata_host = 10.167.4.35
 
 # TCP Port used by Nova metadata server. (port value)
 # Minimum value: 0
@@ -31,13 +32,13 @@
 # but it must match here and in the configuration used by the Nova Metadata
 # Server. NOTE: Nova uses the same config key, but in [neutron] section.
 # (string value)
-#metadata_proxy_shared_secret =
+metadata_proxy_shared_secret = opnfv_secret
 
 # Protocol to access nova metadata, http or https (string value)
 # Possible values:
 # http - <No description provided>
 # https - <No description provided>
-#nova_metadata_protocol = http
+nova_metadata_protocol = http
 
 # Allow to perform insecure SSL (https) requests to nova metadata (boolean
 # value)
@@ -69,7 +70,6 @@
 # Number of backlog requests to configure the metadata server socket with
 # (integer value)
 #metadata_backlog = 4096
-
 #
 # From oslo.log
 #
@@ -184,6 +184,7 @@
 #fatal_deprecations = false
 
 
+
 [agent]
 
 #
@@ -200,82 +201,3 @@
 
 
 [cache]
-
-#
-# From oslo.cache
-#
-
-# Prefix for building the configuration dictionary for the cache region. This
-# should not need to be changed unless there is another dogpile.cache region
-# with the same configuration name. (string value)
-#config_prefix = cache.oslo
-
-# Default TTL, in seconds, for any cached item in the dogpile.cache region.
-# This applies to any cached method that doesn't have an explicit cache
-# expiration time defined for it. (integer value)
-#expiration_time = 600
-
-# Cache backend module. For eventlet-based or environments with hundreds of
-# threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is
-# recommended. For environments with less than 100 threaded servers, Memcached
-# (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test
-# environments with a single instance of the server can use the
-# dogpile.cache.memory backend. (string value)
-# Possible values:
-# oslo_cache.memcache_pool - <No description provided>
-# oslo_cache.dict - <No description provided>
-# oslo_cache.mongo - <No description provided>
-# oslo_cache.etcd3gw - <No description provided>
-# dogpile.cache.memcached - <No description provided>
-# dogpile.cache.pylibmc - <No description provided>
-# dogpile.cache.bmemcached - <No description provided>
-# dogpile.cache.dbm - <No description provided>
-# dogpile.cache.redis - <No description provided>
-# dogpile.cache.memory - <No description provided>
-# dogpile.cache.memory_pickle - <No description provided>
-# dogpile.cache.null - <No description provided>
-#backend = dogpile.cache.null
-
-# Arguments supplied to the backend module. Specify this option once per
-# argument to be passed to the dogpile.cache backend. Example format:
-# "<argname>:<value>". (multi valued)
-#backend_argument =
-
-# Proxy classes to import that will affect the way the dogpile.cache backend
-# functions. See the dogpile.cache documentation on changing-backend-behavior.
-# (list value)
-#proxies =
-
-# Global toggle for caching. (boolean value)
-#enabled = false
-
-# Extra debugging from the cache backend (cache keys, get/set/delete/etc
-# calls). This is only really useful if you need to see the specific cache-
-# backend get/set/delete calls with the keys/values.  Typically this should be
-# left set to false. (boolean value)
-#debug_cache_backend = false
-
-# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
-# oslo_cache.memcache_pool backends only). (list value)
-#memcache_servers = localhost:11211
-
-# Number of seconds memcached server is considered dead before it is tried
-# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
-# (integer value)
-#memcache_dead_retry = 300
-
-# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
-# oslo_cache.memcache_pool backends only). (floating point value)
-#memcache_socket_timeout = 3.0
-
-# Max total number of open connections to every memcached server.
-# (oslo_cache.memcache_pool backend only). (integer value)
-#memcache_pool_maxsize = 10
-
-# Number of seconds a connection to memcached is held unused in the pool before
-# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
-#memcache_pool_unused_timeout = 60
-
-# Number of seconds that an operation will wait to get a memcache client
-# connection. (integer value)
-#memcache_pool_connection_get_timeout = 10

2019-04-30 22:29:47,103 [salt.state       :1951][INFO    ][18122] Completed state [/etc/neutron/metadata_agent.ini] at time 22:29:47.103713 duration_in_ms=79.386
2019-04-30 22:29:47,104 [salt.state       :1780][INFO    ][18122] Running state [/etc/default/neutron-metadata-agent] at time 22:29:47.104001
2019-04-30 22:29:47,104 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/default/neutron-metadata-agent]
2019-04-30 22:29:47,117 [salt.fileclient  :1219][INFO    ][18122] Fetching file from saltenv 'base', ** done ** 'neutron/files/default'
2019-04-30 22:29:47,121 [salt.state       :300 ][INFO    ][18122] File changed:
New file
2019-04-30 22:29:47,121 [salt.state       :1951][INFO    ][18122] Completed state [/etc/default/neutron-metadata-agent] at time 22:29:47.121487 duration_in_ms=17.487
2019-04-30 22:29:47,121 [salt.state       :1780][INFO    ][18122] Running state [/etc/default/neutron-dhcp-agent] at time 22:29:47.121774
2019-04-30 22:29:47,121 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/default/neutron-dhcp-agent]
2019-04-30 22:29:47,134 [salt.state       :300 ][INFO    ][18122] File changed:
New file
2019-04-30 22:29:47,134 [salt.state       :1951][INFO    ][18122] Completed state [/etc/default/neutron-dhcp-agent] at time 22:29:47.134259 duration_in_ms=12.485
2019-04-30 22:29:47,134 [salt.state       :1780][INFO    ][18122] Running state [/etc/default/neutron-openvswitch-agent] at time 22:29:47.134543
2019-04-30 22:29:47,134 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/default/neutron-openvswitch-agent]
2019-04-30 22:29:47,146 [salt.state       :300 ][INFO    ][18122] File changed:
New file
2019-04-30 22:29:47,146 [salt.state       :1951][INFO    ][18122] Completed state [/etc/default/neutron-openvswitch-agent] at time 22:29:47.146505 duration_in_ms=11.961
2019-04-30 22:29:47,146 [salt.state       :1780][INFO    ][18122] Running state [/etc/default/neutron-l3-agent] at time 22:29:47.146791
2019-04-30 22:29:47,146 [salt.state       :1813][INFO    ][18122] Executing state file.managed for [/etc/default/neutron-l3-agent]
2019-04-30 22:29:47,159 [salt.state       :300 ][INFO    ][18122] File changed:
New file
2019-04-30 22:29:47,159 [salt.state       :1951][INFO    ][18122] Completed state [/etc/default/neutron-l3-agent] at time 22:29:47.159801 duration_in_ms=13.01
2019-04-30 22:29:47,161 [salt.state       :1780][INFO    ][18122] Running state [neutron-metadata-agent] at time 22:29:47.161644
2019-04-30 22:29:47,161 [salt.state       :1813][INFO    ][18122] Executing state service.running for [neutron-metadata-agent]
2019-04-30 22:29:47,162 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'status', 'neutron-metadata-agent.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:47,171 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'neutron-metadata-agent.service'] in directory '/root'
2019-04-30 22:29:47,180 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-enabled', 'neutron-metadata-agent.service'] in directory '/root'
2019-04-30 22:29:47,186 [salt.state       :300 ][INFO    ][18122] The service neutron-metadata-agent is already running
2019-04-30 22:29:47,186 [salt.state       :1951][INFO    ][18122] Completed state [neutron-metadata-agent] at time 22:29:47.186353 duration_in_ms=24.707
2019-04-30 22:29:47,186 [salt.state       :1780][INFO    ][18122] Running state [neutron-metadata-agent] at time 22:29:47.186511
2019-04-30 22:29:47,186 [salt.state       :1813][INFO    ][18122] Executing state service.mod_watch for [neutron-metadata-agent]
2019-04-30 22:29:47,187 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'neutron-metadata-agent.service'] in directory '/root'
2019-04-30 22:29:47,195 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-metadata-agent.service'] in directory '/root'
2019-04-30 22:29:48,310 [salt.state       :300 ][INFO    ][18122] {'neutron-metadata-agent': True}
2019-04-30 22:29:48,310 [salt.state       :1951][INFO    ][18122] Completed state [neutron-metadata-agent] at time 22:29:48.310299 duration_in_ms=1123.786
2019-04-30 22:29:48,311 [salt.state       :1780][INFO    ][18122] Running state [neutron-dhcp-agent] at time 22:29:48.311357
2019-04-30 22:29:48,311 [salt.state       :1813][INFO    ][18122] Executing state service.running for [neutron-dhcp-agent]
2019-04-30 22:29:48,312 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'status', 'neutron-dhcp-agent.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:48,321 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'neutron-dhcp-agent.service'] in directory '/root'
2019-04-30 22:29:48,330 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-enabled', 'neutron-dhcp-agent.service'] in directory '/root'
2019-04-30 22:29:48,340 [salt.state       :300 ][INFO    ][18122] The service neutron-dhcp-agent is already running
2019-04-30 22:29:48,341 [salt.state       :1951][INFO    ][18122] Completed state [neutron-dhcp-agent] at time 22:29:48.341046 duration_in_ms=29.688
2019-04-30 22:29:48,341 [salt.state       :1780][INFO    ][18122] Running state [neutron-dhcp-agent] at time 22:29:48.341222
2019-04-30 22:29:48,341 [salt.state       :1813][INFO    ][18122] Executing state service.mod_watch for [neutron-dhcp-agent]
2019-04-30 22:29:48,341 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'neutron-dhcp-agent.service'] in directory '/root'
2019-04-30 22:29:48,351 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-dhcp-agent.service'] in directory '/root'
2019-04-30 22:29:48,450 [salt.state       :300 ][INFO    ][18122] {'neutron-dhcp-agent': True}
2019-04-30 22:29:48,451 [salt.state       :1951][INFO    ][18122] Completed state [neutron-dhcp-agent] at time 22:29:48.451159 duration_in_ms=109.935
2019-04-30 22:29:48,452 [salt.state       :1780][INFO    ][18122] Running state [neutron-openvswitch-agent] at time 22:29:48.452739
2019-04-30 22:29:48,453 [salt.state       :1813][INFO    ][18122] Executing state service.running for [neutron-openvswitch-agent]
2019-04-30 22:29:48,453 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'status', 'neutron-openvswitch-agent.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:48,463 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'neutron-openvswitch-agent.service'] in directory '/root'
2019-04-30 22:29:48,474 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-enabled', 'neutron-openvswitch-agent.service'] in directory '/root'
2019-04-30 22:29:48,485 [salt.state       :300 ][INFO    ][18122] The service neutron-openvswitch-agent is already running
2019-04-30 22:29:48,486 [salt.state       :1951][INFO    ][18122] Completed state [neutron-openvswitch-agent] at time 22:29:48.486271 duration_in_ms=33.53
2019-04-30 22:29:48,486 [salt.state       :1780][INFO    ][18122] Running state [neutron-openvswitch-agent] at time 22:29:48.486520
2019-04-30 22:29:48,486 [salt.state       :1813][INFO    ][18122] Executing state service.mod_watch for [neutron-openvswitch-agent]
2019-04-30 22:29:48,487 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'neutron-openvswitch-agent.service'] in directory '/root'
2019-04-30 22:29:48,498 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-openvswitch-agent.service'] in directory '/root'
2019-04-30 22:29:48,528 [salt.state       :300 ][INFO    ][18122] {'neutron-openvswitch-agent': True}
2019-04-30 22:29:48,528 [salt.state       :1951][INFO    ][18122] Completed state [neutron-openvswitch-agent] at time 22:29:48.528465 duration_in_ms=41.943
2019-04-30 22:29:48,529 [salt.state       :1780][INFO    ][18122] Running state [neutron-l3-agent] at time 22:29:48.529907
2019-04-30 22:29:48,530 [salt.state       :1813][INFO    ][18122] Executing state service.running for [neutron-l3-agent]
2019-04-30 22:29:48,531 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'status', 'neutron-l3-agent.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:48,540 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'neutron-l3-agent.service'] in directory '/root'
2019-04-30 22:29:48,548 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-enabled', 'neutron-l3-agent.service'] in directory '/root'
2019-04-30 22:29:48,557 [salt.state       :300 ][INFO    ][18122] The service neutron-l3-agent is already running
2019-04-30 22:29:48,557 [salt.state       :1951][INFO    ][18122] Completed state [neutron-l3-agent] at time 22:29:48.557612 duration_in_ms=27.705
2019-04-30 22:29:48,557 [salt.state       :1780][INFO    ][18122] Running state [neutron-l3-agent] at time 22:29:48.557788
2019-04-30 22:29:48,558 [salt.state       :1813][INFO    ][18122] Executing state service.mod_watch for [neutron-l3-agent]
2019-04-30 22:29:48,558 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemctl', 'is-active', 'neutron-l3-agent.service'] in directory '/root'
2019-04-30 22:29:48,567 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18122] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-l3-agent.service'] in directory '/root'
2019-04-30 22:29:48,588 [salt.state       :300 ][INFO    ][18122] {'neutron-l3-agent': True}
2019-04-30 22:29:48,588 [salt.state       :1951][INFO    ][18122] Completed state [neutron-l3-agent] at time 22:29:48.588677 duration_in_ms=30.887
2019-04-30 22:29:48,591 [salt.minion      :1711][INFO    ][18122] Returning information for job: 20190430222916329301
2019-04-30 22:29:49,266 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command test.ping with jid 20190430222949245471
2019-04-30 22:29:49,275 [salt.minion      :1432][INFO    ][21792] Starting a new job with PID 21792
2019-04-30 22:29:49,286 [salt.minion      :1711][INFO    ][21792] Returning information for job: 20190430222949245471
2019-04-30 22:29:49,447 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command match.pillar with jid 20190430222949429505
2019-04-30 22:29:49,454 [salt.minion      :1432][INFO    ][21797] Starting a new job with PID 21797
2019-04-30 22:29:49,458 [salt.minion      :1711][INFO    ][21797] Returning information for job: 20190430222949429505
2019-04-30 22:29:50,099 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.sls with jid 20190430222950081629
2019-04-30 22:29:50,106 [salt.minion      :1432][INFO    ][21813] Starting a new job with PID 21813
2019-04-30 22:29:53,798 [salt.state       :915 ][INFO    ][21813] Loading fresh modules for state activity
2019-04-30 22:29:53,830 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'nova/init.sls'
2019-04-30 22:29:53,849 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'nova/compute.sls'
2019-04-30 22:29:54,048 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'nova/_ssl/rabbitmq.sls'
2019-04-30 22:29:54,160 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'armband/init.sls'
2019-04-30 22:29:54,199 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:54,467 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'armband/qemu_efi.sls'
2019-04-30 22:29:54,483 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'armband/vgabios.sls'
2019-04-30 22:29:54,494 [salt.state       :1780][INFO    ][21813] Running state [libvirtd] at time 22:29:54.494763
2019-04-30 22:29:54,494 [salt.state       :1813][INFO    ][21813] Executing state group.present for [libvirtd]
2019-04-30 22:29:54,495 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['groupadd', '-r', 'libvirtd'] in directory '/root'
2019-04-30 22:29:54,624 [salt.state       :300 ][INFO    ][21813] {'passwd': 'x', 'gid': 999, 'name': 'libvirtd', 'members': []}
2019-04-30 22:29:54,624 [salt.state       :1951][INFO    ][21813] Completed state [libvirtd] at time 22:29:54.624430 duration_in_ms=129.667
2019-04-30 22:29:54,624 [salt.state       :1780][INFO    ][21813] Running state [nova] at time 22:29:54.624675
2019-04-30 22:29:54,624 [salt.state       :1813][INFO    ][21813] Executing state group.present for [nova]
2019-04-30 22:29:54,625 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['groupadd', '-g 303', '-r', 'nova'] in directory '/root'
2019-04-30 22:29:54,727 [salt.state       :300 ][INFO    ][21813] {'passwd': 'x', 'gid': 303, 'name': 'nova', 'members': []}
2019-04-30 22:29:54,727 [salt.state       :1951][INFO    ][21813] Completed state [nova] at time 22:29:54.727852 duration_in_ms=103.177
2019-04-30 22:29:54,728 [salt.state       :1780][INFO    ][21813] Running state [nova] at time 22:29:54.728339
2019-04-30 22:29:54,728 [salt.state       :1813][INFO    ][21813] Executing state user.present for [nova]
2019-04-30 22:29:54,730 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['useradd', '-s', '/bin/bash', '-u', '303', '-g', '303', '-m', '-d', '/var/lib/nova', '-r', 'nova'] in directory '/root'
2019-04-30 22:29:54,843 [salt.state       :300 ][INFO    ][21813] {'shell': '/bin/bash', 'workphone': '', 'uid': 303, 'passwd': 'x', 'roomnumber': '', 'groups': ['nova'], 'home': '/var/lib/nova', 'name': 'nova', 'gid': 303, 'fullname': '', 'homephone': ''}
2019-04-30 22:29:54,843 [salt.state       :1951][INFO    ][21813] Completed state [nova] at time 22:29:54.843699 duration_in_ms=115.358
2019-04-30 22:29:54,844 [salt.state       :1780][INFO    ][21813] Running state [nova_compute_ssl_rabbitmq] at time 22:29:54.844214
2019-04-30 22:29:54,844 [salt.state       :1813][INFO    ][21813] Executing state test.show_notification for [nova_compute_ssl_rabbitmq]
2019-04-30 22:29:54,844 [salt.state       :300 ][INFO    ][21813] Running nova._ssl.rabbitmq
2019-04-30 22:29:54,845 [salt.state       :1951][INFO    ][21813] Completed state [nova_compute_ssl_rabbitmq] at time 22:29:54.845091 duration_in_ms=0.877
2019-04-30 22:29:55,794 [salt.state       :1780][INFO    ][21813] Running state [nova-common] at time 22:29:55.794443
2019-04-30 22:29:55,794 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [nova-common]
2019-04-30 22:29:55,808 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['apt-cache', '-q', 'policy', 'nova-common'] in directory '/root'
2019-04-30 22:29:55,848 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:29:58,224 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:58,241 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-common'] in directory '/root'
2019-04-30 22:30:00,265 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:30:00,288 [salt.state       :300 ][INFO    ][21813] Made the following changes:
'nova-common' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'

2019-04-30 22:30:00,302 [salt.state       :915 ][INFO    ][21813] Loading fresh modules for state activity
2019-04-30 22:30:00,416 [salt.state       :1951][INFO    ][21813] Completed state [nova-common] at time 22:30:00.416671 duration_in_ms=4622.227
2019-04-30 22:30:00,420 [salt.state       :1780][INFO    ][21813] Running state [nova-compute-kvm] at time 22:30:00.420004
2019-04-30 22:30:00,420 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [nova-compute-kvm]
2019-04-30 22:30:00,888 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:30:00,902 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-compute-kvm'] in directory '/root'
2019-04-30 22:30:05,198 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430223005180009
2019-04-30 22:30:05,208 [salt.minion      :1432][INFO    ][22706] Starting a new job with PID 22706
2019-04-30 22:30:05,329 [salt.minion      :1711][INFO    ][22706] Returning information for job: 20190430223005180009
2019-04-30 22:30:35,330 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430223035311796
2019-04-30 22:30:35,339 [salt.minion      :1432][INFO    ][24509] Starting a new job with PID 24509
2019-04-30 22:30:35,349 [salt.minion      :1711][INFO    ][24509] Returning information for job: 20190430223035311796
2019-04-30 22:30:52,080 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:30:52,106 [salt.state       :300 ][INFO    ][21813] Made the following changes:
'libbluetooth3' changed from 'absent' to '5.37-0ubuntu5.1'
'python-pypowervm' changed from 'absent' to '1.1.16-1~u16.04+mcp'
'zvmcloudconnector-common' changed from 'absent' to '1.2.3-0ubuntu3~u16.04+mcp'
'libavahi-common3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.3'
'qemu-keymaps' changed from 'absent' to '1'
'xmlsec1' changed from 'absent' to '1.2.20-2ubuntu4'
'python-microversion-parse' changed from 'absent' to '0.2.1-0.1~u16.04+mcp'
'libyajl2' changed from 'absent' to '2.1.0-2'
'python-click' changed from 'absent' to '6.2-2ubuntu1'
'nova-compute-hypervisor' changed from 'absent' to '1'
'python2.7-alabaster' changed from 'absent' to '1'
'python-ldap' changed from 'absent' to '3.0.0-1~u16.04+mcp1'
'python-sphinx' changed from 'absent' to '1.6.4-2~u16.04+mcp3'
'libasyncns0' changed from 'absent' to '0.8-5build1'
'libogg0' changed from 'absent' to '1.3.2-1'
'python-scrypt' changed from 'absent' to '0.8.0-1~u16.04+mcp2'
'sphinx-common' changed from 'absent' to '1.6.4-2~u16.04+mcp3'
'libpciaccess0' changed from 'absent' to '0.13.4-1'
'libasound2-data' changed from 'absent' to '1.1.0-0ubuntu1'
'libfdt1' changed from 'absent' to '1.4.2-1.2~u16.04+mcp2'
'python-nova' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'
'python-typing' changed from 'absent' to '3.6.2-1~u16.04+mcp2'
'libpixman-1-0' changed from 'absent' to '0.33.6-1'
'python-werkzeug' changed from 'absent' to '0.14.1+dfsg1-1~u16.04+mcp'
'augeas-lenses' changed from 'absent' to '1.4.0-0ubuntu1.1'
'libvirt-daemon-system' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'qemu-system-x86' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'python-aniso8601' changed from 'absent' to '0.83-1'
'libvirt-clients' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'ipxe-qemu' changed from 'absent' to '1.0.0+git-20180124.fbe8c52d-0.2.1~u16.04+mcp1'
'libsndfile1' changed from 'absent' to '1.0.25-10ubuntu0.16.04.1'
'qemu-kvm' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'libxml2-utils' changed from 'absent' to '2.9.3+dfsg1-1ubuntu0.6'
'python-bcrypt' changed from 'absent' to '3.1.3-1~u16.04+mcp3'
'mkisofs' changed from 'absent' to '1'
'libvorbis0a' changed from 'absent' to '1.3.5-3ubuntu0.2'
'libspice-server1' changed from 'absent' to '0.12.6-4ubuntu0.4'
'keystone-common' changed from 'absent' to '2:14.1.0-1~u16.04+mcp22'
'genisoimage' changed from 'absent' to '9:1.1.11-3ubuntu1'
'libcaca0' changed from 'absent' to '0.99.beta19-2ubuntu0.16.04.1'
'python-keystone' changed from 'absent' to '2:14.1.0-1~u16.04+mcp22'
'libvirt0' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'qemu-kvm-spice' changed from 'absent' to '1'
'qemu-system-common' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'qemu-system-i386' changed from 'absent' to '1'
'python-flask' changed from 'absent' to '1.0.2-1~u16.04+mcp'
'qemu-system-x86-64' changed from 'absent' to '1'
'libvirt-daemon-driver-storage-rbd' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'python-libvirt' changed from 'absent' to '3.5.0-1.1~u16.04+mcp3'
'libusbredirparser1' changed from 'absent' to '0.7.1-1'
'python-itsdangerous' changed from 'absent' to '0.24+dfsg1-1'
'python-alabaster' changed from 'absent' to '0.7.7-1'
'python-imagesize' changed from 'absent' to '0.7.1-1.1~u16.04+mcp2'
'python-os-vif' changed from 'absent' to '1.9.1-1.0~u16.04+mcp6'
'libopus0' changed from 'absent' to '1.1.2-1ubuntu1'
'seabios' changed from 'absent' to '1.10.2-1.1~u16.04+mcp2'
'libavahi-client3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.3'
'python2.7-ldap' changed from 'absent' to '1'
'libcacard0' changed from 'absent' to '1:2.5.0-2'
'libasound2' changed from 'absent' to '1.1.0-0ubuntu1'
'libxen-4.6' changed from 'absent' to '4.6.5-0ubuntu1.4'
'libxenstore3.0' changed from 'absent' to '4.6.5-0ubuntu1.4'
'python-zvmcloudconnector' changed from 'absent' to '1.2.3-0ubuntu3~u16.04+mcp'
'msr-tools' changed from 'absent' to '1.3-2'
'python-repoze.who' changed from 'absent' to '2.2-3'
'libsdl1.2debian' changed from 'absent' to '1.2.15+dfsg1-3'
'libvirt-daemon' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'python-colorama' changed from 'absent' to '0.3.7-1'
'python-responses' changed from 'absent' to '0.3.0-1'
'libvorbisenc2' changed from 'absent' to '1.3.5-3ubuntu0.2'
'kpartx' changed from 'absent' to '0.5.0+git1.656f8865-5ubuntu2.5'
'nova-compute-libvirt' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'
'python-pyldap' changed from 'absent' to '1'
'nova-compute-kvm' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'
'python2.7-cinderclient' changed from 'absent' to '1'
'python-flask-restful' changed from 'absent' to '0.3.5-1~u16.04+mcp0'
'python-os-traits' changed from 'absent' to '0.5.0-1.0~u16.04+mcp5'
'ebtables' changed from 'absent' to '2.0.10.4-3.4ubuntu2.16.04.2'
'python-passlib' changed from 'absent' to '1.7.1-2.1~u16.04+mcp2'
'ipxe-qemu-256k-compat-efi-roms' changed from 'absent' to '1.0.0+git-20150424.a25a16d-0.2~u16.04+mcp1'
'kvm' changed from 'absent' to '1'
'python-pyasn1-modules' changed from 'absent' to '0.0.7-0.1'
'libaugeas0' changed from 'absent' to '1.4.0-0ubuntu1.1'
'python2.7-nova' changed from 'absent' to '1'
'python2.7-keystone' changed from 'absent' to '1'
'libbrlapi0.6' changed from 'absent' to '5.3.1-2ubuntu2.1'
'libpulse0' changed from 'absent' to '1:8.0-0ubuntu3.10'
'nova-compute' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'
'libnetcf1' changed from 'absent' to '1:0.2.8-1ubuntu1'
'python-pysaml2' changed from 'absent' to '4.5.0-1~u16.04+mcp'
'libavahi-common-data' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.3'
'cpu-checker' changed from 'absent' to '0.7-0ubuntu7'
'libflac8' changed from 'absent' to '1.3.1-4'
'python-cinderclient' changed from 'absent' to '1:4.0.1-1~u16.04+mcp9'
'python-future' changed from 'absent' to '0.15.2-1'

2019-04-30 22:30:52,120 [salt.state       :915 ][INFO    ][21813] Loading fresh modules for state activity
2019-04-30 22:30:52,141 [salt.state       :1951][INFO    ][21813] Completed state [nova-compute-kvm] at time 22:30:52.140992 duration_in_ms=51720.987
2019-04-30 22:30:52,144 [salt.state       :1780][INFO    ][21813] Running state [python-novaclient] at time 22:30:52.144376
2019-04-30 22:30:52,144 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [python-novaclient]
2019-04-30 22:30:52,636 [salt.state       :300 ][INFO    ][21813] All specified packages are already installed
2019-04-30 22:30:52,636 [salt.state       :1951][INFO    ][21813] Completed state [python-novaclient] at time 22:30:52.636558 duration_in_ms=492.182
2019-04-30 22:30:52,636 [salt.state       :1780][INFO    ][21813] Running state [pm-utils] at time 22:30:52.636881
2019-04-30 22:30:52,637 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [pm-utils]
2019-04-30 22:30:52,650 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:30:52,665 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'pm-utils'] in directory '/root'
2019-04-30 22:30:55,656 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:30:55,685 [salt.state       :300 ][INFO    ][21813] Made the following changes:
'pm-utils' changed from 'absent' to '1.4.1-16'
'libx86-1' changed from 'absent' to '1.1+ds1-10'
'vbetool' changed from 'absent' to '1.1-3'

2019-04-30 22:30:55,697 [salt.state       :915 ][INFO    ][21813] Loading fresh modules for state activity
2019-04-30 22:30:55,719 [salt.state       :1951][INFO    ][21813] Completed state [pm-utils] at time 22:30:55.719898 duration_in_ms=3083.016
2019-04-30 22:30:55,723 [salt.state       :1780][INFO    ][21813] Running state [sysfsutils] at time 22:30:55.723227
2019-04-30 22:30:55,723 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [sysfsutils]
2019-04-30 22:30:56,128 [salt.state       :300 ][INFO    ][21813] All specified packages are already installed
2019-04-30 22:30:56,129 [salt.state       :1951][INFO    ][21813] Completed state [sysfsutils] at time 22:30:56.129180 duration_in_ms=405.953
2019-04-30 22:30:56,129 [salt.state       :1780][INFO    ][21813] Running state [sg3-utils] at time 22:30:56.129528
2019-04-30 22:30:56,129 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [sg3-utils]
2019-04-30 22:30:56,134 [salt.state       :300 ][INFO    ][21813] All specified packages are already installed
2019-04-30 22:30:56,134 [salt.state       :1951][INFO    ][21813] Completed state [sg3-utils] at time 22:30:56.134569 duration_in_ms=5.041
2019-04-30 22:30:56,134 [salt.state       :1780][INFO    ][21813] Running state [python-memcache] at time 22:30:56.134807
2019-04-30 22:30:56,134 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [python-memcache]
2019-04-30 22:30:56,139 [salt.state       :300 ][INFO    ][21813] All specified packages are already installed
2019-04-30 22:30:56,139 [salt.state       :1951][INFO    ][21813] Completed state [python-memcache] at time 22:30:56.139341 duration_in_ms=4.534
2019-04-30 22:30:56,139 [salt.state       :1780][INFO    ][21813] Running state [python-guestfs] at time 22:30:56.139572
2019-04-30 22:30:56,139 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [python-guestfs]
2019-04-30 22:30:56,152 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:30:56,168 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-guestfs'] in directory '/root'
2019-04-30 22:31:05,480 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430223105460732
2019-04-30 22:31:05,488 [salt.minion      :1432][INFO    ][27744] Starting a new job with PID 27744
2019-04-30 22:31:05,499 [salt.minion      :1711][INFO    ][27744] Returning information for job: 20190430223105460732
2019-04-30 22:31:22,191 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:31:22,222 [salt.state       :300 ][INFO    ][21813] Made the following changes:
'hfsplus' changed from 'absent' to '1.0.4-13'
'scrub' changed from 'absent' to '2.6.1-1'
'syslinux-common' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libguestfs0' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'libguestfs-hfsplus' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'lzop' changed from 'absent' to '1.03-3.2'
'libhfsp0' changed from 'absent' to '1.0.4-13'
'binutils-gold' changed from 'absent' to '1'
'elf-binutils' changed from 'absent' to '1'
'reiserfsprogs' changed from 'absent' to '1:3.6.24-3.1'
'lsscsi' changed from 'absent' to '0.27-3'
'python-guestfs' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'syslinux' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libguestfs-xfs' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'libguestfs-reiserfs' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'python-libguestfs' changed from 'absent' to '1'
'mtools' changed from 'absent' to '4.0.18-2ubuntu0.16.04'
'supermin' changed from 'absent' to '5.1.14-2ubuntu1.1'
'extlinux' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libhivex0' changed from 'absent' to '1.3.13-1build3'
'binutils' changed from 'absent' to '2.26.1-1ubuntu1~16.04.8'

2019-04-30 22:31:22,241 [salt.state       :915 ][INFO    ][21813] Loading fresh modules for state activity
2019-04-30 22:31:22,261 [salt.state       :1951][INFO    ][21813] Completed state [python-guestfs] at time 22:31:22.261930 duration_in_ms=26122.358
2019-04-30 22:31:22,265 [salt.state       :1780][INFO    ][21813] Running state [gettext-base] at time 22:31:22.265399
2019-04-30 22:31:22,265 [salt.state       :1813][INFO    ][21813] Executing state pkg.installed for [gettext-base]
2019-04-30 22:31:22,769 [salt.state       :300 ][INFO    ][21813] All specified packages are already installed
2019-04-30 22:31:22,769 [salt.state       :1951][INFO    ][21813] Completed state [gettext-base] at time 22:31:22.769672 duration_in_ms=504.273
2019-04-30 22:31:22,771 [salt.state       :1780][INFO    ][21813] Running state [/var/log/nova] at time 22:31:22.771493
2019-04-30 22:31:22,771 [salt.state       :1813][INFO    ][21813] Executing state file.directory for [/var/log/nova]
2019-04-30 22:31:22,772 [salt.state       :300 ][INFO    ][21813] {'group': 'nova'}
2019-04-30 22:31:22,772 [salt.state       :1951][INFO    ][21813] Completed state [/var/log/nova] at time 22:31:22.772574 duration_in_ms=1.08
2019-04-30 22:31:22,772 [salt.state       :1780][INFO    ][21813] Running state [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com] at time 22:31:22.772948
2019-04-30 22:31:22,773 [salt.state       :1813][INFO    ][21813] Executing state ssh_auth.present for [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com]
2019-04-30 22:31:22,774 [salt.state       :300 ][INFO    ][21813] {'AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ': 'New'}
2019-04-30 22:31:22,775 [salt.state       :1951][INFO    ][21813] Completed state [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com] at time 22:31:22.775011 duration_in_ms=2.063
2019-04-30 22:31:22,775 [salt.state       :1780][INFO    ][21813] Running state [nova] at time 22:31:22.775467
2019-04-30 22:31:22,775 [salt.state       :1813][INFO    ][21813] Executing state user.present for [nova]
2019-04-30 22:31:22,776 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['usermod', '-G', 'libvirtd', 'nova'] in directory '/root'
2019-04-30 22:31:22,876 [salt.state       :300 ][INFO    ][21813] {'groups': ['libvirtd', 'nova']}
2019-04-30 22:31:22,877 [salt.state       :1951][INFO    ][21813] Completed state [nova] at time 22:31:22.877169 duration_in_ms=101.702
2019-04-30 22:31:22,877 [salt.state       :1780][INFO    ][21813] Running state [libvirt-qemu] at time 22:31:22.877371
2019-04-30 22:31:22,877 [salt.state       :1813][INFO    ][21813] Executing state user.present for [libvirt-qemu]
2019-04-30 22:31:22,878 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['usermod', '-G', 'nova', 'libvirt-qemu'] in directory '/root'
2019-04-30 22:31:22,984 [salt.state       :300 ][INFO    ][21813] {'groups': ['kvm', 'nova']}
2019-04-30 22:31:22,984 [salt.state       :1951][INFO    ][21813] Completed state [libvirt-qemu] at time 22:31:22.984507 duration_in_ms=107.134
2019-04-30 22:31:22,984 [salt.state       :1780][INFO    ][21813] Running state [/var/lib/nova] at time 22:31:22.984699
2019-04-30 22:31:22,984 [salt.state       :1813][INFO    ][21813] Executing state file.directory for [/var/lib/nova]
2019-04-30 22:31:22,985 [salt.state       :300 ][INFO    ][21813] {'mode': '0750'}
2019-04-30 22:31:22,985 [salt.state       :1951][INFO    ][21813] Completed state [/var/lib/nova] at time 22:31:22.985707 duration_in_ms=1.009
2019-04-30 22:31:22,986 [salt.state       :1780][INFO    ][21813] Running state [/var/lib/nova/.ssh/id_rsa] at time 22:31:22.986132
2019-04-30 22:31:22,986 [salt.state       :1813][INFO    ][21813] Executing state file.managed for [/var/lib/nova/.ssh/id_rsa]
2019-04-30 22:31:22,992 [salt.state       :300 ][INFO    ][21813] File changed:
New file
2019-04-30 22:31:22,992 [salt.state       :1951][INFO    ][21813] Completed state [/var/lib/nova/.ssh/id_rsa] at time 22:31:22.992188 duration_in_ms=6.056
2019-04-30 22:31:22,992 [salt.state       :1780][INFO    ][21813] Running state [/var/lib/nova/.ssh/config] at time 22:31:22.992481
2019-04-30 22:31:22,992 [salt.state       :1813][INFO    ][21813] Executing state file.managed for [/var/lib/nova/.ssh/config]
2019-04-30 22:31:22,997 [salt.state       :300 ][INFO    ][21813] File changed:
New file
2019-04-30 22:31:22,998 [salt.state       :1951][INFO    ][21813] Completed state [/var/lib/nova/.ssh/config] at time 22:31:22.998016 duration_in_ms=5.535
2019-04-30 22:31:22,998 [salt.state       :1780][INFO    ][21813] Running state [/etc/nova/nova.conf] at time 22:31:22.998364
2019-04-30 22:31:22,998 [salt.state       :1813][INFO    ][21813] Executing state file.managed for [/etc/nova/nova.conf]
2019-04-30 22:31:23,035 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'nova/files/rocky/nova-compute.conf.Debian'
2019-04-30 22:31:23,315 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_database.conf'
2019-04-30 22:31:23,335 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/castellan/_barbican.conf'
2019-04-30 22:31:23,348 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_cache.conf'
2019-04-30 22:31:23,365 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/keystoneauth/_type_password.conf'
2019-04-30 22:31:23,389 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/keystonemiddleware/_auth_token.conf'
2019-04-30 22:31:23,503 [salt.state       :300 ][INFO    ][21813] File changed:
--- 
+++ 
@@ -1,7 +1,5 @@
+
 [DEFAULT]
-log_dir = /var/log/nova
-lock_path = /var/lock/nova
-state_path = /var/lib/nova
 
 #
 # From nova.conf
@@ -63,7 +61,7 @@
 # *  period with offset, example: ``month@15`` will result in monthly audits
 #    starting on 15th day of month.
 #  (string value)
-#instance_usage_audit_period = month
+instance_usage_audit_period = hour
 
 #
 # Start and use a daemon that can run the commands that need to be run with
@@ -99,7 +97,7 @@
 # * ``powervm.PowerVMDriver``
 # * ``zvm.ZVMDriver``
 #  (string value)
-#compute_driver = <None>
+compute_driver = libvirt.LibvirtDriver
 
 #
 # Allow destination machine to match source for resize. Useful when
@@ -108,7 +106,7 @@
 # the same host to the destination options. Also set to true
 # if you allow the ServerGroupAffinityFilter and need to resize.
 #  (boolean value)
-#allow_resize_to_same_host = false
+allow_resize_to_same_host = true
 
 #
 # Image properties that should not be inherited from the instance
@@ -204,7 +202,7 @@
 # * True: Instances should fail after VIF plugging timeout
 # * False: Instances should continue booting after VIF plugging timeout
 #  (boolean value)
-#vif_plugging_is_fatal = true
+vif_plugging_is_fatal = true
 
 #
 # Timeout for Neutron VIF plugging event message arrival.
@@ -223,7 +221,7 @@
 #   arrive at all.
 #  (integer value)
 # Minimum value: 0
-#vif_plugging_timeout = 300
+vif_plugging_timeout = 300
 
 # Path to '/etc/network/interfaces' template.
 #
@@ -272,19 +270,13 @@
 # none - <No description provided>
 # space - <No description provided>
 #preallocate_images = none
+preallocate_images = space
 
 #
 # Enable use of copy-on-write (cow) images.
 #
 # QEMU/KVM allow the use of qcow2 as backing files. By disabling this,
-# backing files will not be used. This option is also used by image backends.
-# If the value is False, images are flattened after fetching or cloning.
-# This makes instance images completely independent from parent images.
-#
-# Related options:
-#
-# * ``images_type``: setting ``use_cow_images`` option to False is not supported
-#   when ``images_type=qcow2`` is being used.
+# backing files will not be used.
 #  (boolean value)
 #use_cow_images = true
 
@@ -300,7 +292,7 @@
 #
 # * ``compute_driver``: Only the libvirt driver uses this option.
 #  (boolean value)
-#force_raw_images = true
+force_raw_images = true
 
 #
 # Name of the mkfs commands for ephemeral device.
@@ -426,7 +418,7 @@
 #   for the host.
 #  (integer value)
 # Minimum value: 0
-#reserved_host_memory_mb = 512
+reserved_host_memory_mb = 512
 
 #
 # Number of physical CPUs to reserve for the host. The host resources usage is
@@ -568,12 +560,8 @@
 # * $state_path/instances where state_path is a config option that specifies
 #   the top-level directory for maintaining nova's state. (default) or
 #   Any string representing directory path.
-#
-# Related options:
-#
-# * ``[workarounds]/ensure_libvirt_rbd_instance_dir_cleanup``
-#  (string value)
-#instances_path = $state_path/instances
+#  (string value)
+instances_path = $state_path/instances
 
 #
 # This option enables periodic compute.instance.exists notifications. Each
@@ -582,6 +570,7 @@
 #  (boolean value)
 #instance_usage_audit = false
 
+
 #
 # Maximum number of 1 second retries in live_migration. It specifies number
 # of retries to iptables when it complains. It happens when an user continuously
@@ -600,7 +589,7 @@
 # host rebooted. It ensures that all of the instances on a Nova compute node
 # resume their state each time the compute node boots or restarts.
 #  (boolean value)
-#resume_guests_state_on_host_boot = false
+resume_guests_state_on_host_boot = True
 
 #
 # Number of times to retry network allocation. It is required to attempt network
@@ -656,7 +645,7 @@
 # * Any negative value is treated as 0.
 # * For any value > 0, total attempts are (value + 1)
 #  (integer value)
-#block_device_allocate_retries = 60
+block_device_allocate_retries = 600
 
 #
 # Number of greenthreads available for use to sync power states.
@@ -737,7 +726,7 @@
 # * Any positive integer in seconds.
 # * Any value <=0 will disable the sync. This is not recommended.
 #  (integer value)
-#heal_instance_info_cache_interval = 60
+heal_instance_info_cache_interval = 300
 
 #
 # Interval for reclaiming deleted instances.
@@ -857,7 +846,7 @@
 # * ``block_device_allocate_retries`` in compute_manager_opts group.
 #  (integer value)
 # Minimum value: 0
-#block_device_allocate_retries_interval = 3
+block_device_allocate_retries_interval = 10
 
 #
 # Interval between sending the scheduler a list of current instance UUIDs to
@@ -1171,7 +1160,7 @@
 # Possible values:
 # iso9660 - <No description provided>
 # vfat - <No description provided>
-#config_drive_format = iso9660
+config_drive_format = vfat
 
 #
 # Force injection to take place on a config drive
@@ -1198,7 +1187,7 @@
 #   configuration section to the full path to an qemu-img command
 #   installation.
 #  (boolean value)
-#force_config_drive = false
+force_config_drive = true
 
 #
 # Name or path of the tool used for ISO image creation
@@ -1251,6 +1240,7 @@
 # * vpn_ip
 #  (string value)
 #my_ip = <host_ipv4>
+my_ip = 10.167.4.55
 
 #
 # The IP address which is used to connect to the block storage network.
@@ -2097,7 +2087,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#force_dhcp_release = true
+force_dhcp_release = true
 
 # DEPRECATED:
 # When this option is True, whenever a DNS entry must be updated, a fanout cast
@@ -2149,7 +2139,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#dhcp_domain = novalocal
+dhcp_domain = novalocal
 
 # DEPRECATED:
 # This option allows you to specify the L3 management library to be used.
@@ -2650,7 +2640,7 @@
 #
 # * The full path to a directory.
 #  (string value)
-#bindir = /usr/local/bin
+#bindir = /tmp/nova/.tox/shared/local/bin
 
 #
 # The top-level directory for maintaining Nova's state.
@@ -2666,7 +2656,7 @@
 #
 # * The full path to a directory. Defaults to value provided in ``pybasedir``.
 #  (string value)
-#state_path = $pybasedir
+state_path = /var/lib/nova
 
 #
 # This option allows setting an alternate timeout value for RPC calls
@@ -2677,7 +2667,6 @@
 # Operations with RPC calls that utilize this value:
 #
 # * live migration
-# * scheduling
 #
 # Related options:
 #
@@ -2697,7 +2686,7 @@
 #   is less than report_interval, services will routinely be considered down,
 #   because they report in too rarely.
 #  (integer value)
-#report_interval = 10
+report_interval = 60
 
 #
 # Maximum time in seconds since last check-in for up service
@@ -2711,7 +2700,7 @@
 # * report_interval (service_down_time should not be less than report_interval)
 # * scheduler.periodic_task_interval
 #  (integer value)
-#service_down_time = 60
+service_down_time = 90
 
 #
 # Enable periodic tasks.
@@ -2849,7 +2838,6 @@
 # db - <No description provided>
 # mc - <No description provided>
 #servicegroup_driver = db
-
 #
 # From oslo.log
 #
@@ -2887,19 +2875,19 @@
 
 # Uses logging handler designed to watch file system. When log file is moved or
 # removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and Linux
-# platform is used. This option is ignored if log_config_append is set. (boolean
-# value)
+# instantaneously. It makes sense only if log_file option is specified and
+# Linux platform is used. This option is ignored if log_config_append is set.
+# (boolean value)
 #watch_log_file = false
 
 # Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append is
-# set. (boolean value)
+# changed later to honor RFC5424. This option is ignored if log_config_append
+# is set. (boolean value)
 #use_syslog = false
 
 # Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol which
-# includes structured metadata in addition to log messages.This option is
+# to enable journal support. Doing so will use the journal native protocol
+# which includes structured metadata in addition to log messages.This option is
 # ignored if log_config_append is set. (boolean value)
 #use_journal = false
 
@@ -2922,8 +2910,8 @@
 # value)
 #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
 
-# Additional data to append to log message when logging level for the message is
-# DEBUG. (string value)
+# Additional data to append to log message when logging level for the message
+# is DEBUG. (string value)
 #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
 
 # Prefix each line of exception output with this format. (string value)
@@ -2940,7 +2928,8 @@
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string value)
+# The format for an instance that is passed with the log message. (string
+# value)
 #instance_format = "[instance: %(uuid)s] "
 
 # The format for an instance UUID that is passed with the log message. (string
@@ -2953,9 +2942,9 @@
 # Maximum number of logged messages per rate_limit_interval. (integer value)
 #rate_limit_burst = 0
 
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
-# empty string. Logs with level greater or equal to rate_limit_except_level are
-# not filtered. An empty string means that all levels are filtered. (string
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
+# or empty string. Logs with level greater or equal to rate_limit_except_level
+# are not filtered. An empty string means that all levels are filtered. (string
 # value)
 #rate_limit_except_level = CRITICAL
 
@@ -3001,10 +2990,10 @@
 #rpc_zmq_host = localhost
 
 # Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
+# closing a socket. The default value of -1 specifies an infinite linger
+# period. The value of 0 specifies no linger period. Pending messages shall be
+# discarded immediately when the socket is closed. Positive values specify an
+# upper bound for the linger period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
 #zmq_linger = -1
 
@@ -3012,8 +3001,9 @@
 # exception when timeout expired. (integer value)
 #rpc_poll_timeout = 1
 
-# Expiration timeout in seconds of a name service record about existing target (
-# < 0 means no timeout). (integer value)
+
+# Expiration timeout in seconds of a name service record about existing target
+# ( < 0 means no timeout). (integer value)
 #zmq_target_expire = 300
 
 # Update period in seconds of a name service record about existing target.
@@ -3071,27 +3061,28 @@
 
 # The duration between two keepalive transmissions in idle condition. The unit
 # is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
+# etc. The default value of -1 (or any other negative value and 0) means to
+# skip any overrides and leave it to OS default. (integer value)
 #zmq_tcp_keepalive_idle = -1
 
 # The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
+# end is not available. The default value of -1 (or any other negative value
+# and 0) means to skip any overrides and leave it to OS default. (integer
+# value)
 #zmq_tcp_keepalive_cnt = -1
 
 # The duration between two successive keepalive retransmissions, if
 # acknowledgement to the previous keepalive transmission is not received. The
 # unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
+# Windows etc. The default value of -1 (or any other negative value and 0)
+# means to skip any overrides and leave it to OS default. (integer value)
 #zmq_tcp_keepalive_intvl = -1
 
 # Maximum number of (green) threads to work concurrently. (integer value)
 #rpc_thread_pool_size = 100
 
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
+# Expiration timeout in seconds of a sent/received message after which it is
+# not tracked anymore by a client/server. (integer value)
 #rpc_message_ttl = 300
 
 # Wait for message acknowledgements from receivers. This mechanism works only
@@ -3125,6 +3116,7 @@
 
 # Seconds to wait for a response from a call. (integer value)
 #rpc_response_timeout = 60
+rpc_response_timeout = 30
 
 # The network address and optional user credentials for connecting to the
 # messaging backend, in URL format. The expected format is:
@@ -3138,6 +3130,7 @@
 # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
 # (string value)
 #transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -3148,7 +3141,8 @@
 
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
-#control_exchange = openstack
+#control_exchange = keystone
+
 
 #
 # From oslo.service.periodic_task
@@ -3479,72 +3473,126 @@
 
 
 [api_database]
-connection = sqlite:////var/lib/nova/nova_api.sqlite
-#
-# The *Nova API Database* is a separate database which is used for information
-# which is used across *cells*. This database is mandatory since the Mitaka
-# release (13.0.0).
-
-#
-# From nova.conf
-#
+#
+# From oslo.db
+#
+
+# If True, SQLite uses synchronous mode. (boolean value)
+#sqlite_synchronous = true
+
+# The back end to use for the database. (string value)
+# Deprecated group/name - [DEFAULT]/db_backend
+#backend = sqlalchemy
 
 # The SQLAlchemy connection string to use to connect to the database. (string
 # value)
+# Deprecated group/name - [DEFAULT]/sql_connection
+# Deprecated group/name - [DATABASE]/sql_connection
+# Deprecated group/name - [sql]/connection
 #connection = <None>
+connection = mysql+pymysql://nova:opnfv_secret@10.167.4.23/nova_api?charset=utf8
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
+#slave_connection = <None>
+
+# The SQL mode to be used for MySQL sessions. This option, including the
+# default, overrides any server-set SQL mode. To use whatever SQL mode is set
+# by the server configuration, set this to no value. Example: mysql_sql_mode=
+# (string value)
+#mysql_sql_mode = TRADITIONAL
+
+# If True, transparently enables support for handling MySQL Cluster (NDB).
+# (boolean value)
+#mysql_enable_ndb = false
+
+# Connections which have been present in the connection pool longer than this
+# number of seconds will be replaced with a new one the next time they are
+# checked out from the pool. (integer value)
+# Deprecated group/name - [DATABASE]/idle_timeout
+# Deprecated group/name - [database]/idle_timeout
+# Deprecated group/name - [DEFAULT]/sql_idle_timeout
+# Deprecated group/name - [DATABASE]/sql_idle_timeout
+# Deprecated group/name - [sql]/idle_timeout
+#connection_recycle_time = 3600
+
+# DEPRECATED: Minimum number of SQL connections to keep open in a pool.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/sql_min_pool_size
+# Deprecated group/name - [DATABASE]/sql_min_pool_size
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: The option to set the minimum pool size is not supported by
+# sqlalchemy.
+#min_pool_size = 1
+
+# Maximum number of SQL connections to keep open in a pool. Setting a value of
+# 0 indicates no limit. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_pool_size
+# Deprecated group/name - [DATABASE]/sql_max_pool_size
+#max_pool_size = 5
+max_pool_size = 10
+
+# Maximum number of database connection retries during startup. Set to -1 to
+# specify an infinite retry count. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_retries
+# Deprecated group/name - [DATABASE]/sql_max_retries
+#max_retries = 10
+max_retries = -1
+
+# Interval between retries of opening a SQL connection. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_retry_interval
+# Deprecated group/name - [DATABASE]/reconnect_interval
+#retry_interval = 10
+
+# If set, use this value for max_overflow with SQLAlchemy. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_overflow
+# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
+#max_overflow = 50
+max_overflow = 30
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
+# value)
+# Minimum value: 0
+# Maximum value: 100
+# Deprecated group/name - [DEFAULT]/sql_connection_debug
+#connection_debug = 0
+
+# Add Python stack traces to SQL as comment strings. (boolean value)
+# Deprecated group/name - [DEFAULT]/sql_connection_trace
+#connection_trace = false
+
+# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
+# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
+#pool_timeout = <None>
+
+# Enable the experimental use of database reconnect on connection lost.
+# (boolean value)
+#use_db_reconnect = false
+
+# Seconds between retries of a database transaction. (integer value)
+#db_retry_interval = 1
+
+# If True, increases the interval between retries of a database operation up to
+# db_max_retry_interval. (boolean value)
+#db_inc_retry_interval = true
+
+# If db_inc_retry_interval is set, the maximum seconds between retries of a
+# database operation. (integer value)
+#db_max_retry_interval = 10
+
+# Maximum retries in case of connection error or deadlock error before error is
+# raised. Set to -1 to specify an infinite retry count. (integer value)
+#db_max_retries = 20
 
 # Optional URL parameters to append onto the connection URL at connect time;
 # specify as param1=value1&param2=value2&... (string value)
 #connection_parameters =
 
-# If True, SQLite uses synchronous mode. (boolean value)
-#sqlite_synchronous = true
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
-#slave_connection = <None>
-
-# The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
-# the server configuration, set this to no value. Example: mysql_sql_mode=
-# (string value)
-#mysql_sql_mode = TRADITIONAL
-
-# Connections which have been present in the connection pool longer than this
-# number of seconds will be replaced with a new one the next time they are
-# checked out from the pool. (integer value)
-# Deprecated group/name - [api_database]/idle_timeout
-#connection_recycle_time = 3600
-
-# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
-# indicates no limit. (integer value)
-#max_pool_size = <None>
-
-# Maximum number of database connection retries during startup. Set to -1 to
-# specify an infinite retry count. (integer value)
-#max_retries = 10
-
-# Interval between retries of opening a SQL connection. (integer value)
-#retry_interval = 10
-
-# If set, use this value for max_overflow with SQLAlchemy. (integer value)
-#max_overflow = <None>
-
-# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
-# value)
-#connection_debug = 0
-
-# Add Python stack traces to SQL as comment strings. (boolean value)
-#connection_trace = false
-
-# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
-#pool_timeout = <None>
 
 
 [barbican]
-
-#
-# From nova.conf
+#
+# From castellan.config
 #
 
 # Use this endpoint to connect to Barbican, for example:
@@ -3557,6 +3605,7 @@
 # Use this endpoint to connect to Keystone (string value)
 # Deprecated group/name - [key_manager]/auth_url
 #auth_endpoint = http://localhost/identity/v3
+auth_endpoint = http://10.167.4.35:35357/v3
 
 # Number of seconds to wait before retrying poll for key creation completion
 # (integer value)
@@ -3565,23 +3614,23 @@
 # Number of times to retry poll for key creation completion (integer value)
 #number_of_retries = 60
 
-# Specifies if insecure TLS (https) requests. If False, the server's certificate
-# will not be validated (boolean value)
+# Specifies if insecure TLS (https) requests. If False, the server's
+# certificate will not be validated (boolean value)
 #verify_ssl = true
 
-# Specifies the type of endpoint.  Allowed values are: public, private, and
+# Specifies the type of endpoint.  Allowed values are: public, internal, and
 # admin (string value)
 # Possible values:
 # public - <No description provided>
 # internal - <No description provided>
 # admin - <No description provided>
 #barbican_endpoint_type = public
+barbican_endpoint_type = internal
 
 
 [cache]
-
-#
-# From nova.conf
+#
+# From oslo.cache
 #
 
 # Prefix for building the configuration dictionary for the cache region. This
@@ -3589,9 +3638,9 @@
 # with the same configuration name. (string value)
 #config_prefix = cache.oslo
 
-# Default TTL, in seconds, for any cached item in the dogpile.cache region. This
-# applies to any cached method that doesn't have an explicit cache expiration
-# time defined for it. (integer value)
+# Default TTL, in seconds, for any cached item in the dogpile.cache region.
+# This applies to any cached method that doesn't have an explicit cache
+# expiration time defined for it. (integer value)
 #expiration_time = 600
 
 # Cache backend module. For eventlet-based or environments with hundreds of
@@ -3614,6 +3663,7 @@
 # dogpile.cache.memory_pickle - <No description provided>
 # dogpile.cache.null - <No description provided>
 #backend = dogpile.cache.null
+backend = oslo_cache.memcache_pool
 
 # Arguments supplied to the backend module. Specify this option once per
 # argument to be passed to the dogpile.cache backend. Example format:
@@ -3626,17 +3676,19 @@
 #proxies =
 
 # Global toggle for caching. (boolean value)
-#enabled = false
-
-# Extra debugging from the cache backend (cache keys, get/set/delete/etc calls).
-# This is only really useful if you need to see the specific cache-backend
-# get/set/delete calls with the keys/values.  Typically this should be left set
-# to false. (boolean value)
+#enabled = true
+enabled = True
+
+# Extra debugging from the cache backend (cache keys, get/set/delete/etc
+# calls). This is only really useful if you need to see the specific cache-
+# backend get/set/delete calls with the keys/values.  Typically this should be
+# left set to false. (boolean value)
 #debug_cache_backend = false
 
 # Memcache servers in the format of "host:port". (dogpile.cache.memcache and
 # oslo_cache.memcache_pool backends only). (list value)
 #memcache_servers = localhost:11211
+memcache_servers =10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
 
 # Number of seconds memcached server is considered dead before it is tried
 # again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
@@ -3660,8 +3712,8 @@
 #memcache_pool_connection_get_timeout = 10
 
 
+
 [cells]
-enable = False
 #
 # DEPRECATED: Cells options allow you to use cells v1 functionality in an
 # OpenStack deployment.
@@ -4199,7 +4251,7 @@
 #
 # * endpoint_template - Setting this option will override catalog_info
 #  (string value)
-#catalog_info = volumev3:cinderv3:publicURL
+catalog_info = volumev3:cinderv3:internalURL
 
 #
 # If this option is set then it will override service catalog lookup with
@@ -4227,7 +4279,7 @@
 #
 # * Any string representing region name
 #  (string value)
-#os_region_name = <None>
+os_region_name = RegionOne
 
 #
 # Number of times cinderclient should retry on any failed http call.
@@ -4257,62 +4309,40 @@
 # By default there is no availability zone restriction on volume attach.
 #  (boolean value)
 #cross_az_attach = true
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Name of nova region to use. Useful if keystone manages more than one region.
 # (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Collect per-API call timing information. (boolean value)
-#collect_timing = false
-
-# Log requests to multiple loggers. (boolean value)
-#split_loggers = false
-
-# Authentication type to load (string value)
-# Deprecated group/name - [cinder]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internal
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
 
 # Authentication URL (string value)
 #auth_url = <None>
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-#project_id = <None>
-
-# Project name to scope to (string value)
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
 
 # Optional domain ID to use with v3 and v2 parameters. It will be used for both
 # the user and project domain in v3 and ignored in v2 authentication. (string
@@ -4324,27 +4354,65 @@
 # (string value)
 #default_domain_name = <None>
 
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
 # User ID (string value)
 #user_id = <None>
 
 # Username (string value)
-# Deprecated group/name - [cinder]/user_name
+# Deprecated group/name - [neutron]/user_name
 #username = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User's password (string value)
-#password = <None>
-
-# Tenant ID (string value)
-#tenant_id = <None>
-
-# Tenant Name (string value)
-#tenant_name = <None>
+username = nova
 
 
 [compute]
@@ -4534,6 +4602,7 @@
 #token_ttl = 600
 
 
+
 [cors]
 
 #
@@ -4564,8 +4633,6 @@
 
 
 [database]
-connection = sqlite:////var/lib/nova/nova.sqlite
-
 #
 # From oslo.db
 #
@@ -4583,14 +4650,14 @@
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
 #connection = <None>
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
+connection = mysql+pymysql://nova:opnfv_secret@10.167.4.23/nova?charset=utf8
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
 #slave_connection = <None>
 
 # The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
-# the server configuration, set this to no value. Example: mysql_sql_mode=
+# default, overrides any server-set SQL mode. To use whatever SQL mode is set
+# by the server configuration, set this to no value. Example: mysql_sql_mode=
 # (string value)
 #mysql_sql_mode = TRADITIONAL
 
@@ -4608,8 +4675,8 @@
 # Deprecated group/name - [sql]/idle_timeout
 #connection_recycle_time = 3600
 
-# DEPRECATED: Minimum number of SQL connections to keep open in a pool. (integer
-# value)
+# DEPRECATED: Minimum number of SQL connections to keep open in a pool.
+# (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
 # Deprecated group/name - [DATABASE]/sql_min_pool_size
 # This option is deprecated for removal.
@@ -4618,17 +4685,19 @@
 # sqlalchemy.
 #min_pool_size = 1
 
-# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
-# indicates no limit. (integer value)
+# Maximum number of SQL connections to keep open in a pool. Setting a value of
+# 0 indicates no limit. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
 #max_pool_size = 5
+max_pool_size = 10
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
 #max_retries = 10
+max_retries = -1
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
@@ -4639,6 +4708,7 @@
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
 #max_overflow = 50
+max_overflow = 30
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
@@ -4655,8 +4725,8 @@
 # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
 #pool_timeout = <None>
 
-# Enable the experimental use of database reconnect on connection lost. (boolean
-# value)
+# Enable the experimental use of database reconnect on connection lost.
+# (boolean value)
 #use_db_reconnect = false
 
 # Seconds between retries of a database transaction. (integer value)
@@ -4678,14 +4748,6 @@
 # specify as param1=value1&param2=value2&... (string value)
 #connection_parameters =
 
-#
-# From oslo.db.concurrency
-#
-
-# Enable the experimental use of thread pooling for all DB API calls (boolean
-# value)
-# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
-#use_tpool = false
 
 
 [devices]
@@ -5267,6 +5329,8 @@
 #   (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image").
 #  (list value)
 #api_servers = <None>
+api_servers = http://10.167.4.35:9292
+
 
 #
 # Enable glance operation retries.
@@ -5316,7 +5380,7 @@
 # * Both enable_certificate_validation and default_trusted_certificate_ids
 #   below depend on this option being enabled.
 #  (boolean value)
-#verify_glance_signatures = false
+verify_glance_signatures = False
 
 # DEPRECATED:
 # Enable certificate validation for image signature verification.
@@ -5691,7 +5755,7 @@
 # * You can configure the Compute service to always create a configuration
 #   drive by setting the force_config_drive option to 'True'.
 #  (boolean value)
-#config_drive_cdrom = false
+config_drive_cdrom = false
 
 #
 # Configuration drive inject password
@@ -5704,7 +5768,7 @@
 #   configuration drive usage with Hyper-V, such as force_config_drive.
 # * Currently, the only accepted config_drive_format is 'iso9660'.
 #  (boolean value)
-#config_drive_inject_password = false
+config_drive_inject_password = false
 
 #
 # Volume attach retry count
@@ -5788,147 +5852,6 @@
 #iscsi_initiator_list =
 
 
-[ironic]
-#
-# Configuration options for Ironic driver (Bare Metal).
-# If using the Ironic driver following options must be set:
-# * auth_type
-# * auth_url
-# * project_name
-# * username
-# * password
-# * project_domain_id or project_domain_name
-# * user_domain_id or user_domain_name
-
-#
-# From nova.conf
-#
-
-# DEPRECATED: URL override for the Ironic API endpoint. (uri value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Endpoint lookup uses the service catalog via common keystoneauth1
-# Adapter configuration options. In the current release, api_endpoint will
-# override this behavior, but will be ignored and/or removed in a future
-# release. To achieve the same result, use the endpoint_override option instead.
-#api_endpoint = http://ironic.example.org:6385/
-
-#
-# The number of times to retry when a request conflicts.
-# If set to 0, only try once, no retries.
-#
-# Related options:
-#
-# * api_retry_interval
-#  (integer value)
-# Minimum value: 0
-#api_max_retries = 60
-
-#
-# The number of seconds to wait before retrying the request.
-#
-# Related options:
-#
-# * api_max_retries
-#  (integer value)
-# Minimum value: 0
-#api_retry_interval = 2
-
-# Timeout (seconds) to wait for node serial console state changed. Set to 0 to
-# disable timeout. (integer value)
-# Minimum value: 0
-#serial_console_state_timeout = 10
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Collect per-API call timing information. (boolean value)
-#collect_timing = false
-
-# Log requests to multiple loggers. (boolean value)
-#split_loggers = false
-
-# Authentication type to load (string value)
-# Deprecated group/name - [ironic]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
-
-# Authentication URL (string value)
-#auth_url = <None>
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-#project_id = <None>
-
-# Project name to scope to (string value)
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
-
-# User ID (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [ironic]/user_name
-#username = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User's password (string value)
-#password = <None>
-
-# The default service_type for endpoint URL discovery. (string value)
-#service_type = baremetal
-
-# The default service_name for endpoint URL discovery. (string value)
-#service_name = <None>
-
-# List of interfaces, in order of preference, for endpoint URL. (list value)
-#valid_interfaces = internal,public
-
-# The default region_name for endpoint URL discovery. (string value)
-#region_name = <None>
-
-# Always use this endpoint URL for requests for this client. NOTE: The
-# unversioned endpoint should be specified here; to request a particular API
-# version, use the `version`, `min-version`, and/or `max-version` options.
-# (string value)
-# Deprecated group/name - [ironic]/api_endpoint
-#endpoint_override = <None>
 
 
 [key_manager]
@@ -6072,14 +5995,16 @@
 #
 
 # Complete "public" Identity API endpoint. This endpoint should not be an
-# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
-# clients are redirected to this endpoint to authenticate. Although this
-# endpoint should ideally be unversioned, client support in the wild varies. If
-# you're using a versioned v2 endpoint here, then this should *not* be the same
-# endpoint the service user utilizes for validating tokens, because normal end
-# users may not be able to reach that endpoint. (string value)
+# "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. (string
+# value)
 # Deprecated group/name - [keystone_authtoken]/auth_uri
 #www_authenticate_uri = <None>
+www_authenticate_uri = http://10.167.4.35:5000
 
 # DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
 # be an "admin" endpoint, as it should be accessible by all end users.
@@ -6092,9 +6017,10 @@
 # release. (string value)
 # This option is deprecated for removal since Queens.
 # Its value may be silently ignored in the future.
-# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and
-# will be removed in the S  release.
+# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
+# and will be removed in the S  release.
 #auth_uri = <None>
+auth_uri = http://10.167.4.35:5000
 
 # API version of the admin Identity API endpoint. (string value)
 #auth_version = <None>
@@ -6107,8 +6033,8 @@
 # value)
 #http_connect_timeout = <None>
 
-# How many times are we trying to reconnect when communicating with Identity API
-# Server. (integer value)
+# How many times are we trying to reconnect when communicating with Identity
+# API Server. (integer value)
 #http_request_max_retries = 3
 
 # Request environment key where the Swift cache object is stored. When
@@ -6132,10 +6058,11 @@
 
 # The region in which the identity server can be found. (string value)
 #region_name = <None>
+region_name = RegionOne
 
 # DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P release.
-# (string value)
+# has been deprecated in the Ocata release and will be removed in the P
+# release. (string value)
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
@@ -6145,43 +6072,29 @@
 # undefined, tokens will instead be cached in-process. (list value)
 # Deprecated group/name - [keystone_authtoken]/memcache_servers
 #memcached_servers = <None>
+memcached_servers=10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
 
 # In order to prevent excessive effort spent validating tokens, the middleware
-# caches previously-seen tokens for a configurable duration (in seconds). Set to
-# -1 to disable caching completely. (integer value)
+# caches previously-seen tokens for a configurable duration (in seconds). Set
+# to -1 to disable caching completely. (integer value)
 #token_cache_time = 300
 
 # DEPRECATED: Determines the frequency at which the list of revoked tokens is
 # retrieved from the Identity service (in seconds). A high number of revocation
 # events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in the
-# Ocata release and will be removed in the P release. (integer value)
+# performance. Only valid for PKI tokens. This option has been deprecated in
+# the Ocata release and will be removed in the P release. (integer value)
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
 #revocation_cache_time = 10
-
-# (Optional) If defined, indicate whether token data should be authenticated or
-# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
-# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
-# cache. If the value is not one of these options or empty, auth_token will
-# raise an exception on initialization. (string value)
-# Possible values:
-# None - <No description provided>
-# MAC - <No description provided>
-# ENCRYPT - <No description provided>
-#memcache_security_strategy = None
-
-# (Optional, mandatory if memcache_security_strategy is defined) This string is
-# used for key derivation. (string value)
-#memcache_secret_key = <None>
 
 # (Optional) Number of seconds memcached server is considered dead before it is
 # tried again. (integer value)
 #memcache_pool_dead_retry = 300
 
-# (Optional) Maximum total number of open connections to every memcached server.
-# (integer value)
+# (Optional) Maximum total number of open connections to every memcached
+# server. (integer value)
 #memcache_pool_maxsize = 10
 
 # (Optional) Socket timeout in seconds for communicating with a memcached
@@ -6207,11 +6120,11 @@
 
 # Used to control the use and type of token binding. Can be set to: "disabled"
 # to not check token binding. "permissive" (default) to validate binding
-# information if the bind type is of a form known to the server and ignore it if
-# not. "strict" like "permissive" but if the bind type is unknown the token will
-# be rejected. "required" any form of token binding is needed to be allowed.
-# Finally the name of a binding method that must be present in tokens. (string
-# value)
+# information if the bind type is of a form known to the server and ignore it
+# if not. "strict" like "permissive" but if the bind type is unknown the token
+# will be rejected. "required" any form of token binding is needed to be
+# allowed. Finally the name of a binding method that must be present in tokens.
+# (string value)
 #enforce_token_bind = permissive
 
 # DEPRECATED: If true, the revocation list will be checked for cached tokens.
@@ -6238,23 +6151,129 @@
 # A choice of roles that must be present in a service token. Service tokens are
 # allowed to request that an expired token can be used and so this check should
 # tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present. For
-# backwards compatibility reasons this currently only affects the allow_expired
-# check. (list value)
+# here are applied as an ANY check so any role in this list must be present.
+# For backwards compatibility reasons this currently only affects the
+# allow_expired check. (list value)
 #service_token_roles = service
 
-# For backwards compatibility reasons we must let valid service tokens pass that
-# don't pass the service_token_roles check as valid. Setting this true will
-# become the default in a future release and should be enabled if possible.
-# (boolean value)
+# For backwards compatibility reasons we must let valid service tokens pass
+# that don't pass the service_token_roles check as valid. Setting this true
+# will become the default in a future release and should be enabled if
+# possible. (boolean value)
 #service_token_roles_required = false
 
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
 #auth_type = <None>
+auth_type = password
 
 # Config Section from which to load plugin specific options (string value)
 #auth_section = <None>
+
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internal
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = nova
 
 
 [libvirt]
@@ -6360,7 +6379,7 @@
 # uml - <No description provided>
 # xen - <No description provided>
 # parallels - <No description provided>
-#virt_type = kvm
+virt_type = kvm
 
 #
 # Overrides the default libvirt URI of the chosen virtualization type.
@@ -6378,76 +6397,6 @@
 # * ``virt_type``: Influences what is used as default value here.
 #  (string value)
 #connection_uri =
-
-#
-# Algorithm used to hash the injected password.
-# Note that it must be supported by libc on the compute host
-# _and_ by libc inside *any guest image* that will be booted by this compute
-# host whith requested password injection.
-# In case the specified algorithm is not supported by libc on the compute host,
-# a fallback to DES algorithm will be performed.
-#
-# Related options:
-#
-# * ``inject_password``
-# * ``inject_partition``
-#  (string value)
-# Possible values:
-# SHA-512 - <No description provided>
-# SHA-256 - <No description provided>
-# MD5 - <No description provided>
-#inject_password_algorithm = MD5
-
-#
-# Allow the injection of an admin password for instance only at ``create`` and
-# ``rebuild`` process.
-#
-# There is no agent needed within the image to do this. If *libguestfs* is
-# available on the host, it will be used. Otherwise *nbd* is used. The file
-# system of the image will be mounted and the admin password, which is provided
-# in the REST API call will be injected as password for the root user. If no
-# root user is available, the instance won't be launched and an error is thrown.
-# Be aware that the injection is *not* possible when the instance gets launched
-# from a volume.
-#
-# *Linux* distribution guest only.
-#
-# Possible values:
-#
-# * True: Allows the injection.
-# * False: Disallows the injection. Any via the REST API provided admin password
-#   will be silently ignored.
-#
-# Related options:
-#
-# * ``inject_partition``: That option will decide about the discovery and usage
-#   of the file system. It also can disable the injection at all.
-#  (boolean value)
-#inject_password = false
-
-#
-# Allow the injection of an SSH key at boot time.
-#
-# There is no agent needed within the image to do this. If *libguestfs* is
-# available on the host, it will be used. Otherwise *nbd* is used. The file
-# system of the image will be mounted and the SSH key, which is provided
-# in the REST API call will be injected as SSH key for the root user and
-# appended to the ``authorized_keys`` of that user. The SELinux context will
-# be set if necessary. Be aware that the injection is *not* possible when the
-# instance gets launched from a volume.
-#
-# This config option will enable directly modifying the instance disk and does
-# not affect what cloud-init may do using data from config_drive option or the
-# metadata service.
-#
-# *Linux* distribution guest only.
-#
-# Related options:
-#
-# * ``inject_partition``: That option will decide about the discovery and usage
-#   of the file system. It also can disable the injection at all.
-#  (boolean value)
-#inject_key = false
 
 #
 # Determines the way how the file system is chosen to inject data into it.
@@ -6480,7 +6429,7 @@
 #   single partition image
 #  (integer value)
 # Minimum value: -2
-#inject_partition = -2
+inject_partition = -2
 
 # DEPRECATED:
 # Enable a mouse cursor within a graphical VNC or SPICE sessions.
@@ -6500,6 +6449,56 @@
 # Its value may be silently ignored in the future.
 # Reason: This option is being replaced by the 'pointer_model' option.
 #use_usb_tablet = true
+#
+# Allow the injection of an admin password for instance only at ``create`` and
+# ``rebuild`` process.
+#
+# There is no agent needed within the image to do this. If *libguestfs* is
+# available on the host, it will be used. Otherwise *nbd* is used. The file
+# system of the image will be mounted and the admin password, which is provided
+# in the REST API call will be injected as password for the root user. If no
+# root user is available, the instance won't be launched and an error is thrown.
+# Be aware that the injection is *not* possible when the instance gets launched
+# from a volume.
+#
+# *Linux* distribution guest only.
+#
+# Possible values:
+#
+# * True: Allows the injection.
+# * False: Disallows the injection. Any via the REST API provided admin password
+#   will be silently ignored.
+#
+# Related options:
+#
+# * ``inject_partition``: That option will decide about the discovery and usage
+#   of the file system. It also can disable the injection at all.
+#  (boolean value)
+inject_password = false
+
+#
+# Allow the injection of an SSH key at boot time.
+#
+# There is no agent needed within the image to do this. If *libguestfs* is
+# available on the host, it will be used. Otherwise *nbd* is used. The file
+# system of the image will be mounted and the SSH key, which is provided
+# in the REST API call will be injected as SSH key for the root user and
+# appended to the ``authorized_keys`` of that user. The SELinux context will
+# be set if necessary. Be aware that the injection is *not* possible when the
+# instance gets launched from a volume.
+#
+# This config option will enable directly modifying the instance disk and does
+# not affect what cloud-init may do using data from config_drive option or the
+# metadata service.
+#
+# *Linux* distribution guest only.
+#
+# Related options:
+#
+# * ``inject_partition``: That option will decide about the discovery and usage
+#   of the file system. It also can disable the injection at all.
+#  (boolean value)
+inject_key = true
 
 #
 # The IP address or hostname to be used as the target for live migration
@@ -6523,6 +6522,7 @@
 #   ignored if tunneling is enabled.
 #  (string value)
 #live_migration_inbound_addr = <None>
+live_migration_inbound_addr = 10.167.4.55
 
 # DEPRECATED:
 # Live migration target URI to use.
@@ -6575,7 +6575,6 @@
 # * ``live_migration_uri``: If ``live_migration_uri`` value is not None, the
 #   scheme used for live migration is taken from ``live_migration_uri`` instead.
 #  (string value)
-#live_migration_scheme = <None>
 
 #
 # Enable tunnelled migration.
@@ -6785,7 +6784,7 @@
 # host-passthrough - <No description provided>
 # custom - <No description provided>
 # none - <No description provided>
-#cpu_mode = <None>
+cpu_mode = host-passthrough
 
 #
 # Set the name of the libvirt CPU model the instance should use.
@@ -6930,7 +6929,7 @@
 #   speeding up guest installations, but you should switch to another caching
 #   mode in production environments.
 #  (list value)
-#disk_cachemodes =
+disk_cachemodes = "file=directsync,block=none"
 
 #
 # The path to an RNG (Random Number Generator) device that will be used as
@@ -7061,7 +7060,6 @@
 #
 # * virt.use_cow_images
 # * images_volume_group
-# * [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup
 #  (string value)
 # Possible values:
 # raw - <No description provided>
@@ -7073,15 +7071,6 @@
 # default - <No description provided>
 #images_type = default
 
-#
-# LVM Volume Group that is used for VM images, when you specify images_type=lvm
-#
-# Related options:
-#
-# * images_type
-#  (string value)
-#images_volume_group = <None>
-
 # DEPRECATED:
 # Create sparse logical volumes (with virtualsize) if this flag is set to True.
 #  (boolean value)
@@ -7094,12 +7083,6 @@
 # use Cinder thin-provisioned volumes.
 #sparse_logical_volumes = false
 
-# The RADOS pool in which rbd volumes are stored (string value)
-#images_rbd_pool = rbd
-
-# Path to the ceph configuration file to use (string value)
-#images_rbd_ceph_conf =
-
 #
 # Discard option for nova managed disks.
 #
@@ -7138,45 +7121,6 @@
 # Reason: The image cache no longer periodically calculates checksums of stored
 # images. Data integrity can be checked at the block or filesystem level.
 #checksum_interval_seconds = 3600
-
-#
-# Method used to wipe ephemeral disks when they are deleted. Only takes effect
-# if LVM is set as backing storage.
-#
-# Possible values:
-#
-# * none - do not wipe deleted volumes
-# * zero - overwrite volumes with zeroes
-# * shred - overwrite volume repeatedly
-#
-# Related options:
-#
-# * images_type - must be set to ``lvm``
-# * volume_clear_size
-#  (string value)
-# Possible values:
-# none - <No description provided>
-# zero - <No description provided>
-# shred - <No description provided>
-#volume_clear = zero
-
-#
-# Size of area in MiB, counting from the beginning of the allocated volume,
-# that will be cleared using method set in ``volume_clear`` option.
-#
-# Possible values:
-#
-# * 0 - clear whole volume
-# * >0 - clear specified amount of MiB
-#
-# Related options:
-#
-# * images_type - must be set to ``lvm``
-# * volume_clear - must be set and the value must be different than ``none``
-#   for this option to have any impact
-#  (integer value)
-# Minimum value: 0
-#volume_clear_size = 0
 
 #
 # Enable snapshot compression for ``qcow2`` images.
@@ -7248,19 +7192,6 @@
 # availability and fault tolerance.
 #  (boolean value)
 #iser_use_multipath = false
-
-#
-# The RADOS client name for accessing rbd(RADOS Block Devices) volumes.
-#
-# Libvirt will refer to this user when connecting and authenticating with
-# the Ceph RBD server.
-#  (string value)
-#rbd_user = <None>
-
-#
-# The libvirt UUID of the secret for the rbd_user volumes.
-#  (string value)
-#rbd_secret_uuid = <None>
 
 #
 # Directory where the NFS volume is mounted on the compute node.
@@ -7723,7 +7654,7 @@
 # extensions with no wait.
 #  (integer value)
 # Minimum value: 0
-#extension_sync_interval = 600
+extension_sync_interval = 600
 
 #
 # List of physnets present on this host.
@@ -7798,7 +7729,7 @@
 #insecure = false
 
 # Timeout value for http requests (integer value)
-#timeout = <None>
+timeout = 300
 
 # Collect per-API call timing information. (boolean value)
 #collect_timing = false
@@ -7808,13 +7739,13 @@
 
 # Authentication type to load (string value)
 # Deprecated group/name - [neutron]/auth_plugin
-#auth_type = <None>
+auth_type = v3password
 
 # Config Section from which to load plugin specific options (string value)
 #auth_section = <None>
 
 # Authentication URL (string value)
-#auth_url = <None>
+auth_url = http://10.167.4.35:35357/v3
 
 # Scope for system operations (string value)
 #system_scope = <None>
@@ -7829,13 +7760,13 @@
 #project_id = <None>
 
 # Project name to scope to (string value)
-#project_name = <None>
+project_name = service
 
 # Domain ID containing project (string value)
 #project_domain_id = <None>
 
 # Domain name containing project (string value)
-#project_domain_name = <None>
+project_domain_name = Default
 
 # Trust ID (string value)
 #trust_id = <None>
@@ -7855,16 +7786,16 @@
 
 # Username (string value)
 # Deprecated group/name - [neutron]/user_name
-#username = <None>
+username = neutron
 
 # User's domain id (string value)
 #user_domain_id = <None>
 
 # User's domain name (string value)
-#user_domain_name = <None>
+user_domain_name = Default
 
 # User's password (string value)
-#password = <None>
+password = opnfv_secret
 
 # Tenant ID (string value)
 #tenant_id = <None>
@@ -7882,7 +7813,7 @@
 #valid_interfaces = internal,public
 
 # The default region_name for endpoint URL discovery. (string value)
-#region_name = <None>
+region_name= RegionOne
 
 # Always use this endpoint URL for requests for this client. NOTE: The
 # unversioned endpoint should be specified here; to request a particular API
@@ -7925,6 +7856,7 @@
 # vm_state - <No description provided>
 # vm_and_task_state - <No description provided>
 #notify_on_state_change = <None>
+notify_on_state_change = vm_and_task_state
 
 # Default notification level for outgoing notifications. (string value)
 # Possible values:
@@ -8027,296 +7959,16 @@
 #lock_path = <None>
 
 
-[oslo_messaging_amqp]
-
+[oslo_messaging_notifications]
 #
 # From oslo.messaging
 #
 
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional) (string
-# value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-robin
-# fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating point
-# value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-# Enable asynchronous consumer commits (boolean value)
-#enable_auto_commit = false
-
-# The maximum number of records returned in a poll call (integer value)
-#max_poll_records = 500
-
-# Protocol used to communicate with brokers (string value)
-# Possible values:
-# PLAINTEXT - <No description provided>
-# SASL_PLAINTEXT - <No description provided>
-# SSL - <No description provided>
-# SASL_SSL - <No description provided>
-#security_protocol = PLAINTEXT
-
-# Mechanism when security protocol is SASL (string value)
-#sasl_mechanism = PLAIN
-
-# CA certificate PEM file used to verify the server certificate (string value)
-#ssl_cafile =
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are messaging,
-# messagingv2, routing, log, test, noop (multi valued)
+# The Drivers(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
 # Deprecated group/name - [DEFAULT]/notification_driver
 #driver =
+driver = messagingv2
 
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
@@ -8332,10 +7984,7 @@
 # to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
 # (integer value)
 #retry = -1
-
-
 [oslo_messaging_rabbit]
-
 #
 # From oslo.messaging
 #
@@ -8352,24 +8001,6 @@
 # Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl
 #ssl = false
 
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
-
 # How long to wait before reconnecting in response to an AMQP consumer cancel
 # notification. (floating point value)
 #kombu_reconnect_delay = 1.0
@@ -8384,8 +8015,8 @@
 #kombu_missing_consumer_retry_timeout = 60
 
 # Determines how the next RabbitMQ node is chosen in case the one we are
-# currently connected to becomes unavailable. Takes effect only if more than one
-# RabbitMQ node is provided in config. (string value)
+# currently connected to becomes unavailable. Takes effect only if more than
+# one RabbitMQ node is provided in config. (string value)
 # Possible values:
 # round-robin - <No description provided>
 # shuffle - <No description provided>
@@ -8398,7 +8029,8 @@
 # Reason: Replaced by [DEFAULT]/transport_url
 #rabbit_host = localhost
 
-# DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
+# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
+# value)
 # Minimum value: 0
 # Maximum value: 65535
 # This option is deprecated for removal.
@@ -8456,20 +8088,20 @@
 
 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
 # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
-# is no longer controlled by the x-ha-policy argument when declaring a queue. If
-# you just want to make sure that all queues (except those with auto-generated
-# names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
-# '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
+# is no longer controlled by the x-ha-policy argument when declaring a queue.
+# If you just want to make sure that all queues (except those with auto-
+# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
+# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
 #rabbit_ha_queues = false
 
 # Positive integer representing duration in seconds for queue TTL (x-expires).
-# Queues which are unused for the duration of the TTL are automatically deleted.
-# The parameter affects only reply and fanout queues. (integer value)
+# Queues which are unused for the duration of the TTL are automatically
+# deleted. The parameter affects only reply and fanout queues. (integer value)
 # Minimum value: 1
 #rabbit_transient_queues_ttl = 1800
 
-# Specifies the number of messages to prefetch. Setting to zero allows unlimited
-# messages. (integer value)
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
 #rabbit_qos_prefetch_count = 0
 
 # Number of seconds after which the Rabbit broker is considered down if
@@ -8477,163 +8109,13 @@
 # value)
 #heartbeat_timeout_threshold = 60
 
-# How often times during the heartbeat_timeout_threshold we check the heartbeat.
-# (integer value)
+# How often times during the heartbeat_timeout_threshold we check the
+# heartbeat. (integer value)
 #heartbeat_rate = 2
 
 
-[oslo_messaging_zmq]
-
-#
-# From oslo.messaging
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target (
-# < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
 
 [oslo_middleware]
-
 #
 # From oslo.middleware
 #
@@ -8644,8 +8126,8 @@
 #max_request_body_size = 114688
 
 # DEPRECATED: The HTTP Header that will be used to determine what the original
-# request protocol scheme was, even if it was hidden by a SSL termination proxy.
-# (string value)
+# request protocol scheme was, even if it was hidden by a SSL termination
+# proxy. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #secure_proxy_ssl_header = X-Forwarded-Proto
@@ -8653,53 +8135,11 @@
 # Whether the application is behind a proxy or not. This determines if the
 # middleware should parse the headers or not. (boolean value)
 #enable_proxy_headers_parsing = false
+enable_proxy_headers_parsing = True
+
 
 
 [oslo_policy]
-
-#
-# From oslo.policy
-#
-
-# This option controls whether or not to enforce scope when evaluating policies.
-# If ``True``, the scope of the token used in the request is compared to the
-# ``scope_types`` of the policy being enforced. If the scopes do not match, an
-# ``InvalidScope`` exception will be raised. If ``False``, a message will be
-# logged informing operators that policies are being invoked with mismatching
-# scope. (boolean value)
-#enforce_scope = false
-
-# The file that defines policies. (string value)
-#policy_file = policy.json
-
-# Default rule. Enforced when a requested rule is not found. (string value)
-#policy_default_rule = default
-
-# Directories where policy configuration files are stored. They can be relative
-# to any directory in the search path defined by the config_dir option, or
-# absolute paths. The file defined by policy_file must exist for these
-# directories to be searched.  Missing or empty directories are ignored. (multi
-# valued)
-#policy_dirs = policy.d
-
-# Content Type to send and receive data for REST based policy check (string
-# value)
-# Possible values:
-# application/x-www-form-urlencoded - <No description provided>
-# application/json - <No description provided>
-#remote_content_type = application/x-www-form-urlencoded
-
-# server identity verification for REST based policy check (boolean value)
-#remote_ssl_verify_server_crt = false
-
-# Absolute path to ca cert file for REST based policy check (string value)
-#remote_ssl_ca_crt_file = <None>
-
-# Absolute path to client cert for REST based policy check (string value)
-#remote_ssl_client_crt_file = <None>
-
-# Absolute path client key file REST based policy check (string value)
-#remote_ssl_client_key_file = <None>
 
 
 [pci]
@@ -8831,7 +8271,6 @@
 
 
 [placement]
-os_region_name = openstack
 
 #
 # From nova.conf
@@ -8893,13 +8332,13 @@
 
 # Authentication type to load (string value)
 # Deprecated group/name - [placement]/auth_plugin
-#auth_type = <None>
+auth_type = password
 
 # Config Section from which to load plugin specific options (string value)
 #auth_section = <None>
 
 # Authentication URL (string value)
-#auth_url = <None>
+auth_url = http://10.167.4.35:35357/v3
 
 # Scope for system operations (string value)
 #system_scope = <None>
@@ -8914,10 +8353,10 @@
 #project_id = <None>
 
 # Project name to scope to (string value)
-#project_name = <None>
+project_name = service
 
 # Domain ID containing project (string value)
-#project_domain_id = <None>
+project_domain_id = default
 
 # Domain name containing project (string value)
 #project_domain_name = <None>
@@ -8940,16 +8379,16 @@
 
 # Username (string value)
 # Deprecated group/name - [placement]/user_name
-#username = <None>
+username = nova
 
 # User's domain id (string value)
-#user_domain_id = <None>
+user_domain_id = default
 
 # User's domain name (string value)
 #user_domain_name = <None>
 
 # User's password (string value)
-#password = <None>
+password = opnfv_secret
 
 # Tenant ID (string value)
 #tenant_id = <None>
@@ -8964,10 +8403,10 @@
 #service_name = <None>
 
 # List of interfaces, in order of preference, for endpoint URL. (list value)
-#valid_interfaces = internal,public
+valid_interfaces = internal
 
 # The default region_name for endpoint URL discovery. (string value)
-#region_name = <None>
+region_name = RegionOne
 
 # Always use this endpoint URL for requests for this client. NOTE: The
 # unversioned endpoint should be specified here; to request a particular API
@@ -9891,62 +9330,41 @@
 # middleware.
 #  (boolean value)
 #send_service_user_token = false
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+send_service_user_token = True
+# Name of nova region to use. Useful if keystone manages more than one region.
 # (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Collect per-API call timing information. (boolean value)
-#collect_timing = false
-
-# Log requests to multiple loggers. (boolean value)
-#split_loggers = false
-
-# Authentication type to load (string value)
-# Deprecated group/name - [service_user]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internal
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
 
 # Authentication URL (string value)
 #auth_url = <None>
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-#project_id = <None>
-
-# Project name to scope to (string value)
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
+auth_url = http://10.167.4.35:5000
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
 
 # Optional domain ID to use with v3 and v2 parameters. It will be used for both
 # the user and project domain in v3 and ignored in v2 authentication. (string
@@ -9958,27 +9376,65 @@
 # (string value)
 #default_domain_name = <None>
 
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
 # User ID (string value)
 #user_id = <None>
 
 # Username (string value)
-# Deprecated group/name - [service_user]/user_name
+# Deprecated group/name - [neutron]/user_name
 #username = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User's password (string value)
-#password = <None>
-
-# Tenant ID (string value)
-#tenant_id = <None>
-
-# Tenant Name (string value)
-#tenant_name = <None>
+username = nova
 
 
 [spice]
@@ -10049,6 +9505,7 @@
 #   and port where the ``nova-spicehtml5proxy`` service is listening.
 #  (uri value)
 #html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html
+html5proxy_base_url = https://172.30.10.101:6080/spice_auto.html
 
 #
 # The  address where the SPICE server running on the instances should listen.
@@ -10317,15 +9774,6 @@
 # root token for vault (string value)
 #root_token_id = <None>
 
-# AppRole role_id for authentication with vault (string value)
-#approle_role_id = <None>
-
-# AppRole secret_id for authentication with vault (string value)
-#approle_secret_id = <None>
-
-# Mountpoint of KV store in Vault to use, for example: secret (string value)
-#kv_mountpoint = secret
-
 # Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200"
 # (string value)
 #vault_url = http://127.0.0.1:8200
@@ -10433,287 +9881,6 @@
 
 # Tenant Name (string value)
 #tenant_name = <None>
-
-
-[vmware]
-#
-# Related options:
-# Following options must be set in order to launch VMware-based
-# virtual machines.
-#
-# * compute_driver: Must use vmwareapi.VMwareVCDriver.
-# * vmware.host_username
-# * vmware.host_password
-# * vmware.cluster_name
-
-#
-# From nova.conf
-#
-
-#
-# This option specifies the physical ethernet adapter name for VLAN
-# networking.
-#
-# Set the vlan_interface configuration option to match the ESX host
-# interface that handles VLAN-tagged VM traffic.
-#
-# Possible values:
-#
-# * Any valid string representing VLAN interface name
-#  (string value)
-#vlan_interface = vmnic0
-
-#
-# This option should be configured only when using the NSX-MH Neutron
-# plugin. This is the name of the integration bridge on the ESXi server
-# or host. This should not be set for any other Neutron plugin. Hence
-# the default value is not set.
-#
-# Possible values:
-#
-# * Any valid string representing the name of the integration bridge
-#  (string value)
-#integration_bridge = <None>
-
-#
-# Set this value if affected by an increased network latency causing
-# repeated characters when typing in a remote console.
-#  (integer value)
-# Minimum value: 0
-#console_delay_seconds = <None>
-
-#
-# Identifies the remote system where the serial port traffic will
-# be sent.
-#
-# This option adds a virtual serial port which sends console output to
-# a configurable service URI. At the service URI address there will be
-# virtual serial port concentrator that will collect console logs.
-# If this is not set, no serial ports will be added to the created VMs.
-#
-# Possible values:
-#
-# * Any valid URI
-#  (string value)
-#serial_port_service_uri = <None>
-
-#
-# Identifies a proxy service that provides network access to the
-# serial_port_service_uri.
-#
-# Possible values:
-#
-# * Any valid URI (The scheme is 'telnet' or 'telnets'.)
-#
-# Related options:
-# This option is ignored if serial_port_service_uri is not specified.
-# * serial_port_service_uri
-#  (uri value)
-#serial_port_proxy_uri = <None>
-
-#
-# Specifies the directory where the Virtual Serial Port Concentrator is
-# storing console log files. It should match the 'serial_log_dir' config
-# value of VSPC.
-#  (string value)
-#serial_log_dir = /opt/vmware/vspc
-
-#
-# Hostname or IP address for connection to VMware vCenter host. (host address
-# value)
-#host_ip = <None>
-
-# Port for connection to VMware vCenter host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#host_port = 443
-
-# Username for connection to VMware vCenter host. (string value)
-#host_username = <None>
-
-# Password for connection to VMware vCenter host. (string value)
-#host_password = <None>
-
-#
-# Specifies the CA bundle file to be used in verifying the vCenter
-# server certificate.
-#  (string value)
-#ca_file = <None>
-
-#
-# If true, the vCenter server certificate is not verified. If false,
-# then the default CA truststore is used for verification.
-#
-# Related options:
-# * ca_file: This option is ignored if "ca_file" is set.
-#  (boolean value)
-#insecure = false
-
-# Name of a VMware Cluster ComputeResource. (string value)
-#cluster_name = <None>
-
-#
-# Regular expression pattern to match the name of datastore.
-#
-# The datastore_regex setting specifies the datastores to use with
-# Compute. For example, datastore_regex="nas.*" selects all the data
-# stores that have a name starting with "nas".
-#
-# NOTE: If no regex is given, it just picks the datastore with the
-# most freespace.
-#
-# Possible values:
-#
-# * Any matching regular expression to a datastore must be given
-#  (string value)
-#datastore_regex = <None>
-
-#
-# Time interval in seconds to poll remote tasks invoked on
-# VMware VC server.
-#  (floating point value)
-#task_poll_interval = 0.5
-
-#
-# Number of times VMware vCenter server API must be retried on connection
-# failures, e.g. socket error, etc.
-#  (integer value)
-# Minimum value: 0
-#api_retry_count = 10
-
-#
-# This option specifies VNC starting port.
-#
-# Every VM created by ESX host has an option of enabling VNC client
-# for remote connection. Above option 'vnc_port' helps you to set
-# default starting port for the VNC client.
-#
-# Possible values:
-#
-# * Any valid port number within 5900 -(5900 + vnc_port_total)
-#
-# Related options:
-# Below options should be set to enable VNC client.
-# * vnc.enabled = True
-# * vnc_port_total
-#  (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#vnc_port = 5900
-
-#
-# Total number of VNC ports.
-#  (integer value)
-# Minimum value: 0
-#vnc_port_total = 10000
-
-#
-# Keymap for VNC.
-#
-# The keyboard mapping (keymap) determines which keyboard layout a VNC
-# session should use by default.
-#
-# Possible values:
-#
-# * A keyboard layout which is supported by the underlying hypervisor on
-#   this node. This is usually an 'IETF language tag' (for example
-#   'en-us').
-#  (string value)
-#vnc_keymap = en-us
-
-#
-# This option enables/disables the use of linked clone.
-#
-# The ESX hypervisor requires a copy of the VMDK file in order to boot
-# up a virtual machine. The compute driver must download the VMDK via
-# HTTP from the OpenStack Image service to a datastore that is visible
-# to the hypervisor and cache it. Subsequent virtual machines that need
-# the VMDK use the cached version and don't have to copy the file again
-# from the OpenStack Image service.
-#
-# If set to false, even with a cached VMDK, there is still a copy
-# operation from the cache location to the hypervisor file directory
-# in the shared datastore. If set to true, the above copy operation
-# is avoided as it creates copy of the virtual machine that shares
-# virtual disks with its parent VM.
-#  (boolean value)
-#use_linked_clone = true
-
-#
-# This option sets the http connection pool size
-#
-# The connection pool size is the maximum number of connections from nova to
-# vSphere.  It should only be increased if there are warnings indicating that
-# the connection pool is full, otherwise, the default should suffice.
-#  (integer value)
-# Minimum value: 10
-#connection_pool_size = 10
-
-#
-# This option enables or disables storage policy based placement
-# of instances.
-#
-# Related options:
-#
-# * pbm_default_policy
-#  (boolean value)
-#pbm_enabled = false
-
-#
-# This option specifies the PBM service WSDL file location URL.
-#
-# Setting this will disable storage policy based placement
-# of instances.
-#
-# Possible values:
-#
-# * Any valid file path
-#   e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl
-#  (string value)
-#pbm_wsdl_location = <None>
-
-#
-# This option specifies the default policy to be used.
-#
-# If pbm_enabled is set and there is no defined storage policy for the
-# specific request, then this policy will be used.
-#
-# Possible values:
-#
-# * Any valid storage policy such as VSAN default storage policy
-#
-# Related options:
-#
-# * pbm_enabled
-#  (string value)
-#pbm_default_policy = <None>
-
-#
-# This option specifies the limit on the maximum number of objects to
-# return in a single result.
-#
-# A positive value will cause the operation to suspend the retrieval
-# when the count of objects reaches the specified limit. The server may
-# still limit the count to something less than the configured value.
-# Any remaining objects may be retrieved with additional requests.
-#  (integer value)
-# Minimum value: 0
-#maximum_objects = 100
-
-#
-# This option adds a prefix to the folder where cached images are stored
-#
-# This is not the full path - just a folder prefix. This should only be
-# used when a datastore cache is shared between compute nodes.
-#
-# Note: This should only be used when the compute nodes are running on same
-# host or they have a shared file system.
-#
-# Possible values:
-#
-# * Any string representing the cache prefix to the folder
-#  (string value)
-#cache_prefix = <None>
 
 
 [vnc]
@@ -10757,7 +9924,7 @@
 # keyboards. You should instead use a VNC client that supports Extended Key
 # Event
 # messages, such as noVNC 1.0.0. Refer to bug #1682020 for more information.
-#keymap = <None>
+keymap = en-us
 
 #
 # The IP address or hostname on which an instance should listen to for
@@ -10766,6 +9933,7 @@
 # Deprecated group/name - [DEFAULT]/vncserver_listen
 # Deprecated group/name - [vnc]/vncserver_listen
 #server_listen = 127.0.0.1
+server_listen = 10.167.4.55
 
 #
 # Private, internal IP address or hostname of VNC console proxy.
@@ -10778,7 +9946,7 @@
 #  (host address value)
 # Deprecated group/name - [DEFAULT]/vncserver_proxyclient_address
 # Deprecated group/name - [vnc]/vncserver_proxyclient_address
-#server_proxyclient_address = 127.0.0.1
+server_proxyclient_address = 10.167.4.55
 
 #
 # Public address of noVNC VNC console proxy.
@@ -10800,6 +9968,7 @@
 # * novncproxy_port
 #  (uri value)
 #novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html
+novncproxy_base_url = https://172.30.10.101:6080/vnc_auto.html
 
 #
 # IP address or hostname that the XVP VNC console proxy should bind to.
@@ -10896,6 +10065,7 @@
 # Minimum value: 0
 # Maximum value: 65535
 #novncproxy_port = 6080
+novncproxy_port = 6080
 
 #
 # The authentication schemes to use with the compute node.
@@ -11009,7 +10179,7 @@
 # * False: Live snapshots are always used when snapshotting (as long as
 #   there is a new enough libvirt and the backend storage supports it)
 #  (boolean value)
-#disable_libvirt_livesnapshot = false
+disable_libvirt_livesnapshot = true
 
 #
 # Enable handling of events emitted from compute drivers.
@@ -11135,60 +10305,6 @@
 # See related bug https://bugs.launchpad.net/nova/+bug/1796920 for more details.
 #  (boolean value)
 #report_ironic_standard_resource_class_inventory = true
-
-#
-# Enable live migration of instances with NUMA topologies.
-#
-# Live migration of instances with NUMA topologies is disabled by default
-# when using the libvirt driver. This includes live migration of instances with
-# CPU pinning or hugepages. CPU pinning and huge page information for such
-# instances is not currently re-calculated, as noted in `bug #1289064`_.  This
-# means that if instances were already present on the destination host, the
-# migrated instance could be placed on the same dedicated cores as these
-# instances or use hugepages allocated for another instance. Alternately, if the
-# host platforms were not homogeneous, the instance could be assigned to
-# non-existent cores or be inadvertently split across host NUMA nodes.
-#
-# Despite these known issues, there may be cases where live migration is
-# necessary. By enabling this option, operators that are aware of the issues and
-# are willing to manually work around them can enable live migration support for
-# these instances.
-#
-# Related options:
-#
-# * ``compute_driver``: Only the libvirt driver is affected.
-#
-# .. _bug #1289064: https://bugs.launchpad.net/nova/+bug/1289064
-#  (boolean value)
-#enable_numa_live_migration = false
-
-#
-# Ensure the instance directory is removed during clean up when using rbd.
-#
-# When enabled this workaround will ensure that the instance directory is always
-# removed during cleanup on hosts using ``[libvirt]/images_type=rbd``. This
-# avoids the following bugs with evacuation and revert resize clean up that lead
-# to the instance directory remaining on the host:
-#
-# https://bugs.launchpad.net/nova/+bug/1414895
-#
-# https://bugs.launchpad.net/nova/+bug/1761062
-#
-# Both of these bugs can then result in ``DestinationDiskExists`` errors being
-# raised if the instances ever attempt to return to the host.
-#
-# .. warning:: Operators will need to ensure that the instance directory itself,
-#   specified by ``[DEFAULT]/instances_path``, is not shared between computes
-#   before enabling this workaround otherwise the console.log, kernels, ramdisks
-#   and any additional files being used by the running instance will be lost.
-#
-# Related options:
-#
-# * ``compute_driver`` (libvirt)
-# * ``[libvirt]/images_type`` (rbd)
-# * ``instances_path``
-#  (boolean value)
-#ensure_libvirt_rbd_instance_dir_cleanup = false
 
 
 [wsgi]

2019-04-30 22:31:23,505 [salt.state       :1951][INFO    ][21813] Completed state [/etc/nova/nova.conf] at time 22:31:23.504964 duration_in_ms=506.598
2019-04-30 22:31:23,505 [salt.state       :1780][INFO    ][21813] Running state [/etc/default/nova-compute] at time 22:31:23.505408
2019-04-30 22:31:23,505 [salt.state       :1813][INFO    ][21813] Executing state file.managed for [/etc/default/nova-compute]
2019-04-30 22:31:23,518 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'nova/files/default'
2019-04-30 22:31:23,524 [salt.state       :300 ][INFO    ][21813] File changed:
New file
2019-04-30 22:31:23,525 [salt.state       :1951][INFO    ][21813] Completed state [/etc/default/nova-compute] at time 22:31:23.525053 duration_in_ms=19.645
2019-04-30 22:31:23,530 [salt.state       :1780][INFO    ][21813] Running state [virsh net-destroy default; virsh net-undefine default] at time 22:31:23.530327
2019-04-30 22:31:23,530 [salt.state       :1813][INFO    ][21813] Executing state cmd.run for [virsh net-destroy default; virsh net-undefine default]
2019-04-30 22:31:23,530 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command 'virsh net-list --all --name |grep -w default' in directory '/root'
2019-04-30 22:31:23,552 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command 'virsh net-destroy default; virsh net-undefine default' in directory '/root'
2019-04-30 22:31:23,776 [salt.state       :300 ][INFO    ][21813] {'pid': 2836, 'retcode': 0, 'stderr': '', 'stdout': 'Network default destroyed\n\nNetwork default has been undefined'}
2019-04-30 22:31:23,776 [salt.state       :1951][INFO    ][21813] Completed state [virsh net-destroy default; virsh net-undefine default] at time 22:31:23.776952 duration_in_ms=246.624
2019-04-30 22:31:23,778 [salt.state       :1780][INFO    ][21813] Running state [/etc/default/libvirtd] at time 22:31:23.778383
2019-04-30 22:31:23,778 [salt.state       :1813][INFO    ][21813] Executing state file.managed for [/etc/default/libvirtd]
2019-04-30 22:31:23,794 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'nova/files/rocky/libvirt.Debian'
2019-04-30 22:31:23,802 [salt.state       :300 ][INFO    ][21813] File changed:
--- 
+++ 
@@ -1,17 +1,13 @@
-# Defaults for libvirtd initscript (/etc/init.d/libvirtd)
+# Defaults for libvirt-bin initscript (/etc/init.d/libvirt-bin)
 # This is a POSIX shell fragment
 
 # Start libvirtd to handle qemu/kvm:
 start_libvirtd="yes"
 
 # options passed to libvirtd, add "-l" to listen on tcp
-#libvirtd_opts=""
-
+# Don't use "-d" option with systemd
+libvirtd_opts="-l"
+LIBVIRTD_ARGS="--listen"
 # pass in location of kerberos keytab
 #export KRB5_KTNAME=/etc/libvirt/libvirt.keytab
 
-# Whether to mount a systemd like cgroup layout (only
-# useful when not running systemd)
-#mount_cgroups=yes
-# Which cgroups to mount
-#cgroups="memory devices"

2019-04-30 22:31:23,802 [salt.state       :1951][INFO    ][21813] Completed state [/etc/default/libvirtd] at time 22:31:23.802628 duration_in_ms=24.244
2019-04-30 22:31:23,802 [salt.state       :1780][INFO    ][21813] Running state [service.systemctl_reload] at time 22:31:23.802914
2019-04-30 22:31:23,803 [salt.state       :1813][INFO    ][21813] Executing state module.wait for [service.systemctl_reload]
2019-04-30 22:31:23,803 [salt.state       :300 ][INFO    ][21813] No changes made for service.systemctl_reload
2019-04-30 22:31:23,803 [salt.state       :1951][INFO    ][21813] Completed state [service.systemctl_reload] at time 22:31:23.803353 duration_in_ms=0.438
2019-04-30 22:31:23,803 [salt.state       :1780][INFO    ][21813] Running state [service.systemctl_reload] at time 22:31:23.803464
2019-04-30 22:31:23,803 [salt.state       :1813][INFO    ][21813] Executing state module.mod_watch for [service.systemctl_reload]
2019-04-30 22:31:23,803 [salt.utils.decorators:613 ][WARNING ][21813] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 22:31:23,804 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', '--system', 'daemon-reload'] in directory '/root'
2019-04-30 22:31:23,866 [salt.state       :300 ][INFO    ][21813] {'ret': True}
2019-04-30 22:31:23,866 [salt.state       :1951][INFO    ][21813] Completed state [service.systemctl_reload] at time 22:31:23.866343 duration_in_ms=62.878
2019-04-30 22:31:23,866 [salt.state       :1780][INFO    ][21813] Running state [/etc/libvirt/libvirtd.conf] at time 22:31:23.866799
2019-04-30 22:31:23,867 [salt.state       :1813][INFO    ][21813] Executing state file.managed for [/etc/libvirt/libvirtd.conf]
2019-04-30 22:31:23,881 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'nova/files/rocky/libvirtd.conf.Debian'
2019-04-30 22:31:23,985 [salt.state       :300 ][INFO    ][21813] File changed:
--- 
+++ 
@@ -1,6 +1,7 @@
+
 # Master libvirt daemon configuration file
 #
-# For further information consult https://libvirt.org/format.html
+# For further information consult http://libvirt.org/format.html
 #
 # NOTE: the tests/daemon-conf regression test script requires
 # that each "PARAMETER = VALUE" line in this file have the parameter
@@ -18,8 +19,9 @@
 # It is necessary to setup a CA and issue server certificates before
 # using this capability.
 #
+
 # This is enabled by default, uncomment this to disable it
-#listen_tls = 0
+listen_tls = 0
 
 # Listen for unencrypted TCP connections on the public TCP/IP port.
 # NB, must pass the --listen flag to the libvirtd process for this to
@@ -33,6 +35,7 @@
 #listen_tcp = 1
 
 
+listen_tcp = 1
 
 # Override the port for accepting secure TLS connections
 # This can be a port number, or service name
@@ -48,10 +51,6 @@
 # Override the default configuration which binds to all network
 # interfaces. This can be a numeric IPv4/6 address, or hostname
 #
-# If the libvirtd service is started in parallel with network
-# startup (e.g. with systemd), binding to addresses other than
-# the wildcards (0.0.0.0/::) might not be available yet.
-#
 #listen_addr = "192.168.0.1"
 
 
@@ -67,7 +66,7 @@
 # unique on the immediate broadcast network.
 #
 # The default is "Virtualization Host HOSTNAME", where HOSTNAME
-# is substituted for the short hostname of the machine (without domain)
+# is subsituted for the short hostname of the machine (without domain)
 #
 #mdns_name = "Virtualization Host Joe Demo"
 
@@ -82,13 +81,14 @@
 # without becoming root.
 #
 # This is restricted to 'root' by default.
-unix_sock_group = "libvirt"
+unix_sock_group = "libvirtd"
 
 # Set the UNIX socket permissions for the R/O socket. This is used
 # for monitoring VM status only
 #
-# Default allows any user. If setting group ownership, you may want to
-# restrict this too.
+# Default allows any user. If setting group ownership may want to
+# restrict this to:
+#unix_sock_ro_perms = "0777"
 unix_sock_ro_perms = "0777"
 
 # Set the UNIX socket permissions for the R/W socket. This is used
@@ -98,19 +98,11 @@
 # the default will change to allow everyone (eg, 0777)
 #
 # If not using PolicyKit and setting group ownership for access
-# control, then you may want to relax this too.
+# control then you may want to relax this to:
 unix_sock_rw_perms = "0770"
-
-# Set the UNIX socket permissions for the admin interface socket.
-#
-# Default allows only owner (root), do not change it unless you are
-# sure to whom you are exposing the access to.
-#unix_sock_admin_perms = "0700"
 
 # Set the name of the directory in which sockets will be found/created.
 #unix_sock_dir = "/var/run/libvirt"
-
-
 
 #################################################################
 #
@@ -125,7 +117,7 @@
 #  - sasl: use SASL infrastructure. The actual auth scheme is then
 #          controlled from /etc/sasl2/libvirt.conf. For the TCP
 #          socket only GSSAPI & DIGEST-MD5 mechanisms will be used.
-#          For non-TCP or TLS sockets, any scheme is allowed.
+#          For non-TCP or TLS sockets,  any scheme is allowed.
 #
 #  - polkit: use PolicyKit to authenticate. This is only suitable
 #            for use on the UNIX sockets. The default policy will
@@ -156,6 +148,8 @@
 # use, always enable SASL and use the GSSAPI or DIGEST-MD5
 # mechanism in /etc/sasl2/libvirt.conf
 #auth_tcp = "sasl"
+#auth_tcp = "none"
+auth_tcp = "none"
 
 # Change the authentication scheme for TLS sockets.
 #
@@ -167,15 +161,6 @@
 #auth_tls = "none"
 
 
-# Change the API access control scheme
-#
-# By default an authenticated user is allowed access
-# to all APIs. Access drivers can place restrictions
-# on this. By default the 'nop' driver is enabled,
-# meaning no access control checks are done once a
-# client has authenticated with libvirtd
-#
-#access_drivers = [ "polkit" ]
 
 #################################################################
 #
@@ -228,7 +213,7 @@
 #tls_no_verify_certificate = 1
 
 
-# A whitelist of allowed x509 Distinguished Names
+# A whitelist of allowed x509  Distinguished Names
 # This list may contain wildcards such as
 #
 #    "C=GB,ST=London,L=London,O=Red Hat,CN=*"
@@ -241,8 +226,7 @@
 # By default, no DN's are checked
 #tls_allowed_dn_list = ["DN1", "DN2"]
 
-
-# A whitelist of allowed SASL usernames. The format for username
+# A whitelist of allowed SASL usernames. The format for usernames
 # depends on the SASL authentication mechanism. Kerberos usernames
 # look like username@REALM
 #
@@ -259,13 +243,6 @@
 #sasl_allowed_username_list = ["joe@EXAMPLE.COM", "fred@EXAMPLE.COM" ]
 
 
-# Override the compile time default TLS priority string. The
-# default is usually "NORMAL" unless overridden at build time.
-# Only set this is it is desired for libvirt to deviate from
-# the global default settings.
-#
-#tls_priority="NORMAL"
-
 
 #################################################################
 #
@@ -274,22 +251,12 @@
 
 # The maximum number of concurrent client connections to allow
 # over all sockets combined.
-#max_clients = 5000
-
-# The maximum length of queue of connections waiting to be
-# accepted by the daemon. Note, that some protocols supporting
-# retransmission may obey this so that a later reattempt at
-# connection succeeds.
-#max_queued_clients = 1000
-
-# The maximum length of queue of accepted but not yet
-# authenticated clients. The default value is 20. Set this to
-# zero to turn this feature off.
-#max_anonymous_clients = 20
+#max_clients = 20
+
 
 # The minimum limit sets the number of workers to start up
 # initially. If the number of active clients exceeds this,
-# then more threads are spawned, up to max_workers limit.
+# then more threads are spawned, upto max_workers limit.
 # Typically you'd want max_workers to equal maximum number
 # of clients allowed
 #min_workers = 5
@@ -297,25 +264,25 @@
 
 
 # The number of priority workers. If all workers from above
-# pool are stuck, some calls marked as high priority
+# pool will stuck, some calls marked as high priority
 # (notably domainDestroy) can be executed in this pool.
 #prio_workers = 5
 
+# Total global limit on concurrent RPC calls. Should be
+# at least as large as max_workers. Beyond this, RPC requests
+# will be read into memory and queued. This directly impact
+# memory usage, currently each request requires 256 KB of
+# memory. So by default upto 5 MB of memory is used
+#
+# XXX this isn't actually enforced yet, only the per-client
+# limit is used so far
+#max_requests = 20
+
 # Limit on concurrent requests from a single client
 # connection. To avoid one client monopolizing the server
-# this should be a small fraction of the global max_workers
-# parameter.
+# this should be a small fraction of the global max_requests
+# and max_workers parameter
 #max_client_requests = 5
-
-# Same processing controls, but this time for the admin interface.
-# For description of each option, be so kind to scroll few lines
-# upwards.
-
-#admin_min_workers = 1
-#admin_max_workers = 5
-#admin_max_clients = 5
-#admin_max_queued_clients = 5
-#admin_max_client_requests = 5
 
 #################################################################
 #
@@ -324,34 +291,23 @@
 
 # Logging level: 4 errors, 3 warnings, 2 information, 1 debug
 # basically 1 will log everything possible
-# Note: Journald may employ rate limiting of the messages logged
-# and thus lock up the libvirt daemon. To use the debug level with
-# journald you have to specify it explicitly in 'log_outputs', otherwise
-# only information level messages will be logged.
 #log_level = 3
-
 # Logging filters:
 # A filter allows to select a different logging level for a given category
 # of logs
 # The format for a filter is one of:
 #    x:name
 #    x:+name
-
-#      where name is a string which is matched against the category
-#      given in the VIR_LOG_INIT() at the top of each libvirt source
-#      file, e.g., "remote", "qemu", or "util.json" (the name in the
-#      filter can be a substring of the full category name, in order
-#      to match multiple similar categories), the optional "+" prefix
-#      tells libvirt to log stack trace for each message matching
-#      name, and x is the minimal level where matching messages should
-#      be logged:
-
+#      where name is a string which is matched against source file name,
+#      e.g., "remote", "qemu", or "util/json", the optional "+" prefix
+#      tells libvirt to log stack trace for each message matching name,
+#      and x is the minimal level where matching messages should be logged:
 #    1: DEBUG
 #    2: INFO
 #    3: WARNING
 #    4: ERROR
 #
-# Multiple filters can be defined in a single @filters, they just need to be
+# Multiple filter can be defined in a single @filters, they just need to be
 # separated by spaces.
 #
 # e.g. to only get warning or errors from the remote layer and only errors
@@ -367,26 +323,23 @@
 #      use syslog for the output and use the given name as the ident
 #    x:file:file_path
 #      output to a file, with the given filepath
-#    x:journald
-#      output to journald logging system
 # In all case the x prefix is the minimal level, acting as a filter
 #    1: DEBUG
 #    2: INFO
 #    3: WARNING
 #    4: ERROR
 #
-# Multiple outputs can be defined, they just need to be separated by spaces.
+# Multiple output can be defined, they just need to be separated by spaces.
 # e.g. to log all warnings and errors to syslog under the libvirtd ident:
 #log_outputs="3:syslog:libvirtd"
 #
 
-# Log debug buffer size:
-#
-# This configuration option is no longer used, since the global
-# log buffer functionality has been removed. Please configure
-# suitable log_outputs/log_filters settings to obtain logs.
+# Log debug buffer size: default 64
+# The daemon keeps an internal debug log buffer which will be dumped in case
+# of crash or upon receiving a SIGUSR2 signal. This setting allows to override
+# the default buffer size in kilobytes.
+# If value is 0 or less the debug log buffer is deactivated
 #log_buffer_size = 64
-
 
 ##################################################################
 #
@@ -407,16 +360,10 @@
 
 ###################################################################
 # UUID of the host:
-# Host UUID is read from one of the sources specified in host_uuid_source.
-#
-# - 'smbios': fetch the UUID from 'dmidecode -s system-uuid'
-# - 'machine-id': fetch the UUID from /etc/machine-id
-#
-# The host_uuid_source default is 'smbios'. If 'dmidecode' does not provide
-# a valid UUID a temporary UUID will be generated.
-#
-# Another option is to specify host UUID in host_uuid.
-#
+# Provide the UUID of the host here in case the command
+# 'dmidecode -s system-uuid' does not provide a valid uuid. In case
+# 'dmidecode' does not provide a valid UUID and none is provided here, a
+# temporary UUID will be generated.
 # Keep the format of the example UUID below. UUID must not have all digits
 # be the same.
 
@@ -424,12 +371,11 @@
 # it with the output of the 'uuidgen' command and then
 # uncomment this entry
 #host_uuid = "00000000-0000-0000-0000-000000000000"
-#host_uuid_source = "smbios"
 
 ###################################################################
 # Keepalive protocol:
 # This allows libvirtd to detect broken client connections or even
-# dead clients.  A keepalive message is sent to a client after
+# dead client.  A keepalive message is sent to a client after
 # keepalive_interval seconds of inactivity to check if the client is
 # still responding; keepalive_count is a maximum number of keepalive
 # messages that are allowed to be sent to the client without getting
@@ -438,31 +384,15 @@
 # keepalive_interval * (keepalive_count + 1) seconds since the last
 # message received from the client.  If keepalive_interval is set to
 # -1, libvirtd will never send keepalive requests; however clients
-# can still send them and the daemon will send responses.  When
+# can still send them and the deamon will send responses.  When
 # keepalive_count is set to 0, connections will be automatically
 # closed after keepalive_interval seconds of inactivity without
 # sending any keepalive messages.
 #
 #keepalive_interval = 5
 #keepalive_count = 5
-
-#
-# These configuration options are no longer used.  There is no way to
-# restrict such clients from connecting since they first need to
-# connect in order to ask for keepalive.
+#
+# If set to 1, libvirtd will refuse to talk to clients that do not
+# support keepalive protocol.  Defaults to 0.
 #
 #keepalive_required = 1
-#admin_keepalive_required = 1
-
-# Keepalive settings for the admin interface
-#admin_keepalive_interval = 5
-#admin_keepalive_count = 5
-
-###################################################################
-# Open vSwitch:
-# This allows to specify a timeout for openvswitch calls made by
-# libvirt. The ovs-vsctl utility is used for the configuration and
-# its timeout option is set by default to 5 seconds to avoid
-# potential infinite waits blocking libvirt.
-#
-#ovs_timeout = 5

2019-04-30 22:31:23,985 [salt.state       :1951][INFO    ][21813] Completed state [/etc/libvirt/libvirtd.conf] at time 22:31:23.985565 duration_in_ms=118.766
2019-04-30 22:31:23,985 [salt.state       :1780][INFO    ][21813] Running state [/etc/libvirt/qemu.conf] at time 22:31:23.985877
2019-04-30 22:31:23,986 [salt.state       :1813][INFO    ][21813] Executing state file.managed for [/etc/libvirt/qemu.conf]
2019-04-30 22:31:24,000 [salt.fileclient  :1219][INFO    ][21813] Fetching file from saltenv 'base', ** done ** 'nova/files/rocky/qemu.conf.Debian'
2019-04-30 22:31:24,096 [salt.state       :300 ][INFO    ][21813] File changed:
--- 
+++ 
@@ -1,61 +1,8 @@
+
 # Master configuration file for the QEMU driver.
 # All settings described here are optional - if omitted, sensible
 # defaults are used.
 
-# Use of TLS requires that x509 certificates be issued. The default is
-# to keep them in /etc/pki/qemu. This directory must contain
-#
-#  ca-cert.pem - the CA master certificate
-#  server-cert.pem - the server certificate signed with ca-cert.pem
-#  server-key.pem  - the server private key
-#
-# and optionally may contain
-#
-#  dh-params.pem - the DH params configuration file
-#
-# If the directory does not exist, libvirtd will fail to start. If the
-# directory doesn't contain the necessary files, QEMU domains will fail
-# to start if they are configured to use TLS.
-#
-# In order to overwrite the default path alter the following. This path
-# definition will be used as the default path for other *_tls_x509_cert_dir
-# configuration settings if their default path does not exist or is not
-# specifically set.
-#
-#default_tls_x509_cert_dir = "/etc/pki/qemu"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client who does not have a
-# certificate signed by the CA in /etc/pki/qemu/ca-cert.pem
-#
-# The default_tls_x509_cert_dir directory must also contain
-#
-#  client-cert.pem - the client certificate signed with the ca-cert.pem
-#  client-key.pem - the client private key
-#
-#default_tls_x509_verify = 1
-
-#
-# Libvirt assumes the server-key.pem file is unencrypted by default.
-# To use an encrypted server-key.pem file, the password to decrypt
-# the PEM file is required. This can be provided by creating a secret
-# object in libvirt and then to uncomment this setting to set the UUID
-# of the secret.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#default_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
-
-
 # VNC is configured to listen on 127.0.0.1 by default.
 # To make it listen on all public interfaces, uncomment
 # this next option.
@@ -69,9 +16,9 @@
 # unix socket. This prevents unprivileged access from users on the
 # host machine, though most VNC clients do not support it.
 #
-# This will only be enabled for VNC configurations that have listen
-# type=address but without any address specified. This setting takes
-# preference over vnc_listen.
+# This will only be enabled for VNC configurations that do not have
+# a hardcoded 'listen' or 'socket' value. This setting takes preference
+# over vnc_listen.
 #
 #vnc_auto_unix_socket = 1
 
@@ -85,12 +32,15 @@
 #
 #vnc_tls = 1
 
-
-# In order to override the default TLS certificate location for
-# vnc certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but vnc_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
+# Use of TLS requires that x509 certificates be issued. The
+# default it to keep them in /etc/pki/libvirt-vnc. This directory
+# must contain
+#
+#  ca-cert.pem - the CA master certificate
+#  server-cert.pem - the server certificate signed with ca-cert.pem
+#  server-key.pem  - the server private key
+#
+# This option allows the certificate directory to be changed
 #
 #vnc_tls_x509_cert_dir = "/etc/pki/libvirt-vnc"
 
@@ -100,15 +50,10 @@
 # an encrypted channel.
 #
 # It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the vnc_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
+# issuing a x509 certificate to every client who needs to connect.
+#
+# Enabling this option will reject any client who does not have a
+# certificate signed by the CA in /etc/pki/libvirt-vnc/ca-cert.pem
 #
 #vnc_tls_x509_verify = 1
 
@@ -172,24 +117,17 @@
 #spice_tls = 1
 
 
-# In order to override the default TLS certificate location for
-# spice certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but spice_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
+# Use of TLS requires that x509 certificates be issued. The
+# default it to keep them in /etc/pki/libvirt-spice. This directory
+# must contain
+#
+#  ca-cert.pem - the CA master certificate
+#  server-cert.pem - the server certificate signed with ca-cert.pem
+#  server-key.pem  - the server private key
+#
+# This option allows the certificate directory to be changed.
 #
 #spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
-
-
-# Enable this option to have SPICE served over an automatically created
-# unix socket. This prevents unprivileged access from users on the
-# host machine.
-#
-# This will only be enabled for SPICE configurations that have listen
-# type=address but without any address specified. This setting takes
-# preference over spice_listen.
-#
-#spice_auto_unix_socket = 1
 
 
 # The default SPICE password. This parameter is only used if the
@@ -216,123 +154,6 @@
 # point to the directory, and create a qemu.conf in that location
 #
 #spice_sasl_dir = "/some/directory/sasl2"
-
-# Enable use of TLS encryption on the chardev TCP transports.
-#
-# It is necessary to setup CA and issue a server certificate
-# before enabling this.
-#
-#chardev_tls = 1
-
-
-# In order to override the default TLS certificate location for character
-# device TCP certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but chardev_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
-#
-#chardev_tls_x509_cert_dir = "/etc/pki/libvirt-chardev"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the chardev_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
-#
-#chardev_tls_x509_verify = 1
-
-
-# Uncomment and use the following option to override the default secret
-# UUID provided in the default_tls_x509_secret_uuid parameter.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#chardev_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
-
-
-# Enable use of TLS encryption for all VxHS network block devices that
-# don't specifically disable.
-#
-# When the VxHS network block device server is set up appropriately,
-# x509 certificates are required for authentication between the clients
-# (qemu processes) and the remote VxHS server.
-#
-# It is necessary to setup CA and issue the client certificate before
-# enabling this.
-#
-#vxhs_tls = 1
-
-
-# In order to override the default TLS certificate location for VxHS
-# backed storage, supply a valid path to the certificate directory.
-# This is used to authenticate the VxHS block device clients to the VxHS
-# server.
-#
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but vxhs_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
-#
-# VxHS block device clients expect the client certificate and key to be
-# present in the certificate directory along with the CA master certificate.
-# If using the default environment, default_tls_x509_verify must be configured.
-# Since this is only a client the server-key.pem certificate is not needed.
-# Thus a VxHS directory must contain the following:
-#
-#  ca-cert.pem - the CA master certificate
-#  client-cert.pem - the client certificate signed with the ca-cert.pem
-#  client-key.pem - the client private key
-#
-#vxhs_tls_x509_cert_dir = "/etc/pki/libvirt-vxhs"
-
-
-# In order to override the default TLS certificate location for migration
-# certificates, supply a valid path to the certificate directory. If the
-# provided path does not exist, libvirtd will fail to start. If the path is
-# not provided, but migrate_tls = 1, then the default_tls_x509_cert_dir path
-# will be used. Once/if a default certificate is enabled/defined, migration
-# will then be able to use the certificate via migration API flags.
-#
-#migrate_tls_x509_cert_dir = "/etc/pki/libvirt-migrate"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the migrate_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
-#
-#migrate_tls_x509_verify = 1
-
-
-# Uncomment and use the following option to override the default secret
-# UUID provided in the default_tls_x509_secret_uuid parameter.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#migrate_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
 
 
 # By default, if no graphical front end is configured, libvirt will disable
@@ -416,10 +237,9 @@
 # Set to 0 to disable file ownership changes.
 #dynamic_ownership = 1
 
-
 # What cgroup controllers to make use of with QEMU guests
 #
-#  - 'cpu' - use for scheduler tunables
+#  - 'cpu' - use for schedular tunables
 #  - 'devices' - use for device whitelisting
 #  - 'memory' - use for memory tunables
 #  - 'blkio' - use for block devices I/O tunables
@@ -451,19 +271,11 @@
 #    "/dev/null", "/dev/full", "/dev/zero",
 #    "/dev/random", "/dev/urandom",
 #    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
-#    "/dev/rtc","/dev/hpet"
+#    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio"
 #]
-#
-# RDMA migration requires the following extra files to be added to the list:
-#   "/dev/infiniband/rdma_cm",
-#   "/dev/infiniband/issm0",
-#   "/dev/infiniband/issm1",
-#   "/dev/infiniband/umad0",
-#   "/dev/infiniband/umad1",
-#   "/dev/infiniband/uverbs0"
-
-
-# The default format for QEMU/KVM guest save images is raw; that is, the
+
+
+# The default format for Qemu/KVM guest save images is raw; that is, the
 # memory from the domain is dumped out directly to a file.  If you have
 # guests with a large amount of memory, however, this can take up quite
 # a bit of space.  If you would like to compress the images while they
@@ -517,20 +329,15 @@
 # unspecified here, determination of a host mount point in /proc/mounts
 # will be attempted.  Specifying an explicit mount overrides detection
 # of the same in /proc/mounts.  Setting the mount point to "" will
-# disable guest hugepage backing. If desired, multiple mount points can
-# be specified at once, separated by comma and enclosed in square
-# brackets, for example:
-#
-#     hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]
-#
-# The size of huge page served by specific mount point is determined by
-# libvirt at the daemon startup.
-#
-# NB, within these mount points, guests will create memory backing
-# files in a location of $MOUNTPOINT/libvirt/qemu
+# disable guest hugepage backing.
+#
+# NB, within this mount point, guests will create memory backing files
+# in a location of $MOUNTPOINT/libvirt/qemu
 #
 #hugetlbfs_mount = "/dev/hugepages"
-
+#hugetlbfs_mount = ["/run/hugepages/kvm", "/mnt/hugepages_1GB"]
+hugetlbfs_mount = ["/mnt/hugepages_1G"]
+security_driver="none"
 
 # Path to the setuid helper for creating tap devices.  This executable
 # is used to create <source type='bridge'> interfaces when libvirtd is
@@ -566,42 +373,6 @@
 # The same applies to max_files which sets the limit on the maximum
 # number of opened files.
 #
-#max_processes = 0
-#max_files = 0
-
-# If max_core is set to a non-zero integer, then QEMU will be
-# permitted to create core dumps when it crashes, provided its
-# RAM size is smaller than the limit set.
-#
-# Be warned that the core dump will include a full copy of the
-# guest RAM, if the 'dump_guest_core' setting has been enabled,
-# or if the guest XML contains
-#
-#   <memory dumpcore="on">...guest ram...</memory>
-#
-# If guest RAM is to be included, ensure the max_core limit
-# is set to at least the size of the largest expected guest
-# plus another 1GB for any QEMU host side memory mappings.
-#
-# As a special case it can be set to the string "unlimited" to
-# to allow arbitrarily sized core dumps.
-#
-# By default the core dump size is set to 0 disabling all dumps
-#
-# Size is a positive integer specifying bytes or the
-# string "unlimited"
-#
-#max_core = "unlimited"
-
-# Determine if guest RAM is included in QEMU core dumps. By
-# default guest RAM will be excluded if a new enough QEMU is
-# present. Setting this to '1' will force guest RAM to always
-# be included in QEMU core dumps.
-#
-# This setting will be ignored if the guest XML has set the
-# dumpcore attribute on the <memory> element.
-#
-#dump_guest_core = 1
 
 # mac_filter enables MAC addressed based filtering on bridge ports.
 # This currently requires ebtables to be installed.
@@ -628,13 +399,11 @@
 #allow_disk_format_probing = 1
 
 
-# In order to prevent accidentally starting two domains that
-# share one writable disk, libvirt offers two approaches for
-# locking files. The first one is sanlock, the other one,
-# virtlockd, is then our own implementation. Accepted values
-# are "sanlock" and "lockd".
-#
-#lock_manager = "lockd"
+# To enable 'Sanlock' project based locking of the file
+# content (to prevent two VMs writing to the same
+# disk), uncomment this
+#
+#lock_manager = "sanlock"
 
 
 
@@ -676,17 +445,10 @@
 #seccomp_sandbox = 1
 
 
+
 # Override the listen address for all incoming migrations. Defaults to
 # 0.0.0.0, or :: if both host and qemu are capable of IPv6.
-#migration_address = "0.0.0.0"
-
-
-# The default hostname or IP address which will be used by a migration
-# source for transferring migration data to this host.  The migration
-# source has to be able to resolve this hostname and connect to it so
-# setting "localhost" will not work.  By default, the host's configured
-# hostname is used.
-#migration_host = "host.example.com"
+#migration_address = "127.0.0.1"
 
 
 # Override the port range used for incoming migrations.
@@ -698,36 +460,12 @@
 #
 #migration_port_min = 49152
 #migration_port_max = 49215
-
-
-
-# Timestamp QEMU's log messages (if QEMU supports it)
-#
-# Defaults to 1.
-#
-#log_timestamp = 0
-
-
-# Location of master nvram file
-#
-# When a domain is configured to use UEFI instead of standard
-# BIOS it may use a separate storage for UEFI variables. If
-# that's the case libvirt creates the variable store per domain
-# using this master file as image. Each UEFI firmware can,
-# however, have different variables store. Therefore the nvram is
-# a list of strings when a single item is in form of:
-#   ${PATH_TO_UEFI_FW}:${PATH_TO_UEFI_VARS}.
-# Later, when libvirt creates per domain variable store, this list is
-# searched for the master image. The UEFI firmware can be called
-# differently for different guest architectures. For instance, it's OVMF
-# for x86_64 and i686, but it's AAVMF for aarch64. The libvirt default
-# follows this scheme.
-#nvram = [
-#   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
-#   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
-#   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
-#   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
-#]
+cgroup_device_acl = [
+    "/dev/null", "/dev/full", "/dev/zero",
+    "/dev/random", "/dev/urandom",
+    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
+    "/dev/rtc", "/dev/hpet","/dev/net/tun",
+]
 
 # The backend to use for handling stdout/stderr output from
 # QEMU processes.
@@ -743,41 +481,3 @@
 #          rollover when a size limit is hit.
 #
 #stdio_handler = "logd"
-
-# QEMU gluster libgfapi log level, debug levels are 0-9, with 9 being the
-# most verbose, and 0 representing no debugging output.
-#
-# The current logging levels defined in the gluster GFAPI are:
-#
-#    0 - None
-#    1 - Emergency
-#    2 - Alert
-#    3 - Critical
-#    4 - Error
-#    5 - Warning
-#    6 - Notice
-#    7 - Info
-#    8 - Debug
-#    9 - Trace
-#
-# Defaults to 4
-#
-#gluster_debug_level = 9
-
-# To enhance security, QEMU driver is capable of creating private namespaces
-# for each domain started. Well, so far only "mount" namespace is supported. If
-# enabled it means qemu process is unable to see all the devices on the system,
-# only those configured for the domain in question. Libvirt then manages
-# devices entries throughout the domain lifetime. This namespace is turned on
-# by default.
-#namespaces = [ "mount" ]
-
-# This directory is used for memoryBacking source if configured as file.
-# NOTE: big files will be stored here
-#memory_backing_dir = "/var/lib/libvirt/qemu/ram"
-
-# The following two values set the default RX/TX ring buffer size for virtio
-# interfaces. These values are taken unless overridden in domain XML. For more
-# info consult docs to corresponding attributes from domain XML.
-#rx_queue_size = 1024
-#tx_queue_size = 1024

2019-04-30 22:31:24,097 [salt.state       :1951][INFO    ][21813] Completed state [/etc/libvirt/qemu.conf] at time 22:31:24.096969 duration_in_ms=111.091
2019-04-30 22:31:24,098 [salt.state       :1780][INFO    ][21813] Running state [libvirtd] at time 22:31:24.098322
2019-04-30 22:31:24,098 [salt.state       :1813][INFO    ][21813] Executing state service.running for [libvirtd]
2019-04-30 22:31:24,099 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', 'status', 'libvirtd.service', '-n', '0'] in directory '/root'
2019-04-30 22:31:24,112 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', 'is-active', 'libvirtd.service'] in directory '/root'
2019-04-30 22:31:24,122 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', 'is-enabled', 'libvirtd.service'] in directory '/root'
2019-04-30 22:31:24,132 [salt.state       :300 ][INFO    ][21813] The service libvirtd is already running
2019-04-30 22:31:24,132 [salt.state       :1951][INFO    ][21813] Completed state [libvirtd] at time 22:31:24.132879 duration_in_ms=34.556
2019-04-30 22:31:24,133 [salt.state       :1780][INFO    ][21813] Running state [libvirtd] at time 22:31:24.133036
2019-04-30 22:31:24,133 [salt.state       :1813][INFO    ][21813] Executing state service.mod_watch for [libvirtd]
2019-04-30 22:31:24,133 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', 'is-active', 'libvirtd.service'] in directory '/root'
2019-04-30 22:31:24,144 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'libvirtd.service'] in directory '/root'
2019-04-30 22:31:24,206 [salt.state       :300 ][INFO    ][21813] {'libvirtd': True}
2019-04-30 22:31:24,206 [salt.state       :1951][INFO    ][21813] Completed state [libvirtd] at time 22:31:24.206900 duration_in_ms=73.862
2019-04-30 22:31:24,207 [salt.state       :1780][INFO    ][21813] Running state [nova-compute] at time 22:31:24.207626
2019-04-30 22:31:24,207 [salt.state       :1813][INFO    ][21813] Executing state service.running for [nova-compute]
2019-04-30 22:31:24,208 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', 'status', 'nova-compute.service', '-n', '0'] in directory '/root'
2019-04-30 22:31:24,217 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', 'is-active', 'nova-compute.service'] in directory '/root'
2019-04-30 22:31:24,225 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', 'is-enabled', 'nova-compute.service'] in directory '/root'
2019-04-30 22:31:24,235 [salt.state       :300 ][INFO    ][21813] The service nova-compute is already running
2019-04-30 22:31:24,236 [salt.state       :1951][INFO    ][21813] Completed state [nova-compute] at time 22:31:24.236133 duration_in_ms=28.506
2019-04-30 22:31:24,236 [salt.state       :1780][INFO    ][21813] Running state [nova-compute] at time 22:31:24.236304
2019-04-30 22:31:24,236 [salt.state       :1813][INFO    ][21813] Executing state service.mod_watch for [nova-compute]
2019-04-30 22:31:24,237 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemctl', 'is-active', 'nova-compute.service'] in directory '/root'
2019-04-30 22:31:24,246 [salt.loaded.int.module.cmdmod:395 ][INFO    ][21813] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'nova-compute.service'] in directory '/root'
2019-04-30 22:31:24,264 [salt.state       :300 ][INFO    ][21813] {'nova-compute': True}
2019-04-30 22:31:24,264 [salt.state       :1951][INFO    ][21813] Completed state [nova-compute] at time 22:31:24.264836 duration_in_ms=28.53
2019-04-30 22:31:24,267 [salt.minion      :1711][INFO    ][21813] Returning information for job: 20190430222950081629
2019-04-30 22:33:22,451 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.sls with jid 20190430223322434574
2019-04-30 22:33:22,462 [salt.minion      :1432][INFO    ][3132] Starting a new job with PID 3132
2019-04-30 22:33:26,151 [salt.state       :915 ][INFO    ][3132] Loading fresh modules for state activity
2019-04-30 22:33:26,183 [salt.fileclient  :1219][INFO    ][3132] Fetching file from saltenv 'base', ** done ** 'barbican/init.sls'
2019-04-30 22:33:26,206 [salt.fileclient  :1219][INFO    ][3132] Fetching file from saltenv 'base', ** done ** 'barbican/client/init.sls'
2019-04-30 22:33:26,221 [salt.fileclient  :1219][INFO    ][3132] Fetching file from saltenv 'base', ** done ** 'barbican/client/service.sls'
2019-04-30 22:33:26,235 [salt.fileclient  :1219][INFO    ][3132] Fetching file from saltenv 'base', ** done ** 'barbican/map.jinja'
2019-04-30 22:33:26,261 [salt.fileclient  :1219][INFO    ][3132] Fetching file from saltenv 'base', ** done ** 'barbican/client/resources/init.sls'
2019-04-30 22:33:26,273 [salt.fileclient  :1219][INFO    ][3132] Fetching file from saltenv 'base', ** done ** 'barbican/client/resources/v1.sls'
2019-04-30 22:33:27,436 [salt.state       :1780][INFO    ][3132] Running state [python-barbicanclient] at time 22:33:27.436706
2019-04-30 22:33:27,437 [salt.state       :1813][INFO    ][3132] Executing state pkg.installed for [python-barbicanclient]
2019-04-30 22:33:27,437 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3132] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:33:27,723 [salt.state       :300 ][INFO    ][3132] All specified packages are already installed
2019-04-30 22:33:27,723 [salt.state       :1951][INFO    ][3132] Completed state [python-barbicanclient] at time 22:33:27.723266 duration_in_ms=286.56
2019-04-30 22:33:27,740 [salt.minion      :1711][INFO    ][3132] Returning information for job: 20190430223322434574
2019-04-30 22:36:14,001 [salt.utils.schedule:1377][INFO    ][3184] Running scheduled job: __mine_interval
2019-04-30 22:44:51,119 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command state.sls with jid 20190430224451101774
2019-04-30 22:44:51,129 [salt.minion      :1432][INFO    ][3409] Starting a new job with PID 3409
2019-04-30 22:44:54,542 [salt.state       :915 ][INFO    ][3409] Loading fresh modules for state activity
2019-04-30 22:44:54,575 [salt.fileclient  :1219][INFO    ][3409] Fetching file from saltenv 'base', ** done ** 'ceilometer/init.sls'
2019-04-30 22:44:54,594 [salt.fileclient  :1219][INFO    ][3409] Fetching file from saltenv 'base', ** done ** 'ceilometer/agent.sls'
2019-04-30 22:44:54,670 [salt.fileclient  :1219][INFO    ][3409] Fetching file from saltenv 'base', ** done ** 'ceilometer/_ssl/rabbitmq.sls'
2019-04-30 22:44:55,829 [salt.state       :1780][INFO    ][3409] Running state [ceilometer-agent-compute] at time 22:44:55.829937
2019-04-30 22:44:55,830 [salt.state       :1813][INFO    ][3409] Executing state pkg.installed for [ceilometer-agent-compute]
2019-04-30 22:44:55,830 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:44:56,136 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['apt-cache', '-q', 'policy', 'ceilometer-agent-compute'] in directory '/root'
2019-04-30 22:44:56,188 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:44:58,017 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:44:58,034 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'ceilometer-agent-compute'] in directory '/root'
2019-04-30 22:45:06,175 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430224506155146
2019-04-30 22:45:06,187 [salt.minion      :1432][INFO    ][4067] Starting a new job with PID 4067
2019-04-30 22:45:06,200 [salt.minion      :1711][INFO    ][4067] Returning information for job: 20190430224506155146
2019-04-30 22:45:14,609 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:45:14,639 [salt.state       :300 ][INFO    ][3409] Made the following changes:
'python-pysnmp4' changed from 'absent' to '4.4.3-1~u16.04+mcp'
'python-pysnmp4-mibs' changed from 'absent' to '0.1.3-1'
'python-pysnmp2' changed from 'absent' to '1'
'python-pysnmp-common' changed from 'absent' to '1'
'ceilometer-agent-compute' changed from 'absent' to '1:11.0.1-1~u16.04+mcp21'
'python-pam' changed from 'absent' to '0.4.2-13.2ubuntu2'
'python-cotyledon' changed from 'absent' to '1.7.1-1~u16.04+mcp'
'python2.7-twisted-core' changed from 'absent' to '1'
'python-twisted' changed from 'absent' to '16.0.0-1ubuntu0.2'
'python-ceilometer' changed from 'absent' to '1:11.0.1-1~u16.04+mcp21'
'ceilometer-common' changed from 'absent' to '1:11.0.1-1~u16.04+mcp21'
'libsmi2ldbl' changed from 'absent' to '0.4.8+dfsg2-11'
'python-pysmi' changed from 'absent' to '0.2.2-2~u16.04+mcp'
'python2.7-twisted' changed from 'absent' to '1'
'python-croniter' changed from 'absent' to '0.3.8-1'
'python-setproctitle' changed from 'absent' to '1.1.8-1build2'
'python-twisted-core' changed from 'absent' to '16.0.0-1ubuntu0.2'
'python-jsonpath-rw' changed from 'absent' to '1.4.0-1'
'python-attr' changed from 'absent' to '15.2.0-1'
'python-service-identity' changed from 'absent' to '16.0.0-2'
'python-serial' changed from 'absent' to '3.4-4~u16.04+mcp'
'smitools' changed from 'absent' to '0.4.8+dfsg2-11'
'python-pycryptodome' changed from 'absent' to '3.4.7-1.1~u16.04+mcp'
'python-jsonpath-rw-ext' changed from 'absent' to '0.1.9-1'
'python2.7-twisted-bin' changed from 'absent' to '1'
'python-pysnmp4-apps' changed from 'absent' to '0.3.2-1'
'python-twisted-bin' changed from 'absent' to '16.0.0-1ubuntu0.2'

2019-04-30 22:45:14,654 [salt.state       :915 ][INFO    ][3409] Loading fresh modules for state activity
2019-04-30 22:45:14,767 [salt.state       :1951][INFO    ][3409] Completed state [ceilometer-agent-compute] at time 22:45:14.767753 duration_in_ms=18937.815
2019-04-30 22:45:14,769 [salt.state       :1780][INFO    ][3409] Running state [ceilometer_ssl_rabbitmq] at time 22:45:14.769278
2019-04-30 22:45:14,769 [salt.state       :1813][INFO    ][3409] Executing state test.show_notification for [ceilometer_ssl_rabbitmq]
2019-04-30 22:45:14,769 [salt.state       :300 ][INFO    ][3409] Running ceilometer._ssl.rabbitmq
2019-04-30 22:45:14,769 [salt.state       :1951][INFO    ][3409] Completed state [ceilometer_ssl_rabbitmq] at time 22:45:14.769743 duration_in_ms=0.466
2019-04-30 22:45:14,771 [salt.state       :1780][INFO    ][3409] Running state [/etc/ceilometer/ceilometer.conf] at time 22:45:14.771268
2019-04-30 22:45:14,771 [salt.state       :1813][INFO    ][3409] Executing state file.managed for [/etc/ceilometer/ceilometer.conf]
2019-04-30 22:45:14,793 [salt.fileclient  :1219][INFO    ][3409] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/rocky/ceilometer-agent.conf.Debian'
2019-04-30 22:45:14,933 [salt.state       :300 ][INFO    ][3409] File changed:
--- 
+++ 
@@ -1,15 +1,330 @@
 [DEFAULT]
-
-#
-# From ceilometer
-#
-
-# DEPRECATED: To reduce polling agent load, samples are sent to the
-# notification agent in a batch. To gain higher throughput at the cost of load
-# set this to False. This option is deprecated, to disable batching set
-# batch_size = 0 in the polling group. (boolean value)
+#
+# From oslo.messaging
+#
+
+# Size of RPC connection pool. (integer value)
+#rpc_conn_pool_size = 30
+
+# The pool size limit for connections expiration policy (integer
+# value)
+#conn_pool_min_size = 2
+
+# The time-to-live in sec of idle connections in the pool (integer
+# value)
+#conn_pool_ttl = 1200
+
+# ZeroMQ bind address. Should be a wildcard (*), an ethernet
+# interface, or IP. The "host" option should point or resolve to this
+# address. (string value)
+#rpc_zmq_bind_address = *
+
+# MatchMaker driver. (string value)
+# Possible values:
+# redis - <No description provided>
+# sentinel - <No description provided>
+# dummy - <No description provided>
+#rpc_zmq_matchmaker = redis
+
+# Number of ZeroMQ contexts, defaults to 1. (integer value)
+#rpc_zmq_contexts = 1
+
+# Maximum number of ingress messages to locally buffer per topic.
+# Default is unlimited. (integer value)
+#rpc_zmq_topic_backlog = <None>
+
+# Directory for holding IPC sockets. (string value)
+#rpc_zmq_ipc_dir = /var/run/openstack
+
+# Name of this node. Must be a valid hostname, FQDN, or IP address.
+# Must match "host" option, if running Nova. (string value)
+#rpc_zmq_host = localhost
+
+# Number of seconds to wait before all pending messages will be sent
+# after closing a socket. The default value of -1 specifies an
+# infinite linger period. The value of 0 specifies no linger period.
+# Pending messages shall be discarded immediately when the socket is
+# closed. Positive values specify an upper bound for the linger
+# period. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
+#zmq_linger = -1
+
+# The default number of seconds that poll should wait. Poll raises
+# timeout exception when timeout expired. (integer value)
+#rpc_poll_timeout = 1
+
+
+# Expiration timeout in seconds of a name service record about
+# existing target ( < 0 means no timeout). (integer value)
+#zmq_target_expire = 300
+
+# Update period in seconds of a name service record about existing
+# target. (integer value)
+#zmq_target_update = 180
+
+# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
+# (boolean value)
+#use_pub_sub = false
+
+# Use ROUTER remote proxy. (boolean value)
+#use_router_proxy = false
+
+# This option makes direct connections dynamic or static. It makes
+# sense only with use_router_proxy=False which means to use direct
+# connections for direct message types (ignored otherwise). (boolean
+# value)
+#use_dynamic_connections = false
+
+# How many additional connections to a host will be made for failover
+# reasons. This option is actual only in dynamic connections mode.
+# (integer value)
+#zmq_failover_connections = 2
+
+# Minimal port number for random ports range. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#rpc_zmq_min_port = 49153
+
+# Maximal port number for random ports range. (integer value)
+# Minimum value: 1
+# Maximum value: 65536
+#rpc_zmq_max_port = 65536
+
+# Number of retries to find free port number before fail with
+# ZMQBindError. (integer value)
+#rpc_zmq_bind_port_retries = 100
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#rpc_zmq_serialization = json
+
+# This option configures round-robin mode in zmq socket. True means
+# not keeping a queue when server side disconnects. False means to
+# keep queue and messages even if server is disconnected, when the
+# server appears we send all accumulated messages to it. (boolean
+# value)
+#zmq_immediate = true
+
+# Enable/disable TCP keepalive (KA) mechanism. The default value of -1
+# (or any other negative value) means to skip any overrides and leave
+# it to OS default; 0 and 1 (or any other positive value) mean to
+# disable and enable the option respectively. (integer value)
+#zmq_tcp_keepalive = -1
+
+# The duration between two keepalive transmissions in idle condition.
+# The unit is platform dependent, for example, seconds in Linux,
+# milliseconds in Windows etc. The default value of -1 (or any other
+# negative value and 0) means to skip any overrides and leave it to OS
+# default. (integer value)
+#zmq_tcp_keepalive_idle = -1
+
+# The number of retransmissions to be carried out before declaring
+# that remote end is not available. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_cnt = -1
+
+# The duration between two successive keepalive retransmissions, if
+# acknowledgement to the previous keepalive transmission is not
+# received. The unit is platform dependent, for example, seconds in
+# Linux, milliseconds in Windows etc. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_intvl = -1
+
+# Maximum number of (green) threads to work concurrently. (integer
+# value)
+#rpc_thread_pool_size = 100
+
+# Expiration timeout in seconds of a sent/received message after which
+# it is not tracked anymore by a client/server. (integer value)
+#rpc_message_ttl = 300
+
+# Wait for message acknowledgements from receivers. This mechanism
+# works only via proxy without PUB/SUB. (boolean value)
+#rpc_use_acks = false
+
+# Number of seconds to wait for an ack from a cast/call. After each
+# retry attempt this timeout is multiplied by some specified
+# multiplier. (integer value)
+#rpc_ack_timeout_base = 15
+
+# Number to multiply base ack timeout by after each retry attempt.
+# (integer value)
+#rpc_ack_timeout_multiplier = 2
+
+# Default number of message sending attempts in case of any problems
+# occurred: positive value N means at most N retries, 0 means no
+# retries, None or -1 (or any other negative values) mean to retry
+# forever. This option is used only if acknowledgments are enabled.
+# (integer value)
+#rpc_retry_attempts = 3
+
+# List of publisher hosts SubConsumer can subscribe on. This option
+# has higher priority then the default publishers list taken from the
+# matchmaker. (list value)
+#subscribe_on =
+
+# Size of executor thread pool when executor is threading or eventlet.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
+#executor_thread_pool_size = 64
+
+# Seconds to wait for a response from a call. (integer value)
+#rpc_response_timeout = 60
+
+# The network address and optional user credentials for connecting to
+# the messaging backend, in URL format. The expected format is:
+#
+# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
+#
+# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
+#
+# For full details on the fields in the URL see the documentation of
+# oslo_messaging.TransportURL at
+# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
+# (string value)
+#transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
+
+# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
+# drivers include amqp and zmq. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rpc_backend = rabbit
+
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the transport_url
+# option. (string value)
+#control_exchange = openstack
+
+#
+# From oslo.log
+#
+
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
+# Note: This option can be changed without restarting.
+#debug = false
+
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
+# Note: This option can be changed without restarting.
+# Deprecated group/name - [DEFAULT]/log_config
+
+# Defines the format string for %%(asctime)s in log records. Default:
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
+#log_date_format = %Y-%m-%d %H:%M:%S
+
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logfile
+#log_file = <None>
+
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logdir
+#log_dir = <None>
+
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages.This option is ignored if log_config_append is set.
+# (boolean value)
+#use_journal = false
+
+# Syslog facility to receive log lines. This option is ignored if
+# log_config_append is set. (string value)
+#syslog_log_facility = LOG_USER
+
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_json = false
+
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_stderr = false
+
+# Format string to use for log messages with context. (string value)
+#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
+# value)
+#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+
+# Defines the format string for %(user_identity)s that is used in
+# logging_context_format_string. (string value)
+#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+
+# Enables or disables publication of error events. (boolean value)
+#publish_errors = false
+
+# The format for an instance that is passed with the log message.
+# (string value)
+#instance_format = "[instance: %(uuid)s] "
+
+# The format for an instance UUID that is passed with the log message.
+# (string value)
+#instance_uuid_format = "[instance: %(uuid)s] "
+
+# Interval, number of seconds, of log rate limiting. (integer value)
+#rate_limit_interval = 0
+
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
+#rate_limit_burst = 0
+
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
+#rate_limit_except_level = CRITICAL
+
+# Enables or disables fatal status of deprecations. (boolean value)
+#fatal_deprecations = false
+
+#
+# From ceilometer
+#
+
+# To reduce polling agent load, samples are sent to the notification agent in a
+# batch. To gain higher throughput at the cost of load set this to False.
+# (boolean value)
 #batch_polled_samples = true
 
 # Inspector to use for inspecting the hypervisor layer. Known inspectors are
@@ -30,7 +345,7 @@
 #libvirt_uri =
 
 # Swift reseller prefix. Must be on par with reseller_prefix in proxy-
-# server.conf. (string value)
+# agent.conf. (string value)
 #reseller_prefix = AUTH_
 
 # Configuration file for pipeline definition. (string value)
@@ -69,334 +384,6 @@
 # (integer value)
 # Minimum value: 1
 #max_parallel_requests = 64
-
-#
-# From oslo.log
-#
-
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
-# Note: This option can be changed without restarting.
-#debug = false
-
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
-# Note: This option can be changed without restarting.
-# Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
-
-# Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
-#log_date_format = %Y-%m-%d %H:%M:%S
-
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
-# Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
-
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
-# Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
-
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
-# (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
-# Syslog facility to receive log lines. This option is ignored if
-# log_config_append is set. (string value)
-#syslog_log_facility = LOG_USER
-
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_json = false
-
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
-#use_stderr = false
-
-# Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
-
-# Format string to use for log messages when context is undefined. (string
-# value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
-
-# Defines the format string for %(user_identity)s that is used in
-# logging_context_format_string. (string value)
-#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
-
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
-
-# Enables or disables publication of error events. (boolean value)
-#publish_errors = false
-
-# The format for an instance that is passed with the log message. (string
-# value)
-#instance_format = "[instance: %(uuid)s] "
-
-# The format for an instance UUID that is passed with the log message. (string
-# value)
-#instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
-
-# Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
-
-#
-# From oslo.messaging
-#
-
-# Size of RPC connection pool. (integer value)
-#rpc_conn_pool_size = 30
-
-# The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
-
-# The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-# Size of executor thread pool when executor is threading or eventlet. (integer
-# value)
-# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
-#executor_thread_pool_size = 64
-
-# Seconds to wait for a response from a call. (integer value)
-#rpc_response_timeout = 60
-
-# The network address and optional user credentials for connecting to the
-# messaging backend, in URL format. The expected format is:
-#
-# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
-#
-# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
-#
-# For full details on the fields in the URL see the documentation of
-# oslo_messaging.TransportURL at
-# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
-# (string value)
-#transport_url = <None>
-
-# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
-# include amqp and zmq. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rpc_backend = rabbit
-
-# The default exchange under which topics are scoped. May be overridden by an
-# exchange name specified in the transport_url option. (string value)
-#control_exchange = openstack
-
-#
-# From oslo.service.service
-#
-
-# Enable eventlet backdoor.  Acceptable values are 0, <port>, and
-# <start>:<end>, where 0 results in listening on a random tcp port number;
-# <port> results in listening on the specified port number (and not enabling
-# backdoor if that port is in use); and <start>:<end> results in listening on
-# the smallest unused port number within the specified range of port numbers.
-# The chosen port is displayed in the service's log file. (string value)
-#backdoor_port = <None>
-
-# Enable eventlet backdoor, using the provided path as a unix socket that can
-# receive connections. This option is mutually exclusive with 'backdoor_port'
-# in that only one should be provided. If both are provided then the existence
-# of this option overrides the usage of that option. (string value)
-#backdoor_socket = <None>
-
-# Enables or disables logging values of all registered options when starting a
-# service (at DEBUG level). (boolean value)
-#log_options = true
-
-# Specify a timeout after which a gracefully shutdown server will exit. Zero
-# value means endless wait. (integer value)
-#graceful_shutdown_timeout = 60
 
 
 [compute]
@@ -416,6 +403,7 @@
 # workload_partitioning - <No description provided>
 # libvirt_metadata - <No description provided>
 #instance_discovery_method = libvirt_metadata
+instance_discovery_method = libvirt_metadata
 
 # New instances will be discovered periodically based on this option (in
 # seconds). By default, the agent discovers instances according to pipeline
@@ -437,23 +425,6 @@
 #resource_cache_expiry = 3600
 
 
-[coordination]
-
-#
-# From ceilometer
-#
-
-# The backend URL to use for distributed coordination. If left empty, per-
-# deployment central agent and per-host compute agent won't do workload
-# partitioning and will only function correctly if a single instance of that
-# service is running. (string value)
-#backend_url = <None>
-
-# Number of seconds between checks to see if group membership has changed
-# (floating point value)
-#check_watchers = 10.0
-
-
 [event]
 
 #
@@ -529,52 +500,6 @@
 # Tolerance of IPMI/NM polling failures before disable this pollster. Negative
 # indicates retrying forever. (integer value)
 #polling_retry = 3
-
-
-[matchmaker_redis]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Host to locate redis. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
-
-# DEPRECATED: Use this port to connect to redis host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
-
-# DEPRECATED: Password for Redis server (optional). (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#password =
-
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
-# [host:port, host1:port ... ] (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#sentinel_hosts =
-
-# Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
-
-# Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
-
-# Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
-
-# Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
 
 
 [meter]
@@ -593,7 +518,7 @@
 
 # List directory to find files of defining meter notifications. (multi valued)
 #meter_definitions_dirs = /etc/ceilometer/meters.d
-#meter_definitions_dirs = /build/ceilometer-mfW0lD/ceilometer-11.0.1/ceilometer/data/meters.d
+#meter_definitions_dirs = /usr/src/git/ceilometer/ceilometer/data/meters.d
 
 
 [notification]
@@ -666,633 +591,6 @@
 #notification_control_exchanges = aodh
 
 
-[oslo_concurrency]
-
-#
-# From oslo.concurrency
-#
-
-# Enables or disables inter-process locks. (boolean value)
-#disable_process_locking = false
-
-# Directory to use for lock files.  For security, the specified directory
-# should only be writable by the user running the processes that need locking.
-# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
-# a lock path must be set. (string value)
-#lock_path = <None>
-
-
-[oslo_messaging_amqp]
-
-#
-# From oslo.messaging
-#
-
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string
-# value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string
-# value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional)
-# (string value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-
-# robin fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating
-# point value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-# Enable asynchronous consumer commits (boolean value)
-#enable_auto_commit = false
-
-# The maximum number of records returned in a poll call (integer value)
-#max_poll_records = 500
-
-# Protocol used to communicate with brokers (string value)
-# Possible values:
-# PLAINTEXT - <No description provided>
-# SASL_PLAINTEXT - <No description provided>
-# SSL - <No description provided>
-# SASL_SSL - <No description provided>
-#security_protocol = PLAINTEXT
-
-# Mechanism when security protocol is SASL (string value)
-#sasl_mechanism = PLAIN
-
-# CA certificate PEM file used to verify the server certificate (string value)
-#ssl_cafile =
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are
-# messaging, messagingv2, routing, log, test, noop (multi valued)
-# Deprecated group/name - [DEFAULT]/notification_driver
-#driver =
-
-# A URL representing the messaging driver to use for notifications. If not set,
-# we fall back to the same configuration used for RPC. (string value)
-# Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
-
-# AMQP topic used for OpenStack notifications. (list value)
-# Deprecated group/name - [rpc_notifier2]/topics
-# Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
-
-[oslo_messaging_rabbit]
-
-#
-# From oslo.messaging
-#
-
-# Use durable queues in AMQP. (boolean value)
-# Deprecated group/name - [DEFAULT]/amqp_durable_queues
-# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
-#amqp_durable_queues = false
-
-# Auto-delete queues in AMQP. (boolean value)
-#amqp_auto_delete = false
-
-# Connect over SSL. (boolean value)
-# Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl
-#ssl = false
-
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
-
-# How long to wait before reconnecting in response to an AMQP consumer cancel
-# notification. (floating point value)
-#kombu_reconnect_delay = 1.0
-
-# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
-# be used. This option may not be available in future versions. (string value)
-#kombu_compression = <None>
-
-# How long to wait a missing client before abandoning to send it its replies.
-# This value should not be longer than rpc_response_timeout. (integer value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
-#kombu_missing_consumer_retry_timeout = 60
-
-# Determines how the next RabbitMQ node is chosen in case the one we are
-# currently connected to becomes unavailable. Takes effect only if more than
-# one RabbitMQ node is provided in config. (string value)
-# Possible values:
-# round-robin - <No description provided>
-# shuffle - <No description provided>
-#kombu_failover_strategy = round-robin
-
-# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_host = localhost
-
-# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
-# value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_port = 5672
-
-# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_hosts = $rabbit_host:$rabbit_port
-
-# DEPRECATED: The RabbitMQ userid. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_userid = guest
-
-# DEPRECATED: The RabbitMQ password. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_password = guest
-
-# The RabbitMQ login method. (string value)
-# Possible values:
-# PLAIN - <No description provided>
-# AMQPLAIN - <No description provided>
-# RABBIT-CR-DEMO - <No description provided>
-#rabbit_login_method = AMQPLAIN
-
-# DEPRECATED: The RabbitMQ virtual host. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_virtual_host = /
-
-# How frequently to retry connecting with RabbitMQ. (integer value)
-#rabbit_retry_interval = 1
-
-# How long to backoff for between retries when connecting to RabbitMQ. (integer
-# value)
-#rabbit_retry_backoff = 2
-
-# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
-# (integer value)
-#rabbit_interval_max = 30
-
-# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
-# (infinite retry count). (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#rabbit_max_retries = 0
-
-# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
-# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
-# is no longer controlled by the x-ha-policy argument when declaring a queue.
-# If you just want to make sure that all queues (except those with auto-
-# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
-# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
-#rabbit_ha_queues = false
-
-# Positive integer representing duration in seconds for queue TTL (x-expires).
-# Queues which are unused for the duration of the TTL are automatically
-# deleted. The parameter affects only reply and fanout queues. (integer value)
-# Minimum value: 1
-#rabbit_transient_queues_ttl = 1800
-
-# Specifies the number of messages to prefetch. Setting to zero allows
-# unlimited messages. (integer value)
-#rabbit_qos_prefetch_count = 0
-
-# Number of seconds after which the Rabbit broker is considered down if
-# heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
-# value)
-#heartbeat_timeout_threshold = 60
-
-# How often times during the heartbeat_timeout_threshold we check the
-# heartbeat. (integer value)
-#heartbeat_rate = 2
-
-
-[oslo_messaging_zmq]
-
-#
-# From oslo.messaging
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-
 [polling]
 
 #
@@ -1307,10 +605,6 @@
 # pool with the same partitioning_group_prefix a disjoint subset of pollsters
 # should be loaded. (string value)
 #partitioning_group_prefix = <None>
-
-# Batch size of samples to send to notification agent, Set to 0 to disable
-# (integer value)
-#batch_size = 50
 
 
 [publisher]
@@ -1325,6 +619,7 @@
 # Deprecated group/name - [publisher_rpc]/metering_secret
 # Deprecated group/name - [publisher]/metering_secret
 #telemetry_secret = change this for valid signing
+telemetry_secret=opnfv_secret
 
 
 [publisher_notifier]
@@ -1357,87 +652,115 @@
 #secret_key = <None>
 
 
-[rgw_client]
-
-#
-# From ceilometer
-#
-
-# Whether RGW uses implicit tenants or not. (boolean value)
-#implicit_tenants = false
-
-
 [service_credentials]
 
 #
 # From ceilometer-auth
 #
-
-# Authentication type to load (string value)
-# Deprecated group/name - [service_credentials]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
 
 # Authentication URL (string value)
 #auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
 
 # Scope for system operations (string value)
 #system_scope = <None>
 
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-# Deprecated group/name - [service_credentials]/tenant_id
-#project_id = <None>
-
-# Project name to scope to (string value)
-# Deprecated group/name - [service_credentials]/tenant_name
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
 
 # Trust ID (string value)
 #trust_id = <None>
 
-# Optional domain ID to use with v3 and v2 parameters. It will be used for both
-# the user and project domain in v3 and ignored in v2 authentication. (string
-# value)
-#default_domain_id = <None>
-
-# Optional domain name to use with v3 API and v2 parameters. It will be used
-# for both the user and project domain in v3 and ignored in v2 authentication.
-# (string value)
-#default_domain_name = <None>
-
-# User id (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [service_credentials]/user_name
-#username = <None>
-
 # User's domain id (string value)
 #user_domain_id = <None>
+user_domain_id = default
 
 # User's domain name (string value)
 #user_domain_name = <None>
 
-# User's password (string value)
-#password = <None>
-
-# Region name to use for OpenStack service endpoints. (string value)
-# Deprecated group/name - [DEFAULT]/os_region_name
-#region_name = <None>
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = ceilometer
 
 # Type of endpoint in Identity service catalog to use for communication with
 # OpenStack services. (string value)
@@ -1450,7 +773,7 @@
 # internalURL - <No description provided>
 # adminURL - <No description provided>
 # Deprecated group/name - [service_credentials]/os_endpoint_type
-#interface = public
+interface = internal
 
 
 [service_types]
@@ -1484,59 +807,306 @@
 # Deprecated group/name - [service_types]/cinderv2
 #cinder = volumev3
 
-
-[vmware]
-
-#
-# From ceilometer
-#
-
-# IP address of the VMware vSphere host. (host address value)
-#host_ip = 127.0.0.1
-
-# Port of the VMware vSphere host. (port value)
+[xenapi]
+
+#
+# From ceilometer
+#
+
+# URL for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_url = <None>
+
+# Username for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_username = root
+
+# Password for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_password = <None>
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+#
+# From oslo.messaging
+#
+
+# The Drivers(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+driver = messagingv2
+
+# A URL representing the messaging driver to use for notifications. If
+# not set, we fall back to the same configuration used for RPC.
+# (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+topics = notifications
+
+# The maximum number of attempts to re-send a notification message
+# which failed to be delivered due to a recoverable error. 0 - No
+# retry, -1 - indefinite (integer value)
+#retry = -1
+[oslo_messaging_rabbit]
+#
+# From oslo.messaging
+#
+
+# Use durable queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/amqp_durable_queues
+# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
+#amqp_durable_queues = false
+
+# Auto-delete queues in AMQP. (boolean value)
+#amqp_auto_delete = false
+
+# Enable SSL (boolean value)
+#ssl = <None>
+
+
+# How long to wait before reconnecting in response to an AMQP consumer
+# cancel notification. (floating point value)
+#kombu_reconnect_delay = 1.0
+
+# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression
+# will not be used. This option may not be available in future
+# versions. (string value)
+#kombu_compression = <None>
+
+# How long to wait a missing client before abandoning to send it its
+# replies. This value should not be longer than rpc_response_timeout.
+# (integer value)
+# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
+#kombu_missing_consumer_retry_timeout = 60
+
+# Determines how the next RabbitMQ node is chosen in case the one we
+# are currently connected to becomes unavailable. Takes effect only if
+# more than one RabbitMQ node is provided in config. (string value)
+# Possible values:
+# round-robin - <No description provided>
+# shuffle - <No description provided>
+#kombu_failover_strategy = round-robin
+
+# DEPRECATED: The RabbitMQ broker address where a single node is used.
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_host = localhost
+
+# DEPRECATED: The RabbitMQ broker port where a single node is used.
+# (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#host_port = 443
-
-# Username of VMware vSphere. (string value)
-#host_username =
-
-# Password of VMware vSphere. (string value)
-#host_password =
-
-# CA bundle file to use in verifying the vCenter server certificate. (string
-# value)
-#ca_file = <None>
-
-# If true, the vCenter server certificate is not verified. If false, then the
-# default CA truststore is used for verification. This option is ignored if
-# "ca_file" is set. (boolean value)
-#insecure = false
-
-# Number of times a VMware vSphere API may be retried. (integer value)
-#api_retry_count = 10
-
-# Sleep time in seconds for polling an ongoing async task. (floating point
-# value)
-#task_poll_interval = 0.5
-
-# Optional vim service WSDL location e.g http://<server>/vimService.wsdl.
-# Optional over-ride to default location for bug work-arounds. (string value)
-#wsdl_location = <None>
-
-
-[xenapi]
-
-#
-# From ceilometer
-#
-
-# URL for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_url = <None>
-
-# Username for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_username = root
-
-# Password for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_password = <None>
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_port = 5672
+
+# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_hosts = $rabbit_host:$rabbit_port
+
+# DEPRECATED: The RabbitMQ userid. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_userid = guest
+
+# DEPRECATED: The RabbitMQ password. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_password = guest
+
+# The RabbitMQ login method. (string value)
+# Possible values:
+# PLAIN - <No description provided>
+# AMQPLAIN - <No description provided>
+# RABBIT-CR-DEMO - <No description provided>
+#rabbit_login_method = AMQPLAIN
+
+# DEPRECATED: The RabbitMQ virtual host. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_virtual_host = /
+
+# How frequently to retry connecting with RabbitMQ. (integer value)
+#rabbit_retry_interval = 1
+
+# How long to backoff for between retries when connecting to RabbitMQ.
+# (integer value)
+#rabbit_retry_backoff = 2
+
+# Maximum interval of RabbitMQ connection retries. Default is 30
+# seconds. (integer value)
+#rabbit_interval_max = 30
+
+# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
+# is 0 (infinite retry count). (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#rabbit_max_retries = 0
+
+# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
+# queue mirroring is no longer controlled by the x-ha-policy argument
+# when declaring a queue. If you just want to make sure that all
+# queues (except those with auto-generated names) are mirrored across
+# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
+# mode": "all"}' " (boolean value)
+#rabbit_ha_queues = false
+
+# Positive integer representing duration in seconds for queue TTL
+# (x-expires). Queues which are unused for the duration of the TTL are
+# automatically deleted. The parameter affects only reply and fanout
+# queues. (integer value)
+# Minimum value: 1
+#rabbit_transient_queues_ttl = 1800
+
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
+
+# NOTE(dmescheryakov) hardcoding to >0 by default
+# Having no prefetch limit makes oslo.messaging consume all available
+# messages from the queue. That can lead to a situation when several
+# server processes hog all the messages leaving others out of business.
+# That leads to artificial high message processing latency and at the
+# extrime to MessagingTimeout errors.
+rabbit_qos_prefetch_count = 64
+
+# Number of seconds after which the Rabbit broker is considered down
+# if heartbeat's keep-alive fails (0 disable the heartbeat).
+# EXPERIMENTAL (integer value)
+#heartbeat_timeout_threshold = 60
+
+# How often times during the heartbeat_timeout_threshold we check the
+# heartbeat. (integer value)
+#heartbeat_rate = 2
+
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
+#fake_rabbit = false
+
+# Maximum number of channels to allow (integer value)
+#channel_max = <None>
+
+
+# The maximum byte size for an AMQP frame (integer value)
+#frame_max = <None>
+
+# How often to send heartbeats for consumer's connections (integer
+# value)
+#heartbeat_interval = 3
+
+# Arguments passed to ssl.wrap_socket (dict value)
+#ssl_options = <None>
+
+# Set socket timeout in seconds for connection's socket (floating
+# point value)
+#socket_timeout = 0.25
+
+# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
+# point value)
+#tcp_user_timeout = 0.25
+
+# Set delay for reconnection to some host which has connection error
+# (floating point value)
+#host_connection_reconnect_delay = 0.25
+
+# Connection factory implementation (string value)
+# Possible values:
+# new - <No description provided>
+# single - <No description provided>
+# read_write - <No description provided>
+#connection_factory = single
+
+# Maximum number of connections to keep queued. (integer value)
+#pool_max_size = 30
+
+# Maximum number of connections to create above `pool_max_size`.
+# (integer value)
+#pool_max_overflow = 0
+
+# Default number of seconds to wait for a connections to available
+# (integer value)
+#pool_timeout = 30
+
+# Lifetime of a connection (since creation) in seconds or None for no
+# recycling. Expired connections are closed on acquire. (integer
+# value)
+#pool_recycle = 600
+
+# Threshold at which inactive (since release) connections are
+# considered stale in seconds or None for no staleness. Stale
+# connections are closed on acquire. (integer value)
+#pool_stale = 60
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#default_serializer_type = json
+
+# Persist notification messages. (boolean value)
+#notification_persistence = false
+
+# Exchange name for sending notifications (string value)
+#default_notification_exchange = ${control_exchange}_notification
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# notification listener. (integer value)
+#notification_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending notification, -1 means infinite retry. (integer value)
+#default_notification_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending notification message (floating point value)
+#notification_retry_delay = 0.25
+
+# Time to live for rpc queues without consumers in seconds. (integer
+# value)
+#rpc_queue_expiration = 60
+
+# Exchange name for sending RPC messages (string value)
+#default_rpc_exchange = ${control_exchange}_rpc
+
+# Exchange name for receiving RPC replies (string value)
+#rpc_reply_exchange = ${control_exchange}_rpc_reply
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# rpc listener. (integer value)
+#rpc_listener_prefetch_count = 100
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# rpc reply listener. (integer value)
+#rpc_reply_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending reply. -1 means infinite retry during rpc_timeout (integer
+# value)
+#rpc_reply_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending reply. (floating point value)
+#rpc_reply_retry_delay = 0.25
+
+# Reconnecting retry count in case of connectivity problem during
+# sending RPC message, -1 means infinite retry. If actual retry
+# attempts in not 0 the rpc request could be processed more than one
+# time (integer value)
+#default_rpc_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending RPC message (floating point value)
+#rpc_retry_delay = 0.25

2019-04-30 22:45:14,934 [salt.state       :1951][INFO    ][3409] Completed state [/etc/ceilometer/ceilometer.conf] at time 22:45:14.934571 duration_in_ms=163.303
2019-04-30 22:45:14,934 [salt.state       :1780][INFO    ][3409] Running state [/etc/default/ceilometer-agent-compute] at time 22:45:14.934792
2019-04-30 22:45:14,934 [salt.state       :1813][INFO    ][3409] Executing state file.managed for [/etc/default/ceilometer-agent-compute]
2019-04-30 22:45:14,949 [salt.fileclient  :1219][INFO    ][3409] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/default'
2019-04-30 22:45:14,953 [salt.state       :300 ][INFO    ][3409] File changed:
New file
2019-04-30 22:45:14,954 [salt.state       :1951][INFO    ][3409] Completed state [/etc/default/ceilometer-agent-compute] at time 22:45:14.954056 duration_in_ms=19.264
2019-04-30 22:45:14,954 [salt.state       :1780][INFO    ][3409] Running state [/etc/ceilometer/pipeline.yaml] at time 22:45:14.954251
2019-04-30 22:45:14,954 [salt.state       :1813][INFO    ][3409] Executing state file.managed for [/etc/ceilometer/pipeline.yaml]
2019-04-30 22:45:14,967 [salt.fileclient  :1219][INFO    ][3409] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/rocky/pipeline.yaml'
2019-04-30 22:45:15,005 [salt.state       :300 ][INFO    ][3409] File changed:
New file
2019-04-30 22:45:15,005 [salt.state       :1951][INFO    ][3409] Completed state [/etc/ceilometer/pipeline.yaml] at time 22:45:15.005155 duration_in_ms=50.903
2019-04-30 22:45:15,005 [salt.state       :1780][INFO    ][3409] Running state [/etc/ceilometer/event_pipeline.yaml] at time 22:45:15.005382
2019-04-30 22:45:15,005 [salt.state       :1813][INFO    ][3409] Executing state file.managed for [/etc/ceilometer/event_pipeline.yaml]
2019-04-30 22:45:15,018 [salt.fileclient  :1219][INFO    ][3409] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/rocky/event_pipeline.yaml'
2019-04-30 22:45:15,051 [salt.state       :300 ][INFO    ][3409] File changed:
New file
2019-04-30 22:45:15,051 [salt.state       :1951][INFO    ][3409] Completed state [/etc/ceilometer/event_pipeline.yaml] at time 22:45:15.051773 duration_in_ms=46.392
2019-04-30 22:45:15,051 [salt.state       :1780][INFO    ][3409] Running state [/etc/ceilometer/polling.yaml] at time 22:45:15.051966
2019-04-30 22:45:15,052 [salt.state       :1813][INFO    ][3409] Executing state file.managed for [/etc/ceilometer/polling.yaml]
2019-04-30 22:45:15,065 [salt.fileclient  :1219][INFO    ][3409] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/rocky/polling.yaml'
2019-04-30 22:45:15,095 [salt.state       :300 ][INFO    ][3409] File changed:
--- 
+++ 
@@ -1,27 +1,7 @@
+
 ---
 sources:
-    - name: some_pollsters
-      interval: 300
+    - name: default_pollsters
+      interval: 180
       meters:
-        - cpu
-        - cpu_l3_cache
-        - memory.usage
-        - network.incoming.bytes
-        - network.incoming.packets
-        - network.outgoing.bytes
-        - network.outgoing.packets
-        - disk.device.read.bytes
-        - disk.device.read.requests
-        - disk.device.write.bytes
-        - disk.device.write.requests
-        - hardware.cpu.util
-        - hardware.memory.used
-        - hardware.memory.total
-        - hardware.memory.buffer
-        - hardware.memory.cached
-        - hardware.memory.swap.avail
-        - hardware.memory.swap.total
-        - hardware.system_stats.io.outgoing.blocks
-        - hardware.system_stats.io.incoming.blocks
-        - hardware.network.ip.incoming.datagrams
-        - hardware.network.ip.outgoing.datagrams
+        - "*"

2019-04-30 22:45:15,096 [salt.state       :1951][INFO    ][3409] Completed state [/etc/ceilometer/polling.yaml] at time 22:45:15.096011 duration_in_ms=44.045
2019-04-30 22:45:15,413 [salt.state       :1780][INFO    ][3409] Running state [ceilometer-agent-compute] at time 22:45:15.413940
2019-04-30 22:45:15,414 [salt.state       :1813][INFO    ][3409] Executing state service.running for [ceilometer-agent-compute]
2019-04-30 22:45:15,414 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['systemctl', 'status', 'ceilometer-agent-compute.service', '-n', '0'] in directory '/root'
2019-04-30 22:45:15,424 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['systemctl', 'is-active', 'ceilometer-agent-compute.service'] in directory '/root'
2019-04-30 22:45:15,432 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['systemctl', 'is-enabled', 'ceilometer-agent-compute.service'] in directory '/root'
2019-04-30 22:45:15,440 [salt.state       :300 ][INFO    ][3409] The service ceilometer-agent-compute is already running
2019-04-30 22:45:15,440 [salt.state       :1951][INFO    ][3409] Completed state [ceilometer-agent-compute] at time 22:45:15.440687 duration_in_ms=26.741
2019-04-30 22:45:15,440 [salt.state       :1780][INFO    ][3409] Running state [ceilometer-agent-compute] at time 22:45:15.440846
2019-04-30 22:45:15,441 [salt.state       :1813][INFO    ][3409] Executing state service.mod_watch for [ceilometer-agent-compute]
2019-04-30 22:45:15,441 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['systemctl', 'is-active', 'ceilometer-agent-compute.service'] in directory '/root'
2019-04-30 22:45:15,449 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3409] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'ceilometer-agent-compute.service'] in directory '/root'
2019-04-30 22:45:15,567 [salt.state       :300 ][INFO    ][3409] {'ceilometer-agent-compute': True}
2019-04-30 22:45:15,567 [salt.state       :1951][INFO    ][3409] Completed state [ceilometer-agent-compute] at time 22:45:15.567528 duration_in_ms=126.681
2019-04-30 22:45:15,569 [salt.minion      :1711][INFO    ][3409] Returning information for job: 20190430224451101774
2019-04-30 22:49:38,810 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command pillar.get with jid 20190430224938792149
2019-04-30 22:49:38,821 [salt.minion      :1432][INFO    ][4925] Starting a new job with PID 4925
2019-04-30 22:49:38,825 [salt.minion      :1711][INFO    ][4925] Returning information for job: 20190430224938792149
2019-04-30 22:49:39,429 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command pillar.get with jid 20190430224939411264
2019-04-30 22:49:39,438 [salt.minion      :1432][INFO    ][4930] Starting a new job with PID 4930
2019-04-30 22:49:39,442 [salt.minion      :1711][INFO    ][4930] Returning information for job: 20190430224939411264
2019-04-30 22:49:40,066 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command pillar.get with jid 20190430224940046664
2019-04-30 22:49:40,076 [salt.minion      :1432][INFO    ][4935] Starting a new job with PID 4935
2019-04-30 22:49:40,079 [salt.minion      :1711][INFO    ][4935] Returning information for job: 20190430224940046664
2019-04-30 22:49:40,598 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command pillar.get with jid 20190430224940580562
2019-04-30 22:49:40,607 [salt.minion      :1432][INFO    ][4941] Starting a new job with PID 4941
2019-04-30 22:49:40,611 [salt.minion      :1711][INFO    ][4941] Returning information for job: 20190430224940580562
2019-04-30 22:50:27,009 [salt.minion      :1308][INFO    ][3184] User sudo_ubuntu Executing command cp.push_dir with jid 20190430225026989463
2019-04-30 22:50:27,017 [salt.minion      :1432][INFO    ][4966] Starting a new job with PID 4966
