2019-04-30 21:31:48,258 [salt.utils.decorators:613 ][WARNING ][3657] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:05,263 [salt.utils.decorators:613 ][WARNING ][4479] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:20,475 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:34,229 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /boot/grub/grub.cfg - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:43,509 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,525 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,541 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,557 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,573 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,592 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,612 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,628 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,644 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,660 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,676 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,692 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,708 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,724 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,740 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:43,756 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:44,459 [salt.utils.decorators:613 ][WARNING ][6014] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:32:44,537 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /etc/shadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:44,537 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /etc/gshadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:44,538 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /etc/group- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:44,539 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /etc/group - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:44,540 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /etc/passwd- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:44,540 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /etc/passwd - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:44,542 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /etc/gshadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:44,543 [salt.loaded.int.states.file:2298][WARNING ][6014] State for file: /etc/shadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:32:52,887 [salt.loaded.int.module.cmdmod:730 ][ERROR   ][6014] Command '['ovs-vsctl', 'br-exists', 'br-floating']' failed with return code: 2
2019-04-30 21:32:52,888 [salt.loaded.int.module.cmdmod:736 ][ERROR   ][6014] retcode: 2
2019-04-30 21:32:55,966 [salt.loaded.int.module.cmdmod:730 ][ERROR   ][6014] Command '['umount', '/dev/shm']' failed with return code: 32
2019-04-30 21:32:55,967 [salt.loaded.int.module.cmdmod:734 ][ERROR   ][6014] stderr: umount: /dev/shm: target is busy
        (In some cases useful info about processes that
         use the device is found by lsof(8) or fuser(1).)
2019-04-30 21:32:55,967 [salt.loaded.int.module.cmdmod:736 ][ERROR   ][6014] retcode: 32
2019-04-30 21:32:58,063 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13512] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2019-04-30 21:32:58,078 [salt.loaded.int.module.cmdmod:395 ][INFO    ][13512] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'salt-minion.service'] in directory '/root'
2019-04-30 21:32:58,088 [salt.utils.parsers:1051][WARNING ][2081] Minion received a SIGTERM. Exiting.
2019-04-30 21:32:58,877 [salt.cli.daemons :293 ][INFO    ][13564] Setting up the Salt Minion "cmp002.mcp-ovs-ha.local"
2019-04-30 21:32:58,950 [salt.cli.daemons :82  ][INFO    ][13564] Starting up the Salt Minion
2019-04-30 21:32:58,950 [salt.utils.event :1017][INFO    ][13564] Starting pull socket on /var/run/salt/minion/minion_event_56b6a5185f_pull.ipc
2019-04-30 21:32:59,522 [salt.minion      :976 ][INFO    ][13564] Creating minion process manager
2019-04-30 21:33:00,632 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][13564] Executing command ['date', '+%z'] in directory '/root'
2019-04-30 21:33:00,647 [salt.utils.schedule:568 ][INFO    ][13564] Updating job settings for scheduled job: __mine_interval
2019-04-30 21:33:00,648 [salt.minion      :1108][INFO    ][13564] Added mine.update to scheduler
2019-04-30 21:33:00,652 [salt.minion      :1975][INFO    ][13564] Minion is starting as user 'root'
2019-04-30 21:33:00,665 [salt.minion      :2336][INFO    ][13564] Minion is ready to receive requests!
2019-04-30 21:33:14,538 [salt.minion      :1308][INFO    ][13564] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430213314494123
2019-04-30 21:33:14,548 [salt.minion      :1432][INFO    ][13669] Starting a new job with PID 13669
2019-04-30 21:33:14,560 [salt.minion      :1711][INFO    ][13669] Returning information for job: 20190430213314494123
2019-04-30 21:33:55,121 [salt.minion      :1308][INFO    ][13564] User sudo_ubuntu Executing command test.ping with jid 20190430213355120660
2019-04-30 21:33:55,129 [salt.minion      :1432][INFO    ][13695] Starting a new job with PID 13695
2019-04-30 21:33:55,140 [salt.minion      :1711][INFO    ][13695] Returning information for job: 20190430213355120660
2019-04-30 21:33:55,779 [salt.minion      :1308][INFO    ][13564] User sudo_ubuntu Executing command cmd.run with jid 20190430213355779912
2019-04-30 21:33:55,787 [salt.minion      :1432][INFO    ][13706] Starting a new job with PID 13706
2019-04-30 21:33:55,790 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][13706] Executing command 'reboot' in directory '/root'
2019-04-30 21:33:55,987 [salt.utils.parsers:1051][WARNING ][13564] Minion received a SIGTERM. Exiting.
2019-04-30 21:33:55,988 [salt.cli.daemons :82  ][INFO    ][13564] Shutting down the Salt Minion
2019-04-30 21:33:57,486 [salt.minion      :1711][INFO    ][13706] Returning information for job: 20190430213355779912
2019-04-30 21:36:22,830 [salt.cli.daemons :293 ][INFO    ][3337] Setting up the Salt Minion "cmp002.mcp-ovs-ha.local"
2019-04-30 21:36:23,001 [salt.cli.daemons :82  ][INFO    ][3337] Starting up the Salt Minion
2019-04-30 21:36:23,001 [salt.utils.event :1017][INFO    ][3337] Starting pull socket on /var/run/salt/minion/minion_event_56b6a5185f_pull.ipc
2019-04-30 21:36:23,597 [salt.minion      :976 ][INFO    ][3337] Creating minion process manager
2019-04-30 21:36:24,741 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][3337] Executing command ['date', '+%z'] in directory '/root'
2019-04-30 21:36:24,752 [salt.utils.schedule:568 ][INFO    ][3337] Updating job settings for scheduled job: __mine_interval
2019-04-30 21:36:24,754 [salt.minion      :1108][INFO    ][3337] Added mine.update to scheduler
2019-04-30 21:36:24,757 [salt.minion      :1975][INFO    ][3337] Minion is starting as user 'root'
2019-04-30 21:36:24,768 [salt.minion      :2336][INFO    ][3337] Minion is ready to receive requests!
2019-04-30 21:36:32,581 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command test.ping with jid 20190430213632566499
2019-04-30 21:36:32,590 [salt.minion      :1432][INFO    ][3567] Starting a new job with PID 3567
2019-04-30 21:36:32,601 [salt.minion      :1711][INFO    ][3567] Returning information for job: 20190430213632566499
2019-04-30 21:36:33,229 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.apply with jid 20190430213633214368
2019-04-30 21:36:33,237 [salt.minion      :1432][INFO    ][3572] Starting a new job with PID 3572
2019-04-30 21:36:36,809 [salt.state       :915 ][INFO    ][3572] Loading fresh modules for state activity
2019-04-30 21:36:40,162 [salt.state       :1780][INFO    ][3572] Running state [/etc/environment] at time 21:36:40.162362
2019-04-30 21:36:40,162 [salt.state       :1813][INFO    ][3572] Executing state file.blockreplace for [/etc/environment]
2019-04-30 21:36:40,163 [salt.state       :300 ][INFO    ][3572] No changes needed to be made
2019-04-30 21:36:40,163 [salt.state       :1951][INFO    ][3572] Completed state [/etc/environment] at time 21:36:40.163170 duration_in_ms=0.808
2019-04-30 21:36:40,163 [salt.state       :1780][INFO    ][3572] Running state [/etc/profile.d] at time 21:36:40.163320
2019-04-30 21:36:40,163 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/profile.d]
2019-04-30 21:36:40,169 [salt.state       :300 ][INFO    ][3572] Directory /etc/profile.d is in the correct state
Directory /etc/profile.d updated
2019-04-30 21:36:40,169 [salt.state       :1951][INFO    ][3572] Completed state [/etc/profile.d] at time 21:36:40.169697 duration_in_ms=6.377
2019-04-30 21:36:40,169 [salt.state       :1780][INFO    ][3572] Running state [/etc/bash.bashrc] at time 21:36:40.169835
2019-04-30 21:36:40,169 [salt.state       :1813][INFO    ][3572] Executing state file.blockreplace for [/etc/bash.bashrc]
2019-04-30 21:36:40,873 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['git', '--version'] in directory '/root'
2019-04-30 21:36:41,060 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'test -f /etc/bash.bashrc' in directory '/root'
2019-04-30 21:36:41,074 [salt.state       :300 ][INFO    ][3572] No changes needed to be made
2019-04-30 21:36:41,074 [salt.state       :1951][INFO    ][3572] Completed state [/etc/bash.bashrc] at time 21:36:41.074700 duration_in_ms=904.864
2019-04-30 21:36:41,074 [salt.state       :1780][INFO    ][3572] Running state [/etc/profile] at time 21:36:41.074963
2019-04-30 21:36:41,075 [salt.state       :1813][INFO    ][3572] Executing state file.blockreplace for [/etc/profile]
2019-04-30 21:36:41,078 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'test -f /etc/profile' in directory '/root'
2019-04-30 21:36:41,087 [salt.state       :300 ][INFO    ][3572] No changes needed to be made
2019-04-30 21:36:41,087 [salt.state       :1951][INFO    ][3572] Completed state [/etc/profile] at time 21:36:41.087576 duration_in_ms=12.613
2019-04-30 21:36:41,087 [salt.state       :1780][INFO    ][3572] Running state [/etc/login.defs] at time 21:36:41.087772
2019-04-30 21:36:41,087 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/login.defs]
2019-04-30 21:36:41,174 [salt.state       :300 ][INFO    ][3572] File /etc/login.defs is in the correct state
2019-04-30 21:36:41,174 [salt.state       :1951][INFO    ][3572] Completed state [/etc/login.defs] at time 21:36:41.174900 duration_in_ms=87.127
2019-04-30 21:36:41,179 [salt.state       :1780][INFO    ][3572] Running state [at] at time 21:36:41.179327
2019-04-30 21:36:41,179 [salt.state       :1813][INFO    ][3572] Executing state pkg.installed for [at]
2019-04-30 21:36:41,179 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:36:41,641 [salt.state       :300 ][INFO    ][3572] All specified packages are already installed
2019-04-30 21:36:41,641 [salt.state       :1951][INFO    ][3572] Completed state [at] at time 21:36:41.641465 duration_in_ms=462.137
2019-04-30 21:36:41,642 [salt.state       :1780][INFO    ][3572] Running state [atd] at time 21:36:41.642791
2019-04-30 21:36:41,643 [salt.state       :1813][INFO    ][3572] Executing state service.running for [atd]
2019-04-30 21:36:41,643 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'status', 'atd.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:41,657 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-active', 'atd.service'] in directory '/root'
2019-04-30 21:36:41,664 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-enabled', 'atd.service'] in directory '/root'
2019-04-30 21:36:41,672 [salt.state       :300 ][INFO    ][3572] The service atd is already running
2019-04-30 21:36:41,672 [salt.state       :1951][INFO    ][3572] Completed state [atd] at time 21:36:41.672595 duration_in_ms=29.802
2019-04-30 21:36:41,674 [salt.state       :1780][INFO    ][3572] Running state [cron] at time 21:36:41.674963
2019-04-30 21:36:41,675 [salt.state       :1813][INFO    ][3572] Executing state pkg.installed for [cron]
2019-04-30 21:36:41,682 [salt.state       :300 ][INFO    ][3572] All specified packages are already installed
2019-04-30 21:36:41,682 [salt.state       :1951][INFO    ][3572] Completed state [cron] at time 21:36:41.682939 duration_in_ms=7.975
2019-04-30 21:36:41,684 [salt.state       :1780][INFO    ][3572] Running state [/etc/at.allow] at time 21:36:41.684043
2019-04-30 21:36:41,684 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/at.allow]
2019-04-30 21:36:41,705 [salt.state       :300 ][INFO    ][3572] File /etc/at.allow is in the correct state
2019-04-30 21:36:41,705 [salt.state       :1951][INFO    ][3572] Completed state [/etc/at.allow] at time 21:36:41.705936 duration_in_ms=21.893
2019-04-30 21:36:41,706 [salt.state       :1780][INFO    ][3572] Running state [/etc/at.deny] at time 21:36:41.706117
2019-04-30 21:36:41,706 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/at.deny]
2019-04-30 21:36:41,706 [salt.state       :300 ][INFO    ][3572] File /etc/at.deny is not present
2019-04-30 21:36:41,706 [salt.state       :1951][INFO    ][3572] Completed state [/etc/at.deny] at time 21:36:41.706708 duration_in_ms=0.591
2019-04-30 21:36:41,708 [salt.state       :1780][INFO    ][3572] Running state [cron] at time 21:36:41.708026
2019-04-30 21:36:41,708 [salt.state       :1813][INFO    ][3572] Executing state service.running for [cron]
2019-04-30 21:36:41,708 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'status', 'cron.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:41,716 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-active', 'cron.service'] in directory '/root'
2019-04-30 21:36:41,721 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-enabled', 'cron.service'] in directory '/root'
2019-04-30 21:36:41,729 [salt.state       :300 ][INFO    ][3572] The service cron is already running
2019-04-30 21:36:41,729 [salt.state       :1951][INFO    ][3572] Completed state [cron] at time 21:36:41.729300 duration_in_ms=21.273
2019-04-30 21:36:41,730 [salt.state       :1780][INFO    ][3572] Running state [/etc/cron.allow] at time 21:36:41.730673
2019-04-30 21:36:41,730 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/cron.allow]
2019-04-30 21:36:41,747 [salt.state       :300 ][INFO    ][3572] File /etc/cron.allow is in the correct state
2019-04-30 21:36:41,748 [salt.state       :1951][INFO    ][3572] Completed state [/etc/cron.allow] at time 21:36:41.748088 duration_in_ms=17.415
2019-04-30 21:36:41,748 [salt.state       :1780][INFO    ][3572] Running state [/etc/cron.deny] at time 21:36:41.748266
2019-04-30 21:36:41,748 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/cron.deny]
2019-04-30 21:36:41,748 [salt.state       :300 ][INFO    ][3572] File /etc/cron.deny is not present
2019-04-30 21:36:41,748 [salt.state       :1951][INFO    ][3572] Completed state [/etc/cron.deny] at time 21:36:41.748828 duration_in_ms=0.562
2019-04-30 21:36:41,749 [salt.state       :1780][INFO    ][3572] Running state [/etc/crontab] at time 21:36:41.749968
2019-04-30 21:36:41,750 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/crontab]
2019-04-30 21:36:41,750 [salt.state       :300 ][INFO    ][3572] File /etc/crontab exists with proper permissions. No changes made.
2019-04-30 21:36:41,750 [salt.state       :1951][INFO    ][3572] Completed state [/etc/crontab] at time 21:36:41.750768 duration_in_ms=0.8
2019-04-30 21:36:41,751 [salt.state       :1780][INFO    ][3572] Running state [/etc/cron.d] at time 21:36:41.751888
2019-04-30 21:36:41,752 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/cron.d]
2019-04-30 21:36:41,752 [salt.state       :300 ][INFO    ][3572] Directory /etc/cron.d is in the correct state
Directory /etc/cron.d updated
2019-04-30 21:36:41,752 [salt.state       :1951][INFO    ][3572] Completed state [/etc/cron.d] at time 21:36:41.752832 duration_in_ms=0.944
2019-04-30 21:36:41,754 [salt.state       :1780][INFO    ][3572] Running state [/etc/cron.daily] at time 21:36:41.753977
2019-04-30 21:36:41,754 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/cron.daily]
2019-04-30 21:36:41,760 [salt.state       :300 ][INFO    ][3572] Directory /etc/cron.daily is in the correct state
Directory /etc/cron.daily updated
2019-04-30 21:36:41,760 [salt.state       :1951][INFO    ][3572] Completed state [/etc/cron.daily] at time 21:36:41.760872 duration_in_ms=6.894
2019-04-30 21:36:41,762 [salt.state       :1780][INFO    ][3572] Running state [/etc/cron.hourly] at time 21:36:41.762004
2019-04-30 21:36:41,762 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/cron.hourly]
2019-04-30 21:36:41,762 [salt.state       :300 ][INFO    ][3572] Directory /etc/cron.hourly is in the correct state
Directory /etc/cron.hourly updated
2019-04-30 21:36:41,763 [salt.state       :1951][INFO    ][3572] Completed state [/etc/cron.hourly] at time 21:36:41.763020 duration_in_ms=1.016
2019-04-30 21:36:41,764 [salt.state       :1780][INFO    ][3572] Running state [/etc/cron.monthly] at time 21:36:41.764138
2019-04-30 21:36:41,764 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/cron.monthly]
2019-04-30 21:36:41,765 [salt.state       :300 ][INFO    ][3572] Directory /etc/cron.monthly is in the correct state
Directory /etc/cron.monthly updated
2019-04-30 21:36:41,765 [salt.state       :1951][INFO    ][3572] Completed state [/etc/cron.monthly] at time 21:36:41.765134 duration_in_ms=0.996
2019-04-30 21:36:41,766 [salt.state       :1780][INFO    ][3572] Running state [/etc/cron.weekly] at time 21:36:41.766268
2019-04-30 21:36:41,766 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/cron.weekly]
2019-04-30 21:36:41,767 [salt.state       :300 ][INFO    ][3572] Directory /etc/cron.weekly is in the correct state
Directory /etc/cron.weekly updated
2019-04-30 21:36:41,767 [salt.state       :1951][INFO    ][3572] Completed state [/etc/cron.weekly] at time 21:36:41.767270 duration_in_ms=1.002
2019-04-30 21:36:41,771 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/apt.conf.d/99prefer_ipv4-salt] at time 21:36:41.771293
2019-04-30 21:36:41,771 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/apt/apt.conf.d/99prefer_ipv4-salt]
2019-04-30 21:36:41,788 [salt.state       :300 ][INFO    ][3572] File /etc/apt/apt.conf.d/99prefer_ipv4-salt is in the correct state
2019-04-30 21:36:41,788 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/apt.conf.d/99prefer_ipv4-salt] at time 21:36:41.788254 duration_in_ms=16.961
2019-04-30 21:36:41,788 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/apt.conf.d/99allow_downgrades-salt] at time 21:36:41.788429
2019-04-30 21:36:41,788 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/apt/apt.conf.d/99allow_downgrades-salt]
2019-04-30 21:36:41,804 [salt.state       :300 ][INFO    ][3572] File /etc/apt/apt.conf.d/99allow_downgrades-salt is in the correct state
2019-04-30 21:36:41,804 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/apt.conf.d/99allow_downgrades-salt] at time 21:36:41.804287 duration_in_ms=15.858
2019-04-30 21:36:41,806 [salt.state       :1780][INFO    ][3572] Running state [linux_repo_prereq_pkgs] at time 21:36:41.806344
2019-04-30 21:36:41,806 [salt.state       :1813][INFO    ][3572] Executing state pkg.installed for [linux_repo_prereq_pkgs]
2019-04-30 21:36:41,814 [salt.state       :300 ][INFO    ][3572] All specified packages are already installed
2019-04-30 21:36:41,814 [salt.state       :1951][INFO    ][3572] Completed state [linux_repo_prereq_pkgs] at time 21:36:41.814305 duration_in_ms=7.961
2019-04-30 21:36:41,814 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/apt.conf.d/99proxies-salt] at time 21:36:41.814480
2019-04-30 21:36:41,814 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt]
2019-04-30 21:36:41,814 [salt.state       :300 ][INFO    ][3572] File /etc/apt/apt.conf.d/99proxies-salt is not present
2019-04-30 21:36:41,815 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/apt.conf.d/99proxies-salt] at time 21:36:41.815024 duration_in_ms=0.544
2019-04-30 21:36:41,815 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack] at time 21:36:41.815190
2019-04-30 21:36:41,815 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack]
2019-04-30 21:36:41,815 [salt.state       :300 ][INFO    ][3572] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack is not present
2019-04-30 21:36:41,815 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack] at time 21:36:41.815722 duration_in_ms=0.532
2019-04-30 21:36:41,815 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/preferences.d/mirantis_openstack] at time 21:36:41.815886
2019-04-30 21:36:41,816 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/apt/preferences.d/mirantis_openstack]
2019-04-30 21:36:41,816 [salt.state       :300 ][INFO    ][3572] File /etc/apt/preferences.d/mirantis_openstack is not present
2019-04-30 21:36:41,816 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/preferences.d/mirantis_openstack] at time 21:36:41.816410 duration_in_ms=0.524
2019-04-30 21:36:41,816 [salt.state       :1780][INFO    ][3572] Running state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -] at time 21:36:41.816586
2019-04-30 21:36:41,816 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -]
2019-04-30 21:36:41,817 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -' in directory '/root'
2019-04-30 21:36:41,933 [salt.state       :300 ][INFO    ][3572] {'pid': 3683, 'retcode': 0, 'stderr': '', 'stdout': 'OK'}
2019-04-30 21:36:41,933 [salt.state       :1951][INFO    ][3572] Completed state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -] at time 21:36:41.933761 duration_in_ms=117.175
2019-04-30 21:36:41,938 [salt.state       :1780][INFO    ][3572] Running state [deb http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial main] at time 21:36:41.938289
2019-04-30 21:36:41,938 [salt.state       :1813][INFO    ][3572] Executing state pkgrepo.managed for [deb http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial main]
2019-04-30 21:36:42,193 [salt.state       :300 ][INFO    ][3572] Configured package repo 'deb http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial main'
2019-04-30 21:36:42,193 [salt.state       :1951][INFO    ][3572] Completed state [deb http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial main] at time 21:36:42.193762 duration_in_ms=255.473
2019-04-30 21:36:42,193 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_backports] at time 21:36:42.193960
2019-04-30 21:36:42,194 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_backports]
2019-04-30 21:36:42,194 [salt.state       :300 ][INFO    ][3572] File /etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_backports is not present
2019-04-30 21:36:42,194 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mirantis_openstack_backports] at time 21:36:42.194630 duration_in_ms=0.67
2019-04-30 21:36:42,194 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/preferences.d/mirantis_openstack_backports] at time 21:36:42.194797
2019-04-30 21:36:42,194 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/apt/preferences.d/mirantis_openstack_backports]
2019-04-30 21:36:42,195 [salt.state       :300 ][INFO    ][3572] File /etc/apt/preferences.d/mirantis_openstack_backports is not present
2019-04-30 21:36:42,195 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/preferences.d/mirantis_openstack_backports] at time 21:36:42.195334 duration_in_ms=0.537
2019-04-30 21:36:42,195 [salt.state       :1780][INFO    ][3572] Running state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -] at time 21:36:42.195511
2019-04-30 21:36:42,195 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -]
2019-04-30 21:36:42,196 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -' in directory '/root'
2019-04-30 21:36:42,287 [salt.state       :300 ][INFO    ][3572] {'pid': 3797, 'retcode': 0, 'stderr': '', 'stdout': 'OK'}
2019-04-30 21:36:42,287 [salt.state       :1951][INFO    ][3572] Completed state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUVOQkZXdDhvZ0JDQUN0VC9qNFdNR3VoRUk0ODZWdjl6VlYwR1dHZWZIRTVoQmxnSlNqU2dyRXhMRnFRMkZvClNjYUFCQ2Z2elVldVhITm9oL2MyZUxqeDNZRTZvRnJkaXc1dGFtME5GbFpNTStQU3VmY2lUeFF6OHZyWEhHeDcKVkI1cmcyVFhLb3FPdjljVzY5MEZzUkFlT3RLVHRCeFp2WVZUTEVQbjJHSlcwOVh5OUNCYStuMjNYQkhUQnZLcwpqM2h4a24yNU95NzBXZ3hrL0JKcXB5blhHbm8rTnp1QW5JYmIrZitYN2k2ZmlYd3J2dHA1ek9ZT0plVXdTK2ZVCklNL21YYmV0T2Qvc0h0SnFjOU5VWXBUaXA0bkVsRXFBWVJDc1hEVGJ1TU5kelNyOFZsU01NOGI2MW1CR2VsTEgKWEplK0VQUCtMb2djNUtYTzhhZG9HZ1docWxiRDZuN3creW5IQUJFQkFBRzBMbVoxWld3dGFXNW1jbUVnS0VWNApZVzF3YkdVZ2EyVjVLU0E4WkdWMmIzQnpRRzFwY21GdWRHbHpMbU52YlQ2SkFUZ0VFd0VDQUNJRkFsV3Q4b2dDCkd3TUdDd2tJQndNQ0JoVUlBZ2tLQ3dRV0FnTUJBaDRCQWhlQUFBb0pFTHpsekVZZm9pc0lrdVFJQUpsMGNGSjUKQlNLTVhIaFJZZjBCZUR6aGRoM3BtY09Ycy9qU3puVEl4QjRPRTVPZHdyTWdLeW9Ja1NJUDhBRXR0dkIrQnVPdgpCSG1oVEw3a3ZSaFA1eGlLZGJDd21EdG9FUm9hcXhoUlJiWkpjSitwSHZsN21rRXU4R2oyS1plMmxmRTRaNlpGCjZxMDBHeDlIWWZzZTErVmdVUjV5bWg0MW5aQ3ZSVE5FbllCcDFSUWNQb2dpTHkycll2WmJ4WW5VdGc0amFEN0QKdnV1RVF3cmZFSGRLRlVsV0JDSVZibCtlM0s2WlNuaU9jcXF5SEs3Mi9ISTBTWXVacEdmQ3p6dzVkZU9EY2pXbQpHejRuWnI0MWNCM2VIWGtmbUczbmdkaG1iMk1wVnI4M3UrSmViT292anp1c2Y3MW9JZFpCVEZOWXNaTlNWS3JuCmwwcnJSdURJTUhiUU11UzVBUTBFVmEzeWlBRUlBTFpxZExHWFNHWkFnVVhsN3poUEg1d25JUXRkbzZpTUlvdloKelFOVzk1UkRUMm5tLzNZZGRpUnk2RnVPVGJhSFh3MDdENFpVbDRkR1ZIekV3QmxsaFVMeGNIVjNPT2RRM2dWcAo0bUJBWjhrdjBFZWx6cVBmRFFXUjJDcTBoaTdJSjRRNGVQcFpoUUZpYXN6OHFiVjdEN0NZYlpkREFtUUt4cUFrCjBYWU9qYkIzanpCMnI2TUhmbEFLbUp6VHAzK05BRTliRExBd1hhMG90MlRIRGJwUGRCNFI2cHhwRDZZM2p3ZVcKdUxVQ25JZnZ5SUJ3aEhvYmFVMjhwdy9CQSswZGtDOWpuTG5vTytUcnpCOVlENTgzOUxjM2N0cmRQQkxpRlBNRwp3ZGZBVlJDeWZnTGpPeVVMcWpUdWR4MU1vK0RnejkreHJjVEZvZWhJN1VZb1pucmFFS2tBRVFFQUFZa0JId1FZCkFRSUFDUVVDVmEzeWlBSWJEQUFLQ1JDODVjeEdINklyQ1BINUIvMFVjK09oTVNDa1JvczFZdjV0QTRic0VjanQKOCtzSjJTNnBVcUNiWnhtWHB6S3NwS3BuanAzREpqbVFLREIycTRVUERWRWxWRE1NZEJsc3RUeDFSUlpEZjh5awpuRHZSQlN6YXdrN1hoZmxvcm84TjJMeHY2Z1doaE12SFVZSXR5TzZLTWJBWnVaMk0xSTEvT0ZIRy9mLy83b1BNCjBRcE5iaWhmK0dxRS9kV1J6OVpEeit4bFNGbGk2QVIvM2xkcTdONmdrQ3NFRmRpM2o2WkRmMHFMc1pwYXpQVUkKd2lDQy9hQVlMa1JEdFRKVjFHNkVzV2lqbU9UTk5sQ0VGUy9YRExRM04yRXYvMXNnQU8wQWxCTWRYcVNucVVJMQoxaC9lU0tDaUdta3dGV2xDZi80SG5KVlA3UXBTZVJQTHl3Nzg1RnZ0M3A5dlQrNjRpc1owWks2Y3BjajgKPTBhUUQKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQotLS0tLUJFR0lOIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0KVmVyc2lvbjogR251UEcgdjEKCm1RRU5CRnRZVlk4QkNBQzNvbGk5M2h1c0cwWlZ0di9MOEk0L2JjVzYwTEZDeUIwRHV3RXpuR2xTYWoxZmpPUXUKQzdRWDl3dkdScThtUlo4bWZaNnNieEdtZ3MwTG5WNVFJQmxlMWw1STNCK0FNR2tzZjZVR0VXZ29OL3ZxODZnKwowSmc2a0pQL0Qwc2pHWHZkbGZ5K2JnQXFqc3gyYldPTGpRR3RIU0l4aGU0Y0U5SFBCZk1pWXNGd0dRdWEzWE4zCnRpR0tjaWZzenZEQTZ1cWRqUzZEdVRFUEN6eUtpU3lVZXZuV3RCaDBvVXRVdC8vWDRsRzJNeDBsVTkxdVVRR2oKS2VaK2ZZWE9McWdabS9GeExWVDV3M2cvVUdLOUNiejVoNGtHQ0pPZmswRXdJWnAwSVJSczFwaE9DNmdWTXdvVgp5V0tDdGRIbWc3T2I4STRBWjhPVzVISm4xVVBIVHByeGNIQm5BQkVCQUFHMExFRjFkRzlpZFdsc1pHVnlJRHhwCmJtWnlZU3RoZFhScFluVnBiR1JsY2tCdGFYSmhiblJwY3k1amIyMCtpUUU0QkJNQkFnQWlCUUpiV0ZXUEFoc0QKQmdzSkNBY0RBZ1lWQ0FJSkNnc0VGZ0lEQVFJZUFRSVhnQUFLQ1JDUlpWcDVURktKNzBjSkIvOUFyV3JTRnlFeApxczdUeW85TTVXQ1BqcXc3eTJGN2pkNEV0M2hxd2M1ang2S2x4R3BnMTdTSHQ0b1djbXRNTDNWQngremlCQWkwCjVSeTRaNHcwUXFGVzZnQXFRZXBlVzc2WXEvT1A1U29xRUk5c1V3ekxmVVk3cmFLL1AxYnV2WEIxZVpoNG1NdzQKVEZmNEhnbzh5VVEzZ2VZTm5VQkJmYVNma21peUJKR3NNWEJmVzJ6aGxwVkl5QjZDeWU1UjgyM0Z4R05KZStsaQpoZ2dOQ1FuS1lxckd0cjU1Uk82eFlJMXY4OWNnR3JPMkVWd1BrRkxBL01VblFFYjQzM0NrK3NqcDFOWkRVZnVKClUzZ2c4UzBoVCtDZjVYaWtuVC94cUloaFRZL0t6bE5teW5adC81MUR6WnpzYk0rUk82SlpGWUpMMkx1QzY5Z0IKK1I1anJtYUd1OWZHCj1zcUluCi0tLS0tRU5EIFBHUCBQVUJMSUMgS0VZIEJMT0NLLS0tLS0=' | base64 -d | apt-key add -] at time 21:36:42.287844 duration_in_ms=92.331
2019-04-30 21:36:42,291 [salt.state       :1780][INFO    ][3572] Running state [deb http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial main] at time 21:36:42.291071
2019-04-30 21:36:42,291 [salt.state       :1813][INFO    ][3572] Executing state pkgrepo.managed for [deb http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial main]
2019-04-30 21:36:42,374 [salt.state       :300 ][INFO    ][3572] Configured package repo 'deb http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial main'
2019-04-30 21:36:42,374 [salt.state       :1951][INFO    ][3572] Completed state [deb http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial main] at time 21:36:42.374584 duration_in_ms=83.512
2019-04-30 21:36:42,374 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/apt.conf.d/99proxies-salt-mcp_glusterfs] at time 21:36:42.374769
2019-04-30 21:36:42,374 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/apt/apt.conf.d/99proxies-salt-mcp_glusterfs]
2019-04-30 21:36:42,375 [salt.state       :300 ][INFO    ][3572] File /etc/apt/apt.conf.d/99proxies-salt-mcp_glusterfs is not present
2019-04-30 21:36:42,375 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/apt.conf.d/99proxies-salt-mcp_glusterfs] at time 21:36:42.375405 duration_in_ms=0.635
2019-04-30 21:36:42,375 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/preferences.d/mcp_glusterfs] at time 21:36:42.375579
2019-04-30 21:36:42,375 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/apt/preferences.d/mcp_glusterfs]
2019-04-30 21:36:42,461 [salt.state       :300 ][INFO    ][3572] File /etc/apt/preferences.d/mcp_glusterfs is in the correct state
2019-04-30 21:36:42,461 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/preferences.d/mcp_glusterfs] at time 21:36:42.461927 duration_in_ms=86.347
2019-04-30 21:36:42,462 [salt.state       :1780][INFO    ][3572] Running state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUlOQkZQdFlGY0JFQURjUU1aOWFTUjFwdGJhRWVxLzhCenU3a2lwYXhWR2gzV2NtYTRMeitRUGUwb2Z4UmYrCm9ZUjIyVVZHbUpjUG5WY0dGYlhKNTB0OEJBeHd0US9UU21HZFE5M2JsNkxPUkFRQlovdWQxTFRyMkhLcGFhMEYKMWJ3cGkvVEFnQldxUDY0SHUwTEJHSVNjMEc1bTMvaG4vYmk2WHhJSU96Si9ML3ZxTGgxZGVWYURyWVlXeTVDbQplOEF1UHRxT0FSS3NlZnZWZ3dscG5iQ3RrK1FhRTY1dmdsOE1YaVlDYU9lblQwN0dEQ3ExeGI3aGtvVmxKUzRiCmY2RjNVTUpWTVZ5NG9FeVlrUnc0U1A3VUxlVDFzNHlyQmVEemJ4aEZhWlJKRnZHcHZNVzNBWnhmcmhYLzVPcFoKU2tRaUZuNS8yajRlSmxpNC9NbXB0QUFIcEdyNHRMQStzNm1IbUE5RTljN3dNZnlGWmUrd01odmFuZ1NEcDA5ZwpTU1pzMDBicUtTbllJSi9vR1JqYXhDbGxrdzRTTWZUT3F2OGwvR094UnMxMnlJY1pEMDhTU21ScG95TGZmcmwxCnpFbHlhaXh0QUpSZW5waFRaeXE3ZVJMUHlRbDZxRURBMVh0THMzVGhLNS80ZmdoTWJlN01PSGlNQjhNd0wxUnoKTFFrbC9QVTA4dnhmdW05a2kvbS9MUDV4cEpvcE5IWnMyTDQ3UmxYMit0cTZGSldiRHZRd09Hb0ZUVG54bWREZgo0RWtNaGxCNE4rdWpadzY0cFNNdDNjMDhOU2h4dHkyVVdwYlNiYzgvZTdQczRCN0x4NmVxNkFtcXJjVUNoZzhjCjkrUEkyTFVxajZtRGJjOGp4cFVzbHZqc0xVMDV4bnE2T0x2NFUvL3BVVFV6NmVJOEZnRmFkVlpjb1FBUkFRQUIKdEJsTVlYVnVZMmh3WVdRZ1VGQkJJR1p2Y2lCSGJIVnpkR1Z5aVFJNEJCTUJBZ0FpQlFKVDdXQlhBaHNEQmdzSgpDQWNEQWdZVkNBSUpDZ3NFRmdJREFRSWVBUUlYZ0FBS0NSQVQ0QnQ3UCtocHFaM0xFQUNZWUM0VWp4d1NIb3VWCjI5NUN4Znd0OVAzMkdjV0piRm1MWXRMSFdWVHQydmROL005WGIwMllnVkxKbS9uVnkydkpocWNNb3dTVzJqTzUKMDNtTHE2NzJnNW1IaXRuSXExbGg0elhjSEV2UDc5YURSUXV2a2dzTEVIamxrMk56WXFkQXNkUmszVGdPTGNLMApTUk03Q3dnd2QvYi9nVlV0UFlyWDFodlFLcmpHSk05VlpGY0NNWDJSbUdBUzBmdDNRSHpFQVBaQ2d5YW1rMHFCCjJlbzh0TFpZbTQyaU12cStaU3hHdWxoemk3Z0prcHYvd05kYVA0RTZvOG83S1kzSklXTW14Qm44UVpVS1lNb2IKemU0UFNCZzRHNGlHMnVlOUlyR0NiOE0xbys0NmFPU3lFSWM5OWJ6bkY4SnJ3N2E4c0J1ZlZSalNaSUU5QS9vTQpFdEIxcFRSRG45bHd4L0R5WWJDVjE2RE9zazZkNXg0UDhjcXZnZGFHemw3Vk5Mdmt3bU1hQ0gwZ1JGSUJyOTM3CnJFVWJlU0pIVHFyVkcwelh6U2FVSEV3WFBaRTBMdDJDOWRFbU1uVDZueEM3RmJKQjFBVFBETng4a0w3TXZCNGoKbDVIa2pyRDFXOVh1MnkwZHp3QUtsZzVqdnp3UDQ2TUpndm0rQVlLODA4WGhPaE1aald6enQ1UE9lRGNEaEdocApSU2ZRdEFoU25Sa090S1MxZHJNQ3QyN2hMWkRFWmZDcC8vYWo3anZWTDhGamFtR0VNZm05MUZMUWE1TFk3T29KCmFZb1psWVV0dGhyWFY2dzVLSEZqRllBS2dBOHRKemViVHZjMVE5YXZDbzJHNXFXTlpxNlRTTHhIRU1vL2c0Z3UKMmFHUlBSckt1OXcySWJvc2c0T3FaL1liWEM4U2pBPT0KPStRbmEKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQ==' | base64 -d | apt-key add -] at time 21:36:42.462115
2019-04-30 21:36:42,462 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUlOQkZQdFlGY0JFQURjUU1aOWFTUjFwdGJhRWVxLzhCenU3a2lwYXhWR2gzV2NtYTRMeitRUGUwb2Z4UmYrCm9ZUjIyVVZHbUpjUG5WY0dGYlhKNTB0OEJBeHd0US9UU21HZFE5M2JsNkxPUkFRQlovdWQxTFRyMkhLcGFhMEYKMWJ3cGkvVEFnQldxUDY0SHUwTEJHSVNjMEc1bTMvaG4vYmk2WHhJSU96Si9ML3ZxTGgxZGVWYURyWVlXeTVDbQplOEF1UHRxT0FSS3NlZnZWZ3dscG5iQ3RrK1FhRTY1dmdsOE1YaVlDYU9lblQwN0dEQ3ExeGI3aGtvVmxKUzRiCmY2RjNVTUpWTVZ5NG9FeVlrUnc0U1A3VUxlVDFzNHlyQmVEemJ4aEZhWlJKRnZHcHZNVzNBWnhmcmhYLzVPcFoKU2tRaUZuNS8yajRlSmxpNC9NbXB0QUFIcEdyNHRMQStzNm1IbUE5RTljN3dNZnlGWmUrd01odmFuZ1NEcDA5ZwpTU1pzMDBicUtTbllJSi9vR1JqYXhDbGxrdzRTTWZUT3F2OGwvR094UnMxMnlJY1pEMDhTU21ScG95TGZmcmwxCnpFbHlhaXh0QUpSZW5waFRaeXE3ZVJMUHlRbDZxRURBMVh0THMzVGhLNS80ZmdoTWJlN01PSGlNQjhNd0wxUnoKTFFrbC9QVTA4dnhmdW05a2kvbS9MUDV4cEpvcE5IWnMyTDQ3UmxYMit0cTZGSldiRHZRd09Hb0ZUVG54bWREZgo0RWtNaGxCNE4rdWpadzY0cFNNdDNjMDhOU2h4dHkyVVdwYlNiYzgvZTdQczRCN0x4NmVxNkFtcXJjVUNoZzhjCjkrUEkyTFVxajZtRGJjOGp4cFVzbHZqc0xVMDV4bnE2T0x2NFUvL3BVVFV6NmVJOEZnRmFkVlpjb1FBUkFRQUIKdEJsTVlYVnVZMmh3WVdRZ1VGQkJJR1p2Y2lCSGJIVnpkR1Z5aVFJNEJCTUJBZ0FpQlFKVDdXQlhBaHNEQmdzSgpDQWNEQWdZVkNBSUpDZ3NFRmdJREFRSWVBUUlYZ0FBS0NSQVQ0QnQ3UCtocHFaM0xFQUNZWUM0VWp4d1NIb3VWCjI5NUN4Znd0OVAzMkdjV0piRm1MWXRMSFdWVHQydmROL005WGIwMllnVkxKbS9uVnkydkpocWNNb3dTVzJqTzUKMDNtTHE2NzJnNW1IaXRuSXExbGg0elhjSEV2UDc5YURSUXV2a2dzTEVIamxrMk56WXFkQXNkUmszVGdPTGNLMApTUk03Q3dnd2QvYi9nVlV0UFlyWDFodlFLcmpHSk05VlpGY0NNWDJSbUdBUzBmdDNRSHpFQVBaQ2d5YW1rMHFCCjJlbzh0TFpZbTQyaU12cStaU3hHdWxoemk3Z0prcHYvd05kYVA0RTZvOG83S1kzSklXTW14Qm44UVpVS1lNb2IKemU0UFNCZzRHNGlHMnVlOUlyR0NiOE0xbys0NmFPU3lFSWM5OWJ6bkY4SnJ3N2E4c0J1ZlZSalNaSUU5QS9vTQpFdEIxcFRSRG45bHd4L0R5WWJDVjE2RE9zazZkNXg0UDhjcXZnZGFHemw3Vk5Mdmt3bU1hQ0gwZ1JGSUJyOTM3CnJFVWJlU0pIVHFyVkcwelh6U2FVSEV3WFBaRTBMdDJDOWRFbU1uVDZueEM3RmJKQjFBVFBETng4a0w3TXZCNGoKbDVIa2pyRDFXOVh1MnkwZHp3QUtsZzVqdnp3UDQ2TUpndm0rQVlLODA4WGhPaE1aald6enQ1UE9lRGNEaEdocApSU2ZRdEFoU25Sa090S1MxZHJNQ3QyN2hMWkRFWmZDcC8vYWo3anZWTDhGamFtR0VNZm05MUZMUWE1TFk3T29KCmFZb1psWVV0dGhyWFY2dzVLSEZqRllBS2dBOHRKemViVHZjMVE5YXZDbzJHNXFXTlpxNlRTTHhIRU1vL2c0Z3UKMmFHUlBSckt1OXcySWJvc2c0T3FaL1liWEM4U2pBPT0KPStRbmEKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQ==' | base64 -d | apt-key add -]
2019-04-30 21:36:42,462 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUlOQkZQdFlGY0JFQURjUU1aOWFTUjFwdGJhRWVxLzhCenU3a2lwYXhWR2gzV2NtYTRMeitRUGUwb2Z4UmYrCm9ZUjIyVVZHbUpjUG5WY0dGYlhKNTB0OEJBeHd0US9UU21HZFE5M2JsNkxPUkFRQlovdWQxTFRyMkhLcGFhMEYKMWJ3cGkvVEFnQldxUDY0SHUwTEJHSVNjMEc1bTMvaG4vYmk2WHhJSU96Si9ML3ZxTGgxZGVWYURyWVlXeTVDbQplOEF1UHRxT0FSS3NlZnZWZ3dscG5iQ3RrK1FhRTY1dmdsOE1YaVlDYU9lblQwN0dEQ3ExeGI3aGtvVmxKUzRiCmY2RjNVTUpWTVZ5NG9FeVlrUnc0U1A3VUxlVDFzNHlyQmVEemJ4aEZhWlJKRnZHcHZNVzNBWnhmcmhYLzVPcFoKU2tRaUZuNS8yajRlSmxpNC9NbXB0QUFIcEdyNHRMQStzNm1IbUE5RTljN3dNZnlGWmUrd01odmFuZ1NEcDA5ZwpTU1pzMDBicUtTbllJSi9vR1JqYXhDbGxrdzRTTWZUT3F2OGwvR094UnMxMnlJY1pEMDhTU21ScG95TGZmcmwxCnpFbHlhaXh0QUpSZW5waFRaeXE3ZVJMUHlRbDZxRURBMVh0THMzVGhLNS80ZmdoTWJlN01PSGlNQjhNd0wxUnoKTFFrbC9QVTA4dnhmdW05a2kvbS9MUDV4cEpvcE5IWnMyTDQ3UmxYMit0cTZGSldiRHZRd09Hb0ZUVG54bWREZgo0RWtNaGxCNE4rdWpadzY0cFNNdDNjMDhOU2h4dHkyVVdwYlNiYzgvZTdQczRCN0x4NmVxNkFtcXJjVUNoZzhjCjkrUEkyTFVxajZtRGJjOGp4cFVzbHZqc0xVMDV4bnE2T0x2NFUvL3BVVFV6NmVJOEZnRmFkVlpjb1FBUkFRQUIKdEJsTVlYVnVZMmh3WVdRZ1VGQkJJR1p2Y2lCSGJIVnpkR1Z5aVFJNEJCTUJBZ0FpQlFKVDdXQlhBaHNEQmdzSgpDQWNEQWdZVkNBSUpDZ3NFRmdJREFRSWVBUUlYZ0FBS0NSQVQ0QnQ3UCtocHFaM0xFQUNZWUM0VWp4d1NIb3VWCjI5NUN4Znd0OVAzMkdjV0piRm1MWXRMSFdWVHQydmROL005WGIwMllnVkxKbS9uVnkydkpocWNNb3dTVzJqTzUKMDNtTHE2NzJnNW1IaXRuSXExbGg0elhjSEV2UDc5YURSUXV2a2dzTEVIamxrMk56WXFkQXNkUmszVGdPTGNLMApTUk03Q3dnd2QvYi9nVlV0UFlyWDFodlFLcmpHSk05VlpGY0NNWDJSbUdBUzBmdDNRSHpFQVBaQ2d5YW1rMHFCCjJlbzh0TFpZbTQyaU12cStaU3hHdWxoemk3Z0prcHYvd05kYVA0RTZvOG83S1kzSklXTW14Qm44UVpVS1lNb2IKemU0UFNCZzRHNGlHMnVlOUlyR0NiOE0xbys0NmFPU3lFSWM5OWJ6bkY4SnJ3N2E4c0J1ZlZSalNaSUU5QS9vTQpFdEIxcFRSRG45bHd4L0R5WWJDVjE2RE9zazZkNXg0UDhjcXZnZGFHemw3Vk5Mdmt3bU1hQ0gwZ1JGSUJyOTM3CnJFVWJlU0pIVHFyVkcwelh6U2FVSEV3WFBaRTBMdDJDOWRFbU1uVDZueEM3RmJKQjFBVFBETng4a0w3TXZCNGoKbDVIa2pyRDFXOVh1MnkwZHp3QUtsZzVqdnp3UDQ2TUpndm0rQVlLODA4WGhPaE1aald6enQ1UE9lRGNEaEdocApSU2ZRdEFoU25Sa090S1MxZHJNQ3QyN2hMWkRFWmZDcC8vYWo3anZWTDhGamFtR0VNZm05MUZMUWE1TFk3T29KCmFZb1psWVV0dGhyWFY2dzVLSEZqRllBS2dBOHRKemViVHZjMVE5YXZDbzJHNXFXTlpxNlRTTHhIRU1vL2c0Z3UKMmFHUlBSckt1OXcySWJvc2c0T3FaL1liWEM4U2pBPT0KPStRbmEKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQ==' | base64 -d | apt-key add -' in directory '/root'
2019-04-30 21:36:42,546 [salt.state       :300 ][INFO    ][3572] {'pid': 3913, 'retcode': 0, 'stderr': '', 'stdout': 'OK'}
2019-04-30 21:36:42,547 [salt.state       :1951][INFO    ][3572] Completed state [echo 'LS0tLS1CRUdJTiBQR1AgUFVCTElDIEtFWSBCTE9DSy0tLS0tClZlcnNpb246IEdudVBHIHYxCgptUUlOQkZQdFlGY0JFQURjUU1aOWFTUjFwdGJhRWVxLzhCenU3a2lwYXhWR2gzV2NtYTRMeitRUGUwb2Z4UmYrCm9ZUjIyVVZHbUpjUG5WY0dGYlhKNTB0OEJBeHd0US9UU21HZFE5M2JsNkxPUkFRQlovdWQxTFRyMkhLcGFhMEYKMWJ3cGkvVEFnQldxUDY0SHUwTEJHSVNjMEc1bTMvaG4vYmk2WHhJSU96Si9ML3ZxTGgxZGVWYURyWVlXeTVDbQplOEF1UHRxT0FSS3NlZnZWZ3dscG5iQ3RrK1FhRTY1dmdsOE1YaVlDYU9lblQwN0dEQ3ExeGI3aGtvVmxKUzRiCmY2RjNVTUpWTVZ5NG9FeVlrUnc0U1A3VUxlVDFzNHlyQmVEemJ4aEZhWlJKRnZHcHZNVzNBWnhmcmhYLzVPcFoKU2tRaUZuNS8yajRlSmxpNC9NbXB0QUFIcEdyNHRMQStzNm1IbUE5RTljN3dNZnlGWmUrd01odmFuZ1NEcDA5ZwpTU1pzMDBicUtTbllJSi9vR1JqYXhDbGxrdzRTTWZUT3F2OGwvR094UnMxMnlJY1pEMDhTU21ScG95TGZmcmwxCnpFbHlhaXh0QUpSZW5waFRaeXE3ZVJMUHlRbDZxRURBMVh0THMzVGhLNS80ZmdoTWJlN01PSGlNQjhNd0wxUnoKTFFrbC9QVTA4dnhmdW05a2kvbS9MUDV4cEpvcE5IWnMyTDQ3UmxYMit0cTZGSldiRHZRd09Hb0ZUVG54bWREZgo0RWtNaGxCNE4rdWpadzY0cFNNdDNjMDhOU2h4dHkyVVdwYlNiYzgvZTdQczRCN0x4NmVxNkFtcXJjVUNoZzhjCjkrUEkyTFVxajZtRGJjOGp4cFVzbHZqc0xVMDV4bnE2T0x2NFUvL3BVVFV6NmVJOEZnRmFkVlpjb1FBUkFRQUIKdEJsTVlYVnVZMmh3WVdRZ1VGQkJJR1p2Y2lCSGJIVnpkR1Z5aVFJNEJCTUJBZ0FpQlFKVDdXQlhBaHNEQmdzSgpDQWNEQWdZVkNBSUpDZ3NFRmdJREFRSWVBUUlYZ0FBS0NSQVQ0QnQ3UCtocHFaM0xFQUNZWUM0VWp4d1NIb3VWCjI5NUN4Znd0OVAzMkdjV0piRm1MWXRMSFdWVHQydmROL005WGIwMllnVkxKbS9uVnkydkpocWNNb3dTVzJqTzUKMDNtTHE2NzJnNW1IaXRuSXExbGg0elhjSEV2UDc5YURSUXV2a2dzTEVIamxrMk56WXFkQXNkUmszVGdPTGNLMApTUk03Q3dnd2QvYi9nVlV0UFlyWDFodlFLcmpHSk05VlpGY0NNWDJSbUdBUzBmdDNRSHpFQVBaQ2d5YW1rMHFCCjJlbzh0TFpZbTQyaU12cStaU3hHdWxoemk3Z0prcHYvd05kYVA0RTZvOG83S1kzSklXTW14Qm44UVpVS1lNb2IKemU0UFNCZzRHNGlHMnVlOUlyR0NiOE0xbys0NmFPU3lFSWM5OWJ6bkY4SnJ3N2E4c0J1ZlZSalNaSUU5QS9vTQpFdEIxcFRSRG45bHd4L0R5WWJDVjE2RE9zazZkNXg0UDhjcXZnZGFHemw3Vk5Mdmt3bU1hQ0gwZ1JGSUJyOTM3CnJFVWJlU0pIVHFyVkcwelh6U2FVSEV3WFBaRTBMdDJDOWRFbU1uVDZueEM3RmJKQjFBVFBETng4a0w3TXZCNGoKbDVIa2pyRDFXOVh1MnkwZHp3QUtsZzVqdnp3UDQ2TUpndm0rQVlLODA4WGhPaE1aald6enQ1UE9lRGNEaEdocApSU2ZRdEFoU25Sa090S1MxZHJNQ3QyN2hMWkRFWmZDcC8vYWo3anZWTDhGamFtR0VNZm05MUZMUWE1TFk3T29KCmFZb1psWVV0dGhyWFY2dzVLSEZqRllBS2dBOHRKemViVHZjMVE5YXZDbzJHNXFXTlpxNlRTTHhIRU1vL2c0Z3UKMmFHUlBSckt1OXcySWJvc2c0T3FaL1liWEM4U2pBPT0KPStRbmEKLS0tLS1FTkQgUEdQIFBVQkxJQyBLRVkgQkxPQ0stLS0tLQ==' | base64 -d | apt-key add -] at time 21:36:42.547191 duration_in_ms=85.076
2019-04-30 21:36:42,550 [salt.state       :1780][INFO    ][3572] Running state [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main] at time 21:36:42.550662
2019-04-30 21:36:42,550 [salt.state       :1813][INFO    ][3572] Executing state pkgrepo.managed for [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main]
2019-04-30 21:36:42,580 [salt.state       :300 ][INFO    ][3572] Package repo 'deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main' already configured
2019-04-30 21:36:42,580 [salt.state       :1951][INFO    ][3572] Completed state [deb http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial main] at time 21:36:42.580463 duration_in_ms=29.801
2019-04-30 21:36:42,580 [salt.state       :1780][INFO    ][3572] Running state [pkg.refresh_db] at time 21:36:42.580636
2019-04-30 21:36:42,580 [salt.state       :1813][INFO    ][3572] Executing state module.run for [pkg.refresh_db]
2019-04-30 21:36:42,581 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:42,581 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 21:36:46,137 [salt.state       :300 ][INFO    ][3572] {'ret': {'http://archive.ubuntu.com/ubuntu xenial-backports InRelease': None, 'http://archive.ubuntu.com/ubuntu xenial-updates InRelease': None, 'http://mirror.mirantis.com/nightly//openstack-queens/xenial xenial InRelease': None, 'http://mirror.mirantis.com/nightly//openstack-rocky//xenial xenial InRelease': None, 'http://archive.ubuntu.com/ubuntu xenial-security InRelease': None, 'http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2017.7 xenial InRelease': None, 'http://archive.ubuntu.com/ubuntu xenial InRelease': None, 'http://ppa.launchpad.net/gluster/glusterfs-3.13/ubuntu xenial InRelease': None}}
2019-04-30 21:36:46,137 [salt.state       :1951][INFO    ][3572] Completed state [pkg.refresh_db] at time 21:36:46.137688 duration_in_ms=3557.051
2019-04-30 21:36:46,138 [salt.state       :1780][INFO    ][3572] Running state [linux_extra_packages_removed] at time 21:36:46.137985
2019-04-30 21:36:46,138 [salt.state       :1813][INFO    ][3572] Executing state pkg.removed for [linux_extra_packages_removed]
2019-04-30 21:36:46,148 [salt.state       :300 ][INFO    ][3572] All specified packages are already absent
2019-04-30 21:36:46,148 [salt.state       :1951][INFO    ][3572] Completed state [linux_extra_packages_removed] at time 21:36:46.148653 duration_in_ms=10.667
2019-04-30 21:36:46,148 [salt.state       :1780][INFO    ][3572] Running state [linux_extra_packages_latest] at time 21:36:46.148840
2019-04-30 21:36:46,149 [salt.state       :1813][INFO    ][3572] Executing state pkg.latest for [linux_extra_packages_latest]
2019-04-30 21:36:46,160 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['apt-cache', '-q', 'policy', 'python-tornado'] in directory '/root'
2019-04-30 21:36:46,198 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['apt-cache', '-q', 'policy', 'smartmontools'] in directory '/root'
2019-04-30 21:36:46,228 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['apt-cache', '-q', 'policy', 'python-pymysql'] in directory '/root'
2019-04-30 21:36:46,259 [salt.state       :300 ][INFO    ][3572] All packages are up-to-date (python-pymysql, python-tornado, smartmontools).
2019-04-30 21:36:46,259 [salt.state       :1951][INFO    ][3572] Completed state [linux_extra_packages_latest] at time 21:36:46.259840 duration_in_ms=110.998
2019-04-30 21:36:46,260 [salt.state       :1780][INFO    ][3572] Running state [UTC] at time 21:36:46.260150
2019-04-30 21:36:46,260 [salt.state       :1813][INFO    ][3572] Executing state timezone.system for [UTC]
2019-04-30 21:36:46,261 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['timedatectl'] in directory '/root'
2019-04-30 21:36:46,316 [salt.state       :300 ][INFO    ][3572] Timezone UTC already set, UTC already set to UTC
2019-04-30 21:36:46,317 [salt.state       :1951][INFO    ][3572] Completed state [UTC] at time 21:36:46.317125 duration_in_ms=56.973
2019-04-30 21:36:46,317 [salt.state       :1780][INFO    ][3572] Running state [/etc/default/grub.d] at time 21:36:46.317445
2019-04-30 21:36:46,317 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/default/grub.d]
2019-04-30 21:36:46,318 [salt.state       :300 ][INFO    ][3572] Directory /etc/default/grub.d is in the correct state
Directory /etc/default/grub.d updated
2019-04-30 21:36:46,318 [salt.state       :1951][INFO    ][3572] Completed state [/etc/default/grub.d] at time 21:36:46.318739 duration_in_ms=1.293
2019-04-30 21:36:46,322 [salt.state       :1780][INFO    ][3572] Running state [/etc/default/grub.d/99-custom-settings.cfg] at time 21:36:46.322883
2019-04-30 21:36:46,323 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/default/grub.d/99-custom-settings.cfg]
2019-04-30 21:36:46,345 [salt.state       :300 ][INFO    ][3572] File /etc/default/grub.d/99-custom-settings.cfg is in the correct state
2019-04-30 21:36:46,345 [salt.state       :1951][INFO    ][3572] Completed state [/etc/default/grub.d/99-custom-settings.cfg] at time 21:36:46.345949 duration_in_ms=23.065
2019-04-30 21:36:46,347 [salt.state       :1780][INFO    ][3572] Running state [/etc/default/grub.d/90-hugepages.cfg] at time 21:36:46.346982
2019-04-30 21:36:46,347 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/default/grub.d/90-hugepages.cfg]
2019-04-30 21:36:46,437 [salt.state       :300 ][INFO    ][3572] File /etc/default/grub.d/90-hugepages.cfg is in the correct state
2019-04-30 21:36:46,437 [salt.state       :1951][INFO    ][3572] Completed state [/etc/default/grub.d/90-hugepages.cfg] at time 21:36:46.437593 duration_in_ms=90.61
2019-04-30 21:36:46,439 [salt.state       :1780][INFO    ][3572] Running state [update-grub] at time 21:36:46.438986
2019-04-30 21:36:46,439 [salt.state       :1813][INFO    ][3572] Executing state cmd.wait for [update-grub]
2019-04-30 21:36:46,439 [salt.state       :300 ][INFO    ][3572] No changes made for update-grub
2019-04-30 21:36:46,439 [salt.state       :1951][INFO    ][3572] Completed state [update-grub] at time 21:36:46.439557 duration_in_ms=0.571
2019-04-30 21:36:46,440 [salt.state       :1780][INFO    ][3572] Running state [/boot/grub/grub.cfg] at time 21:36:46.440558
2019-04-30 21:36:46,440 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/boot/grub/grub.cfg]
2019-04-30 21:36:46,444 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'test -f /boot/grub/grub.cfg' in directory '/root'
2019-04-30 21:36:46,449 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /boot/grub/grub.cfg - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:46,449 [salt.state       :300 ][INFO    ][3572] File /boot/grub/grub.cfg exists with proper permissions. No changes made.
2019-04-30 21:36:46,449 [salt.state       :1951][INFO    ][3572] Completed state [/boot/grub/grub.cfg] at time 21:36:46.449761 duration_in_ms=9.203
2019-04-30 21:36:46,450 [salt.state       :1780][INFO    ][3572] Running state [nf_conntrack] at time 21:36:46.449982
2019-04-30 21:36:46,450 [salt.state       :1813][INFO    ][3572] Executing state kmod.present for [nf_conntrack]
2019-04-30 21:36:46,450 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'lsmod' in directory '/root'
2019-04-30 21:36:46,456 [salt.state       :300 ][INFO    ][3572] Kernel module nf_conntrack is already present
2019-04-30 21:36:46,456 [salt.state       :1951][INFO    ][3572] Completed state [nf_conntrack] at time 21:36:46.456851 duration_in_ms=6.869
2019-04-30 21:36:46,457 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d] at time 21:36:46.457054
2019-04-30 21:36:46,457 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/modprobe.d]
2019-04-30 21:36:46,457 [salt.state       :300 ][INFO    ][3572] Directory /etc/modprobe.d is in the correct state
Directory /etc/modprobe.d updated
2019-04-30 21:36:46,457 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d] at time 21:36:46.457961 duration_in_ms=0.907
2019-04-30 21:36:46,459 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/hfsplus.conf] at time 21:36:46.459181
2019-04-30 21:36:46,459 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/hfsplus.conf]
2019-04-30 21:36:46,544 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/hfsplus.conf is in the correct state
2019-04-30 21:36:46,545 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/hfsplus.conf] at time 21:36:46.545080 duration_in_ms=85.899
2019-04-30 21:36:46,546 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/rds.conf] at time 21:36:46.546010
2019-04-30 21:36:46,546 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/rds.conf]
2019-04-30 21:36:46,625 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/rds.conf is in the correct state
2019-04-30 21:36:46,626 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/rds.conf] at time 21:36:46.626118 duration_in_ms=80.108
2019-04-30 21:36:46,627 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/cramfs.conf] at time 21:36:46.627022
2019-04-30 21:36:46,627 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/cramfs.conf]
2019-04-30 21:36:46,706 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/cramfs.conf is in the correct state
2019-04-30 21:36:46,706 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/cramfs.conf] at time 21:36:46.706757 duration_in_ms=79.735
2019-04-30 21:36:46,707 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/freevxfs.conf] at time 21:36:46.707692
2019-04-30 21:36:46,707 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/freevxfs.conf]
2019-04-30 21:36:46,787 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/freevxfs.conf is in the correct state
2019-04-30 21:36:46,788 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/freevxfs.conf] at time 21:36:46.788130 duration_in_ms=80.437
2019-04-30 21:36:46,789 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/hfs.conf] at time 21:36:46.789039
2019-04-30 21:36:46,789 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/hfs.conf]
2019-04-30 21:36:46,868 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/hfs.conf is in the correct state
2019-04-30 21:36:46,868 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/hfs.conf] at time 21:36:46.868452 duration_in_ms=79.413
2019-04-30 21:36:46,869 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/squashfs.conf] at time 21:36:46.869410
2019-04-30 21:36:46,869 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/squashfs.conf]
2019-04-30 21:36:46,949 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/squashfs.conf is in the correct state
2019-04-30 21:36:46,950 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/squashfs.conf] at time 21:36:46.950037 duration_in_ms=80.627
2019-04-30 21:36:46,950 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/udf.conf] at time 21:36:46.950948
2019-04-30 21:36:46,951 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/udf.conf]
2019-04-30 21:36:47,032 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/udf.conf is in the correct state
2019-04-30 21:36:47,032 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/udf.conf] at time 21:36:47.032647 duration_in_ms=81.699
2019-04-30 21:36:47,033 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/vfat.conf] at time 21:36:47.033569
2019-04-30 21:36:47,033 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/vfat.conf]
2019-04-30 21:36:47,113 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/vfat.conf is in the correct state
2019-04-30 21:36:47,114 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/vfat.conf] at time 21:36:47.114128 duration_in_ms=80.559
2019-04-30 21:36:47,115 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/sctp.conf] at time 21:36:47.115045
2019-04-30 21:36:47,115 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/sctp.conf]
2019-04-30 21:36:47,196 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/sctp.conf is in the correct state
2019-04-30 21:36:47,196 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/sctp.conf] at time 21:36:47.196651 duration_in_ms=81.606
2019-04-30 21:36:47,197 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/jffs2.conf] at time 21:36:47.197600
2019-04-30 21:36:47,197 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/jffs2.conf]
2019-04-30 21:36:47,277 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/jffs2.conf is in the correct state
2019-04-30 21:36:47,277 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/jffs2.conf] at time 21:36:47.277912 duration_in_ms=80.311
2019-04-30 21:36:47,278 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/tipc.conf] at time 21:36:47.278821
2019-04-30 21:36:47,279 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/tipc.conf]
2019-04-30 21:36:47,359 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/tipc.conf is in the correct state
2019-04-30 21:36:47,359 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/tipc.conf] at time 21:36:47.359625 duration_in_ms=80.804
2019-04-30 21:36:47,360 [salt.state       :1780][INFO    ][3572] Running state [/etc/modprobe.d/dccp.conf] at time 21:36:47.360562
2019-04-30 21:36:47,360 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/modprobe.d/dccp.conf]
2019-04-30 21:36:47,440 [salt.state       :300 ][INFO    ][3572] File /etc/modprobe.d/dccp.conf is in the correct state
2019-04-30 21:36:47,440 [salt.state       :1951][INFO    ][3572] Completed state [/etc/modprobe.d/dccp.conf] at time 21:36:47.440531 duration_in_ms=79.969
2019-04-30 21:36:47,440 [salt.state       :1780][INFO    ][3572] Running state [vm.dirty_background_ratio] at time 21:36:47.440698
2019-04-30 21:36:47,440 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [vm.dirty_background_ratio]
2019-04-30 21:36:47,441 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n vm.dirty_background_ratio' in directory '/root'
2019-04-30 21:36:47,446 [salt.state       :300 ][INFO    ][3572] Sysctl value vm.dirty_background_ratio = 5 is already set
2019-04-30 21:36:47,446 [salt.state       :1951][INFO    ][3572] Completed state [vm.dirty_background_ratio] at time 21:36:47.446636 duration_in_ms=5.939
2019-04-30 21:36:47,446 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_congestion_control] at time 21:36:47.446840
2019-04-30 21:36:47,447 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_congestion_control]
2019-04-30 21:36:47,447 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_congestion_control' in directory '/root'
2019-04-30 21:36:47,452 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_congestion_control = yeah is already set
2019-04-30 21:36:47,452 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_congestion_control] at time 21:36:47.452596 duration_in_ms=5.755
2019-04-30 21:36:47,452 [salt.state       :1780][INFO    ][3572] Running state [net.core.netdev_budget] at time 21:36:47.452834
2019-04-30 21:36:47,453 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.core.netdev_budget]
2019-04-30 21:36:47,453 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.core.netdev_budget' in directory '/root'
2019-04-30 21:36:47,458 [salt.state       :300 ][INFO    ][3572] Sysctl value net.core.netdev_budget = 600 is already set
2019-04-30 21:36:47,458 [salt.state       :1951][INFO    ][3572] Completed state [net.core.netdev_budget] at time 21:36:47.458909 duration_in_ms=6.074
2019-04-30 21:36:47,459 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_slow_start_after_idle] at time 21:36:47.459110
2019-04-30 21:36:47,459 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_slow_start_after_idle]
2019-04-30 21:36:47,459 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_slow_start_after_idle' in directory '/root'
2019-04-30 21:36:47,464 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_slow_start_after_idle = 0 is already set
2019-04-30 21:36:47,464 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_slow_start_after_idle] at time 21:36:47.464536 duration_in_ms=5.426
2019-04-30 21:36:47,464 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.icmp_echo_ignore_broadcasts] at time 21:36:47.464746
2019-04-30 21:36:47,464 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.icmp_echo_ignore_broadcasts]
2019-04-30 21:36:47,465 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.icmp_echo_ignore_broadcasts' in directory '/root'
2019-04-30 21:36:47,470 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.icmp_echo_ignore_broadcasts = 1 is already set
2019-04-30 21:36:47,470 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.icmp_echo_ignore_broadcasts] at time 21:36:47.470644 duration_in_ms=5.898
2019-04-30 21:36:47,470 [salt.state       :1780][INFO    ][3572] Running state [net.nf_conntrack_max] at time 21:36:47.470847
2019-04-30 21:36:47,471 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.nf_conntrack_max]
2019-04-30 21:36:47,471 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.nf_conntrack_max' in directory '/root'
2019-04-30 21:36:47,476 [salt.state       :300 ][INFO    ][3572] Sysctl value net.nf_conntrack_max = 1048576 is already set
2019-04-30 21:36:47,476 [salt.state       :1951][INFO    ][3572] Completed state [net.nf_conntrack_max] at time 21:36:47.476328 duration_in_ms=5.48
2019-04-30 21:36:47,476 [salt.state       :1780][INFO    ][3572] Running state [fs.inotify.max_user_instances] at time 21:36:47.476516
2019-04-30 21:36:47,476 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [fs.inotify.max_user_instances]
2019-04-30 21:36:47,477 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n fs.inotify.max_user_instances' in directory '/root'
2019-04-30 21:36:47,482 [salt.state       :300 ][INFO    ][3572] Sysctl value fs.inotify.max_user_instances = 4096 is already set
2019-04-30 21:36:47,482 [salt.state       :1951][INFO    ][3572] Completed state [fs.inotify.max_user_instances] at time 21:36:47.482494 duration_in_ms=5.977
2019-04-30 21:36:47,482 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.icmp_ignore_bogus_error_responses] at time 21:36:47.482694
2019-04-30 21:36:47,482 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.icmp_ignore_bogus_error_responses]
2019-04-30 21:36:47,483 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.icmp_ignore_bogus_error_responses' in directory '/root'
2019-04-30 21:36:47,487 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.icmp_ignore_bogus_error_responses = 1 is already set
2019-04-30 21:36:47,488 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.icmp_ignore_bogus_error_responses] at time 21:36:47.488017 duration_in_ms=5.323
2019-04-30 21:36:47,488 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.default.log_martians] at time 21:36:47.488225
2019-04-30 21:36:47,488 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.default.log_martians]
2019-04-30 21:36:47,488 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.default.log_martians' in directory '/root'
2019-04-30 21:36:47,494 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.default.log_martians = 1 is already set
2019-04-30 21:36:47,494 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.default.log_martians] at time 21:36:47.494227 duration_in_ms=6.0
2019-04-30 21:36:47,494 [salt.state       :1780][INFO    ][3572] Running state [net.core.somaxconn] at time 21:36:47.494472
2019-04-30 21:36:47,494 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.core.somaxconn]
2019-04-30 21:36:47,495 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.core.somaxconn' in directory '/root'
2019-04-30 21:36:47,499 [salt.state       :300 ][INFO    ][3572] Sysctl value net.core.somaxconn = 4096 is already set
2019-04-30 21:36:47,500 [salt.state       :1951][INFO    ][3572] Completed state [net.core.somaxconn] at time 21:36:47.500019 duration_in_ms=5.546
2019-04-30 21:36:47,500 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.all.send_redirects] at time 21:36:47.500208
2019-04-30 21:36:47,500 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.all.send_redirects]
2019-04-30 21:36:47,500 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.all.send_redirects' in directory '/root'
2019-04-30 21:36:47,506 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.all.send_redirects = 0 is already set
2019-04-30 21:36:47,506 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.all.send_redirects] at time 21:36:47.506349 duration_in_ms=6.14
2019-04-30 21:36:47,506 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_tw_reuse] at time 21:36:47.506550
2019-04-30 21:36:47,506 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_tw_reuse]
2019-04-30 21:36:47,507 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_tw_reuse' in directory '/root'
2019-04-30 21:36:47,511 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_tw_reuse = 1 is already set
2019-04-30 21:36:47,512 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_tw_reuse] at time 21:36:47.511995 duration_in_ms=5.443
2019-04-30 21:36:47,512 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.default.rp_filter] at time 21:36:47.512190
2019-04-30 21:36:47,512 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.default.rp_filter]
2019-04-30 21:36:47,512 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.default.rp_filter' in directory '/root'
2019-04-30 21:36:47,518 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.default.rp_filter = 1 is already set
2019-04-30 21:36:47,518 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.default.rp_filter] at time 21:36:47.518297 duration_in_ms=6.106
2019-04-30 21:36:47,518 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_fin_timeout] at time 21:36:47.518498
2019-04-30 21:36:47,518 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_fin_timeout]
2019-04-30 21:36:47,519 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_fin_timeout' in directory '/root'
2019-04-30 21:36:47,524 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_fin_timeout = 30 is already set
2019-04-30 21:36:47,524 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_fin_timeout] at time 21:36:47.524483 duration_in_ms=5.984
2019-04-30 21:36:47,524 [salt.state       :1780][INFO    ][3572] Running state [net.core.netdev_budget_usecs] at time 21:36:47.524677
2019-04-30 21:36:47,524 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.core.netdev_budget_usecs]
2019-04-30 21:36:47,525 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.core.netdev_budget_usecs' in directory '/root'
2019-04-30 21:36:47,530 [salt.state       :300 ][INFO    ][3572] Sysctl value net.core.netdev_budget_usecs = 5000 is already set
2019-04-30 21:36:47,530 [salt.state       :1951][INFO    ][3572] Completed state [net.core.netdev_budget_usecs] at time 21:36:47.530451 duration_in_ms=5.773
2019-04-30 21:36:47,530 [salt.state       :1780][INFO    ][3572] Running state [kernel.panic] at time 21:36:47.530647
2019-04-30 21:36:47,530 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [kernel.panic]
2019-04-30 21:36:47,531 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n kernel.panic' in directory '/root'
2019-04-30 21:36:47,536 [salt.state       :300 ][INFO    ][3572] Sysctl value kernel.panic = 60 is already set
2019-04-30 21:36:47,536 [salt.state       :1951][INFO    ][3572] Completed state [kernel.panic] at time 21:36:47.536363 duration_in_ms=5.714
2019-04-30 21:36:47,536 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_keepalive_probes] at time 21:36:47.536568
2019-04-30 21:36:47,536 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_keepalive_probes]
2019-04-30 21:36:47,537 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_keepalive_probes' in directory '/root'
2019-04-30 21:36:47,542 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_keepalive_probes = 8 is already set
2019-04-30 21:36:47,542 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_keepalive_probes] at time 21:36:47.542445 duration_in_ms=5.875
2019-04-30 21:36:47,542 [salt.state       :1780][INFO    ][3572] Running state [vm.dirty_ratio] at time 21:36:47.542635
2019-04-30 21:36:47,542 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [vm.dirty_ratio]
2019-04-30 21:36:47,543 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n vm.dirty_ratio' in directory '/root'
2019-04-30 21:36:47,548 [salt.state       :300 ][INFO    ][3572] Sysctl value vm.dirty_ratio = 10 is already set
2019-04-30 21:36:47,548 [salt.state       :1951][INFO    ][3572] Completed state [vm.dirty_ratio] at time 21:36:47.548365 duration_in_ms=5.73
2019-04-30 21:36:47,548 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.all.log_martians] at time 21:36:47.548569
2019-04-30 21:36:47,548 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.all.log_martians]
2019-04-30 21:36:47,549 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.all.log_martians' in directory '/root'
2019-04-30 21:36:47,554 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.all.log_martians = 1 is already set
2019-04-30 21:36:47,554 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.all.log_martians] at time 21:36:47.554300 duration_in_ms=5.73
2019-04-30 21:36:47,554 [salt.state       :1780][INFO    ][3572] Running state [fs.suid_dumpable] at time 21:36:47.554490
2019-04-30 21:36:47,554 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [fs.suid_dumpable]
2019-04-30 21:36:47,555 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n fs.suid_dumpable' in directory '/root'
2019-04-30 21:36:47,560 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -w fs.suid_dumpable="0"' in directory '/root'
2019-04-30 21:36:47,565 [salt.state       :300 ][INFO    ][3572] {'fs.suid_dumpable': 0}
2019-04-30 21:36:47,565 [salt.state       :1951][INFO    ][3572] Completed state [fs.suid_dumpable] at time 21:36:47.565744 duration_in_ms=11.254
2019-04-30 21:36:47,565 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.default.accept_redirects] at time 21:36:47.565925
2019-04-30 21:36:47,566 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.default.accept_redirects]
2019-04-30 21:36:47,566 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.default.accept_redirects' in directory '/root'
2019-04-30 21:36:47,571 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.default.accept_redirects = 0 is already set
2019-04-30 21:36:47,571 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.default.accept_redirects] at time 21:36:47.571628 duration_in_ms=5.703
2019-04-30 21:36:47,571 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.default.secure_redirects] at time 21:36:47.571831
2019-04-30 21:36:47,572 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.default.secure_redirects]
2019-04-30 21:36:47,572 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.default.secure_redirects' in directory '/root'
2019-04-30 21:36:47,577 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.default.secure_redirects = 0 is already set
2019-04-30 21:36:47,577 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.default.secure_redirects] at time 21:36:47.577788 duration_in_ms=5.956
2019-04-30 21:36:47,578 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.default.accept_source_route] at time 21:36:47.577981
2019-04-30 21:36:47,578 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.default.accept_source_route]
2019-04-30 21:36:47,578 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.default.accept_source_route' in directory '/root'
2019-04-30 21:36:47,583 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.default.accept_source_route = 0 is already set
2019-04-30 21:36:47,583 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.default.accept_source_route] at time 21:36:47.583677 duration_in_ms=5.694
2019-04-30 21:36:47,583 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_keepalive_intvl] at time 21:36:47.583883
2019-04-30 21:36:47,584 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_keepalive_intvl]
2019-04-30 21:36:47,584 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_keepalive_intvl' in directory '/root'
2019-04-30 21:36:47,589 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_keepalive_intvl = 3 is already set
2019-04-30 21:36:47,589 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_keepalive_intvl] at time 21:36:47.589757 duration_in_ms=5.873
2019-04-30 21:36:47,589 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_keepalive_time] at time 21:36:47.589947
2019-04-30 21:36:47,590 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_keepalive_time]
2019-04-30 21:36:47,590 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_keepalive_time' in directory '/root'
2019-04-30 21:36:47,595 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_keepalive_time = 30 is already set
2019-04-30 21:36:47,595 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_keepalive_time] at time 21:36:47.595875 duration_in_ms=5.927
2019-04-30 21:36:47,596 [salt.state       :1780][INFO    ][3572] Running state [kernel.randomize_va_space] at time 21:36:47.596078
2019-04-30 21:36:47,596 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [kernel.randomize_va_space]
2019-04-30 21:36:47,596 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n kernel.randomize_va_space' in directory '/root'
2019-04-30 21:36:47,601 [salt.state       :300 ][INFO    ][3572] Sysctl value kernel.randomize_va_space = 2 is already set
2019-04-30 21:36:47,601 [salt.state       :1951][INFO    ][3572] Completed state [kernel.randomize_va_space] at time 21:36:47.601943 duration_in_ms=5.865
2019-04-30 21:36:47,602 [salt.state       :1780][INFO    ][3572] Running state [fs.file-max] at time 21:36:47.602129
2019-04-30 21:36:47,602 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [fs.file-max]
2019-04-30 21:36:47,602 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n fs.file-max' in directory '/root'
2019-04-30 21:36:47,607 [salt.state       :300 ][INFO    ][3572] Sysctl value fs.file-max = 124165 is already set
2019-04-30 21:36:47,608 [salt.state       :1951][INFO    ][3572] Completed state [fs.file-max] at time 21:36:47.608008 duration_in_ms=5.879
2019-04-30 21:36:47,608 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_syncookies] at time 21:36:47.608213
2019-04-30 21:36:47,608 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_syncookies]
2019-04-30 21:36:47,608 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_syncookies' in directory '/root'
2019-04-30 21:36:47,613 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_syncookies = 1 is already set
2019-04-30 21:36:47,614 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_syncookies] at time 21:36:47.614049 duration_in_ms=5.836
2019-04-30 21:36:47,614 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_max_syn_backlog] at time 21:36:47.614249
2019-04-30 21:36:47,614 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_max_syn_backlog]
2019-04-30 21:36:47,614 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_max_syn_backlog' in directory '/root'
2019-04-30 21:36:47,619 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_max_syn_backlog = 8192 is already set
2019-04-30 21:36:47,620 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_max_syn_backlog] at time 21:36:47.620095 duration_in_ms=5.845
2019-04-30 21:36:47,620 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.all.rp_filter] at time 21:36:47.620347
2019-04-30 21:36:47,620 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.all.rp_filter]
2019-04-30 21:36:47,621 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.all.rp_filter' in directory '/root'
2019-04-30 21:36:47,626 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.all.rp_filter = 1 is already set
2019-04-30 21:36:47,626 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.all.rp_filter] at time 21:36:47.626467 duration_in_ms=6.119
2019-04-30 21:36:47,626 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.all.accept_source_route] at time 21:36:47.626666
2019-04-30 21:36:47,626 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.all.accept_source_route]
2019-04-30 21:36:47,627 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.all.accept_source_route' in directory '/root'
2019-04-30 21:36:47,632 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.all.accept_source_route = 0 is already set
2019-04-30 21:36:47,632 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.all.accept_source_route] at time 21:36:47.632512 duration_in_ms=5.846
2019-04-30 21:36:47,632 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.tcp_retries2] at time 21:36:47.632706
2019-04-30 21:36:47,632 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.tcp_retries2]
2019-04-30 21:36:47,633 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.tcp_retries2' in directory '/root'
2019-04-30 21:36:47,638 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.tcp_retries2 = 5 is already set
2019-04-30 21:36:47,638 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.tcp_retries2] at time 21:36:47.638542 duration_in_ms=5.836
2019-04-30 21:36:47,638 [salt.state       :1780][INFO    ][3572] Running state [net.core.netdev_max_backlog] at time 21:36:47.638728
2019-04-30 21:36:47,638 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.core.netdev_max_backlog]
2019-04-30 21:36:47,639 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.core.netdev_max_backlog' in directory '/root'
2019-04-30 21:36:47,644 [salt.state       :300 ][INFO    ][3572] Sysctl value net.core.netdev_max_backlog = 261144 is already set
2019-04-30 21:36:47,644 [salt.state       :1951][INFO    ][3572] Completed state [net.core.netdev_max_backlog] at time 21:36:47.644527 duration_in_ms=5.799
2019-04-30 21:36:47,644 [salt.state       :1780][INFO    ][3572] Running state [vm.swappiness] at time 21:36:47.644718
2019-04-30 21:36:47,644 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [vm.swappiness]
2019-04-30 21:36:47,645 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n vm.swappiness' in directory '/root'
2019-04-30 21:36:47,650 [salt.state       :300 ][INFO    ][3572] Sysctl value vm.swappiness = 10 is already set
2019-04-30 21:36:47,650 [salt.state       :1951][INFO    ][3572] Completed state [vm.swappiness] at time 21:36:47.650484 duration_in_ms=5.766
2019-04-30 21:36:47,650 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.all.secure_redirects] at time 21:36:47.650674
2019-04-30 21:36:47,650 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.all.secure_redirects]
2019-04-30 21:36:47,651 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.all.secure_redirects' in directory '/root'
2019-04-30 21:36:47,656 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.all.secure_redirects = 0 is already set
2019-04-30 21:36:47,656 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.all.secure_redirects] at time 21:36:47.656383 duration_in_ms=5.708
2019-04-30 21:36:47,656 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.neigh.default.gc_thresh1] at time 21:36:47.656571
2019-04-30 21:36:47,656 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh1]
2019-04-30 21:36:47,657 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh1' in directory '/root'
2019-04-30 21:36:47,662 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.neigh.default.gc_thresh1 = 4096 is already set
2019-04-30 21:36:47,662 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.neigh.default.gc_thresh1] at time 21:36:47.662480 duration_in_ms=5.907
2019-04-30 21:36:47,662 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.neigh.default.gc_thresh2] at time 21:36:47.662667
2019-04-30 21:36:47,662 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh2]
2019-04-30 21:36:47,663 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh2' in directory '/root'
2019-04-30 21:36:47,668 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.neigh.default.gc_thresh2 = 8192 is already set
2019-04-30 21:36:47,668 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.neigh.default.gc_thresh2] at time 21:36:47.668319 duration_in_ms=5.651
2019-04-30 21:36:47,668 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.neigh.default.gc_thresh3] at time 21:36:47.668515
2019-04-30 21:36:47,668 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.neigh.default.gc_thresh3]
2019-04-30 21:36:47,669 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.neigh.default.gc_thresh3' in directory '/root'
2019-04-30 21:36:47,674 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.neigh.default.gc_thresh3 = 16384 is already set
2019-04-30 21:36:47,674 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.neigh.default.gc_thresh3] at time 21:36:47.674288 duration_in_ms=5.772
2019-04-30 21:36:47,674 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.default.send_redirects] at time 21:36:47.674472
2019-04-30 21:36:47,674 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.default.send_redirects]
2019-04-30 21:36:47,675 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.default.send_redirects' in directory '/root'
2019-04-30 21:36:47,680 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.default.send_redirects = 0 is already set
2019-04-30 21:36:47,680 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.default.send_redirects] at time 21:36:47.680372 duration_in_ms=5.898
2019-04-30 21:36:47,680 [salt.state       :1780][INFO    ][3572] Running state [net.ipv4.conf.all.accept_redirects] at time 21:36:47.680564
2019-04-30 21:36:47,680 [salt.state       :1813][INFO    ][3572] Executing state sysctl.present for [net.ipv4.conf.all.accept_redirects]
2019-04-30 21:36:47,681 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl -n net.ipv4.conf.all.accept_redirects' in directory '/root'
2019-04-30 21:36:47,686 [salt.state       :300 ][INFO    ][3572] Sysctl value net.ipv4.conf.all.accept_redirects = 0 is already set
2019-04-30 21:36:47,686 [salt.state       :1951][INFO    ][3572] Completed state [net.ipv4.conf.all.accept_redirects] at time 21:36:47.686309 duration_in_ms=5.744
2019-04-30 21:36:47,686 [salt.state       :1780][INFO    ][3572] Running state [/mnt/hugepages_1G] at time 21:36:47.686601
2019-04-30 21:36:47,686 [salt.state       :1813][INFO    ][3572] Executing state mount.mounted for [/mnt/hugepages_1G]
2019-04-30 21:36:47,687 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:47,694 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'blkid' in directory '/root'
2019-04-30 21:36:47,728 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:47,735 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -o mode=775,pagesize=1G,remount -t hugetlbfs Hugetlbfs-kvm-1g /mnt/hugepages_1G' in directory '/root'
2019-04-30 21:36:47,740 [salt.state       :300 ][INFO    ][3572] {'umount': 'Forced remount because options (pagesize=1G) changed'}
2019-04-30 21:36:47,741 [salt.state       :1951][INFO    ][3572] Completed state [/mnt/hugepages_1G] at time 21:36:47.740974 duration_in_ms=54.374
2019-04-30 21:36:47,741 [salt.state       :1780][INFO    ][3572] Running state [sysctl vm.nr_hugepages=16] at time 21:36:47.741166
2019-04-30 21:36:47,741 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [sysctl vm.nr_hugepages=16]
2019-04-30 21:36:47,741 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'sysctl vm.nr_hugepages | grep -qE '16'' in directory '/root'
2019-04-30 21:36:47,747 [salt.state       :300 ][INFO    ][3572] unless execution succeeded
2019-04-30 21:36:47,747 [salt.state       :1951][INFO    ][3572] Completed state [sysctl vm.nr_hugepages=16] at time 21:36:47.747671 duration_in_ms=6.504
2019-04-30 21:36:47,747 [salt.state       :1780][INFO    ][3572] Running state [systemctl mask dev-hugepages.mount] at time 21:36:47.747876
2019-04-30 21:36:47,748 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [systemctl mask dev-hugepages.mount]
2019-04-30 21:36:47,748 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'systemctl mask dev-hugepages.mount' in directory '/root'
2019-04-30 21:36:47,819 [salt.state       :300 ][INFO    ][3572] {'pid': 4510, 'retcode': 0, 'stderr': '', 'stdout': ''}
2019-04-30 21:36:47,820 [salt.state       :1951][INFO    ][3572] Completed state [systemctl mask dev-hugepages.mount] at time 21:36:47.819988 duration_in_ms=72.112
2019-04-30 21:36:47,820 [salt.state       :1780][INFO    ][3572] Running state [linux_sysfs_package] at time 21:36:47.820203
2019-04-30 21:36:47,820 [salt.state       :1813][INFO    ][3572] Executing state pkg.installed for [linux_sysfs_package]
2019-04-30 21:36:47,825 [salt.state       :300 ][INFO    ][3572] All specified packages are already installed
2019-04-30 21:36:47,825 [salt.state       :1951][INFO    ][3572] Completed state [linux_sysfs_package] at time 21:36:47.825669 duration_in_ms=5.466
2019-04-30 21:36:47,827 [salt.state       :1780][INFO    ][3572] Running state [/etc/sysfs.d] at time 21:36:47.827015
2019-04-30 21:36:47,827 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/etc/sysfs.d]
2019-04-30 21:36:47,827 [salt.state       :300 ][INFO    ][3572] Directory /etc/sysfs.d is in the correct state
Directory /etc/sysfs.d updated
2019-04-30 21:36:47,827 [salt.state       :1951][INFO    ][3572] Completed state [/etc/sysfs.d] at time 21:36:47.827691 duration_in_ms=0.676
2019-04-30 21:36:47,827 [salt.state       :1780][INFO    ][3572] Running state [ondemand] at time 21:36:47.827844
2019-04-30 21:36:47,827 [salt.state       :1813][INFO    ][3572] Executing state service.dead for [ondemand]
2019-04-30 21:36:47,828 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'status', 'ondemand.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:47,836 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-active', 'ondemand.service'] in directory '/root'
2019-04-30 21:36:47,843 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-enabled', 'ondemand.service'] in directory '/root'
2019-04-30 21:36:47,853 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'runlevel' in directory '/root'
2019-04-30 21:36:47,859 [salt.state       :300 ][INFO    ][3572] The service ondemand is already dead
2019-04-30 21:36:47,860 [salt.state       :1951][INFO    ][3572] Completed state [ondemand] at time 21:36:47.860065 duration_in_ms=32.22
2019-04-30 21:36:47,861 [salt.state       :1780][INFO    ][3572] Running state [/etc/sysfs.d/governor.conf] at time 21:36:47.861455
2019-04-30 21:36:47,861 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/sysfs.d/governor.conf]
2019-04-30 21:36:47,877 [salt.state       :300 ][INFO    ][3572] File /etc/sysfs.d/governor.conf is in the correct state
2019-04-30 21:36:47,877 [salt.state       :1951][INFO    ][3572] Completed state [/etc/sysfs.d/governor.conf] at time 21:36:47.877262 duration_in_ms=15.807
2019-04-30 21:36:47,877 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.877430
2019-04-30 21:36:47,877 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,877 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,878 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,878 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.878359 duration_in_ms=0.929
2019-04-30 21:36:47,878 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.878506
2019-04-30 21:36:47,878 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,878 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,879 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,879 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.879242 duration_in_ms=0.735
2019-04-30 21:36:47,879 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.879377
2019-04-30 21:36:47,879 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,879 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,879 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,880 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.880080 duration_in_ms=0.703
2019-04-30 21:36:47,880 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.880215
2019-04-30 21:36:47,880 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,880 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,880 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,880 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.880931 duration_in_ms=0.716
2019-04-30 21:36:47,881 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.881065
2019-04-30 21:36:47,881 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,881 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,881 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,882 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.881997 duration_in_ms=0.931
2019-04-30 21:36:47,882 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.882142
2019-04-30 21:36:47,882 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,882 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,882 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,882 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.882884 duration_in_ms=0.742
2019-04-30 21:36:47,883 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.883033
2019-04-30 21:36:47,883 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,883 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,883 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,883 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.883725 duration_in_ms=0.692
2019-04-30 21:36:47,883 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.883859
2019-04-30 21:36:47,884 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,884 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,884 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,884 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.884581 duration_in_ms=0.721
2019-04-30 21:36:47,884 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.884738
2019-04-30 21:36:47,884 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,885 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,885 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,885 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.885467 duration_in_ms=0.728
2019-04-30 21:36:47,885 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.885616
2019-04-30 21:36:47,885 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,885 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,886 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,886 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.886312 duration_in_ms=0.695
2019-04-30 21:36:47,886 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.886445
2019-04-30 21:36:47,886 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,886 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,887 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,887 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.887135 duration_in_ms=0.69
2019-04-30 21:36:47,887 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.887270
2019-04-30 21:36:47,887 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,887 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,887 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,888 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.887984 duration_in_ms=0.714
2019-04-30 21:36:47,888 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.888118
2019-04-30 21:36:47,888 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,888 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,888 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,888 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.888803 duration_in_ms=0.685
2019-04-30 21:36:47,888 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.888936
2019-04-30 21:36:47,889 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,889 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,889 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,889 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.889707 duration_in_ms=0.771
2019-04-30 21:36:47,889 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.889869
2019-04-30 21:36:47,890 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,890 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,890 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,890 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.890565 duration_in_ms=0.696
2019-04-30 21:36:47,890 [salt.state       :1780][INFO    ][3572] Running state [sysfs.write] at time 21:36:47.890701
2019-04-30 21:36:47,890 [salt.state       :1813][INFO    ][3572] Executing state module.run for [sysfs.write]
2019-04-30 21:36:47,891 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:47,891 [salt.state       :300 ][INFO    ][3572] {'ret': True}
2019-04-30 21:36:47,891 [salt.state       :1951][INFO    ][3572] Completed state [sysfs.write] at time 21:36:47.891420 duration_in_ms=0.72
2019-04-30 21:36:47,891 [salt.state       :1780][INFO    ][3572] Running state [en_US.UTF-8] at time 21:36:47.891578
2019-04-30 21:36:47,891 [salt.state       :1813][INFO    ][3572] Executing state locale.present for [en_US.UTF-8]
2019-04-30 21:36:47,892 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'locale -a' in directory '/root'
2019-04-30 21:36:47,897 [salt.state       :300 ][INFO    ][3572] Locale en_US.UTF-8 is already present
2019-04-30 21:36:47,897 [salt.state       :1951][INFO    ][3572] Completed state [en_US.UTF-8] at time 21:36:47.897864 duration_in_ms=6.286
2019-04-30 21:36:47,899 [salt.state       :1780][INFO    ][3572] Running state [en_US.UTF-8] at time 21:36:47.899250
2019-04-30 21:36:47,899 [salt.state       :1813][INFO    ][3572] Executing state locale.system for [en_US.UTF-8]
2019-04-30 21:36:47,899 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'localectl' in directory '/root'
2019-04-30 21:36:47,945 [salt.state       :300 ][INFO    ][3572] System locale en_US.UTF-8 already set
2019-04-30 21:36:47,946 [salt.state       :1951][INFO    ][3572] Completed state [en_US.UTF-8] at time 21:36:47.946130 duration_in_ms=46.878
2019-04-30 21:36:47,946 [salt.state       :1780][INFO    ][3572] Running state [root] at time 21:36:47.946351
2019-04-30 21:36:47,946 [salt.state       :1813][INFO    ][3572] Executing state group.present for [root]
2019-04-30 21:36:47,946 [salt.state       :300 ][INFO    ][3572] Group root is present and up to date
2019-04-30 21:36:47,947 [salt.state       :1951][INFO    ][3572] Completed state [root] at time 21:36:47.946972 duration_in_ms=0.62
2019-04-30 21:36:47,948 [salt.state       :1780][INFO    ][3572] Running state [root] at time 21:36:47.948156
2019-04-30 21:36:47,948 [salt.state       :1813][INFO    ][3572] Executing state user.present for [root]
2019-04-30 21:36:47,949 [salt.state       :300 ][INFO    ][3572] User root is present and up to date
2019-04-30 21:36:47,949 [salt.state       :1951][INFO    ][3572] Completed state [root] at time 21:36:47.949242 duration_in_ms=1.085
2019-04-30 21:36:47,950 [salt.state       :1780][INFO    ][3572] Running state [/root] at time 21:36:47.950304
2019-04-30 21:36:47,950 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/root]
2019-04-30 21:36:47,950 [salt.state       :300 ][INFO    ][3572] Directory /root is in the correct state
Directory /root updated
2019-04-30 21:36:47,951 [salt.state       :1951][INFO    ][3572] Completed state [/root] at time 21:36:47.950992 duration_in_ms=0.689
2019-04-30 21:36:47,951 [salt.state       :1780][INFO    ][3572] Running state [/etc/sudoers.d/90-salt-user-root] at time 21:36:47.951129
2019-04-30 21:36:47,951 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/sudoers.d/90-salt-user-root]
2019-04-30 21:36:47,959 [salt.state       :300 ][INFO    ][3572] File /etc/sudoers.d/90-salt-user-root is not present
2019-04-30 21:36:47,959 [salt.state       :1951][INFO    ][3572] Completed state [/etc/sudoers.d/90-salt-user-root] at time 21:36:47.959308 duration_in_ms=8.178
2019-04-30 21:36:47,959 [salt.state       :1780][INFO    ][3572] Running state [ubuntu] at time 21:36:47.959448
2019-04-30 21:36:47,959 [salt.state       :1813][INFO    ][3572] Executing state group.present for [ubuntu]
2019-04-30 21:36:47,959 [salt.state       :300 ][INFO    ][3572] Group ubuntu is present and up to date
2019-04-30 21:36:47,959 [salt.state       :1951][INFO    ][3572] Completed state [ubuntu] at time 21:36:47.959915 duration_in_ms=0.467
2019-04-30 21:36:47,960 [salt.state       :1780][INFO    ][3572] Running state [ubuntu] at time 21:36:47.960823
2019-04-30 21:36:47,960 [salt.state       :1813][INFO    ][3572] Executing state user.present for [ubuntu]
2019-04-30 21:36:47,969 [salt.state       :300 ][INFO    ][3572] User ubuntu is present and up to date
2019-04-30 21:36:47,969 [salt.state       :1951][INFO    ][3572] Completed state [ubuntu] at time 21:36:47.969694 duration_in_ms=8.87
2019-04-30 21:36:47,970 [salt.state       :1780][INFO    ][3572] Running state [/home/ubuntu] at time 21:36:47.970729
2019-04-30 21:36:47,970 [salt.state       :1813][INFO    ][3572] Executing state file.directory for [/home/ubuntu]
2019-04-30 21:36:47,971 [salt.state       :300 ][INFO    ][3572] Directory /home/ubuntu is in the correct state
Directory /home/ubuntu updated
2019-04-30 21:36:47,971 [salt.state       :1951][INFO    ][3572] Completed state [/home/ubuntu] at time 21:36:47.971419 duration_in_ms=0.689
2019-04-30 21:36:47,972 [salt.state       :1780][INFO    ][3572] Running state [/etc/sudoers.d/90-salt-user-ubuntu] at time 21:36:47.972357
2019-04-30 21:36:47,972 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/sudoers.d/90-salt-user-ubuntu]
2019-04-30 21:36:47,995 [salt.state       :300 ][INFO    ][3572] File /etc/sudoers.d/90-salt-user-ubuntu is in the correct state
2019-04-30 21:36:47,995 [salt.state       :1951][INFO    ][3572] Completed state [/etc/sudoers.d/90-salt-user-ubuntu] at time 21:36:47.995450 duration_in_ms=23.092
2019-04-30 21:36:47,995 [salt.state       :1780][INFO    ][3572] Running state [/etc/security/limits.d/90-salt-cis.conf] at time 21:36:47.995599
2019-04-30 21:36:47,995 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/security/limits.d/90-salt-cis.conf]
2019-04-30 21:36:48,069 [salt.state       :300 ][INFO    ][3572] File /etc/security/limits.d/90-salt-cis.conf is in the correct state
2019-04-30 21:36:48,069 [salt.state       :1951][INFO    ][3572] Completed state [/etc/security/limits.d/90-salt-cis.conf] at time 21:36:48.069339 duration_in_ms=73.739
2019-04-30 21:36:48,069 [salt.state       :1780][INFO    ][3572] Running state [/etc/security/limits.d/90-salt-default.conf] at time 21:36:48.069489
2019-04-30 21:36:48,069 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/security/limits.d/90-salt-default.conf]
2019-04-30 21:36:48,137 [salt.state       :300 ][INFO    ][3572] File /etc/security/limits.d/90-salt-default.conf is in the correct state
2019-04-30 21:36:48,137 [salt.state       :1951][INFO    ][3572] Completed state [/etc/security/limits.d/90-salt-default.conf] at time 21:36:48.137955 duration_in_ms=68.465
2019-04-30 21:36:48,138 [salt.state       :1780][INFO    ][3572] Running state [autofs] at time 21:36:48.138110
2019-04-30 21:36:48,138 [salt.state       :1813][INFO    ][3572] Executing state service.disabled for [autofs]
2019-04-30 21:36:48,138 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'status', 'autofs.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:48,146 [salt.state       :300 ][INFO    ][3572] The named service autofs is not available
2019-04-30 21:36:48,146 [salt.state       :1951][INFO    ][3572] Completed state [autofs] at time 21:36:48.146228 duration_in_ms=8.118
2019-04-30 21:36:48,146 [salt.state       :1780][INFO    ][3572] Running state [/etc/systemd/system.conf.d/90-salt.conf] at time 21:36:48.146415
2019-04-30 21:36:48,146 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/systemd/system.conf.d/90-salt.conf]
2019-04-30 21:36:48,216 [salt.state       :300 ][INFO    ][3572] File /etc/systemd/system.conf.d/90-salt.conf is in the correct state
2019-04-30 21:36:48,216 [salt.state       :1951][INFO    ][3572] Completed state [/etc/systemd/system.conf.d/90-salt.conf] at time 21:36:48.216303 duration_in_ms=69.888
2019-04-30 21:36:48,217 [salt.state       :1780][INFO    ][3572] Running state [service.systemctl_reload] at time 21:36:48.217530
2019-04-30 21:36:48,217 [salt.state       :1813][INFO    ][3572] Executing state module.wait for [service.systemctl_reload]
2019-04-30 21:36:48,217 [salt.state       :300 ][INFO    ][3572] No changes made for service.systemctl_reload
2019-04-30 21:36:48,218 [salt.state       :1951][INFO    ][3572] Completed state [service.systemctl_reload] at time 21:36:48.217980 duration_in_ms=0.451
2019-04-30 21:36:48,218 [salt.state       :1780][INFO    ][3572] Running state [/etc/shadow] at time 21:36:48.218117
2019-04-30 21:36:48,218 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/shadow]
2019-04-30 21:36:48,218 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /etc/shadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,218 [salt.state       :300 ][INFO    ][3572] File /etc/shadow exists with proper permissions. No changes made.
2019-04-30 21:36:48,218 [salt.state       :1951][INFO    ][3572] Completed state [/etc/shadow] at time 21:36:48.218883 duration_in_ms=0.766
2019-04-30 21:36:48,219 [salt.state       :1780][INFO    ][3572] Running state [/etc/gshadow] at time 21:36:48.219020
2019-04-30 21:36:48,219 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/gshadow]
2019-04-30 21:36:48,219 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /etc/gshadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,219 [salt.state       :300 ][INFO    ][3572] File /etc/gshadow exists with proper permissions. No changes made.
2019-04-30 21:36:48,219 [salt.state       :1951][INFO    ][3572] Completed state [/etc/gshadow] at time 21:36:48.219773 duration_in_ms=0.753
2019-04-30 21:36:48,219 [salt.state       :1780][INFO    ][3572] Running state [/etc/group-] at time 21:36:48.219907
2019-04-30 21:36:48,220 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/group-]
2019-04-30 21:36:48,220 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /etc/group- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,220 [salt.state       :300 ][INFO    ][3572] File /etc/group- exists with proper permissions. No changes made.
2019-04-30 21:36:48,220 [salt.state       :1951][INFO    ][3572] Completed state [/etc/group-] at time 21:36:48.220639 duration_in_ms=0.732
2019-04-30 21:36:48,220 [salt.state       :1780][INFO    ][3572] Running state [/etc/group] at time 21:36:48.220774
2019-04-30 21:36:48,220 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/group]
2019-04-30 21:36:48,221 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /etc/group - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,221 [salt.state       :300 ][INFO    ][3572] File /etc/group exists with proper permissions. No changes made.
2019-04-30 21:36:48,221 [salt.state       :1951][INFO    ][3572] Completed state [/etc/group] at time 21:36:48.221528 duration_in_ms=0.754
2019-04-30 21:36:48,221 [salt.state       :1780][INFO    ][3572] Running state [/etc/passwd-] at time 21:36:48.221669
2019-04-30 21:36:48,221 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/passwd-]
2019-04-30 21:36:48,222 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /etc/passwd- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,222 [salt.state       :300 ][INFO    ][3572] File /etc/passwd- exists with proper permissions. No changes made.
2019-04-30 21:36:48,222 [salt.state       :1951][INFO    ][3572] Completed state [/etc/passwd-] at time 21:36:48.222413 duration_in_ms=0.743
2019-04-30 21:36:48,222 [salt.state       :1780][INFO    ][3572] Running state [/etc/passwd] at time 21:36:48.222547
2019-04-30 21:36:48,222 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/passwd]
2019-04-30 21:36:48,222 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /etc/passwd - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,223 [salt.state       :300 ][INFO    ][3572] File /etc/passwd exists with proper permissions. No changes made.
2019-04-30 21:36:48,223 [salt.state       :1951][INFO    ][3572] Completed state [/etc/passwd] at time 21:36:48.223279 duration_in_ms=0.732
2019-04-30 21:36:48,223 [salt.state       :1780][INFO    ][3572] Running state [/var/tmp/dhcp_agent.patch] at time 21:36:48.223416
2019-04-30 21:36:48,223 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/var/tmp/dhcp_agent.patch]
2019-04-30 21:36:48,224 [salt.state       :300 ][INFO    ][3572] File /var/tmp/dhcp_agent.patch is in the correct state
2019-04-30 21:36:48,224 [salt.state       :1951][INFO    ][3572] Completed state [/var/tmp/dhcp_agent.patch] at time 21:36:48.224486 duration_in_ms=1.07
2019-04-30 21:36:48,224 [salt.state       :1780][INFO    ][3572] Running state [/etc/gshadow-] at time 21:36:48.224629
2019-04-30 21:36:48,224 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/gshadow-]
2019-04-30 21:36:48,224 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /etc/gshadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,225 [salt.state       :300 ][INFO    ][3572] File /etc/gshadow- exists with proper permissions. No changes made.
2019-04-30 21:36:48,225 [salt.state       :1951][INFO    ][3572] Completed state [/etc/gshadow-] at time 21:36:48.225410 duration_in_ms=0.781
2019-04-30 21:36:48,225 [salt.state       :1780][INFO    ][3572] Running state [/etc/shadow-] at time 21:36:48.225547
2019-04-30 21:36:48,225 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/shadow-]
2019-04-30 21:36:48,225 [salt.loaded.int.states.file:2298][WARNING ][3572] State for file: /etc/shadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
2019-04-30 21:36:48,226 [salt.state       :300 ][INFO    ][3572] File /etc/shadow- exists with proper permissions. No changes made.
2019-04-30 21:36:48,226 [salt.state       :1951][INFO    ][3572] Completed state [/etc/shadow-] at time 21:36:48.226297 duration_in_ms=0.75
2019-04-30 21:36:48,226 [salt.state       :1780][INFO    ][3572] Running state [/etc/issue] at time 21:36:48.226439
2019-04-30 21:36:48,226 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/issue]
2019-04-30 21:36:48,227 [salt.state       :300 ][INFO    ][3572] File /etc/issue is in the correct state
2019-04-30 21:36:48,227 [salt.state       :1951][INFO    ][3572] Completed state [/etc/issue] at time 21:36:48.227690 duration_in_ms=1.252
2019-04-30 21:36:48,227 [salt.state       :1780][INFO    ][3572] Running state [/etc/hostname] at time 21:36:48.227832
2019-04-30 21:36:48,227 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/hostname]
2019-04-30 21:36:48,239 [salt.state       :300 ][INFO    ][3572] File /etc/hostname is in the correct state
2019-04-30 21:36:48,239 [salt.state       :1951][INFO    ][3572] Completed state [/etc/hostname] at time 21:36:48.239737 duration_in_ms=11.904
2019-04-30 21:36:48,240 [salt.state       :1780][INFO    ][3572] Running state [hostname cmp002] at time 21:36:48.240662
2019-04-30 21:36:48,240 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [hostname cmp002]
2019-04-30 21:36:48,241 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'test "$(hostname)" = "cmp002"' in directory '/root'
2019-04-30 21:36:48,246 [salt.state       :300 ][INFO    ][3572] unless execution succeeded
2019-04-30 21:36:48,247 [salt.state       :1951][INFO    ][3572] Completed state [hostname cmp002] at time 21:36:48.247162 duration_in_ms=6.499
2019-04-30 21:36:48,247 [salt.state       :1780][INFO    ][3572] Running state [mdb02] at time 21:36:48.247677
2019-04-30 21:36:48,247 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mdb02]
2019-04-30 21:36:48,248 [salt.state       :300 ][INFO    ][3572] Host mdb02 (10.167.4.33) already present
2019-04-30 21:36:48,248 [salt.state       :1951][INFO    ][3572] Completed state [mdb02] at time 21:36:48.248739 duration_in_ms=1.062
2019-04-30 21:36:48,249 [salt.state       :1780][INFO    ][3572] Running state [mdb02.mcp-ovs-ha.local] at time 21:36:48.249062
2019-04-30 21:36:48,249 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mdb02.mcp-ovs-ha.local]
2019-04-30 21:36:48,249 [salt.state       :300 ][INFO    ][3572] Host mdb02.mcp-ovs-ha.local (10.167.4.33) already present
2019-04-30 21:36:48,249 [salt.state       :1951][INFO    ][3572] Completed state [mdb02.mcp-ovs-ha.local] at time 21:36:48.249881 duration_in_ms=0.819
2019-04-30 21:36:48,250 [salt.state       :1780][INFO    ][3572] Running state [mdb03] at time 21:36:48.250205
2019-04-30 21:36:48,250 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mdb03]
2019-04-30 21:36:48,250 [salt.state       :300 ][INFO    ][3572] Host mdb03 (10.167.4.34) already present
2019-04-30 21:36:48,251 [salt.state       :1951][INFO    ][3572] Completed state [mdb03] at time 21:36:48.251001 duration_in_ms=0.796
2019-04-30 21:36:48,251 [salt.state       :1780][INFO    ][3572] Running state [mdb03.mcp-ovs-ha.local] at time 21:36:48.251314
2019-04-30 21:36:48,251 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mdb03.mcp-ovs-ha.local]
2019-04-30 21:36:48,251 [salt.state       :300 ][INFO    ][3572] Host mdb03.mcp-ovs-ha.local (10.167.4.34) already present
2019-04-30 21:36:48,252 [salt.state       :1951][INFO    ][3572] Completed state [mdb03.mcp-ovs-ha.local] at time 21:36:48.252107 duration_in_ms=0.793
2019-04-30 21:36:48,252 [salt.state       :1780][INFO    ][3572] Running state [mdb01] at time 21:36:48.252428
2019-04-30 21:36:48,252 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mdb01]
2019-04-30 21:36:48,253 [salt.state       :300 ][INFO    ][3572] Host mdb01 (10.167.4.32) already present
2019-04-30 21:36:48,253 [salt.state       :1951][INFO    ][3572] Completed state [mdb01] at time 21:36:48.253232 duration_in_ms=0.804
2019-04-30 21:36:48,253 [salt.state       :1780][INFO    ][3572] Running state [mdb01.mcp-ovs-ha.local] at time 21:36:48.253549
2019-04-30 21:36:48,253 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mdb01.mcp-ovs-ha.local]
2019-04-30 21:36:48,254 [salt.state       :300 ][INFO    ][3572] Host mdb01.mcp-ovs-ha.local (10.167.4.32) already present
2019-04-30 21:36:48,254 [salt.state       :1951][INFO    ][3572] Completed state [mdb01.mcp-ovs-ha.local] at time 21:36:48.254428 duration_in_ms=0.879
2019-04-30 21:36:48,254 [salt.state       :1780][INFO    ][3572] Running state [mdb] at time 21:36:48.254750
2019-04-30 21:36:48,254 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mdb]
2019-04-30 21:36:48,255 [salt.state       :300 ][INFO    ][3572] Host mdb (10.167.4.31) already present
2019-04-30 21:36:48,255 [salt.state       :1951][INFO    ][3572] Completed state [mdb] at time 21:36:48.255503 duration_in_ms=0.753
2019-04-30 21:36:48,255 [salt.state       :1780][INFO    ][3572] Running state [mdb.mcp-ovs-ha.local] at time 21:36:48.255822
2019-04-30 21:36:48,256 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mdb.mcp-ovs-ha.local]
2019-04-30 21:36:48,256 [salt.state       :300 ][INFO    ][3572] Host mdb.mcp-ovs-ha.local (10.167.4.31) already present
2019-04-30 21:36:48,256 [salt.state       :1951][INFO    ][3572] Completed state [mdb.mcp-ovs-ha.local] at time 21:36:48.256569 duration_in_ms=0.747
2019-04-30 21:36:48,256 [salt.state       :1780][INFO    ][3572] Running state [cfg01] at time 21:36:48.256887
2019-04-30 21:36:48,257 [salt.state       :1813][INFO    ][3572] Executing state host.present for [cfg01]
2019-04-30 21:36:48,258 [salt.state       :300 ][INFO    ][3572] Host cfg01 (10.167.4.11) already present
2019-04-30 21:36:48,258 [salt.state       :1951][INFO    ][3572] Completed state [cfg01] at time 21:36:48.258552 duration_in_ms=1.665
2019-04-30 21:36:48,258 [salt.state       :1780][INFO    ][3572] Running state [cfg01.mcp-ovs-ha.local] at time 21:36:48.258893
2019-04-30 21:36:48,259 [salt.state       :1813][INFO    ][3572] Executing state host.present for [cfg01.mcp-ovs-ha.local]
2019-04-30 21:36:48,259 [salt.state       :300 ][INFO    ][3572] Host cfg01.mcp-ovs-ha.local (10.167.4.11) already present
2019-04-30 21:36:48,259 [salt.state       :1951][INFO    ][3572] Completed state [cfg01.mcp-ovs-ha.local] at time 21:36:48.259611 duration_in_ms=0.718
2019-04-30 21:36:48,259 [salt.state       :1780][INFO    ][3572] Running state [prx01] at time 21:36:48.259919
2019-04-30 21:36:48,260 [salt.state       :1813][INFO    ][3572] Executing state host.present for [prx01]
2019-04-30 21:36:48,260 [salt.state       :300 ][INFO    ][3572] Host prx01 (10.167.4.14) already present
2019-04-30 21:36:48,260 [salt.state       :1951][INFO    ][3572] Completed state [prx01] at time 21:36:48.260625 duration_in_ms=0.706
2019-04-30 21:36:48,260 [salt.state       :1780][INFO    ][3572] Running state [prx01.mcp-ovs-ha.local] at time 21:36:48.260932
2019-04-30 21:36:48,261 [salt.state       :1813][INFO    ][3572] Executing state host.present for [prx01.mcp-ovs-ha.local]
2019-04-30 21:36:48,261 [salt.state       :300 ][INFO    ][3572] Host prx01.mcp-ovs-ha.local (10.167.4.14) already present
2019-04-30 21:36:48,261 [salt.state       :1951][INFO    ][3572] Completed state [prx01.mcp-ovs-ha.local] at time 21:36:48.261680 duration_in_ms=0.749
2019-04-30 21:36:48,262 [salt.state       :1780][INFO    ][3572] Running state [kvm01] at time 21:36:48.261985
2019-04-30 21:36:48,262 [salt.state       :1813][INFO    ][3572] Executing state host.present for [kvm01]
2019-04-30 21:36:48,262 [salt.state       :300 ][INFO    ][3572] Host kvm01 (10.167.4.20) already present
2019-04-30 21:36:48,262 [salt.state       :1951][INFO    ][3572] Completed state [kvm01] at time 21:36:48.262699 duration_in_ms=0.714
2019-04-30 21:36:48,263 [salt.state       :1780][INFO    ][3572] Running state [kvm01.mcp-ovs-ha.local] at time 21:36:48.263009
2019-04-30 21:36:48,263 [salt.state       :1813][INFO    ][3572] Executing state host.present for [kvm01.mcp-ovs-ha.local]
2019-04-30 21:36:48,263 [salt.state       :300 ][INFO    ][3572] Host kvm01.mcp-ovs-ha.local (10.167.4.20) already present
2019-04-30 21:36:48,263 [salt.state       :1951][INFO    ][3572] Completed state [kvm01.mcp-ovs-ha.local] at time 21:36:48.263722 duration_in_ms=0.713
2019-04-30 21:36:48,264 [salt.state       :1780][INFO    ][3572] Running state [kvm03] at time 21:36:48.264039
2019-04-30 21:36:48,264 [salt.state       :1813][INFO    ][3572] Executing state host.present for [kvm03]
2019-04-30 21:36:48,264 [salt.state       :300 ][INFO    ][3572] Host kvm03 (10.167.4.22) already present
2019-04-30 21:36:48,264 [salt.state       :1951][INFO    ][3572] Completed state [kvm03] at time 21:36:48.264754 duration_in_ms=0.715
2019-04-30 21:36:48,265 [salt.state       :1780][INFO    ][3572] Running state [kvm03.mcp-ovs-ha.local] at time 21:36:48.265064
2019-04-30 21:36:48,265 [salt.state       :1813][INFO    ][3572] Executing state host.present for [kvm03.mcp-ovs-ha.local]
2019-04-30 21:36:48,265 [salt.state       :300 ][INFO    ][3572] Host kvm03.mcp-ovs-ha.local (10.167.4.22) already present
2019-04-30 21:36:48,265 [salt.state       :1951][INFO    ][3572] Completed state [kvm03.mcp-ovs-ha.local] at time 21:36:48.265810 duration_in_ms=0.746
2019-04-30 21:36:48,266 [salt.state       :1780][INFO    ][3572] Running state [kvm02] at time 21:36:48.266123
2019-04-30 21:36:48,266 [salt.state       :1813][INFO    ][3572] Executing state host.present for [kvm02]
2019-04-30 21:36:48,266 [salt.state       :300 ][INFO    ][3572] Host kvm02 (10.167.4.21) already present
2019-04-30 21:36:48,266 [salt.state       :1951][INFO    ][3572] Completed state [kvm02] at time 21:36:48.266879 duration_in_ms=0.756
2019-04-30 21:36:48,267 [salt.state       :1780][INFO    ][3572] Running state [kvm02.mcp-ovs-ha.local] at time 21:36:48.267191
2019-04-30 21:36:48,267 [salt.state       :1813][INFO    ][3572] Executing state host.present for [kvm02.mcp-ovs-ha.local]
2019-04-30 21:36:48,267 [salt.state       :300 ][INFO    ][3572] Host kvm02.mcp-ovs-ha.local (10.167.4.21) already present
2019-04-30 21:36:48,267 [salt.state       :1951][INFO    ][3572] Completed state [kvm02.mcp-ovs-ha.local] at time 21:36:48.267949 duration_in_ms=0.758
2019-04-30 21:36:48,268 [salt.state       :1780][INFO    ][3572] Running state [dbs] at time 21:36:48.268260
2019-04-30 21:36:48,268 [salt.state       :1813][INFO    ][3572] Executing state host.present for [dbs]
2019-04-30 21:36:48,268 [salt.state       :300 ][INFO    ][3572] Host dbs (10.167.4.23) already present
2019-04-30 21:36:48,269 [salt.state       :1951][INFO    ][3572] Completed state [dbs] at time 21:36:48.269027 duration_in_ms=0.767
2019-04-30 21:36:48,269 [salt.state       :1780][INFO    ][3572] Running state [dbs.mcp-ovs-ha.local] at time 21:36:48.269346
2019-04-30 21:36:48,269 [salt.state       :1813][INFO    ][3572] Executing state host.present for [dbs.mcp-ovs-ha.local]
2019-04-30 21:36:48,269 [salt.state       :300 ][INFO    ][3572] Host dbs.mcp-ovs-ha.local (10.167.4.23) already present
2019-04-30 21:36:48,270 [salt.state       :1951][INFO    ][3572] Completed state [dbs.mcp-ovs-ha.local] at time 21:36:48.270063 duration_in_ms=0.717
2019-04-30 21:36:48,270 [salt.state       :1780][INFO    ][3572] Running state [prx] at time 21:36:48.270372
2019-04-30 21:36:48,270 [salt.state       :1813][INFO    ][3572] Executing state host.present for [prx]
2019-04-30 21:36:48,270 [salt.state       :300 ][INFO    ][3572] Host prx (10.167.4.13) already present
2019-04-30 21:36:48,271 [salt.state       :1951][INFO    ][3572] Completed state [prx] at time 21:36:48.271106 duration_in_ms=0.734
2019-04-30 21:36:48,271 [salt.state       :1780][INFO    ][3572] Running state [prx.mcp-ovs-ha.local] at time 21:36:48.271410
2019-04-30 21:36:48,271 [salt.state       :1813][INFO    ][3572] Executing state host.present for [prx.mcp-ovs-ha.local]
2019-04-30 21:36:48,271 [salt.state       :300 ][INFO    ][3572] Host prx.mcp-ovs-ha.local (10.167.4.13) already present
2019-04-30 21:36:48,272 [salt.state       :1951][INFO    ][3572] Completed state [prx.mcp-ovs-ha.local] at time 21:36:48.272139 duration_in_ms=0.729
2019-04-30 21:36:48,272 [salt.state       :1780][INFO    ][3572] Running state [prx02] at time 21:36:48.272447
2019-04-30 21:36:48,272 [salt.state       :1813][INFO    ][3572] Executing state host.present for [prx02]
2019-04-30 21:36:48,273 [salt.state       :300 ][INFO    ][3572] Host prx02 (10.167.4.15) already present
2019-04-30 21:36:48,273 [salt.state       :1951][INFO    ][3572] Completed state [prx02] at time 21:36:48.273192 duration_in_ms=0.744
2019-04-30 21:36:48,273 [salt.state       :1780][INFO    ][3572] Running state [prx02.mcp-ovs-ha.local] at time 21:36:48.273514
2019-04-30 21:36:48,273 [salt.state       :1813][INFO    ][3572] Executing state host.present for [prx02.mcp-ovs-ha.local]
2019-04-30 21:36:48,274 [salt.state       :300 ][INFO    ][3572] Host prx02.mcp-ovs-ha.local (10.167.4.15) already present
2019-04-30 21:36:48,274 [salt.state       :1951][INFO    ][3572] Completed state [prx02.mcp-ovs-ha.local] at time 21:36:48.274248 duration_in_ms=0.734
2019-04-30 21:36:48,274 [salt.state       :1780][INFO    ][3572] Running state [msg02] at time 21:36:48.274560
2019-04-30 21:36:48,274 [salt.state       :1813][INFO    ][3572] Executing state host.present for [msg02]
2019-04-30 21:36:48,275 [salt.state       :300 ][INFO    ][3572] Host msg02 (10.167.4.29) already present
2019-04-30 21:36:48,275 [salt.state       :1951][INFO    ][3572] Completed state [msg02] at time 21:36:48.275297 duration_in_ms=0.736
2019-04-30 21:36:48,275 [salt.state       :1780][INFO    ][3572] Running state [msg02.mcp-ovs-ha.local] at time 21:36:48.275607
2019-04-30 21:36:48,275 [salt.state       :1813][INFO    ][3572] Executing state host.present for [msg02.mcp-ovs-ha.local]
2019-04-30 21:36:48,276 [salt.state       :300 ][INFO    ][3572] Host msg02.mcp-ovs-ha.local (10.167.4.29) already present
2019-04-30 21:36:48,276 [salt.state       :1951][INFO    ][3572] Completed state [msg02.mcp-ovs-ha.local] at time 21:36:48.276339 duration_in_ms=0.732
2019-04-30 21:36:48,276 [salt.state       :1780][INFO    ][3572] Running state [msg03] at time 21:36:48.276652
2019-04-30 21:36:48,276 [salt.state       :1813][INFO    ][3572] Executing state host.present for [msg03]
2019-04-30 21:36:48,277 [salt.state       :300 ][INFO    ][3572] Host msg03 (10.167.4.30) already present
2019-04-30 21:36:48,277 [salt.state       :1951][INFO    ][3572] Completed state [msg03] at time 21:36:48.277386 duration_in_ms=0.733
2019-04-30 21:36:48,277 [salt.state       :1780][INFO    ][3572] Running state [msg03.mcp-ovs-ha.local] at time 21:36:48.277696
2019-04-30 21:36:48,277 [salt.state       :1813][INFO    ][3572] Executing state host.present for [msg03.mcp-ovs-ha.local]
2019-04-30 21:36:48,278 [salt.state       :300 ][INFO    ][3572] Host msg03.mcp-ovs-ha.local (10.167.4.30) already present
2019-04-30 21:36:48,278 [salt.state       :1951][INFO    ][3572] Completed state [msg03.mcp-ovs-ha.local] at time 21:36:48.278433 duration_in_ms=0.736
2019-04-30 21:36:48,278 [salt.state       :1780][INFO    ][3572] Running state [msg01] at time 21:36:48.278764
2019-04-30 21:36:48,278 [salt.state       :1813][INFO    ][3572] Executing state host.present for [msg01]
2019-04-30 21:36:48,279 [salt.state       :300 ][INFO    ][3572] Host msg01 (10.167.4.28) already present
2019-04-30 21:36:48,279 [salt.state       :1951][INFO    ][3572] Completed state [msg01] at time 21:36:48.279489 duration_in_ms=0.724
2019-04-30 21:36:48,279 [salt.state       :1780][INFO    ][3572] Running state [msg01.mcp-ovs-ha.local] at time 21:36:48.279795
2019-04-30 21:36:48,280 [salt.state       :1813][INFO    ][3572] Executing state host.present for [msg01.mcp-ovs-ha.local]
2019-04-30 21:36:48,280 [salt.state       :300 ][INFO    ][3572] Host msg01.mcp-ovs-ha.local (10.167.4.28) already present
2019-04-30 21:36:48,280 [salt.state       :1951][INFO    ][3572] Completed state [msg01.mcp-ovs-ha.local] at time 21:36:48.280626 duration_in_ms=0.831
2019-04-30 21:36:48,280 [salt.state       :1780][INFO    ][3572] Running state [msg] at time 21:36:48.280935
2019-04-30 21:36:48,281 [salt.state       :1813][INFO    ][3572] Executing state host.present for [msg]
2019-04-30 21:36:48,281 [salt.state       :300 ][INFO    ][3572] Host msg (10.167.4.27) already present
2019-04-30 21:36:48,281 [salt.state       :1951][INFO    ][3572] Completed state [msg] at time 21:36:48.281663 duration_in_ms=0.728
2019-04-30 21:36:48,282 [salt.state       :1780][INFO    ][3572] Running state [msg.mcp-ovs-ha.local] at time 21:36:48.281975
2019-04-30 21:36:48,282 [salt.state       :1813][INFO    ][3572] Executing state host.present for [msg.mcp-ovs-ha.local]
2019-04-30 21:36:48,282 [salt.state       :300 ][INFO    ][3572] Host msg.mcp-ovs-ha.local (10.167.4.27) already present
2019-04-30 21:36:48,282 [salt.state       :1951][INFO    ][3572] Completed state [msg.mcp-ovs-ha.local] at time 21:36:48.282689 duration_in_ms=0.713
2019-04-30 21:36:48,283 [salt.state       :1780][INFO    ][3572] Running state [cfg01] at time 21:36:48.282998
2019-04-30 21:36:48,283 [salt.state       :1813][INFO    ][3572] Executing state host.present for [cfg01]
2019-04-30 21:36:48,283 [salt.state       :300 ][INFO    ][3572] Host cfg01 (10.167.4.11) already present
2019-04-30 21:36:48,283 [salt.state       :1951][INFO    ][3572] Completed state [cfg01] at time 21:36:48.283715 duration_in_ms=0.716
2019-04-30 21:36:48,284 [salt.state       :1780][INFO    ][3572] Running state [cfg01.mcp-ovs-ha.local] at time 21:36:48.284022
2019-04-30 21:36:48,284 [salt.state       :1813][INFO    ][3572] Executing state host.present for [cfg01.mcp-ovs-ha.local]
2019-04-30 21:36:48,284 [salt.state       :300 ][INFO    ][3572] Host cfg01.mcp-ovs-ha.local (10.167.4.11) already present
2019-04-30 21:36:48,284 [salt.state       :1951][INFO    ][3572] Completed state [cfg01.mcp-ovs-ha.local] at time 21:36:48.284733 duration_in_ms=0.711
2019-04-30 21:36:48,285 [salt.state       :1780][INFO    ][3572] Running state [cmp002] at time 21:36:48.285042
2019-04-30 21:36:48,285 [salt.state       :1813][INFO    ][3572] Executing state host.present for [cmp002]
2019-04-30 21:36:48,285 [salt.state       :300 ][INFO    ][3572] Host cmp002 (10.167.4.56) already present
2019-04-30 21:36:48,285 [salt.state       :1951][INFO    ][3572] Completed state [cmp002] at time 21:36:48.285769 duration_in_ms=0.727
2019-04-30 21:36:48,286 [salt.state       :1780][INFO    ][3572] Running state [cmp002.mcp-ovs-ha.local] at time 21:36:48.286079
2019-04-30 21:36:48,286 [salt.state       :1813][INFO    ][3572] Executing state host.present for [cmp002.mcp-ovs-ha.local]
2019-04-30 21:36:48,286 [salt.state       :300 ][INFO    ][3572] Host cmp002.mcp-ovs-ha.local (10.167.4.56) already present
2019-04-30 21:36:48,286 [salt.state       :1951][INFO    ][3572] Completed state [cmp002.mcp-ovs-ha.local] at time 21:36:48.286797 duration_in_ms=0.717
2019-04-30 21:36:48,288 [salt.state       :1780][INFO    ][3572] Running state [file.replace] at time 21:36:48.288214
2019-04-30 21:36:48,288 [salt.state       :1813][INFO    ][3572] Executing state module.run for [file.replace]
2019-04-30 21:36:48,291 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'grep -q "cmp002 cmp002.mcp-ovs-ha.local" /etc/hosts' in directory '/root'
2019-04-30 21:36:48,298 [salt.utils.decorators:613 ][WARNING ][3572] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:36:48,301 [salt.state       :300 ][INFO    ][3572] {'ret': '--- \n+++ \n@@ -22,7 +22,7 @@\n 10.167.4.30\t\tmsg03 msg03.mcp-ovs-ha.local\n 10.167.4.28\t\tmsg01 msg01.mcp-ovs-ha.local\n 10.167.4.27\t\tmsg msg.mcp-ovs-ha.local\n-10.167.4.56\t\tcmp002 cmp002.mcp-ovs-ha.local\n+10.167.4.56\t\tcmp002.mcp-ovs-ha.local cmp002\n 10.167.4.55\t\tcmp001 cmp001.mcp-ovs-ha.local\n 10.167.4.24\t\tdbs01 dbs01.mcp-ovs-ha.local\n 10.167.4.25\t\tdbs02 dbs02.mcp-ovs-ha.local\n'}
2019-04-30 21:36:48,301 [salt.state       :1951][INFO    ][3572] Completed state [file.replace] at time 21:36:48.301211 duration_in_ms=12.997
2019-04-30 21:36:48,301 [salt.state       :1780][INFO    ][3572] Running state [cmp001] at time 21:36:48.301707
2019-04-30 21:36:48,301 [salt.state       :1813][INFO    ][3572] Executing state host.present for [cmp001]
2019-04-30 21:36:48,302 [salt.state       :300 ][INFO    ][3572] Host cmp001 (10.167.4.55) already present
2019-04-30 21:36:48,302 [salt.state       :1951][INFO    ][3572] Completed state [cmp001] at time 21:36:48.302500 duration_in_ms=0.793
2019-04-30 21:36:48,302 [salt.state       :1780][INFO    ][3572] Running state [cmp001.mcp-ovs-ha.local] at time 21:36:48.302838
2019-04-30 21:36:48,303 [salt.state       :1813][INFO    ][3572] Executing state host.present for [cmp001.mcp-ovs-ha.local]
2019-04-30 21:36:48,303 [salt.state       :300 ][INFO    ][3572] Host cmp001.mcp-ovs-ha.local (10.167.4.55) already present
2019-04-30 21:36:48,303 [salt.state       :1951][INFO    ][3572] Completed state [cmp001.mcp-ovs-ha.local] at time 21:36:48.303563 duration_in_ms=0.724
2019-04-30 21:36:48,303 [salt.state       :1780][INFO    ][3572] Running state [dbs01] at time 21:36:48.303898
2019-04-30 21:36:48,304 [salt.state       :1813][INFO    ][3572] Executing state host.present for [dbs01]
2019-04-30 21:36:48,304 [salt.state       :300 ][INFO    ][3572] Host dbs01 (10.167.4.24) already present
2019-04-30 21:36:48,304 [salt.state       :1951][INFO    ][3572] Completed state [dbs01] at time 21:36:48.304628 duration_in_ms=0.729
2019-04-30 21:36:48,304 [salt.state       :1780][INFO    ][3572] Running state [dbs01.mcp-ovs-ha.local] at time 21:36:48.304965
2019-04-30 21:36:48,305 [salt.state       :1813][INFO    ][3572] Executing state host.present for [dbs01.mcp-ovs-ha.local]
2019-04-30 21:36:48,305 [salt.state       :300 ][INFO    ][3572] Host dbs01.mcp-ovs-ha.local (10.167.4.24) already present
2019-04-30 21:36:48,305 [salt.state       :1951][INFO    ][3572] Completed state [dbs01.mcp-ovs-ha.local] at time 21:36:48.305938 duration_in_ms=0.973
2019-04-30 21:36:48,306 [salt.state       :1780][INFO    ][3572] Running state [dbs02] at time 21:36:48.306273
2019-04-30 21:36:48,306 [salt.state       :1813][INFO    ][3572] Executing state host.present for [dbs02]
2019-04-30 21:36:48,306 [salt.state       :300 ][INFO    ][3572] Host dbs02 (10.167.4.25) already present
2019-04-30 21:36:48,307 [salt.state       :1951][INFO    ][3572] Completed state [dbs02] at time 21:36:48.306987 duration_in_ms=0.714
2019-04-30 21:36:48,307 [salt.state       :1780][INFO    ][3572] Running state [dbs02.mcp-ovs-ha.local] at time 21:36:48.307324
2019-04-30 21:36:48,307 [salt.state       :1813][INFO    ][3572] Executing state host.present for [dbs02.mcp-ovs-ha.local]
2019-04-30 21:36:48,307 [salt.state       :300 ][INFO    ][3572] Host dbs02.mcp-ovs-ha.local (10.167.4.25) already present
2019-04-30 21:36:48,308 [salt.state       :1951][INFO    ][3572] Completed state [dbs02.mcp-ovs-ha.local] at time 21:36:48.308057 duration_in_ms=0.732
2019-04-30 21:36:48,308 [salt.state       :1780][INFO    ][3572] Running state [dbs03] at time 21:36:48.308391
2019-04-30 21:36:48,308 [salt.state       :1813][INFO    ][3572] Executing state host.present for [dbs03]
2019-04-30 21:36:48,308 [salt.state       :300 ][INFO    ][3572] Host dbs03 (10.167.4.26) already present
2019-04-30 21:36:48,309 [salt.state       :1951][INFO    ][3572] Completed state [dbs03] at time 21:36:48.309124 duration_in_ms=0.733
2019-04-30 21:36:48,309 [salt.state       :1780][INFO    ][3572] Running state [dbs03.mcp-ovs-ha.local] at time 21:36:48.309463
2019-04-30 21:36:48,309 [salt.state       :1813][INFO    ][3572] Executing state host.present for [dbs03.mcp-ovs-ha.local]
2019-04-30 21:36:48,310 [salt.state       :300 ][INFO    ][3572] Host dbs03.mcp-ovs-ha.local (10.167.4.26) already present
2019-04-30 21:36:48,310 [salt.state       :1951][INFO    ][3572] Completed state [dbs03.mcp-ovs-ha.local] at time 21:36:48.310165 duration_in_ms=0.702
2019-04-30 21:36:48,310 [salt.state       :1780][INFO    ][3572] Running state [mas01] at time 21:36:48.310492
2019-04-30 21:36:48,310 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mas01]
2019-04-30 21:36:48,311 [salt.state       :300 ][INFO    ][3572] Host mas01 (10.167.4.12) already present
2019-04-30 21:36:48,311 [salt.state       :1951][INFO    ][3572] Completed state [mas01] at time 21:36:48.311176 duration_in_ms=0.683
2019-04-30 21:36:48,311 [salt.state       :1780][INFO    ][3572] Running state [mas01.mcp-ovs-ha.local] at time 21:36:48.311509
2019-04-30 21:36:48,311 [salt.state       :1813][INFO    ][3572] Executing state host.present for [mas01.mcp-ovs-ha.local]
2019-04-30 21:36:48,312 [salt.state       :300 ][INFO    ][3572] Host mas01.mcp-ovs-ha.local (10.167.4.12) already present
2019-04-30 21:36:48,312 [salt.state       :1951][INFO    ][3572] Completed state [mas01.mcp-ovs-ha.local] at time 21:36:48.312214 duration_in_ms=0.703
2019-04-30 21:36:48,312 [salt.state       :1780][INFO    ][3572] Running state [ctl02] at time 21:36:48.312537
2019-04-30 21:36:48,312 [salt.state       :1813][INFO    ][3572] Executing state host.present for [ctl02]
2019-04-30 21:36:48,313 [salt.state       :300 ][INFO    ][3572] Host ctl02 (10.167.4.37) already present
2019-04-30 21:36:48,313 [salt.state       :1951][INFO    ][3572] Completed state [ctl02] at time 21:36:48.313237 duration_in_ms=0.699
2019-04-30 21:36:48,313 [salt.state       :1780][INFO    ][3572] Running state [ctl02.mcp-ovs-ha.local] at time 21:36:48.313571
2019-04-30 21:36:48,313 [salt.state       :1813][INFO    ][3572] Executing state host.present for [ctl02.mcp-ovs-ha.local]
2019-04-30 21:36:48,314 [salt.state       :300 ][INFO    ][3572] Host ctl02.mcp-ovs-ha.local (10.167.4.37) already present
2019-04-30 21:36:48,314 [salt.state       :1951][INFO    ][3572] Completed state [ctl02.mcp-ovs-ha.local] at time 21:36:48.314286 duration_in_ms=0.715
2019-04-30 21:36:48,314 [salt.state       :1780][INFO    ][3572] Running state [ctl03] at time 21:36:48.314616
2019-04-30 21:36:48,314 [salt.state       :1813][INFO    ][3572] Executing state host.present for [ctl03]
2019-04-30 21:36:48,315 [salt.state       :300 ][INFO    ][3572] Host ctl03 (10.167.4.38) already present
2019-04-30 21:36:48,315 [salt.state       :1951][INFO    ][3572] Completed state [ctl03] at time 21:36:48.315334 duration_in_ms=0.717
2019-04-30 21:36:48,315 [salt.state       :1780][INFO    ][3572] Running state [ctl03.mcp-ovs-ha.local] at time 21:36:48.315671
2019-04-30 21:36:48,315 [salt.state       :1813][INFO    ][3572] Executing state host.present for [ctl03.mcp-ovs-ha.local]
2019-04-30 21:36:48,316 [salt.state       :300 ][INFO    ][3572] Host ctl03.mcp-ovs-ha.local (10.167.4.38) already present
2019-04-30 21:36:48,316 [salt.state       :1951][INFO    ][3572] Completed state [ctl03.mcp-ovs-ha.local] at time 21:36:48.316373 duration_in_ms=0.701
2019-04-30 21:36:48,316 [salt.state       :1780][INFO    ][3572] Running state [ctl01] at time 21:36:48.316705
2019-04-30 21:36:48,316 [salt.state       :1813][INFO    ][3572] Executing state host.present for [ctl01]
2019-04-30 21:36:48,317 [salt.state       :300 ][INFO    ][3572] Host ctl01 (10.167.4.36) already present
2019-04-30 21:36:48,317 [salt.state       :1951][INFO    ][3572] Completed state [ctl01] at time 21:36:48.317423 duration_in_ms=0.718
2019-04-30 21:36:48,317 [salt.state       :1780][INFO    ][3572] Running state [ctl01.mcp-ovs-ha.local] at time 21:36:48.317751
2019-04-30 21:36:48,317 [salt.state       :1813][INFO    ][3572] Executing state host.present for [ctl01.mcp-ovs-ha.local]
2019-04-30 21:36:48,318 [salt.state       :300 ][INFO    ][3572] Host ctl01.mcp-ovs-ha.local (10.167.4.36) already present
2019-04-30 21:36:48,318 [salt.state       :1951][INFO    ][3572] Completed state [ctl01.mcp-ovs-ha.local] at time 21:36:48.318483 duration_in_ms=0.731
2019-04-30 21:36:48,318 [salt.state       :1780][INFO    ][3572] Running state [ctl] at time 21:36:48.318825
2019-04-30 21:36:48,319 [salt.state       :1813][INFO    ][3572] Executing state host.present for [ctl]
2019-04-30 21:36:48,319 [salt.state       :300 ][INFO    ][3572] Host ctl (10.167.4.35) already present
2019-04-30 21:36:48,319 [salt.state       :1951][INFO    ][3572] Completed state [ctl] at time 21:36:48.319543 duration_in_ms=0.717
2019-04-30 21:36:48,319 [salt.state       :1780][INFO    ][3572] Running state [ctl.mcp-ovs-ha.local] at time 21:36:48.319870
2019-04-30 21:36:48,320 [salt.state       :1813][INFO    ][3572] Executing state host.present for [ctl.mcp-ovs-ha.local]
2019-04-30 21:36:48,320 [salt.state       :300 ][INFO    ][3572] Host ctl.mcp-ovs-ha.local (10.167.4.35) already present
2019-04-30 21:36:48,320 [salt.state       :1951][INFO    ][3572] Completed state [ctl.mcp-ovs-ha.local] at time 21:36:48.320553 duration_in_ms=0.683
2019-04-30 21:36:48,320 [salt.state       :1780][INFO    ][3572] Running state [linux_network_bridge_pkgs] at time 21:36:48.320727
2019-04-30 21:36:48,320 [salt.state       :1813][INFO    ][3572] Executing state pkg.installed for [linux_network_bridge_pkgs]
2019-04-30 21:36:48,326 [salt.state       :300 ][INFO    ][3572] All specified packages are already installed
2019-04-30 21:36:48,326 [salt.state       :1951][INFO    ][3572] Completed state [linux_network_bridge_pkgs] at time 21:36:48.326392 duration_in_ms=5.664
2019-04-30 21:36:48,327 [salt.state       :1780][INFO    ][3572] Running state [/etc/systemd/system/ovsdb-server.service.d/override.conf] at time 21:36:48.327807
2019-04-30 21:36:48,328 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/systemd/system/ovsdb-server.service.d/override.conf]
2019-04-30 21:36:48,328 [salt.state       :300 ][INFO    ][3572] File /etc/systemd/system/ovsdb-server.service.d/override.conf is in the correct state
2019-04-30 21:36:48,328 [salt.state       :1951][INFO    ][3572] Completed state [/etc/systemd/system/ovsdb-server.service.d/override.conf] at time 21:36:48.328934 duration_in_ms=1.127
2019-04-30 21:36:48,329 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430213648320376
2019-04-30 21:36:48,330 [salt.state       :1780][INFO    ][3572] Running state [/etc/systemd/system/ovs-vswitchd.service.d/override.conf] at time 21:36:48.330084
2019-04-30 21:36:48,330 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/systemd/system/ovs-vswitchd.service.d/override.conf]
2019-04-30 21:36:48,330 [salt.state       :300 ][INFO    ][3572] File /etc/systemd/system/ovs-vswitchd.service.d/override.conf is in the correct state
2019-04-30 21:36:48,331 [salt.state       :1951][INFO    ][3572] Completed state [/etc/systemd/system/ovs-vswitchd.service.d/override.conf] at time 21:36:48.331089 duration_in_ms=1.005
2019-04-30 21:36:48,332 [salt.state       :1780][INFO    ][3572] Running state [/etc/systemd/system/networking.service.d/ovs_workaround.conf] at time 21:36:48.332164
2019-04-30 21:36:48,332 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/systemd/system/networking.service.d/ovs_workaround.conf]
2019-04-30 21:36:48,332 [salt.state       :300 ][INFO    ][3572] File /etc/systemd/system/networking.service.d/ovs_workaround.conf is in the correct state
2019-04-30 21:36:48,333 [salt.state       :1951][INFO    ][3572] Completed state [/etc/systemd/system/networking.service.d/ovs_workaround.conf] at time 21:36:48.333121 duration_in_ms=0.956
2019-04-30 21:36:48,333 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 21:36:48.333313
2019-04-30 21:36:48,333 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/network/interfaces.d/50-cloud-init.cfg]
2019-04-30 21:36:48,333 [salt.state       :300 ][INFO    ][3572] File /etc/network/interfaces.d/50-cloud-init.cfg is not present
2019-04-30 21:36:48,333 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces.d/50-cloud-init.cfg] at time 21:36:48.333850 duration_in_ms=0.537
2019-04-30 21:36:48,335 [salt.state       :1780][INFO    ][3572] Running state [enp6s0.300] at time 21:36:48.335579
2019-04-30 21:36:48,335 [salt.state       :1813][INFO    ][3572] Executing state network.managed for [enp6s0.300]
2019-04-30 21:36:48,337 [salt.minion      :1432][INFO    ][4576] Starting a new job with PID 4576
2019-04-30 21:36:48,347 [salt.minion      :1711][INFO    ][4576] Returning information for job: 20190430213648320376
2019-04-30 21:36:48,418 [salt.state       :300 ][INFO    ][3572] Interface enp6s0.300 is up to date.
2019-04-30 21:36:48,418 [salt.state       :1951][INFO    ][3572] Completed state [enp6s0.300] at time 21:36:48.418411 duration_in_ms=82.83
2019-04-30 21:36:48,419 [salt.state       :1780][INFO    ][3572] Running state [br-ctl] at time 21:36:48.419801
2019-04-30 21:36:48,420 [salt.state       :1813][INFO    ][3572] Executing state network.managed for [br-ctl]
2019-04-30 21:36:48,426 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 21:36:48,437 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'bridge-utils'] in directory '/root'
2019-04-30 21:36:48,684 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:36:48,725 [salt.state       :300 ][INFO    ][3572] Interface br-ctl is up to date.
2019-04-30 21:36:48,725 [salt.state       :1951][INFO    ][3572] Completed state [br-ctl] at time 21:36:48.725469 duration_in_ms=305.667
2019-04-30 21:36:48,725 [salt.state       :1780][INFO    ][3572] Running state [enp6s0] at time 21:36:48.725709
2019-04-30 21:36:48,725 [salt.state       :1813][INFO    ][3572] Executing state network.managed for [enp6s0]
2019-04-30 21:36:48,744 [salt.state       :300 ][INFO    ][3572] Interface enp6s0 is up to date.
2019-04-30 21:36:48,745 [salt.state       :1951][INFO    ][3572] Completed state [enp6s0] at time 21:36:48.745197 duration_in_ms=19.486
2019-04-30 21:36:48,745 [salt.state       :1780][INFO    ][3572] Running state [br-floating] at time 21:36:48.745603
2019-04-30 21:36:48,746 [salt.state       :1813][INFO    ][3572] Executing state openvswitch_bridge.present for [br-floating]
2019-04-30 21:36:48,746 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'ovs-vsctl br-exists br-floating' in directory '/root'
2019-04-30 21:36:48,754 [salt.state       :300 ][INFO    ][3572] Bridge br-floating already exists.
2019-04-30 21:36:48,754 [salt.state       :1951][INFO    ][3572] Completed state [br-floating] at time 21:36:48.754548 duration_in_ms=8.946
2019-04-30 21:36:48,754 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces.u/ifcfg-br-floating] at time 21:36:48.754784
2019-04-30 21:36:48,755 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/network/interfaces.u/ifcfg-br-floating]
2019-04-30 21:36:48,774 [salt.state       :300 ][INFO    ][3572] File /etc/network/interfaces.u/ifcfg-br-floating is in the correct state
2019-04-30 21:36:48,774 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces.u/ifcfg-br-floating] at time 21:36:48.774258 duration_in_ms=19.474
2019-04-30 21:36:48,774 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces] at time 21:36:48.774446
2019-04-30 21:36:48,774 [salt.state       :1813][INFO    ][3572] Executing state file.prepend for [/etc/network/interfaces]
2019-04-30 21:36:48,776 [salt.state       :300 ][INFO    ][3572] File changed:
--- 
+++ 
@@ -1,3 +1,6 @@
+source /etc/network/interfaces.d/*
+# Workaround for Upstream-Bug: https://github.com/saltstack/salt/issues/40262
+source /etc/network/interfaces.u/*
 auto lo
 iface lo inet loopback
 auto enp6s0.300

2019-04-30 21:36:48,776 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces] at time 21:36:48.776649 duration_in_ms=2.203
2019-04-30 21:36:48,781 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces] at time 21:36:48.781111
2019-04-30 21:36:48,781 [salt.state       :1813][INFO    ][3572] Executing state file.prepend for [/etc/network/interfaces]
2019-04-30 21:36:48,782 [salt.state       :300 ][INFO    ][3572] File /etc/network/interfaces is in correct state
2019-04-30 21:36:48,782 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces] at time 21:36:48.782339 duration_in_ms=1.228
2019-04-30 21:36:48,784 [salt.state       :1780][INFO    ][3572] Running state [ifup --ignore-errors br-floating] at time 21:36:48.784735
2019-04-30 21:36:48,784 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [ifup --ignore-errors br-floating]
2019-04-30 21:36:48,785 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command '/bin/false' in directory '/root'
2019-04-30 21:36:48,791 [salt.state       :300 ][INFO    ][3572] onlyif execution failed
2019-04-30 21:36:48,791 [salt.state       :1951][INFO    ][3572] Completed state [ifup --ignore-errors br-floating] at time 21:36:48.791908 duration_in_ms=7.172
2019-04-30 21:36:48,792 [salt.state       :1780][INFO    ][3572] Running state [ovs-vsctl add-port br-floating enp8s0] at time 21:36:48.792129
2019-04-30 21:36:48,792 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [ovs-vsctl add-port br-floating enp8s0]
2019-04-30 21:36:48,792 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'ovs-vsctl list-ports br-floating | grep -qFx enp8s0' in directory '/root'
2019-04-30 21:36:48,801 [salt.state       :300 ][INFO    ][3572] unless execution succeeded
2019-04-30 21:36:48,801 [salt.state       :1951][INFO    ][3572] Completed state [ovs-vsctl add-port br-floating enp8s0] at time 21:36:48.801280 duration_in_ms=9.15
2019-04-30 21:36:48,803 [salt.state       :1780][INFO    ][3572] Running state [enp7s0.1000] at time 21:36:48.803271
2019-04-30 21:36:48,803 [salt.state       :1813][INFO    ][3572] Executing state network.managed for [enp7s0.1000]
2019-04-30 21:36:48,822 [salt.state       :300 ][INFO    ][3572] Interface enp7s0.1000 is up to date.
2019-04-30 21:36:48,822 [salt.state       :1951][INFO    ][3572] Completed state [enp7s0.1000] at time 21:36:48.822240 duration_in_ms=18.969
2019-04-30 21:36:48,823 [salt.state       :1780][INFO    ][3572] Running state [br-mesh] at time 21:36:48.823505
2019-04-30 21:36:48,823 [salt.state       :1813][INFO    ][3572] Executing state network.managed for [br-mesh]
2019-04-30 21:36:48,830 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 21:36:48,841 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'bridge-utils'] in directory '/root'
2019-04-30 21:36:49,092 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:36:49,133 [salt.state       :300 ][INFO    ][3572] Interface br-mesh is up to date.
2019-04-30 21:36:49,133 [salt.state       :1951][INFO    ][3572] Completed state [br-mesh] at time 21:36:49.133832 duration_in_ms=310.327
2019-04-30 21:36:49,134 [salt.state       :1780][INFO    ][3572] Running state [enp7s0] at time 21:36:49.134055
2019-04-30 21:36:49,134 [salt.state       :1813][INFO    ][3572] Executing state network.managed for [enp7s0]
2019-04-30 21:36:49,152 [salt.state       :300 ][INFO    ][3572] Interface enp7s0 is up to date.
2019-04-30 21:36:49,153 [salt.state       :1951][INFO    ][3572] Completed state [enp7s0] at time 21:36:49.153184 duration_in_ms=19.128
2019-04-30 21:36:49,153 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces] at time 21:36:49.153404
2019-04-30 21:36:49,153 [salt.state       :1813][INFO    ][3572] Executing state file.prepend for [/etc/network/interfaces]
2019-04-30 21:36:49,154 [salt.state       :300 ][INFO    ][3572] File changed:
--- 
+++ 
@@ -1,3 +1,6 @@
+source /etc/network/interfaces.d/*
+# Workaround for Upstream-Bug: https://github.com/saltstack/salt/issues/40262
+source /etc/network/interfaces.u/*
 auto lo
 iface lo inet loopback
 auto enp6s0.300

2019-04-30 21:36:49,154 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces] at time 21:36:49.154863 duration_in_ms=1.459
2019-04-30 21:36:49,155 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 21:36:49.155014
2019-04-30 21:36:49,155 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/network/interfaces.u/ifcfg-enp8s0]
2019-04-30 21:36:49,173 [salt.state       :300 ][INFO    ][3572] File /etc/network/interfaces.u/ifcfg-enp8s0 is in the correct state
2019-04-30 21:36:49,173 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces.u/ifcfg-enp8s0] at time 21:36:49.173762 duration_in_ms=18.747
2019-04-30 21:36:49,173 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces] at time 21:36:49.173907
2019-04-30 21:36:49,174 [salt.state       :1813][INFO    ][3572] Executing state file.replace for [/etc/network/interfaces]
2019-04-30 21:36:49,174 [salt.state       :300 ][INFO    ][3572] No changes needed to be made
2019-04-30 21:36:49,174 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces] at time 21:36:49.174903 duration_in_ms=0.995
2019-04-30 21:36:49,175 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces] at time 21:36:49.175040
2019-04-30 21:36:49,175 [salt.state       :1813][INFO    ][3572] Executing state file.replace for [/etc/network/interfaces]
2019-04-30 21:36:49,175 [salt.state       :300 ][INFO    ][3572] No changes needed to be made
2019-04-30 21:36:49,176 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces] at time 21:36:49.176006 duration_in_ms=0.965
2019-04-30 21:36:49,180 [salt.state       :1780][INFO    ][3572] Running state [ifup enp8s0] at time 21:36:49.180274
2019-04-30 21:36:49,180 [salt.state       :1813][INFO    ][3572] Executing state cmd.run for [ifup enp8s0]
2019-04-30 21:36:49,180 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'ifup enp8s0' in directory '/root'
2019-04-30 21:36:49,187 [salt.state       :300 ][INFO    ][3572] {'pid': 4656, 'retcode': 0, 'stderr': 'ifup: interface enp8s0 already configured', 'stdout': ''}
2019-04-30 21:36:49,187 [salt.state       :1951][INFO    ][3572] Completed state [ifup enp8s0] at time 21:36:49.187899 duration_in_ms=7.623
2019-04-30 21:36:49,188 [salt.state       :1780][INFO    ][3572] Running state [/etc/network/interfaces] at time 21:36:49.188185
2019-04-30 21:36:49,188 [salt.state       :1813][INFO    ][3572] Executing state file.prepend for [/etc/network/interfaces]
2019-04-30 21:36:49,189 [salt.state       :300 ][INFO    ][3572] File /etc/network/interfaces is in correct state
2019-04-30 21:36:49,190 [salt.state       :1951][INFO    ][3572] Completed state [/etc/network/interfaces] at time 21:36:49.189985 duration_in_ms=1.8
2019-04-30 21:36:49,190 [salt.state       :1780][INFO    ][3572] Running state [/etc/udev/rules.d/60-net-txqueue.rules] at time 21:36:49.190189
2019-04-30 21:36:49,190 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/udev/rules.d/60-net-txqueue.rules]
2019-04-30 21:36:49,235 [salt.state       :300 ][INFO    ][3572] File /etc/udev/rules.d/60-net-txqueue.rules is in the correct state
2019-04-30 21:36:49,235 [salt.state       :1951][INFO    ][3572] Completed state [/etc/udev/rules.d/60-net-txqueue.rules] at time 21:36:49.235534 duration_in_ms=45.345
2019-04-30 21:36:49,237 [salt.state       :1780][INFO    ][3572] Running state [/etc/profile.d/proxy.sh] at time 21:36:49.237832
2019-04-30 21:36:49,238 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/profile.d/proxy.sh]
2019-04-30 21:36:49,238 [salt.state       :300 ][INFO    ][3572] File /etc/profile.d/proxy.sh is not present
2019-04-30 21:36:49,238 [salt.state       :1951][INFO    ][3572] Completed state [/etc/profile.d/proxy.sh] at time 21:36:49.238339 duration_in_ms=0.506
2019-04-30 21:36:49,238 [salt.state       :1780][INFO    ][3572] Running state [/etc/apt/apt.conf.d/95proxies] at time 21:36:49.238479
2019-04-30 21:36:49,238 [salt.state       :1813][INFO    ][3572] Executing state file.absent for [/etc/apt/apt.conf.d/95proxies]
2019-04-30 21:36:49,238 [salt.state       :300 ][INFO    ][3572] File /etc/apt/apt.conf.d/95proxies is not present
2019-04-30 21:36:49,238 [salt.state       :1951][INFO    ][3572] Completed state [/etc/apt/apt.conf.d/95proxies] at time 21:36:49.238930 duration_in_ms=0.452
2019-04-30 21:36:49,239 [salt.state       :1780][INFO    ][3572] Running state [linux_lvm_pkgs] at time 21:36:49.239069
2019-04-30 21:36:49,239 [salt.state       :1813][INFO    ][3572] Executing state pkg.installed for [linux_lvm_pkgs]
2019-04-30 21:36:49,245 [salt.state       :300 ][INFO    ][3572] All specified packages are already installed
2019-04-30 21:36:49,245 [salt.state       :1951][INFO    ][3572] Completed state [linux_lvm_pkgs] at time 21:36:49.245111 duration_in_ms=6.041
2019-04-30 21:36:49,246 [salt.state       :1780][INFO    ][3572] Running state [/etc/lvm/lvm.conf] at time 21:36:49.246230
2019-04-30 21:36:49,246 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/lvm/lvm.conf]
2019-04-30 21:36:49,335 [salt.state       :300 ][INFO    ][3572] File /etc/lvm/lvm.conf is in the correct state
2019-04-30 21:36:49,336 [salt.state       :1951][INFO    ][3572] Completed state [/etc/lvm/lvm.conf] at time 21:36:49.336104 duration_in_ms=89.873
2019-04-30 21:36:49,337 [salt.state       :1780][INFO    ][3572] Running state [lvm2-lvmetad] at time 21:36:49.337918
2019-04-30 21:36:49,338 [salt.state       :1813][INFO    ][3572] Executing state service.running for [lvm2-lvmetad]
2019-04-30 21:36:49,338 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'status', 'lvm2-lvmetad.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:49,347 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-active', 'lvm2-lvmetad.service'] in directory '/root'
2019-04-30 21:36:49,354 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmetad.service'] in directory '/root'
2019-04-30 21:36:49,361 [salt.state       :300 ][INFO    ][3572] The service lvm2-lvmetad is already running
2019-04-30 21:36:49,362 [salt.state       :1951][INFO    ][3572] Completed state [lvm2-lvmetad] at time 21:36:49.362063 duration_in_ms=24.144
2019-04-30 21:36:49,364 [salt.state       :1780][INFO    ][3572] Running state [lvm2-lvmpolld] at time 21:36:49.363984
2019-04-30 21:36:49,364 [salt.state       :1813][INFO    ][3572] Executing state service.running for [lvm2-lvmpolld]
2019-04-30 21:36:49,364 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'status', 'lvm2-lvmpolld.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:49,372 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-active', 'lvm2-lvmpolld.service'] in directory '/root'
2019-04-30 21:36:49,379 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-enabled', 'lvm2-lvmpolld.service'] in directory '/root'
2019-04-30 21:36:49,386 [salt.state       :300 ][INFO    ][3572] The service lvm2-lvmpolld is already running
2019-04-30 21:36:49,387 [salt.state       :1951][INFO    ][3572] Completed state [lvm2-lvmpolld] at time 21:36:49.387151 duration_in_ms=23.166
2019-04-30 21:36:49,389 [salt.state       :1780][INFO    ][3572] Running state [lvm2-monitor] at time 21:36:49.389048
2019-04-30 21:36:49,389 [salt.state       :1813][INFO    ][3572] Executing state service.running for [lvm2-monitor]
2019-04-30 21:36:49,389 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'status', 'lvm2-monitor.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:49,397 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-active', 'lvm2-monitor.service'] in directory '/root'
2019-04-30 21:36:49,404 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-enabled', 'lvm2-monitor.service'] in directory '/root'
2019-04-30 21:36:49,411 [salt.state       :300 ][INFO    ][3572] The service lvm2-monitor is already running
2019-04-30 21:36:49,412 [salt.state       :1951][INFO    ][3572] Completed state [lvm2-monitor] at time 21:36:49.412138 duration_in_ms=23.089
2019-04-30 21:36:49,415 [salt.state       :1780][INFO    ][3572] Running state [/dev/sda2] at time 21:36:49.415899
2019-04-30 21:36:49,416 [salt.state       :1813][INFO    ][3572] Executing state lvm.pv_present for [/dev/sda2]
2019-04-30 21:36:49,416 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['pvdisplay', '-c', '/dev/sda2'] in directory '/root'
2019-04-30 21:36:49,460 [salt.state       :300 ][INFO    ][3572] Physical Volume /dev/sda2 already present
2019-04-30 21:36:49,460 [salt.state       :1951][INFO    ][3572] Completed state [/dev/sda2] at time 21:36:49.460745 duration_in_ms=44.846
2019-04-30 21:36:49,462 [salt.state       :1780][INFO    ][3572] Running state [vgroot] at time 21:36:49.462055
2019-04-30 21:36:49,462 [salt.state       :1813][INFO    ][3572] Executing state lvm.vg_present for [vgroot]
2019-04-30 21:36:49,462 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['vgdisplay', '-c', 'vgroot'] in directory '/root'
2019-04-30 21:36:49,471 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['pvdisplay', '-c', '/dev/sda2'] in directory '/root'
2019-04-30 21:36:49,480 [salt.state       :300 ][INFO    ][3572] Volume Group vgroot already present
/dev/sda2 is part of Volume Group
2019-04-30 21:36:49,481 [salt.state       :1951][INFO    ][3572] Completed state [vgroot] at time 21:36:49.481049 duration_in_ms=18.993
2019-04-30 21:36:49,481 [salt.state       :1780][INFO    ][3572] Running state [/dev/shm] at time 21:36:49.481278
2019-04-30 21:36:49,481 [salt.state       :1813][INFO    ][3572] Executing state mount.mounted for [/dev/shm]
2019-04-30 21:36:49,482 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,490 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'blkid' in directory '/root'
2019-04-30 21:36:49,509 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,515 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -o rw,nosuid,nodev,noexec,relatime,remount -t tmpfs shm /dev/shm' in directory '/root'
2019-04-30 21:36:49,521 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,528 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -o rw,nosuid,nodev,noexec,relatime,remount -t tmpfs shm /dev/shm' in directory '/root'
2019-04-30 21:36:49,533 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,540 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'umount /dev/shm' in directory '/root'
2019-04-30 21:36:49,546 [salt.loaded.int.module.cmdmod:730 ][ERROR   ][3572] Command '['umount', '/dev/shm']' failed with return code: 32
2019-04-30 21:36:49,546 [salt.loaded.int.module.cmdmod:734 ][ERROR   ][3572] stderr: umount: /dev/shm: target is busy
        (In some cases useful info about processes that
         use the device is found by lsof(8) or fuser(1).)
2019-04-30 21:36:49,546 [salt.loaded.int.module.cmdmod:736 ][ERROR   ][3572] retcode: 32
2019-04-30 21:36:49,546 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'mount -l' in directory '/root'
2019-04-30 21:36:49,553 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command 'blkid' in directory '/root'
2019-04-30 21:36:49,562 [salt.state       :300 ][INFO    ][3572] {'umount': "Forced unmount because devices don't match. Wanted: shm, current: tmpfs, /tmpfs"}
2019-04-30 21:36:49,562 [salt.state       :1951][INFO    ][3572] Completed state [/dev/shm] at time 21:36:49.562255 duration_in_ms=80.976
2019-04-30 21:36:49,562 [salt.state       :1780][INFO    ][3572] Running state [ntp] at time 21:36:49.562479
2019-04-30 21:36:49,562 [salt.state       :1813][INFO    ][3572] Executing state pkg.installed for [ntp]
2019-04-30 21:36:49,568 [salt.state       :300 ][INFO    ][3572] All specified packages are already installed
2019-04-30 21:36:49,568 [salt.state       :1951][INFO    ][3572] Completed state [ntp] at time 21:36:49.568525 duration_in_ms=6.045
2019-04-30 21:36:49,570 [salt.state       :1780][INFO    ][3572] Running state [/etc/ntp.conf] at time 21:36:49.570009
2019-04-30 21:36:49,570 [salt.state       :1813][INFO    ][3572] Executing state file.managed for [/etc/ntp.conf]
2019-04-30 21:36:49,617 [salt.state       :300 ][INFO    ][3572] File /etc/ntp.conf is in the correct state
2019-04-30 21:36:49,617 [salt.state       :1951][INFO    ][3572] Completed state [/etc/ntp.conf] at time 21:36:49.617720 duration_in_ms=47.711
2019-04-30 21:36:49,619 [salt.state       :1780][INFO    ][3572] Running state [ntp] at time 21:36:49.619087
2019-04-30 21:36:49,619 [salt.state       :1813][INFO    ][3572] Executing state service.running for [ntp]
2019-04-30 21:36:49,619 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'status', 'ntp.service', '-n', '0'] in directory '/root'
2019-04-30 21:36:49,628 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-active', 'ntp.service'] in directory '/root'
2019-04-30 21:36:49,636 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3572] Executing command ['systemctl', 'is-enabled', 'ntp.service'] in directory '/root'
2019-04-30 21:36:49,645 [salt.state       :300 ][INFO    ][3572] The service ntp is already running
2019-04-30 21:36:49,646 [salt.state       :1951][INFO    ][3572] Completed state [ntp] at time 21:36:49.646126 duration_in_ms=27.037
2019-04-30 21:36:49,650 [salt.minion      :1711][INFO    ][3572] Returning information for job: 20190430213633214368
2019-04-30 21:36:50,296 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command pkg.upgrade with jid 20190430213650288277
2019-04-30 21:36:50,306 [salt.minion      :1432][INFO    ][4702] Starting a new job with PID 4702
2019-04-30 21:36:50,319 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4702] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:36:50,563 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4702] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'dist-upgrade'] in directory '/root'
2019-04-30 21:37:05,361 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430213705350778
2019-04-30 21:37:05,370 [salt.minion      :1432][INFO    ][7437] Starting a new job with PID 7437
2019-04-30 21:37:05,382 [salt.minion      :1711][INFO    ][7437] Returning information for job: 20190430213705350778
2019-04-30 21:37:21,834 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][4702] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:37:21,853 [salt.minion      :1711][INFO    ][4702] Returning information for job: 20190430213650288277
2019-04-30 21:38:59,081 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.apply with jid 20190430213859072859
2019-04-30 21:38:59,090 [salt.minion      :1432][INFO    ][9492] Starting a new job with PID 9492
2019-04-30 21:39:02,831 [salt.state       :915 ][INFO    ][9492] Loading fresh modules for state activity
2019-04-30 21:39:02,866 [salt.fileclient  :1219][INFO    ][9492] Fetching file from saltenv 'base', ** done ** 'salt/init.sls'
2019-04-30 21:39:03,426 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,441 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,862 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,873 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,885 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,896 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,908 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,918 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,929 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,941 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:03,952 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,015 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,026 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,037 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,047 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,058 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,069 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,080 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,091 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,102 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,195 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,206 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,217 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,228 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,240 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,251 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,262 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,274 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,284 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,296 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,311 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,323 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,333 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,344 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,356 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,367 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,378 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,388 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,400 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,410 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:04,444 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'salt-minion --version' in directory '/root'
2019-04-30 21:39:04,616 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'salt-minion --version' in directory '/root'
2019-04-30 21:39:05,207 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,220 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l ceilometer-agent-compute | grep ceilometer-agent-compute | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,382 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,393 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,404 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,415 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,425 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,436 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,447 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,457 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,468 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,529 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l nova-common | grep nova-common | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,540 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l nova-compute-kvm | grep nova-compute-kvm | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,552 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-novaclient | grep python-novaclient | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,562 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l pm-utils | grep pm-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,572 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,583 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,594 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,603 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-guestfs | grep python-guestfs | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,614 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,693 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,705 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,717 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,728 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,739 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,749 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,760 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,771 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,782 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,792 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,808 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l cinder-volume | grep cinder-volume | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,819 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l lvm2 | grep lvm2 | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,829 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sysfsutils | grep sysfsutils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,840 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l sg3-utils | grep sg3-utils | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,852 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-cinder | grep python-cinder | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,863 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-mysqldb | grep python-mysqldb | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,873 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l p7zip | grep p7zip | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,883 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l gettext-base | grep gettext-base | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,894 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-memcache | grep python-memcache | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,904 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'dpkg -l python-pycadf | grep python-pycadf | awk '{print $3}'' in directory '/root'
2019-04-30 21:39:05,938 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'salt-minion --version' in directory '/root'
2019-04-30 21:39:06,095 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'salt-minion --version' in directory '/root'
2019-04-30 21:39:06,987 [salt.state       :1780][INFO    ][9492] Running state [salt-minion] at time 21:39:06.987214
2019-04-30 21:39:06,987 [salt.state       :1813][INFO    ][9492] Executing state pkg.installed for [salt-minion]
2019-04-30 21:39:06,987 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 21:39:07,244 [salt.state       :300 ][INFO    ][9492] All specified packages are already installed
2019-04-30 21:39:07,244 [salt.state       :1951][INFO    ][9492] Completed state [salt-minion] at time 21:39:07.244313 duration_in_ms=257.098
2019-04-30 21:39:07,244 [salt.state       :1780][INFO    ][9492] Running state [salt_minion_dependency_packages] at time 21:39:07.244581
2019-04-30 21:39:07,244 [salt.state       :1813][INFO    ][9492] Executing state pkg.installed for [salt_minion_dependency_packages]
2019-04-30 21:39:07,249 [salt.state       :300 ][INFO    ][9492] All specified packages are already installed
2019-04-30 21:39:07,249 [salt.state       :1951][INFO    ][9492] Completed state [salt_minion_dependency_packages] at time 21:39:07.249685 duration_in_ms=5.104
2019-04-30 21:39:07,251 [salt.state       :1780][INFO    ][9492] Running state [/etc/salt/minion.d/minion.conf] at time 21:39:07.251590
2019-04-30 21:39:07,251 [salt.state       :1813][INFO    ][9492] Executing state file.managed for [/etc/salt/minion.d/minion.conf]
2019-04-30 21:39:07,393 [salt.state       :300 ][INFO    ][9492] File /etc/salt/minion.d/minion.conf is in the correct state
2019-04-30 21:39:07,393 [salt.state       :1951][INFO    ][9492] Completed state [/etc/salt/minion.d/minion.conf] at time 21:39:07.393527 duration_in_ms=141.937
2019-04-30 21:39:07,393 [salt.state       :1780][INFO    ][9492] Running state [python-netaddr] at time 21:39:07.393739
2019-04-30 21:39:07,393 [salt.state       :1813][INFO    ][9492] Executing state pkg.installed for [python-netaddr]
2019-04-30 21:39:07,398 [salt.state       :300 ][INFO    ][9492] All specified packages are already installed
2019-04-30 21:39:07,398 [salt.state       :1951][INFO    ][9492] Completed state [python-netaddr] at time 21:39:07.398438 duration_in_ms=4.698
2019-04-30 21:39:07,400 [salt.state       :1780][INFO    ][9492] Running state [/etc/systemd/system/salt-minion.service.d/50-restarts.conf] at time 21:39:07.400067
2019-04-30 21:39:07,400 [salt.state       :1813][INFO    ][9492] Executing state file.managed for [/etc/systemd/system/salt-minion.service.d/50-restarts.conf]
2019-04-30 21:39:07,409 [salt.state       :300 ][INFO    ][9492] File /etc/systemd/system/salt-minion.service.d/50-restarts.conf is in the correct state
2019-04-30 21:39:07,409 [salt.state       :1951][INFO    ][9492] Completed state [/etc/systemd/system/salt-minion.service.d/50-restarts.conf] at time 21:39:07.409522 duration_in_ms=9.455
2019-04-30 21:39:07,410 [salt.state       :1780][INFO    ][9492] Running state [salt-minion] at time 21:39:07.410077
2019-04-30 21:39:07,410 [salt.state       :1813][INFO    ][9492] Executing state service.running for [salt-minion]
2019-04-30 21:39:07,410 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
2019-04-30 21:39:07,422 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command ['systemctl', 'is-active', 'salt-minion.service'] in directory '/root'
2019-04-30 21:39:07,428 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command ['systemctl', 'is-enabled', 'salt-minion.service'] in directory '/root'
2019-04-30 21:39:07,434 [salt.state       :300 ][INFO    ][9492] The service salt-minion is already running
2019-04-30 21:39:07,434 [salt.state       :1951][INFO    ][9492] Completed state [salt-minion] at time 21:39:07.434188 duration_in_ms=24.109
2019-04-30 21:39:07,434 [salt.state       :1780][INFO    ][9492] Running state [/etc/salt/grains.d] at time 21:39:07.434935
2019-04-30 21:39:07,435 [salt.state       :1813][INFO    ][9492] Executing state file.directory for [/etc/salt/grains.d]
2019-04-30 21:39:07,435 [salt.state       :300 ][INFO    ][9492] Directory /etc/salt/grains.d is in the correct state
Directory /etc/salt/grains.d updated
2019-04-30 21:39:07,435 [salt.state       :1951][INFO    ][9492] Completed state [/etc/salt/grains.d] at time 21:39:07.435772 duration_in_ms=0.836
2019-04-30 21:39:07,436 [salt.state       :1780][INFO    ][9492] Running state [/etc/salt/grains] at time 21:39:07.436117
2019-04-30 21:39:07,436 [salt.state       :1813][INFO    ][9492] Executing state file.managed for [/etc/salt/grains]
2019-04-30 21:39:07,436 [salt.state       :300 ][INFO    ][9492] File /etc/salt/grains exists with proper permissions. No changes made.
2019-04-30 21:39:07,436 [salt.state       :1951][INFO    ][9492] Completed state [/etc/salt/grains] at time 21:39:07.436726 duration_in_ms=0.61
2019-04-30 21:39:07,436 [salt.state       :1780][INFO    ][9492] Running state [/etc/salt/grains.d/placeholder] at time 21:39:07.436957
2019-04-30 21:39:07,437 [salt.state       :1813][INFO    ][9492] Executing state file.managed for [/etc/salt/grains.d/placeholder]
2019-04-30 21:39:07,449 [salt.state       :300 ][INFO    ][9492] File /etc/salt/grains.d/placeholder exists with proper permissions. No changes made.
2019-04-30 21:39:07,450 [salt.state       :1951][INFO    ][9492] Completed state [/etc/salt/grains.d/placeholder] at time 21:39:07.450048 duration_in_ms=13.092
2019-04-30 21:39:07,450 [salt.state       :1780][INFO    ][9492] Running state [/etc/salt/grains.d/sphinx] at time 21:39:07.450283
2019-04-30 21:39:07,450 [salt.state       :1813][INFO    ][9492] Executing state file.managed for [/etc/salt/grains.d/sphinx]
2019-04-30 21:39:07,456 [salt.state       :300 ][INFO    ][9492] File changed:
--- 
+++ 
@@ -41,13 +41,13 @@
 
                 * lvm2: 2.02.133-1ubuntu10
 
-                * sysfsutils: dpkg-query: no packages found matching sysfsutils
+                * sysfsutils: 2.1.0+repack-4
 
                 * sg3-utils: dpkg-query: no packages found matching sg3-utils
 
                 * python-cinder: dpkg-query: no packages found matching python-cinder
 
-                * python-mysqldb: dpkg-query: no packages found matching python-mysqldb
+                * python-mysqldb: <none>
 
                 * p7zip: dpkg-query: no packages found matching p7zip
 
@@ -86,8 +86,11 @@
             ip:
               name: IP Addresses
               value:
+              - 10.1.0.6
+              - 10.167.4.56
               - 127.0.0.1
-              - 192.168.11.40
+              - 172.30.10.112
+              - 192.168.11.37
         system:
           name: System
           param:
@@ -135,7 +138,7 @@
 
                 * pm-utils: dpkg-query: no packages found matching pm-utils
 
-                * sysfsutils: dpkg-query: no packages found matching sysfsutils
+                * sysfsutils: 2.1.0+repack-4
 
                 * sg3-utils: dpkg-query: no packages found matching sg3-utils
 

2019-04-30 21:39:07,456 [salt.state       :1951][INFO    ][9492] Completed state [/etc/salt/grains.d/sphinx] at time 21:39:07.456779 duration_in_ms=6.496
2019-04-30 21:39:07,457 [salt.state       :1780][INFO    ][9492] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.457925
2019-04-30 21:39:07,458 [salt.state       :1813][INFO    ][9492] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"]
2019-04-30 21:39:07,458 [salt.state       :300 ][INFO    ][9492] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"
2019-04-30 21:39:07,458 [salt.state       :1951][INFO    ][9492] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.458384 duration_in_ms=0.458
2019-04-30 21:39:07,458 [salt.state       :1780][INFO    ][9492] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.458501
2019-04-30 21:39:07,458 [salt.state       :1813][INFO    ][9492] Executing state cmd.mod_watch for [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"]
2019-04-30 21:39:07,459 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"' in directory '/root'
2019-04-30 21:39:07,537 [salt.state       :300 ][INFO    ][9492] {'pid': 9862, 'retcode': 0, 'stderr': '', 'stdout': ''}
2019-04-30 21:39:07,537 [salt.state       :1951][INFO    ][9492] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/sphinx', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.537550 duration_in_ms=79.048
2019-04-30 21:39:07,537 [salt.state       :1780][INFO    ][9492] Running state [/etc/salt/grains.d/dns_records] at time 21:39:07.537919
2019-04-30 21:39:07,538 [salt.state       :1813][INFO    ][9492] Executing state file.managed for [/etc/salt/grains.d/dns_records]
2019-04-30 21:39:07,539 [salt.state       :300 ][INFO    ][9492] File /etc/salt/grains.d/dns_records is in the correct state
2019-04-30 21:39:07,539 [salt.state       :1951][INFO    ][9492] Completed state [/etc/salt/grains.d/dns_records] at time 21:39:07.539421 duration_in_ms=1.502
2019-04-30 21:39:07,539 [salt.state       :1780][INFO    ][9492] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.539922
2019-04-30 21:39:07,540 [salt.state       :1813][INFO    ][9492] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"]
2019-04-30 21:39:07,540 [salt.state       :300 ][INFO    ][9492] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"
2019-04-30 21:39:07,540 [salt.state       :1951][INFO    ][9492] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/dns_records', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.540388 duration_in_ms=0.467
2019-04-30 21:39:07,540 [salt.state       :1780][INFO    ][9492] Running state [/etc/salt/grains.d/salt] at time 21:39:07.540624
2019-04-30 21:39:07,540 [salt.state       :1813][INFO    ][9492] Executing state file.managed for [/etc/salt/grains.d/salt]
2019-04-30 21:39:07,556 [salt.state       :300 ][INFO    ][9492] File /etc/salt/grains.d/salt is in the correct state
2019-04-30 21:39:07,556 [salt.state       :1951][INFO    ][9492] Completed state [/etc/salt/grains.d/salt] at time 21:39:07.556582 duration_in_ms=15.958
2019-04-30 21:39:07,557 [salt.state       :1780][INFO    ][9492] Running state [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.557017
2019-04-30 21:39:07,557 [salt.state       :1813][INFO    ][9492] Executing state cmd.wait for [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"]
2019-04-30 21:39:07,557 [salt.state       :300 ][INFO    ][9492] No changes made for python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"
2019-04-30 21:39:07,557 [salt.state       :1951][INFO    ][9492] Completed state [python -c "import yaml; stream = file('/etc/salt/grains.d/salt', 'r'); yaml.load(stream); stream.close()"] at time 21:39:07.557492 duration_in_ms=0.474
2019-04-30 21:39:07,558 [salt.state       :1780][INFO    ][9492] Running state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 21:39:07.558482
2019-04-30 21:39:07,558 [salt.state       :1813][INFO    ][9492] Executing state cmd.wait for [cat /etc/salt/grains.d/* > /etc/salt/grains]
2019-04-30 21:39:07,558 [salt.state       :300 ][INFO    ][9492] No changes made for cat /etc/salt/grains.d/* > /etc/salt/grains
2019-04-30 21:39:07,558 [salt.state       :1951][INFO    ][9492] Completed state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 21:39:07.558946 duration_in_ms=0.463
2019-04-30 21:39:07,559 [salt.state       :1780][INFO    ][9492] Running state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 21:39:07.559060
2019-04-30 21:39:07,559 [salt.state       :1813][INFO    ][9492] Executing state cmd.mod_watch for [cat /etc/salt/grains.d/* > /etc/salt/grains]
2019-04-30 21:39:07,560 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9492] Executing command 'cat /etc/salt/grains.d/* > /etc/salt/grains' in directory '/root'
2019-04-30 21:39:07,565 [salt.state       :300 ][INFO    ][9492] {'pid': 9864, 'retcode': 0, 'stderr': '', 'stdout': ''}
2019-04-30 21:39:07,566 [salt.state       :1951][INFO    ][9492] Completed state [cat /etc/salt/grains.d/* > /etc/salt/grains] at time 21:39:07.566237 duration_in_ms=7.176
2019-04-30 21:39:07,566 [salt.state       :1780][INFO    ][9492] Running state [mine.update] at time 21:39:07.566772
2019-04-30 21:39:07,566 [salt.state       :1813][INFO    ][9492] Executing state module.wait for [mine.update]
2019-04-30 21:39:07,567 [salt.state       :300 ][INFO    ][9492] No changes made for mine.update
2019-04-30 21:39:07,567 [salt.state       :1951][INFO    ][9492] Completed state [mine.update] at time 21:39:07.567287 duration_in_ms=0.515
2019-04-30 21:39:07,567 [salt.state       :1780][INFO    ][9492] Running state [mine.update] at time 21:39:07.567406
2019-04-30 21:39:07,567 [salt.state       :1813][INFO    ][9492] Executing state module.mod_watch for [mine.update]
2019-04-30 21:39:07,567 [salt.utils.decorators:613 ][WARNING ][9492] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 21:39:08,083 [salt.state       :300 ][INFO    ][9492] {'ret': True}
2019-04-30 21:39:08,084 [salt.state       :1951][INFO    ][9492] Completed state [mine.update] at time 21:39:08.084182 duration_in_ms=516.776
2019-04-30 21:39:08,084 [salt.state       :1780][INFO    ][9492] Running state [ca-certificates] at time 21:39:08.084455
2019-04-30 21:39:08,084 [salt.state       :1813][INFO    ][9492] Executing state pkg.installed for [ca-certificates]
2019-04-30 21:39:08,091 [salt.state       :300 ][INFO    ][9492] All specified packages are already installed
2019-04-30 21:39:08,091 [salt.state       :1951][INFO    ][9492] Completed state [ca-certificates] at time 21:39:08.091845 duration_in_ms=7.39
2019-04-30 21:39:08,092 [salt.state       :1780][INFO    ][9492] Running state [update-ca-certificates] at time 21:39:08.092369
2019-04-30 21:39:08,092 [salt.state       :1813][INFO    ][9492] Executing state cmd.wait for [update-ca-certificates]
2019-04-30 21:39:08,092 [salt.state       :300 ][INFO    ][9492] No changes made for update-ca-certificates
2019-04-30 21:39:08,093 [salt.state       :1951][INFO    ][9492] Completed state [update-ca-certificates] at time 21:39:08.093089 duration_in_ms=0.72
2019-04-30 21:39:08,095 [salt.minion      :1711][INFO    ][9492] Returning information for job: 20190430213859072859
2019-04-30 21:42:53,450 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.sync_all with jid 20190430214253441001
2019-04-30 21:42:53,461 [salt.minion      :1432][INFO    ][9877] Starting a new job with PID 9877
2019-04-30 21:42:56,538 [salt.state       :915 ][INFO    ][9877] Loading fresh modules for state activity
2019-04-30 21:42:56,937 [salt.utils.extmods:71  ][INFO    ][9877] Creating module dir '/var/cache/salt/minion/extmods/clouds'
2019-04-30 21:42:56,940 [salt.utils.extmods:82  ][INFO    ][9877] Syncing clouds for environment 'base'
2019-04-30 21:42:56,940 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_clouds, for base)
2019-04-30 21:42:56,940 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_clouds/' for environment 'base'
2019-04-30 21:42:56,996 [salt.utils.extmods:71  ][INFO    ][9877] Creating module dir '/var/cache/salt/minion/extmods/beacons'
2019-04-30 21:42:56,999 [salt.utils.extmods:82  ][INFO    ][9877] Syncing beacons for environment 'base'
2019-04-30 21:42:56,999 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_beacons, for base)
2019-04-30 21:42:56,999 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_beacons/' for environment 'base'
2019-04-30 21:42:57,049 [salt.utils.extmods:82  ][INFO    ][9877] Syncing modules for environment 'base'
2019-04-30 21:42:57,049 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_modules, for base)
2019-04-30 21:42:57,049 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_modules/' for environment 'base'
2019-04-30 21:42:57,641 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/__init__.py'
2019-04-30 21:42:57,642 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/archive_policy.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/archive_policy.py'
2019-04-30 21:42:57,642 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/_modules/gnocchiv1/common.py' to '/var/cache/salt/minion/extmods/modules/_modules/gnocchiv1/common.py'
2019-04-30 21:42:57,643 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/architect.py' to '/var/cache/salt/minion/extmods/modules/architect.py'
2019-04-30 21:42:57,643 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/artifactory.py' to '/var/cache/salt/minion/extmods/modules/artifactory.py'
2019-04-30 21:42:57,643 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/auditd.py' to '/var/cache/salt/minion/extmods/modules/auditd.py'
2019-04-30 21:42:57,643 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/avinetworks.py' to '/var/cache/salt/minion/extmods/modules/avinetworks.py'
2019-04-30 21:42:57,644 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/__init__.py'
2019-04-30 21:42:57,644 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/acl.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/acl.py'
2019-04-30 21:42:57,644 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/common.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/common.py'
2019-04-30 21:42:57,645 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/barbicanv1/secrets.py' to '/var/cache/salt/minion/extmods/modules/barbicanv1/secrets.py'
2019-04-30 21:42:57,645 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ceph_ng.py' to '/var/cache/salt/minion/extmods/modules/ceph_ng.py'
2019-04-30 21:42:57,645 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cfgdrive.py' to '/var/cache/salt/minion/extmods/modules/cfgdrive.py'
2019-04-30 21:42:57,645 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderng.py' to '/var/cache/salt/minion/extmods/modules/cinderng.py'
2019-04-30 21:42:57,646 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/__init__.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/__init__.py'
2019-04-30 21:42:57,646 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/arg_converter.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/arg_converter.py'
2019-04-30 21:42:57,646 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/common.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/common.py'
2019-04-30 21:42:57,646 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/lists.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/lists.py'
2019-04-30 21:42:57,647 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/services.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/services.py'
2019-04-30 21:42:57,647 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volume.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volume.py'
2019-04-30 21:42:57,647 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volume_actions.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volume_actions.py'
2019-04-30 21:42:57,648 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volume_types.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volume_types.py'
2019-04-30 21:42:57,648 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/cinderv3/volumes.py' to '/var/cache/salt/minion/extmods/modules/cinderv3/volumes.py'
2019-04-30 21:42:57,648 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/configdrive.py' to '/var/cache/salt/minion/extmods/modules/configdrive.py'
2019-04-30 21:42:57,649 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/contrail.py' to '/var/cache/salt/minion/extmods/modules/contrail.py'
2019-04-30 21:42:57,649 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/contrail_health.py' to '/var/cache/salt/minion/extmods/modules/contrail_health.py'
2019-04-30 21:42:57,649 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/creds.py' to '/var/cache/salt/minion/extmods/modules/creds.py'
2019-04-30 21:42:57,650 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/devops_utils.py' to '/var/cache/salt/minion/extmods/modules/devops_utils.py'
2019-04-30 21:42:57,650 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/dockerng_service.py' to '/var/cache/salt/minion/extmods/modules/dockerng_service.py'
2019-04-30 21:42:57,650 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/encode_json.py' to '/var/cache/salt/minion/extmods/modules/encode_json.py'
2019-04-30 21:42:57,650 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/gerrit.py' to '/var/cache/salt/minion/extmods/modules/gerrit.py'
2019-04-30 21:42:57,651 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/gitlab.py' to '/var/cache/salt/minion/extmods/modules/gitlab.py'
2019-04-30 21:42:57,651 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/glanceng.py' to '/var/cache/salt/minion/extmods/modules/glanceng.py'
2019-04-30 21:42:57,651 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/__init__.py' to '/var/cache/salt/minion/extmods/modules/glancev2/__init__.py'
2019-04-30 21:42:57,652 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/common.py' to '/var/cache/salt/minion/extmods/modules/glancev2/common.py'
2019-04-30 21:42:57,652 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/image.py' to '/var/cache/salt/minion/extmods/modules/glancev2/image.py'
2019-04-30 21:42:57,652 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/glancev2/task.py' to '/var/cache/salt/minion/extmods/modules/glancev2/task.py'
2019-04-30 21:42:57,652 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/__init__.py'
2019-04-30 21:42:57,653 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/archive_policy.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/archive_policy.py'
2019-04-30 21:42:57,653 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/common.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/common.py'
2019-04-30 21:42:57,653 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/gnocchiv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/gnocchiv1/__init__.py'
2019-04-30 21:42:57,653 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/gnocchiv1/archive_policy.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/gnocchiv1/archive_policy.py'
2019-04-30 21:42:57,654 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/gnocchiv1/gnocchiv1/common.py' to '/var/cache/salt/minion/extmods/modules/gnocchiv1/gnocchiv1/common.py'
2019-04-30 21:42:57,654 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/heat.py' to '/var/cache/salt/minion/extmods/modules/heat.py'
2019-04-30 21:42:57,654 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/heatv1/__init__.py'
2019-04-30 21:42:57,655 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/common.py' to '/var/cache/salt/minion/extmods/modules/heatv1/common.py'
2019-04-30 21:42:57,655 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/services.py' to '/var/cache/salt/minion/extmods/modules/heatv1/services.py'
2019-04-30 21:42:57,656 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/heatv1/stack.py' to '/var/cache/salt/minion/extmods/modules/heatv1/stack.py'
2019-04-30 21:42:57,656 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/heka_alarming.py' to '/var/cache/salt/minion/extmods/modules/heka_alarming.py'
2019-04-30 21:42:57,657 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/helm.py' to '/var/cache/salt/minion/extmods/modules/helm.py'
2019-04-30 21:42:57,657 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/iptables_extra.py' to '/var/cache/salt/minion/extmods/modules/iptables_extra.py'
2019-04-30 21:42:57,657 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ironicng.py' to '/var/cache/salt/minion/extmods/modules/ironicng.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/__init__.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/__init__.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/chassis.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/chassis.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/common.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/common.py'
2019-04-30 21:42:57,658 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/drivers.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/drivers.py'
2019-04-30 21:42:57,659 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/nodes.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/nodes.py'
2019-04-30 21:42:57,659 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/ports.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/ports.py'
2019-04-30 21:42:57,659 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ironicv1/volumes.py' to '/var/cache/salt/minion/extmods/modules/ironicv1/volumes.py'
2019-04-30 21:42:57,660 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/jenkins_common.py' to '/var/cache/salt/minion/extmods/modules/jenkins_common.py'
2019-04-30 21:42:57,660 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystone_policy.py' to '/var/cache/salt/minion/extmods/modules/keystone_policy.py'
2019-04-30 21:42:57,660 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystoneng.py' to '/var/cache/salt/minion/extmods/modules/keystoneng.py'
2019-04-30 21:42:57,661 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/__init__.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/__init__.py'
2019-04-30 21:42:57,661 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/arg_converter.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/arg_converter.py'
2019-04-30 21:42:57,661 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/common.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/common.py'
2019-04-30 21:42:57,662 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/domains.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/domains.py'
2019-04-30 21:42:57,662 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/endpoints.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/endpoints.py'
2019-04-30 21:42:57,662 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/groups.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/groups.py'
2019-04-30 21:42:57,663 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/lists.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/lists.py'
2019-04-30 21:42:57,663 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/projects.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/projects.py'
2019-04-30 21:42:57,663 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/regions.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/regions.py'
2019-04-30 21:42:57,663 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/roles.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/roles.py'
2019-04-30 21:42:57,664 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/services.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/services.py'
2019-04-30 21:42:57,664 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/keystonev3/users.py' to '/var/cache/salt/minion/extmods/modules/keystonev3/users.py'
2019-04-30 21:42:57,665 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/linux_hosts.py' to '/var/cache/salt/minion/extmods/modules/linux_hosts.py'
2019-04-30 21:42:57,665 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/linux_netlink.py' to '/var/cache/salt/minion/extmods/modules/linux_netlink.py'
2019-04-30 21:42:57,665 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/maas.py' to '/var/cache/salt/minion/extmods/modules/maas.py'
2019-04-30 21:42:57,665 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/maas_client.py' to '/var/cache/salt/minion/extmods/modules/maas_client.py'
2019-04-30 21:42:57,666 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/maasng.py' to '/var/cache/salt/minion/extmods/modules/maasng.py'
2019-04-30 21:42:57,666 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/manilang/__init__.py' to '/var/cache/salt/minion/extmods/modules/manilang/__init__.py'
2019-04-30 21:42:57,666 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/manilang/common.py' to '/var/cache/salt/minion/extmods/modules/manilang/common.py'
2019-04-30 21:42:57,667 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/manilang/share_types.py' to '/var/cache/salt/minion/extmods/modules/manilang/share_types.py'
2019-04-30 21:42:57,667 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/manilang/shares.py' to '/var/cache/salt/minion/extmods/modules/manilang/shares.py'
2019-04-30 21:42:57,667 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/modelschema.py' to '/var/cache/salt/minion/extmods/modules/modelschema.py'
2019-04-30 21:42:57,667 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/modelutils.py' to '/var/cache/salt/minion/extmods/modules/modelutils.py'
2019-04-30 21:42:57,668 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/multipart.py' to '/var/cache/salt/minion/extmods/modules/multipart.py'
2019-04-30 21:42:57,668 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/nagios_alarming.py' to '/var/cache/salt/minion/extmods/modules/nagios_alarming.py'
2019-04-30 21:42:57,668 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/net_checks.py' to '/var/cache/salt/minion/extmods/modules/net_checks.py'
2019-04-30 21:42:57,668 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/netutils.py' to '/var/cache/salt/minion/extmods/modules/netutils.py'
2019-04-30 21:42:57,669 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronng.py' to '/var/cache/salt/minion/extmods/modules/neutronng.py'
2019-04-30 21:42:57,669 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/__init__.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/__init__.py'
2019-04-30 21:42:57,669 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/agents.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/agents.py'
2019-04-30 21:42:57,669 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/arg_converter.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/arg_converter.py'
2019-04-30 21:42:57,670 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/auto_alloc.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/auto_alloc.py'
2019-04-30 21:42:57,670 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/common.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/common.py'
2019-04-30 21:42:57,670 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/lists.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/lists.py'
2019-04-30 21:42:57,670 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/networks.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/networks.py'
2019-04-30 21:42:57,670 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/ports.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/ports.py'
2019-04-30 21:42:57,671 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/routers.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/routers.py'
2019-04-30 21:42:57,671 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/subnetpools.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/subnetpools.py'
2019-04-30 21:42:57,671 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/neutronv2/subnets.py' to '/var/cache/salt/minion/extmods/modules/neutronv2/subnets.py'
2019-04-30 21:42:57,671 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novang.py' to '/var/cache/salt/minion/extmods/modules/novang.py'
2019-04-30 21:42:57,672 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novav21/__init__.py' to '/var/cache/salt/minion/extmods/modules/novav21/__init__.py'
2019-04-30 21:42:57,672 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novav21/aggregates.py' to '/var/cache/salt/minion/extmods/modules/novav21/aggregates.py'
2019-04-30 21:42:57,672 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novav21/common.py' to '/var/cache/salt/minion/extmods/modules/novav21/common.py'
2019-04-30 21:42:57,672 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novav21/flavors.py' to '/var/cache/salt/minion/extmods/modules/novav21/flavors.py'
2019-04-30 21:42:57,673 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novav21/keypairs.py' to '/var/cache/salt/minion/extmods/modules/novav21/keypairs.py'
2019-04-30 21:42:57,673 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novav21/quotas.py' to '/var/cache/salt/minion/extmods/modules/novav21/quotas.py'
2019-04-30 21:42:57,673 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novav21/servers.py' to '/var/cache/salt/minion/extmods/modules/novav21/servers.py'
2019-04-30 21:42:57,673 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/novav21/services.py' to '/var/cache/salt/minion/extmods/modules/novav21/services.py'
2019-04-30 21:42:57,674 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/octaviav2/__init__.py' to '/var/cache/salt/minion/extmods/modules/octaviav2/__init__.py'
2019-04-30 21:42:57,674 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/octaviav2/common.py' to '/var/cache/salt/minion/extmods/modules/octaviav2/common.py'
2019-04-30 21:42:57,674 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/octaviav2/loadbalancers.py' to '/var/cache/salt/minion/extmods/modules/octaviav2/loadbalancers.py'
2019-04-30 21:42:57,674 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/ovs_config.py' to '/var/cache/salt/minion/extmods/modules/ovs_config.py'
2019-04-30 21:42:57,674 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/reclass.py' to '/var/cache/salt/minion/extmods/modules/reclass.py'
2019-04-30 21:42:57,675 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/rsyslog_util.py' to '/var/cache/salt/minion/extmods/modules/rsyslog_util.py'
2019-04-30 21:42:57,675 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/rundeck.py' to '/var/cache/salt/minion/extmods/modules/rundeck.py'
2019-04-30 21:42:57,675 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/saltkey.py' to '/var/cache/salt/minion/extmods/modules/saltkey.py'
2019-04-30 21:42:57,676 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/saltresource.py' to '/var/cache/salt/minion/extmods/modules/saltresource.py'
2019-04-30 21:42:57,676 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/seedng.py' to '/var/cache/salt/minion/extmods/modules/seedng.py'
2019-04-30 21:42:57,676 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/testing/__init__.py' to '/var/cache/salt/minion/extmods/modules/testing/__init__.py'
2019-04-30 21:42:57,676 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/testing/credentials.py' to '/var/cache/salt/minion/extmods/modules/testing/credentials.py'
2019-04-30 21:42:57,677 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/testing/django.py' to '/var/cache/salt/minion/extmods/modules/testing/django.py'
2019-04-30 21:42:57,677 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/testing/django_client_proxy.py' to '/var/cache/salt/minion/extmods/modules/testing/django_client_proxy.py'
2019-04-30 21:42:57,678 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/utils.py' to '/var/cache/salt/minion/extmods/modules/utils.py'
2019-04-30 21:42:57,678 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_modules/virtng.py' to '/var/cache/salt/minion/extmods/modules/virtng.py'
2019-04-30 21:42:57,686 [salt.utils.extmods:82  ][INFO    ][9877] Syncing states for environment 'base'
2019-04-30 21:42:57,686 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_states, for base)
2019-04-30 21:42:57,686 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_states/' for environment 'base'
2019-04-30 21:42:57,946 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/_states/gnocchiv1.py' to '/var/cache/salt/minion/extmods/states/_states/gnocchiv1.py'
2019-04-30 21:42:57,946 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/artifactory.py' to '/var/cache/salt/minion/extmods/states/artifactory.py'
2019-04-30 21:42:57,946 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/avinetworks.py' to '/var/cache/salt/minion/extmods/states/avinetworks.py'
2019-04-30 21:42:57,947 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/barbicanv1.py' to '/var/cache/salt/minion/extmods/states/barbicanv1.py'
2019-04-30 21:42:57,947 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/cinderng.py' to '/var/cache/salt/minion/extmods/states/cinderng.py'
2019-04-30 21:42:57,948 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/cinderv3.py' to '/var/cache/salt/minion/extmods/states/cinderv3.py'
2019-04-30 21:42:57,948 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/contrail.py' to '/var/cache/salt/minion/extmods/states/contrail.py'
2019-04-30 21:42:57,948 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/contrail_health.py' to '/var/cache/salt/minion/extmods/states/contrail_health.py'
2019-04-30 21:42:57,949 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/debmirror.py' to '/var/cache/salt/minion/extmods/states/debmirror.py'
2019-04-30 21:42:57,949 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/dockerng_service.py' to '/var/cache/salt/minion/extmods/states/dockerng_service.py'
2019-04-30 21:42:57,950 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/gerrit.py' to '/var/cache/salt/minion/extmods/states/gerrit.py'
2019-04-30 21:42:57,950 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/gitlab.py' to '/var/cache/salt/minion/extmods/states/gitlab.py'
2019-04-30 21:42:57,950 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/glanceng.py' to '/var/cache/salt/minion/extmods/states/glanceng.py'
2019-04-30 21:42:57,951 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/glancev2.py' to '/var/cache/salt/minion/extmods/states/glancev2.py'
2019-04-30 21:42:57,951 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/gnocchiv1.py' to '/var/cache/salt/minion/extmods/states/gnocchiv1.py'
2019-04-30 21:42:57,951 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/grafana3_dashboard.py' to '/var/cache/salt/minion/extmods/states/grafana3_dashboard.py'
2019-04-30 21:42:57,952 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/grafana3_datasource.py' to '/var/cache/salt/minion/extmods/states/grafana3_datasource.py'
2019-04-30 21:42:57,952 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/heat.py' to '/var/cache/salt/minion/extmods/states/heat.py'
2019-04-30 21:42:57,953 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/heatv1.py' to '/var/cache/salt/minion/extmods/states/heatv1.py'
2019-04-30 21:42:57,953 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/helm_release.py' to '/var/cache/salt/minion/extmods/states/helm_release.py'
2019-04-30 21:42:57,953 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/helm_repos.py' to '/var/cache/salt/minion/extmods/states/helm_repos.py'
2019-04-30 21:42:57,954 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/httpng.py' to '/var/cache/salt/minion/extmods/states/httpng.py'
2019-04-30 21:42:57,954 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/ironicng.py' to '/var/cache/salt/minion/extmods/states/ironicng.py'
2019-04-30 21:42:57,955 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/ironicv1.py' to '/var/cache/salt/minion/extmods/states/ironicv1.py'
2019-04-30 21:42:57,955 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_approval.py' to '/var/cache/salt/minion/extmods/states/jenkins_approval.py'
2019-04-30 21:42:57,955 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_artifactory.py' to '/var/cache/salt/minion/extmods/states/jenkins_artifactory.py'
2019-04-30 21:42:57,956 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_credential.py' to '/var/cache/salt/minion/extmods/states/jenkins_credential.py'
2019-04-30 21:42:57,956 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_gerrit.py' to '/var/cache/salt/minion/extmods/states/jenkins_gerrit.py'
2019-04-30 21:42:57,956 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_globalenvprop.py' to '/var/cache/salt/minion/extmods/states/jenkins_globalenvprop.py'
2019-04-30 21:42:57,957 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_jira.py' to '/var/cache/salt/minion/extmods/states/jenkins_jira.py'
2019-04-30 21:42:57,957 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_job.py' to '/var/cache/salt/minion/extmods/states/jenkins_job.py'
2019-04-30 21:42:57,958 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_lib.py' to '/var/cache/salt/minion/extmods/states/jenkins_lib.py'
2019-04-30 21:42:57,958 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_node.py' to '/var/cache/salt/minion/extmods/states/jenkins_node.py'
2019-04-30 21:42:57,958 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_plugin.py' to '/var/cache/salt/minion/extmods/states/jenkins_plugin.py'
2019-04-30 21:42:57,959 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_security.py' to '/var/cache/salt/minion/extmods/states/jenkins_security.py'
2019-04-30 21:42:57,959 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_slack.py' to '/var/cache/salt/minion/extmods/states/jenkins_slack.py'
2019-04-30 21:42:57,959 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_smtp.py' to '/var/cache/salt/minion/extmods/states/jenkins_smtp.py'
2019-04-30 21:42:57,960 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_theme.py' to '/var/cache/salt/minion/extmods/states/jenkins_theme.py'
2019-04-30 21:42:57,960 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_throttle_category.py' to '/var/cache/salt/minion/extmods/states/jenkins_throttle_category.py'
2019-04-30 21:42:57,960 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_user.py' to '/var/cache/salt/minion/extmods/states/jenkins_user.py'
2019-04-30 21:42:57,961 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/jenkins_view.py' to '/var/cache/salt/minion/extmods/states/jenkins_view.py'
2019-04-30 21:42:57,961 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/keystone_policy.py' to '/var/cache/salt/minion/extmods/states/keystone_policy.py'
2019-04-30 21:42:57,961 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/keystoneng.py' to '/var/cache/salt/minion/extmods/states/keystoneng.py'
2019-04-30 21:42:57,962 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/keystonev3.py' to '/var/cache/salt/minion/extmods/states/keystonev3.py'
2019-04-30 21:42:57,962 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/kibana_object.py' to '/var/cache/salt/minion/extmods/states/kibana_object.py'
2019-04-30 21:42:57,963 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/maasng.py' to '/var/cache/salt/minion/extmods/states/maasng.py'
2019-04-30 21:42:57,963 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/manilang.py' to '/var/cache/salt/minion/extmods/states/manilang.py'
2019-04-30 21:42:57,964 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/neutronng.py' to '/var/cache/salt/minion/extmods/states/neutronng.py'
2019-04-30 21:42:57,964 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/neutronv2.py' to '/var/cache/salt/minion/extmods/states/neutronv2.py'
2019-04-30 21:42:57,964 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/novang.py' to '/var/cache/salt/minion/extmods/states/novang.py'
2019-04-30 21:42:57,965 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/novav21.py' to '/var/cache/salt/minion/extmods/states/novav21.py'
2019-04-30 21:42:57,965 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/octaviav2.py' to '/var/cache/salt/minion/extmods/states/octaviav2.py'
2019-04-30 21:42:57,966 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/ovs_config.py' to '/var/cache/salt/minion/extmods/states/ovs_config.py'
2019-04-30 21:42:57,966 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/powerdns_mysql.py' to '/var/cache/salt/minion/extmods/states/powerdns_mysql.py'
2019-04-30 21:42:57,966 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/reclass.py' to '/var/cache/salt/minion/extmods/states/reclass.py'
2019-04-30 21:42:57,967 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/rundeck_project.py' to '/var/cache/salt/minion/extmods/states/rundeck_project.py'
2019-04-30 21:42:57,967 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/rundeck_scm.py' to '/var/cache/salt/minion/extmods/states/rundeck_scm.py'
2019-04-30 21:42:57,968 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_states/rundeck_secret.py' to '/var/cache/salt/minion/extmods/states/rundeck_secret.py'
2019-04-30 21:42:57,970 [salt.utils.extmods:71  ][INFO    ][9877] Creating module dir '/var/cache/salt/minion/extmods/sdb'
2019-04-30 21:42:57,973 [salt.utils.extmods:82  ][INFO    ][9877] Syncing sdb for environment 'base'
2019-04-30 21:42:57,973 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_sdb, for base)
2019-04-30 21:42:57,973 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_sdb/' for environment 'base'
2019-04-30 21:42:58,012 [salt.utils.extmods:82  ][INFO    ][9877] Syncing grains for environment 'base'
2019-04-30 21:42:58,012 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_grains, for base)
2019-04-30 21:42:58,012 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_grains/' for environment 'base'
2019-04-30 21:42:58,088 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/ceilometer_policy.py' to '/var/cache/salt/minion/extmods/grains/ceilometer_policy.py'
2019-04-30 21:42:58,088 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/ceph.py' to '/var/cache/salt/minion/extmods/grains/ceph.py'
2019-04-30 21:42:58,088 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/cinder_policy.py' to '/var/cache/salt/minion/extmods/grains/cinder_policy.py'
2019-04-30 21:42:58,089 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/docker_swarm.py' to '/var/cache/salt/minion/extmods/grains/docker_swarm.py'
2019-04-30 21:42:58,089 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/glance_policy.py' to '/var/cache/salt/minion/extmods/grains/glance_policy.py'
2019-04-30 21:42:58,089 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/heat_policy.py' to '/var/cache/salt/minion/extmods/grains/heat_policy.py'
2019-04-30 21:42:58,089 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/jenkins_plugins.py' to '/var/cache/salt/minion/extmods/grains/jenkins_plugins.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/keystone_policy.py' to '/var/cache/salt/minion/extmods/grains/keystone_policy.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/kubernetes.py' to '/var/cache/salt/minion/extmods/grains/kubernetes.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/neutron_policy.py' to '/var/cache/salt/minion/extmods/grains/neutron_policy.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/nova_policy.py' to '/var/cache/salt/minion/extmods/grains/nova_policy.py'
2019-04-30 21:42:58,090 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_grains/ssh_fingerprints.py' to '/var/cache/salt/minion/extmods/grains/ssh_fingerprints.py'
2019-04-30 21:42:58,091 [salt.utils.extmods:71  ][INFO    ][9877] Creating module dir '/var/cache/salt/minion/extmods/renderers'
2019-04-30 21:42:58,094 [salt.utils.extmods:82  ][INFO    ][9877] Syncing renderers for environment 'base'
2019-04-30 21:42:58,094 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_renderers, for base)
2019-04-30 21:42:58,094 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_renderers/' for environment 'base'
2019-04-30 21:42:58,136 [salt.utils.extmods:82  ][INFO    ][9877] Syncing returners for environment 'base'
2019-04-30 21:42:58,136 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_returners, for base)
2019-04-30 21:42:58,136 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_returners/' for environment 'base'
2019-04-30 21:42:58,174 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_returners/postgres_graph_db.py' to '/var/cache/salt/minion/extmods/returners/postgres_graph_db.py'
2019-04-30 21:42:58,175 [salt.utils.extmods:71  ][INFO    ][9877] Creating module dir '/var/cache/salt/minion/extmods/output'
2019-04-30 21:42:58,178 [salt.utils.extmods:82  ][INFO    ][9877] Syncing output for environment 'base'
2019-04-30 21:42:58,178 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_output, for base)
2019-04-30 21:42:58,178 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_output/' for environment 'base'
2019-04-30 21:42:58,215 [salt.utils.extmods:71  ][INFO    ][9877] Creating module dir '/var/cache/salt/minion/extmods/utils'
2019-04-30 21:42:58,217 [salt.utils.extmods:82  ][INFO    ][9877] Syncing utils for environment 'base'
2019-04-30 21:42:58,217 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_utils, for base)
2019-04-30 21:42:58,218 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_utils/' for environment 'base'
2019-04-30 21:42:58,252 [salt.utils.extmods:71  ][INFO    ][9877] Creating module dir '/var/cache/salt/minion/extmods/log_handlers'
2019-04-30 21:42:58,255 [salt.utils.extmods:82  ][INFO    ][9877] Syncing log_handlers for environment 'base'
2019-04-30 21:42:58,255 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_log_handlers, for base)
2019-04-30 21:42:58,255 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_log_handlers/' for environment 'base'
2019-04-30 21:42:58,292 [salt.utils.extmods:71  ][INFO    ][9877] Creating module dir '/var/cache/salt/minion/extmods/proxy'
2019-04-30 21:42:58,296 [salt.utils.extmods:82  ][INFO    ][9877] Syncing proxy for environment 'base'
2019-04-30 21:42:58,296 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_proxy, for base)
2019-04-30 21:42:58,296 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_proxy/' for environment 'base'
2019-04-30 21:42:58,334 [salt.utils.extmods:82  ][INFO    ][9877] Syncing engines for environment 'base'
2019-04-30 21:42:58,334 [salt.utils.extmods:86  ][INFO    ][9877] Loading cache from salt://_engines, for base)
2019-04-30 21:42:58,334 [salt.fileclient  :230 ][INFO    ][9877] Caching directory '_engines/' for environment 'base'
2019-04-30 21:42:58,380 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_engines/architect.py' to '/var/cache/salt/minion/extmods/engines/architect.py'
2019-04-30 21:42:58,380 [salt.utils.extmods:111 ][INFO    ][9877] Copying '/var/cache/salt/minion/files/base/_engines/saltgraph.py' to '/var/cache/salt/minion/extmods/engines/saltgraph.py'
2019-04-30 21:42:58,383 [salt.minion      :1711][INFO    ][9877] Returning information for job: 20190430214253441001
2019-04-30 22:01:02,570 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.sls with jid 20190430220102562635
2019-04-30 22:01:02,581 [salt.minion      :1432][INFO    ][9989] Starting a new job with PID 9989
2019-04-30 22:01:03,235 [salt.state       :915 ][INFO    ][9989] Loading fresh modules for state activity
2019-04-30 22:01:03,261 [salt.fileclient  :1219][INFO    ][9989] Fetching file from saltenv 'base', ** done ** 'glusterfs/client.sls'
2019-04-30 22:01:03,288 [salt.fileclient  :1219][INFO    ][9989] Fetching file from saltenv 'base', ** done ** 'glusterfs/map.jinja'
2019-04-30 22:01:03,304 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:01:03,558 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command 'systemd-escape -p --suffix=mount /var/lib/nova/instances' in directory '/root'
2019-04-30 22:01:04,023 [salt.state       :1780][INFO    ][9989] Running state [glusterfs-client] at time 22:01:04.023691
2019-04-30 22:01:04,024 [salt.state       :1813][INFO    ][9989] Executing state pkg.installed for [glusterfs-client]
2019-04-30 22:01:04,038 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['apt-cache', '-q', 'policy', 'glusterfs-client'] in directory '/root'
2019-04-30 22:01:04,080 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:01:06,387 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:01:06,401 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'glusterfs-client'] in directory '/root'
2019-04-30 22:01:09,825 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:01:09,846 [salt.state       :300 ][INFO    ][9989] Made the following changes:
'python-jwt' changed from 'absent' to '1.3.0-1ubuntu0.1'
'glusterfs-client' changed from 'absent' to '3.13.2-ubuntu1~xenial2'
'libaio1' changed from 'absent' to '0.3.110-2'
'attr' changed from 'absent' to '1:2.4.47-2'
'libpython2.7' changed from 'absent' to '2.7.12-1ubuntu0~16.04.4'
'glusterfs-common' changed from 'absent' to '3.13.2-ubuntu1~xenial2'
'librdmacm1' changed from 'absent' to '1.0.21-1'
'liburcu4' changed from 'absent' to '0.9.1-3'
'libibverbs1' changed from 'absent' to '1.1.8-1.1ubuntu2'
'python-prettytable' changed from 'absent' to '0.7.2-3'

2019-04-30 22:01:09,869 [salt.state       :915 ][INFO    ][9989] Loading fresh modules for state activity
2019-04-30 22:01:09,889 [salt.state       :1951][INFO    ][9989] Completed state [glusterfs-client] at time 22:01:09.889283 duration_in_ms=5865.592
2019-04-30 22:01:09,892 [salt.state       :1780][INFO    ][9989] Running state [attr] at time 22:01:09.892873
2019-04-30 22:01:09,893 [salt.state       :1813][INFO    ][9989] Executing state pkg.installed for [attr]
2019-04-30 22:01:10,272 [salt.state       :300 ][INFO    ][9989] All specified packages are already installed
2019-04-30 22:01:10,272 [salt.state       :1951][INFO    ][9989] Completed state [attr] at time 22:01:10.272831 duration_in_ms=379.958
2019-04-30 22:01:10,274 [salt.state       :1780][INFO    ][9989] Running state [/etc/systemd/system/var-lib-nova-instances.mount] at time 22:01:10.274380
2019-04-30 22:01:10,274 [salt.state       :1813][INFO    ][9989] Executing state file.managed for [/etc/systemd/system/var-lib-nova-instances.mount]
2019-04-30 22:01:10,294 [salt.fileclient  :1219][INFO    ][9989] Fetching file from saltenv 'base', ** done ** 'glusterfs/files/glusterfs-client.mount'
2019-04-30 22:01:10,302 [salt.state       :300 ][INFO    ][9989] File changed:
New file
2019-04-30 22:01:10,302 [salt.state       :1951][INFO    ][9989] Completed state [/etc/systemd/system/var-lib-nova-instances.mount] at time 22:01:10.302522 duration_in_ms=28.142
2019-04-30 22:01:10,303 [salt.state       :1780][INFO    ][9989] Running state [var-lib-nova-instances.mount] at time 22:01:10.303241
2019-04-30 22:01:10,303 [salt.state       :1813][INFO    ][9989] Executing state service.running for [var-lib-nova-instances.mount]
2019-04-30 22:01:10,303 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemctl', 'status', 'var-lib-nova-instances.mount', '-n', '0'] in directory '/root'
2019-04-30 22:01:10,316 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemctl', 'is-active', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:10,323 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:10,332 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:10,375 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemctl', 'is-active', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:10,381 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:10,390 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:10,400 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:10,489 [salt.loaded.int.module.cmdmod:395 ][INFO    ][9989] Executing command ['systemctl', 'is-enabled', 'var-lib-nova-instances.mount'] in directory '/root'
2019-04-30 22:01:10,497 [salt.state       :300 ][INFO    ][9989] {'var-lib-nova-instances.mount': True}
2019-04-30 22:01:10,497 [salt.state       :1951][INFO    ][9989] Completed state [var-lib-nova-instances.mount] at time 22:01:10.497412 duration_in_ms=194.17
2019-04-30 22:01:10,498 [salt.minion      :1711][INFO    ][9989] Returning information for job: 20190430220102562635
2019-04-30 22:24:07,803 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.sls with jid 20190430222407792286
2019-04-30 22:24:07,814 [salt.minion      :1432][INFO    ][11317] Starting a new job with PID 11317
2019-04-30 22:24:11,392 [salt.state       :915 ][INFO    ][11317] Loading fresh modules for state activity
2019-04-30 22:24:11,424 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/init.sls'
2019-04-30 22:24:11,444 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/volume.sls'
2019-04-30 22:24:11,515 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/user.sls'
2019-04-30 22:24:11,910 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/_ssl/volume_mysql.sls'
2019-04-30 22:24:11,963 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/_ssl/rabbitmq.sls'
2019-04-30 22:24:12,405 [salt.state       :1780][INFO    ][11317] Running state [cinder] at time 22:24:12.405264
2019-04-30 22:24:12,405 [salt.state       :1813][INFO    ][11317] Executing state group.present for [cinder]
2019-04-30 22:24:12,406 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['groupadd', '-g 304', '-r', 'cinder'] in directory '/root'
2019-04-30 22:24:12,466 [salt.state       :300 ][INFO    ][11317] {'passwd': 'x', 'gid': 304, 'name': 'cinder', 'members': []}
2019-04-30 22:24:12,466 [salt.state       :1951][INFO    ][11317] Completed state [cinder] at time 22:24:12.466227 duration_in_ms=60.964
2019-04-30 22:24:12,466 [salt.state       :1780][INFO    ][11317] Running state [cinder] at time 22:24:12.466513
2019-04-30 22:24:12,466 [salt.state       :1813][INFO    ][11317] Executing state user.present for [cinder]
2019-04-30 22:24:12,467 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['useradd', '-s', '/bin/false', '-u', '304', '-g', '304', '-m', '-d', '/var/lib/cinder', '-r', 'cinder'] in directory '/root'
2019-04-30 22:24:12,587 [salt.state       :300 ][INFO    ][11317] {'shell': '/bin/false', 'workphone': '', 'uid': 304, 'passwd': 'x', 'roomnumber': '', 'groups': ['cinder'], 'home': '/var/lib/cinder', 'name': 'cinder', 'gid': 304, 'fullname': '', 'homephone': ''}
2019-04-30 22:24:12,587 [salt.state       :1951][INFO    ][11317] Completed state [cinder] at time 22:24:12.587556 duration_in_ms=121.042
2019-04-30 22:24:12,587 [salt.state       :1780][INFO    ][11317] Running state [cinder-volume] at time 22:24:12.587865
2019-04-30 22:24:12,588 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [cinder-volume]
2019-04-30 22:24:12,588 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:24:12,862 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['apt-cache', '-q', 'policy', 'cinder-volume'] in directory '/root'
2019-04-30 22:24:12,903 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:24:14,433 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:24:14,446 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cinder-volume'] in directory '/root'
2019-04-30 22:24:22,851 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222422840207
2019-04-30 22:24:22,861 [salt.minion      :1432][INFO    ][12196] Starting a new job with PID 12196
2019-04-30 22:24:22,870 [salt.minion      :1711][INFO    ][12196] Returning information for job: 20190430222422840207
2019-04-30 22:24:52,900 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222452889620
2019-04-30 22:24:52,912 [salt.minion      :1432][INFO    ][14609] Starting a new job with PID 14609
2019-04-30 22:24:52,923 [salt.minion      :1711][INFO    ][14609] Returning information for job: 20190430222452889620
2019-04-30 22:25:09,445 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:25:09,470 [salt.state       :300 ][INFO    ][11317] Made the following changes:
'python-routes' changed from 'absent' to '2.4.1-1~u16.04+mcp2'
'python-retrying' changed from 'absent' to '1.3.3-1'
'libiscsi2' changed from 'absent' to '1.12.0-2'
'python-os-service-types' changed from 'absent' to '1.3.0-1~u16.04+mcp'
'python-kombu' changed from 'absent' to '4.1.0-1~u16.04+mcp1'
'python-oslo.concurrency' changed from 'absent' to '3.27.0-2~u16.04+mcp7'
'python-kafka' changed from 'absent' to '1.3.2-1.1~u16.04+mcp2'
'python-sqlparse' changed from 'absent' to '0.2.2-1~u16.04+mcp1'
'python-monotonic' changed from 'absent' to '0.6-2'
'python-psycopg2' changed from 'absent' to '2.7.4-1.0~u16.04+mcp1'
'python-secretstorage' changed from 'absent' to '2.1.3-1'
'python2.7-numpy' changed from 'absent' to '1'
'python-glanceclient' changed from 'absent' to '1:2.13.1-3~u16.04+mcp4'
'python-formencode' changed from 'absent' to '1.3.0-0ubuntu5'
'python-functools32' changed from 'absent' to '3.2.3.2-2'
'python-migrate' changed from 'absent' to '0.11.0-1~u16.04+mcp3'
'python-cachetools' changed from 'absent' to '2.0.0-2.0~u16.04+mcp1'
'libboost-thread1.58.0' changed from 'absent' to '1.58.0+dfsg-5ubuntu3.1'
'libquadmath0' changed from 'absent' to '5.4.0-6ubuntu1~16.04.11'
'python-editor' changed from 'absent' to '0.4-2'
'python-egenix-mxtools' changed from 'absent' to '3.2.9-1'
'python-blinker' changed from 'absent' to '1.3.dfsg2-1build1'
'python-roman' changed from 'absent' to '2.0.0-2'
'python-pastescript' changed from 'absent' to '2.0.2-2~u16.04+mcp'
'sg3-utils' changed from 'absent' to '1.40-0ubuntu1'
'python-tenacity' changed from 'absent' to '4.12.0-1~u16.04+mcp'
'python-oslo.versionedobjects' changed from 'absent' to '1.33.3-1~u16.04+mcp9'
'python-setuptools' changed from 'absent' to '33.1.1-1~u16.04+mcp'
'docutils-doc' changed from 'absent' to '0.12+dfsg-1'
'python-dbus' changed from 'absent' to '1.2.0-3'
'librados2' changed from 'absent' to '10.2.11-0ubuntu0.16.04.1'
'python-traceback2' changed from 'absent' to '1.4.0-3'
'python-fixtures' changed from 'absent' to '3.0.0-1.1~u16.04+mcp2'
'python-pycadf' changed from 'absent' to '2.7.0-1~u16.04+mcp5'
'python-httplib2' changed from 'absent' to '0.9.1+dfsg-1'
'python-testtools' changed from 'absent' to '2.3.0-1.0~u16.04+mcp1'
'libboost-system1.58.0' changed from 'absent' to '1.58.0+dfsg-5ubuntu3.1'
'python-egenix-mxdatetime' changed from 'absent' to '3.2.9-1'
'python-anyjson' changed from 'absent' to '0.3.3-1build1'
'libnss3-nssdb' changed from 'absent' to '2:3.28.4-0ubuntu0.16.04.5'
'python-pymemcache' changed from 'absent' to '1.3.2-2ubuntu1'
'libblas3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-dnspython' changed from 'absent' to '1.14.0-3.1~u16.04+mcp2'
'python-babel' changed from 'absent' to '2.6.0+dfsg.1-1~u16.04+mcp'
'python2.7-paramiko' changed from 'absent' to '1'
'python-jsonschema' changed from 'absent' to '2.6.0-2.0~u16.04+mcp1'
'python-pil' changed from 'absent' to '3.1.2-0ubuntu1.1'
'python-oslo.privsep' changed from 'absent' to '1.29.0-1~u16.04+mcp'
'python2.7-lxml' changed from 'absent' to '1'
'python-suds' changed from 'absent' to '0.7~git20150727.94664dd-3'
'python-oslo.db' changed from 'absent' to '4.40.1-1~u16.04+mcp5'
'python2.7-sqlalchemy-ext' changed from 'absent' to '1'
'libnspr4' changed from 'absent' to '2:4.13.1-0ubuntu0.16.04.1'
'python-os-win' changed from 'absent' to '3.0.0-1.0~u16.04+mcp2'
'python-tz' changed from 'absent' to '2014.10~dfsg1-0ubuntu2'
'python2.7-simplejson' changed from 'absent' to '1'
'libtiff5' changed from 'absent' to '4.0.6-1ubuntu0.6'
'python-funcsigs' changed from 'absent' to '1.0.2-4.0~u16.04+mcp1'
'python-scgi' changed from 'absent' to '1.13-1.1build1'
'python2.7-pil' changed from 'absent' to '1'
'os-brick-common' changed from 'absent' to '2.5.6-1~u16.04+mcp3'
'python-repoze.lru' changed from 'absent' to '0.6-6'
'python-posix-ipc' changed from 'absent' to '0.9.8-2build2'
'formencode-i18n' changed from 'absent' to '1.3.0-0ubuntu5'
'python2.7-testtools' changed from 'absent' to '1'
'python-alembic' changed from 'absent' to '1.0.0-2~u16.04+mcp'
'docutils' changed from 'absent' to '1'
'python2.7-dbus' changed from 'absent' to '1'
'python-oslo.middleware' changed from 'absent' to '3.36.0-1~u16.04+mcp6'
'python-pygments' changed from 'absent' to '2.2.0+dfsg-1~u16.04+mcp2'
'python-pillow' changed from 'absent' to '1'
'libpaperg' changed from 'absent' to '1'
'liblapack.so.3' changed from 'absent' to '1'
'python2.7-netifaces' changed from 'absent' to '1'
'python-numpy-dev' changed from 'absent' to '1'
'liblcms2-2' changed from 'absent' to '2.6-3ubuntu2.1'
'docutils-common' changed from 'absent' to '0.12+dfsg-1'
'python-oslo.context' changed from 'absent' to '1:2.21.0-1~u16.04+mcp4'
'qemu-block-extra' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'sharutils' changed from 'absent' to '1:4.15.2-1ubuntu0.1'
'python-cursive' changed from 'absent' to '0.2.1-1.0~u16.04+mcp1'
'qemu-utils' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'python-oslo.cache' changed from 'absent' to '1.30.2-1~u16.04+mcp3'
'python2.7-pyinotify' changed from 'absent' to '1'
'python-webob' changed from 'absent' to '1:1.8.2-1~u16.04+mcp'
'python-pyparsing' changed from 'absent' to '2.2.0+dfsg1-2~u16.04+mcp1'
'python-babel-localedata' changed from 'absent' to '2.6.0+dfsg.1-1~u16.04+mcp'
'python-mimeparse' changed from 'absent' to '0.1.4-1build1'
'python-barbicanclient' changed from 'absent' to '4.7.2-2~u16.04+mcp4'
'python-castellan' changed from 'absent' to '0.19.0-1~u16.04+mcp4'
'python-cmd2' changed from 'absent' to '0.6.8-1'
'python-oslo.vmware' changed from 'absent' to '2.26.0-2~u16.04+mcp'
'python-distribute' changed from 'absent' to '1'
'libconfig-general-perl' changed from 'absent' to '2.60-1'
'libsgutils2-2' changed from 'absent' to '1.40-0ubuntu1'
'python-iso8601' changed from 'absent' to '0.1.11-1'
'python-jsonpatch' changed from 'absent' to '1.21-1~u16.04+mcp1'
'libwebpmux1' changed from 'absent' to '0.4.4-1'
'tgt' changed from 'absent' to '1:1.0.63-1ubuntu1.1'
'python-testscenarios' changed from 'absent' to '0.4-4'
'python-oslo.policy' changed from 'absent' to '1.38.1-1~u16.04+mcp'
'python-stevedore' changed from 'absent' to '1:1.29.0-1~u16.04+mcp4'
'python-paste' changed from 'absent' to '2.0.3+dfsg-4.1~u16.04+mcp1'
'python-lxml' changed from 'absent' to '3.5.0-1ubuntu0.1'
'python-oslo.config' changed from 'absent' to '1:6.4.0-1~u16.04+mcp'
'libnss3' changed from 'absent' to '2:3.28.4-0ubuntu0.16.04.5'
'python-paramiko' changed from 'absent' to '2.0.0-1.1~u16.04+mcp2'
'python-futurist' changed from 'absent' to '1.6.0-1.0~u16.04+mcp7'
'python-f2py' changed from 'absent' to '1'
'libpaper1' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python-fasteners' changed from 'absent' to '0.12.0-2ubuntu1'
'python2.7-gi' changed from 'absent' to '1'
'python-linecache2' changed from 'absent' to '1.0.0-2'
'python-pastedeploy-tpl' changed from 'absent' to '1.5.2-1'
'python-oauthlib' changed from 'absent' to '1.0.3-1'
'python-oslo-db' changed from 'absent' to '1'
'python2.7-testscenarios' changed from 'absent' to '1'
'libblas-common' changed from 'absent' to '3.6.0-2ubuntu2'
'libgfortran3' changed from 'absent' to '5.4.0-6ubuntu1~16.04.11'
'python-gi' changed from 'absent' to '3.20.0-0ubuntu1'
'libpq5' changed from 'absent' to '9.5.16-0ubuntu0.16.04.1'
'pycadf-common' changed from 'absent' to '2.7.0-1~u16.04+mcp5'
'python-contextlib2' changed from 'absent' to '0.5.1-1'
'libjpeg8' changed from 'absent' to '8c-2ubuntu8'
'python-oslo.serialization' changed from 'absent' to '2.27.0-1~u16.04+mcp5'
'python-oslo.utils' changed from 'absent' to '3.36.4-1~u16.04+mcp'
'python-taskflow' changed from 'absent' to '3.1.0-1.0~u16.04+mcp9'
'python-cinder' changed from 'absent' to '2:13.0.4-0ubuntu3~u16.04+mcp65'
'python-automaton' changed from 'absent' to '1.15.0-1.0~u16.04+mcp5'
'python-warlock' changed from 'absent' to '1.2.0-2.0~u16.04+mcp1'
'python-oslo.rootwrap' changed from 'absent' to '5.14.1-1~u16.04+mcp6'
'python2.7-iso8601' changed from 'absent' to '1'
'python-numpy' changed from 'absent' to '1:1.11.0-1ubuntu1'
'python-simplejson' changed from 'absent' to '3.8.1-1ubuntu2'
'python-wrapt' changed from 'absent' to '1.8.0-5build2'
'python-tooz' changed from 'absent' to '1.60.1-1.0~u16.04+mcp2'
'python-docutils' changed from 'absent' to '0.12+dfsg-1'
'python-openid' changed from 'absent' to '2.2.5-6'
'python-pastedeploy' changed from 'absent' to '1.5.2-1'
'python2.7-cmd2' changed from 'absent' to '1'
'libpaper-utils' changed from 'absent' to '1.1.24+nmu4ubuntu1'
'python2.7-zope.interface' changed from 'absent' to '1'
'python-cliff' changed from 'absent' to '2.11.1-1~u16.04+mcp6'
'python-oslo.i18n' changed from 'absent' to '3.21.0-1~u16.04+mcp6'
'python-bs4' changed from 'absent' to '4.6.0-1~u16.04+mcp1'
'cinder-volume' changed from 'absent' to '2:13.0.4-0ubuntu3~u16.04+mcp65'
'python-oslo.reports' changed from 'absent' to '1.28.0-2~u16.04+mcp6'
'python2.7-greenlet' changed from 'absent' to '1'
'python-networkx' changed from 'absent' to '1.11-1ubuntu1'
'python-statsd' changed from 'absent' to '3.2.1-2~u16.04+mcp2'
'libboost-random1.58.0' changed from 'absent' to '1.58.0+dfsg-5ubuntu3.1'
'python-redis' changed from 'absent' to '2.10.5-1ubuntu1'
'python-oslo-utils' changed from 'absent' to '1'
'libblas.so.3' changed from 'absent' to '1'
'python-novaclient' changed from 'absent' to '2:11.0.0-2~u16.04+mcp20'
'python-unicodecsv' changed from 'absent' to '0.14.1-1'
'python-memcache' changed from 'absent' to '1.57+fixed-1~u16.04+mcp1'
'python-mock' changed from 'absent' to '2.0.0-1.1~u16.04+mcp2'
'python-rfc3986' changed from 'absent' to '0.3.1-2.1~u16.04+mcp2'
'python-eventlet' changed from 'absent' to '0.20.0-5~u16.04+mcp0'
'python-unittest2' changed from 'absent' to '1.1.0-6.1'
'python2.7-pyparsing' changed from 'absent' to '1'
'python-oslo.log' changed from 'absent' to '3.39.2-1~u16.04+mcp2'
'python-pyinotify' changed from 'absent' to '0.9.6-1.1~u16.04+mcp2'
'libjpeg-turbo8' changed from 'absent' to '1.4.2-0ubuntu3.1'
'python-amqp' changed from 'absent' to '2.3.2-1~u16.04+mcp2'
'python-pbr' changed from 'absent' to '4.2.0-4~u16.04+mcp1'
'libwebp5' changed from 'absent' to '0.4.4-1'
'python-zope.interface' changed from 'absent' to '4.1.3-1build1'
'python-numpy-abi9' changed from 'absent' to '1'
'python-vine' changed from 'absent' to '1.1.3+dfsg-2~u16.04+mcp3'
'python-defusedxml' changed from 'absent' to '0.5.0-1~u16.04+mcp1'
'python-kazoo' changed from 'absent' to '2.2.1-1ubuntu1'
'python-decorator' changed from 'absent' to '4.3.0-1~u16.04+mcp'
'python-osprofiler' changed from 'absent' to '2.3.0-1~u16.04+mcp'
'python-oslo.messaging' changed from 'absent' to '8.1.2-1~u16.04+mcp12'
'python-os-brick' changed from 'absent' to '2.5.6-1~u16.04+mcp3'
'python-debtcollector' changed from 'absent' to '1.20.0-2~u16.04+mcp'
'python-keyrings.alt' changed from 'absent' to '1.1.1-1'
'python-oslo-log' changed from 'absent' to '1'
'python-json-pointer' changed from 'absent' to '1.9-3'
'python-dogpile.cache' changed from 'absent' to '0.6.2-1.1~u16.04+mcp2'
'python-html5lib' changed from 'absent' to '0.999-4'
'python-swiftclient' changed from 'absent' to '1:3.6.0-2~u16.04+mcp6'
'liblapack3' changed from 'absent' to '3.6.0-2ubuntu2'
'python-testresources' changed from 'absent' to '2.0.0-1.0~u16.04+mcp1'
'python-keystoneclient' changed from 'absent' to '1:3.17.0-1~u16.04+mcp6'
'python-greenlet' changed from 'absent' to '0.4.15-1~u16.04+mcp'
'python-sqlalchemy-ext' changed from 'absent' to '1.2.10+ds1-1~u16.04+mcp'
'python-oslo.service' changed from 'absent' to '1.31.7-1~u16.04+mcp4'
'librbd1' changed from 'absent' to '10.2.11-0ubuntu0.16.04.1'
'python-oslo-context' changed from 'absent' to '1'
'python-keyring' changed from 'absent' to '8.5.1-1.1~u16.04+mcp2'
'python-zake' changed from 'absent' to '0.1.6-1'
'python-zopeinterface' changed from 'absent' to '1'
'libboost-iostreams1.58.0' changed from 'absent' to '1.58.0+dfsg-5ubuntu3.1'
'python-numpy-api10' changed from 'absent' to '1'
'python-keystoneauth1' changed from 'absent' to '3.10.0-1~u16.04+mcp10'
'python-tempita' changed from 'absent' to '0.5.2-1build1'
'python-sqlalchemy' changed from 'absent' to '1.2.10+ds1-1~u16.04+mcp'
'python-keystonemiddleware' changed from 'absent' to '5.2.0-2~u16.04+mcp10'
'python-zope' changed from 'absent' to '1'
'python-voluptuous' changed from 'absent' to '0.9.3-1.1~u16.04+mcp2'
'python-extras' changed from 'absent' to '1.0.0-2.0~u16.04+mcp1'
'cinder-common' changed from 'absent' to '2:13.0.4-0ubuntu3~u16.04+mcp65'
'python-oslo-rootwrap' changed from 'absent' to '1'
'python-netifaces' changed from 'absent' to '0.10.4-0.1build2'
'libjbig0' changed from 'absent' to '2.1-3.1'

2019-04-30 22:25:09,486 [salt.state       :915 ][INFO    ][11317] Loading fresh modules for state activity
2019-04-30 22:25:09,507 [salt.state       :1951][INFO    ][11317] Completed state [cinder-volume] at time 22:25:09.507504 duration_in_ms=56919.638
2019-04-30 22:25:09,511 [salt.state       :1780][INFO    ][11317] Running state [lvm2] at time 22:25:09.511089
2019-04-30 22:25:09,511 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [lvm2]
2019-04-30 22:25:10,351 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:10,352 [salt.state       :1951][INFO    ][11317] Completed state [lvm2] at time 22:25:10.352147 duration_in_ms=841.058
2019-04-30 22:25:10,352 [salt.state       :1780][INFO    ][11317] Running state [sysfsutils] at time 22:25:10.352426
2019-04-30 22:25:10,352 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [sysfsutils]
2019-04-30 22:25:10,357 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:10,357 [salt.state       :1951][INFO    ][11317] Completed state [sysfsutils] at time 22:25:10.357324 duration_in_ms=4.897
2019-04-30 22:25:10,357 [salt.state       :1780][INFO    ][11317] Running state [sg3-utils] at time 22:25:10.357544
2019-04-30 22:25:10,357 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [sg3-utils]
2019-04-30 22:25:10,362 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:10,362 [salt.state       :1951][INFO    ][11317] Completed state [sg3-utils] at time 22:25:10.362249 duration_in_ms=4.706
2019-04-30 22:25:10,362 [salt.state       :1780][INFO    ][11317] Running state [python-cinder] at time 22:25:10.362474
2019-04-30 22:25:10,362 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [python-cinder]
2019-04-30 22:25:10,366 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:10,367 [salt.state       :1951][INFO    ][11317] Completed state [python-cinder] at time 22:25:10.367059 duration_in_ms=4.585
2019-04-30 22:25:10,367 [salt.state       :1780][INFO    ][11317] Running state [python-mysqldb] at time 22:25:10.367279
2019-04-30 22:25:10,367 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [python-mysqldb]
2019-04-30 22:25:10,380 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:25:10,395 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-mysqldb'] in directory '/root'
2019-04-30 22:25:12,355 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:25:12,379 [salt.state       :300 ][INFO    ][11317] Made the following changes:
'python2.7-mysqldb' changed from 'absent' to '1'
'mysql-common' changed from 'absent' to '5.7.26-0ubuntu0.16.04.1'
'mysql-common-5.6' changed from 'absent' to '1'
'libmysqlclient20' changed from 'absent' to '5.7.26-0ubuntu0.16.04.1'
'python-mysqldb' changed from 'absent' to '1.3.7-1build2'

2019-04-30 22:25:12,391 [salt.state       :915 ][INFO    ][11317] Loading fresh modules for state activity
2019-04-30 22:25:12,410 [salt.state       :1951][INFO    ][11317] Completed state [python-mysqldb] at time 22:25:12.410713 duration_in_ms=2043.434
2019-04-30 22:25:12,414 [salt.state       :1780][INFO    ][11317] Running state [p7zip] at time 22:25:12.414054
2019-04-30 22:25:12,414 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [p7zip]
2019-04-30 22:25:12,812 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:25:12,827 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'p7zip'] in directory '/root'
2019-04-30 22:25:14,477 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:25:14,501 [salt.state       :300 ][INFO    ][11317] Made the following changes:
'p7zip' changed from 'absent' to '9.20.1~dfsg.1-4.2ubuntu0.1'

2019-04-30 22:25:14,513 [salt.state       :915 ][INFO    ][11317] Loading fresh modules for state activity
2019-04-30 22:25:14,602 [salt.state       :1951][INFO    ][11317] Completed state [p7zip] at time 22:25:14.602650 duration_in_ms=2188.596
2019-04-30 22:25:14,606 [salt.state       :1780][INFO    ][11317] Running state [gettext-base] at time 22:25:14.606006
2019-04-30 22:25:14,606 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [gettext-base]
2019-04-30 22:25:15,001 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:15,001 [salt.state       :1951][INFO    ][11317] Completed state [gettext-base] at time 22:25:15.001736 duration_in_ms=395.729
2019-04-30 22:25:15,002 [salt.state       :1780][INFO    ][11317] Running state [python-memcache] at time 22:25:15.002049
2019-04-30 22:25:15,002 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [python-memcache]
2019-04-30 22:25:15,006 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:15,007 [salt.state       :1951][INFO    ][11317] Completed state [python-memcache] at time 22:25:15.007079 duration_in_ms=5.029
2019-04-30 22:25:15,007 [salt.state       :1780][INFO    ][11317] Running state [python-pycadf] at time 22:25:15.007324
2019-04-30 22:25:15,007 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [python-pycadf]
2019-04-30 22:25:15,012 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:15,012 [salt.state       :1951][INFO    ][11317] Completed state [python-pycadf] at time 22:25:15.012164 duration_in_ms=4.84
2019-04-30 22:25:15,012 [salt.state       :1780][INFO    ][11317] Running state [cinder_volume_ssl_mysql] at time 22:25:15.012763
2019-04-30 22:25:15,012 [salt.state       :1813][INFO    ][11317] Executing state test.show_notification for [cinder_volume_ssl_mysql]
2019-04-30 22:25:15,013 [salt.state       :300 ][INFO    ][11317] Running cinder._ssl.volume_mysql
2019-04-30 22:25:15,013 [salt.state       :1951][INFO    ][11317] Completed state [cinder_volume_ssl_mysql] at time 22:25:15.013247 duration_in_ms=0.484
2019-04-30 22:25:15,013 [salt.state       :1780][INFO    ][11317] Running state [cinder_volume_ssl_rabbitmq] at time 22:25:15.013505
2019-04-30 22:25:15,013 [salt.state       :1813][INFO    ][11317] Executing state test.show_notification for [cinder_volume_ssl_rabbitmq]
2019-04-30 22:25:15,013 [salt.state       :300 ][INFO    ][11317] Running cinder._ssl.rabbitmq
2019-04-30 22:25:15,013 [salt.state       :1951][INFO    ][11317] Completed state [cinder_volume_ssl_rabbitmq] at time 22:25:15.013964 duration_in_ms=0.458
2019-04-30 22:25:15,015 [salt.state       :1780][INFO    ][11317] Running state [/var/lock/cinder] at time 22:25:15.015627
2019-04-30 22:25:15,015 [salt.state       :1813][INFO    ][11317] Executing state file.directory for [/var/lock/cinder]
2019-04-30 22:25:15,016 [salt.state       :300 ][INFO    ][11317] {'/var/lock/cinder': 'New Dir'}
2019-04-30 22:25:15,016 [salt.state       :1951][INFO    ][11317] Completed state [/var/lock/cinder] at time 22:25:15.016768 duration_in_ms=1.141
2019-04-30 22:25:15,017 [salt.state       :1780][INFO    ][11317] Running state [/etc/cinder/cinder.conf] at time 22:25:15.017145
2019-04-30 22:25:15,017 [salt.state       :1813][INFO    ][11317] Executing state file.managed for [/etc/cinder/cinder.conf]
2019-04-30 22:25:15,041 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/files/rocky/cinder.conf.volume.Debian'
2019-04-30 22:25:15,138 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_default.conf'
2019-04-30 22:25:15,162 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_log.conf'
2019-04-30 22:25:15,179 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/files/backend/_lvm.conf'
2019-04-30 22:25:15,191 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/castellan/_barbican.conf'
2019-04-30 22:25:15,204 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/keystoneauth/_type_password.conf'
2019-04-30 22:25:15,228 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/keystonemiddleware/_auth_token.conf'
2019-04-30 22:25:15,252 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_database.conf'
2019-04-30 22:25:15,271 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_notifications.conf'
2019-04-30 22:25:15,284 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/messaging/_rabbit.conf'
2019-04-30 22:25:15,322 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/queens/oslo/_middleware.conf'
2019-04-30 22:25:15,330 [salt.state       :300 ][INFO    ][11317] File changed:
--- 
+++ 
@@ -1,15 +1,4401 @@
+
 [DEFAULT]
+
+#
+# From cinder
+#
+
 rootwrap_config = /etc/cinder/rootwrap.conf
 api_paste_confg = /etc/cinder/api-paste.ini
+#
+# From oslo.messaging
+#
+
+# Size of RPC connection pool. (integer value)
+#rpc_conn_pool_size = 30
+
+# The pool size limit for connections expiration policy (integer
+# value)
+#conn_pool_min_size = 2
+
+# The time-to-live in sec of idle connections in the pool (integer
+# value)
+#conn_pool_ttl = 1200
+
+# ZeroMQ bind address. Should be a wildcard (*), an ethernet
+# interface, or IP. The "host" option should point or resolve to this
+# address. (string value)
+#rpc_zmq_bind_address = *
+
+# MatchMaker driver. (string value)
+# Possible values:
+# redis - <No description provided>
+# sentinel - <No description provided>
+# dummy - <No description provided>
+#rpc_zmq_matchmaker = redis
+
+# Number of ZeroMQ contexts, defaults to 1. (integer value)
+#rpc_zmq_contexts = 1
+
+# Maximum number of ingress messages to locally buffer per topic.
+# Default is unlimited. (integer value)
+#rpc_zmq_topic_backlog = <None>
+
+# Directory for holding IPC sockets. (string value)
+#rpc_zmq_ipc_dir = /var/run/openstack
+
+# Name of this node. Must be a valid hostname, FQDN, or IP address.
+# Must match "host" option, if running Nova. (string value)
+#rpc_zmq_host = localhost
+
+# Number of seconds to wait before all pending messages will be sent
+# after closing a socket. The default value of -1 specifies an
+# infinite linger period. The value of 0 specifies no linger period.
+# Pending messages shall be discarded immediately when the socket is
+# closed. Positive values specify an upper bound for the linger
+# period. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
+#zmq_linger = -1
+
+# The default number of seconds that poll should wait. Poll raises
+# timeout exception when timeout expired. (integer value)
+#rpc_poll_timeout = 1
+
+
+# Expiration timeout in seconds of a name service record about
+# existing target ( < 0 means no timeout). (integer value)
+#zmq_target_expire = 300
+
+# Update period in seconds of a name service record about existing
+# target. (integer value)
+#zmq_target_update = 180
+
+# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
+# (boolean value)
+#use_pub_sub = false
+
+# Use ROUTER remote proxy. (boolean value)
+#use_router_proxy = false
+
+# This option makes direct connections dynamic or static. It makes
+# sense only with use_router_proxy=False which means to use direct
+# connections for direct message types (ignored otherwise). (boolean
+# value)
+#use_dynamic_connections = false
+
+# How many additional connections to a host will be made for failover
+# reasons. This option is actual only in dynamic connections mode.
+# (integer value)
+#zmq_failover_connections = 2
+
+# Minimal port number for random ports range. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#rpc_zmq_min_port = 49153
+
+# Maximal port number for random ports range. (integer value)
+# Minimum value: 1
+# Maximum value: 65536
+#rpc_zmq_max_port = 65536
+
+# Number of retries to find free port number before fail with
+# ZMQBindError. (integer value)
+#rpc_zmq_bind_port_retries = 100
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#rpc_zmq_serialization = json
+
+# This option configures round-robin mode in zmq socket. True means
+# not keeping a queue when server side disconnects. False means to
+# keep queue and messages even if server is disconnected, when the
+# server appears we send all accumulated messages to it. (boolean
+# value)
+#zmq_immediate = true
+
+# Enable/disable TCP keepalive (KA) mechanism. The default value of -1
+# (or any other negative value) means to skip any overrides and leave
+# it to OS default; 0 and 1 (or any other positive value) mean to
+# disable and enable the option respectively. (integer value)
+#zmq_tcp_keepalive = -1
+
+# The duration between two keepalive transmissions in idle condition.
+# The unit is platform dependent, for example, seconds in Linux,
+# milliseconds in Windows etc. The default value of -1 (or any other
+# negative value and 0) means to skip any overrides and leave it to OS
+# default. (integer value)
+#zmq_tcp_keepalive_idle = -1
+
+# The number of retransmissions to be carried out before declaring
+# that remote end is not available. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_cnt = -1
+
+# The duration between two successive keepalive retransmissions, if
+# acknowledgement to the previous keepalive transmission is not
+# received. The unit is platform dependent, for example, seconds in
+# Linux, milliseconds in Windows etc. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_intvl = -1
+
+# Maximum number of (green) threads to work concurrently. (integer
+# value)
+#rpc_thread_pool_size = 100
+
+# Expiration timeout in seconds of a sent/received message after which
+# it is not tracked anymore by a client/server. (integer value)
+#rpc_message_ttl = 300
+
+# Wait for message acknowledgements from receivers. This mechanism
+# works only via proxy without PUB/SUB. (boolean value)
+#rpc_use_acks = false
+
+# Number of seconds to wait for an ack from a cast/call. After each
+# retry attempt this timeout is multiplied by some specified
+# multiplier. (integer value)
+#rpc_ack_timeout_base = 15
+
+# Number to multiply base ack timeout by after each retry attempt.
+# (integer value)
+#rpc_ack_timeout_multiplier = 2
+
+# Default number of message sending attempts in case of any problems
+# occurred: positive value N means at most N retries, 0 means no
+# retries, None or -1 (or any other negative values) mean to retry
+# forever. This option is used only if acknowledgments are enabled.
+# (integer value)
+#rpc_retry_attempts = 3
+
+# List of publisher hosts SubConsumer can subscribe on. This option
+# has higher priority then the default publishers list taken from the
+# matchmaker. (list value)
+#subscribe_on =
+
+# Size of executor thread pool when executor is threading or eventlet.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
+#executor_thread_pool_size = 64
+
+# Seconds to wait for a response from a call. (integer value)
+#rpc_response_timeout = 60
+rpc_response_timeout = 3600
+
+# The network address and optional user credentials for connecting to
+# the messaging backend, in URL format. The expected format is:
+#
+# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
+#
+# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
+#
+# For full details on the fields in the URL see the documentation of
+# oslo_messaging.TransportURL at
+# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
+# (string value)
+#transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
+
+# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
+# drivers include amqp and zmq. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rpc_backend = rabbit
+
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the transport_url
+# option. (string value)
+#control_exchange = openstack
+control_exchange = cinder
+
+#
+# From oslo.log
+#
+
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
+# Note: This option can be changed without restarting.
+#debug = false
+
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
+# Note: This option can be changed without restarting.
+# Deprecated group/name - [DEFAULT]/log_config
+
+# Defines the format string for %%(asctime)s in log records. Default:
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
+#log_date_format = %Y-%m-%d %H:%M:%S
+
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logfile
+#log_file = <None>
+
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logdir
+#log_dir = <None>
+
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages.This option is ignored if log_config_append is set.
+# (boolean value)
+#use_journal = false
+
+# Syslog facility to receive log lines. This option is ignored if
+# log_config_append is set. (string value)
+#syslog_log_facility = LOG_USER
+
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_json = false
+
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_stderr = false
+
+# Format string to use for log messages with context. (string value)
+#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
+# value)
+#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+
+# Defines the format string for %(user_identity)s that is used in
+# logging_context_format_string. (string value)
+#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+
+# Enables or disables publication of error events. (boolean value)
+#publish_errors = false
+
+# The format for an instance that is passed with the log message.
+# (string value)
+#instance_format = "[instance: %(uuid)s] "
+
+# The format for an instance UUID that is passed with the log message.
+# (string value)
+#instance_uuid_format = "[instance: %(uuid)s] "
+
+# Interval, number of seconds, of log rate limiting. (integer value)
+#rate_limit_interval = 0
+
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
+#rate_limit_burst = 0
+
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
+#rate_limit_except_level = CRITICAL
+
+# Enables or disables fatal status of deprecations. (boolean value)
+#fatal_deprecations = false
+
+# The maximum number of items that a collection resource returns in a single
+# response (integer value)
+#osapi_max_limit = 1000
+
+# Json file indicating user visible filter parameters for list queries. (string
+# value)
+# Deprecated group/name - [DEFAULT]/query_volume_filters
+#resource_query_filters_file = /etc/cinder/resource_filters.json
+
+# DEPRECATED: Volume filter options which non-admin user could use to query
+# volumes. Default values are: ['name', 'status', 'metadata',
+# 'availability_zone' ,'bootable', 'group_id'] (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#query_volume_filters = name,status,metadata,availability_zone,bootable,group_id
+
+# DEPRECATED: Allow the ability to modify the extra-spec settings of an in-use
+# volume-type. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#allow_inuse_volume_type_modification = false
+
+# Treat X-Forwarded-For as the canonical remote address. Only enable this if
+# you have a sanitizing proxy. (boolean value)
+#use_forwarded_for = false
+
+# Public url to use for versions endpoint. The default is None, which will use
+# the request's host_url attribute to populate the URL base. If Cinder is
+# operating behind a proxy, you will want to change this to represent the
+# proxy's URL. (string value)
+#public_endpoint = <None>
+
+# Backup services use same backend. (boolean value)
+#backup_use_same_host = false
+
+# Compression algorithm (None to disable) (string value)
+# Possible values:
+# none - <No description provided>
+# off - <No description provided>
+# no - <No description provided>
+# zlib - <No description provided>
+# gzip - <No description provided>
+# bz2 - <No description provided>
+# bzip2 - <No description provided>
+#backup_compression_algorithm = zlib
+
+# Backup metadata version to be used when backing up volume metadata. If this
+# number is bumped, make sure the service doing the restore supports the new
+# version. (integer value)
+#backup_metadata_version = 2
+
+# The number of chunks or objects, for which one Ceilometer notification will
+# be sent (integer value)
+#backup_object_number_per_notification = 10
+
+# Interval, in seconds, between two progress notifications reporting the backup
+# status (integer value)
+#backup_timer_interval = 120
+
+# Ceph configuration file to use. (string value)
+#backup_ceph_conf = /etc/ceph/ceph.conf
+
+# The Ceph user to connect with. Default here is to use the same user as for
+# Cinder volumes. If not using cephx this should be set to None. (string value)
+#backup_ceph_user = cinder
+
+# The chunk size, in bytes, that a backup is broken into before transfer to the
+# Ceph object store. (integer value)
+#backup_ceph_chunk_size = 134217728
+
+# The Ceph pool where volume backups are stored. (string value)
+#backup_ceph_pool = backups
+
+# RBD stripe unit to use when creating a backup image. (integer value)
+#backup_ceph_stripe_unit = 0
+
+# RBD stripe count to use when creating a backup image. (integer value)
+#backup_ceph_stripe_count = 0
+
+# If True, apply JOURNALING and EXCLUSIVE_LOCK feature bits to the backup RBD
+# objects to allow mirroring (boolean value)
+#backup_ceph_image_journals = false
+
+# If True, always discard excess bytes when restoring volumes i.e. pad with
+# zeroes. (boolean value)
+#restore_discard_excess_bytes = true
+
+# The GCS bucket to use. (string value)
+#backup_gcs_bucket = <None>
+
+# The size in bytes of GCS backup objects. (integer value)
+#backup_gcs_object_size = 52428800
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_gcs_object_size has to be multiple of backup_gcs_block_size. (integer
+# value)
+#backup_gcs_block_size = 32768
+
+# GCS object will be downloaded in chunks of bytes. (integer value)
+#backup_gcs_reader_chunk_size = 2097152
+
+# GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the
+# file is to be uploaded as a single chunk. (integer value)
+#backup_gcs_writer_chunk_size = 2097152
+
+# Number of times to retry. (integer value)
+#backup_gcs_num_retries = 3
+
+# List of GCS error codes. (list value)
+#backup_gcs_retry_error_codes = 429
+
+# Location of GCS bucket. (string value)
+#backup_gcs_bucket_location = US
+
+# Storage class of GCS bucket. (string value)
+#backup_gcs_storage_class = NEARLINE
+
+# Absolute path of GCS service account credential file. (string value)
+#backup_gcs_credential_file = <None>
+
+# Owner project id for GCS bucket. (string value)
+#backup_gcs_project_id = <None>
+
+# Http user-agent string for gcs api. (string value)
+#backup_gcs_user_agent = gcscinder
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the GCS backend storage. The default
+# value is True to enable the timer. (boolean value)
+#backup_gcs_enable_progress_timer = true
+
+# URL for http proxy access. (uri value)
+#backup_gcs_proxy_url = <None>
+
+# Base dir containing mount point for gluster share. (string value)
+#glusterfs_backup_mount_point = $state_path/backup_mount
+
+# GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format.
+# Eg: 1.2.3.4:backup_vol (string value)
+#glusterfs_backup_share = <None>
+
+# Base dir containing mount point for NFS share. (string value)
+#backup_mount_point_base = $state_path/backup_mount
+
+# NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format.
+# (string value)
+#backup_share = <None>
+
+# Mount options passed to the NFS client. See NFS man page for details. (string
+# value)
+#backup_mount_options = <None>
+
+# The maximum size in bytes of the files used to hold backups. If the volume
+# being backed up exceeds this size, then it will be backed up into multiple
+# files.backup_file_size must be a multiple of backup_sha_block_size_bytes.
+# (integer value)
+#backup_file_size = 1999994880
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_file_size has to be multiple of backup_sha_block_size_bytes. (integer
+# value)
+#backup_sha_block_size_bytes = 32768
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the backend storage. The default
+# value is True to enable the timer. (boolean value)
+#backup_enable_progress_timer = true
+
+# Path specifying where to store backups. (string value)
+#backup_posix_path = $state_path/backup
+
+# Custom directory to use for backups. (string value)
+#backup_container = <None>
+
+# The URL of the Swift endpoint (uri value)
+#backup_swift_url = <None>
+
+# The URL of the Keystone endpoint (uri value)
+#backup_swift_auth_url = <None>
+
+# Info to match when looking for swift in the service catalog. Format is:
+# separated values of the form: <service_type>:<service_name>:<endpoint_type> -
+# Only used if backup_swift_url is unset (string value)
+#swift_catalog_info = object-store:swift:publicURL
+
+# Info to match when looking for keystone in the service catalog. Format is:
+# separated values of the form: <service_type>:<service_name>:<endpoint_type> -
+# Only used if backup_swift_auth_url is unset (string value)
+#keystone_catalog_info = identity:Identity Service:publicURL
+
+# Swift authentication mechanism (per_user or single_user). (string value)
+# Possible values:
+# per_user - <No description provided>
+# single_user - <No description provided>
+#backup_swift_auth = per_user
+
+# Swift authentication version. Specify "1" for auth 1.0, or "2" for auth 2.0
+# or "3" for auth 3.0 (string value)
+#backup_swift_auth_version = 1
+
+# Swift tenant/account name. Required when connecting to an auth 2.0 system
+# (string value)
+#backup_swift_tenant = <None>
+
+# Swift user domain name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_user_domain = <None>
+
+# Swift project domain name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_project_domain = <None>
+
+# Swift project/account name. Required when connecting to an auth 3.0 system
+# (string value)
+#backup_swift_project = <None>
+
+# Swift user name (string value)
+#backup_swift_user = <None>
+
+# Swift key for authentication (string value)
+#backup_swift_key = <None>
+
+# The default Swift container to use (string value)
+#backup_swift_container = volumebackups
+
+# The size in bytes of Swift backup objects (integer value)
+#backup_swift_object_size = 52428800
+
+# The size in bytes that changes are tracked for incremental backups.
+# backup_swift_object_size has to be multiple of backup_swift_block_size.
+# (integer value)
+#backup_swift_block_size = 32768
+
+# The number of retries to make for Swift operations (integer value)
+#backup_swift_retry_attempts = 3
+
+# The backoff time in seconds between Swift retries (integer value)
+#backup_swift_retry_backoff = 2
+
+# Enable or Disable the timer to send the periodic progress notifications to
+# Ceilometer when backing up the volume to the Swift backend storage. The
+# default value is True to enable the timer. (boolean value)
+#backup_swift_enable_progress_timer = true
+
+# Location of the CA certificate file to use for swift client requests. (string
+# value)
+#backup_swift_ca_cert_file = <None>
+
+# Bypass verification of server certificate when making SSL connection to
+# Swift. (boolean value)
+#backup_swift_auth_insecure = false
+
+# Volume prefix for the backup id when backing up to TSM (string value)
+#backup_tsm_volume_prefix = backup
+
+# TSM password for the running username (string value)
+#backup_tsm_password = password
+
+# Enable or Disable compression for backups (boolean value)
+#backup_tsm_compression = true
+
+# Driver to use for backups. (string value)
+#backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
+
+# Offload pending backup delete during backup service startup. If false, the
+# backup service will remain down until all pending backups are deleted.
+# (boolean value)
+#backup_service_inithost_offload = true
+
+# Size of the native threads pool for the backups.  Most backup drivers rely
+# heavily on this, it can be decreased for specific drivers that don't.
+# (integer value)
+# Minimum value: 20
+#backup_native_threads_pool_size = 60
+
+# Number of backup processes to launch. Improves performance with concurrent
+# backups. (integer value)
+# Minimum value: 1
+# Maximum value: 4
+#backup_workers = 1
+
+# Name of this cluster. Used to group volume hosts that share the same backend
+# configurations to work in HA Active-Active mode.  Active-Active is not yet
+# supported. (string value)
+#cluster = <None>
+
+# Top-level directory for maintaining cinder's state (string value)
+state_path = /var/lib/cinder
+
+# IP address of this host (host address value)
+#my_ip = <HOST_IP_ADDRESS>
+my_ip = 10.167.4.56
+
+# A list of the URLs of glance API servers available to cinder
+# ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to
+# http. (list value)
+glance_api_servers = http://10.167.4.35:9292
+
+# Number retries when downloading an image from glance (integer value)
+# Minimum value: 0
+glance_num_retries = 0
+
+# Allow to perform insecure SSL (https) requests to glance (https will be used
+# but cert validation will not be performed). (boolean value)
+#glance_api_insecure = false
+
+# Enables or disables negotiation of SSL layer compression. In some cases
+# disabling compression can improve data throughput, such as when high network
+# bandwidth is available and you use compressed image formats like qcow2.
+# (boolean value)
+#glance_api_ssl_compression = false
+
+# Location of ca certificates file to use for glance client requests. (string
+# value)
+
+# http/https timeout value for glance operations. If no value (None) is
+# supplied here, the glanceclient default value is used. (integer value)
+#glance_request_timeout = <None>
+
+# DEPRECATED: Deploy v2 of the Cinder API. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#enable_v2_api = true
+
+# Deploy v3 of the Cinder API. (boolean value)
+enable_v3_api = true
+
+# Enables or disables rate limit of the API. (boolean value)
+#api_rate_limit = true
+
+# Specify list of extensions to load when using osapi_volume_extension option
+# with cinder.api.contrib.select_extensions (list value)
+#osapi_volume_ext_list =
+
+# osapi volume extension to load (multi valued)
+osapi_volume_extension = cinder.api.contrib.standard_extensions
+
+# Full class name for the Manager for volume (string value)
+#volume_manager = cinder.volume.manager.VolumeManager
+
+# Full class name for the Manager for volume backup (string value)
+#backup_manager = cinder.backup.manager.BackupManager
+
+# Full class name for the Manager for scheduler (string value)
+#scheduler_manager = cinder.scheduler.manager.SchedulerManager
+
+# Name of this node.  This can be an opaque identifier. It is not necessarily a
+# host name, FQDN, or IP address. (host address value)
+#host = localhost
+
+# Availability zone of this node. Can be overridden per volume backend with the
+# option "backend_availability_zone". (string value)
+#storage_availability_zone = nova
+
+# Default availability zone for new volumes. If not set, the
+# storage_availability_zone option value is used as the default for new
+# volumes. (string value)
+#default_availability_zone = <None>
+
+# If the requested Cinder availability zone is unavailable, fall back to the
+# value of default_availability_zone, then storage_availability_zone, instead
+# of failing. (boolean value)
+allow_availability_zone_fallback = True
+
+# Default volume type to use (string value)
+#default_volume_type = <None>
+
+# Default group type to use (string value)
+#default_group_type = <None>
+
+# Time period for which to generate volume usages. The options are hour, day,
+# month, or year. (string value)
+#volume_usage_audit_period = month
+
+# Path to the rootwrap configuration file to use for running commands as root
+# (string value)
+#rootwrap_config = /etc/cinder/rootwrap.conf
+
+# Enable monkey patching (boolean value)
+#monkey_patch = false
+
+# List of modules/decorators to monkey patch (list value)
+#monkey_patch_modules =
+
+# Maximum time since last check-in for a service to be considered up (integer
+# value)
+#service_down_time = 60
+
+# The full class name of the volume API class to use (string value)
+#volume_api_class = cinder.volume.api.API
+
+# The full class name of the volume backup API class (string value)
+#backup_api_class = cinder.backup.api.API
+
+# The strategy to use for auth. Supports noauth or keystone. (string value)
+# Possible values:
+# noauth - <No description provided>
+# keystone - <No description provided>
+auth_strategy = keystone
+
+# A list of backend names to use. These backend names should be backed by a
+# unique [CONFIG] group with its options (list value)
+#enabled_backends = <None>
+default_volume_type=lvm-driver
+
+enabled_backends=lvm-driver
+
+
+# Whether snapshots count against gigabyte quota (boolean value)
+#no_snapshot_gb_quota = false
+
+# The full class name of the volume transfer API class (string value)
+#transfer_api_class = cinder.transfer.api.API
+
+# The full class name of the consistencygroup API class (string value)
+#consistencygroup_api_class = cinder.consistencygroup.api.API
+
+# The full class name of the group API class (string value)
+#group_api_class = cinder.group.api.API
+
+# The full class name of the compute API class to use (string value)
+#compute_api_class = cinder.compute.nova.API
+
+# ID of the project which will be used as the Cinder internal tenant. (string
+# value)
+#cinder_internal_tenant_project_id = <None>
+
+# ID of the user to be used in volume operations as the Cinder internal tenant.
+# (string value)
+#cinder_internal_tenant_user_id = <None>
+
+# Services to be added to the available pool on create (boolean value)
+#enable_new_services = true
+
+# Template string to be used to generate volume names (string value)
+volume_name_template = volume-%s
+
+# Template string to be used to generate snapshot names (string value)
+#snapshot_name_template = snapshot-%s
+
+# Template string to be used to generate backup names (string value)
+#backup_name_template = backup-%s
+
+# Driver to use for database access (string value)
+#db_driver = cinder.db
+
+# A list of url schemes that can be downloaded directly via the direct_url.
+# Currently supported schemes: [file, cinder]. (list value)
+#allowed_direct_url_schemes =
+
+# Info to match when looking for glance in the service catalog. Format is:
+# separated values of the form: <service_type>:<service_name>:<endpoint_type> -
+# Only used if glance_api_servers are not provided. (string value)
+#glance_catalog_info = image:glance:publicURL
+
+# Default core properties of image (list value)
+#glance_core_properties = checksum,container_format,disk_format,image_name,image_id,min_disk,min_ram,name,size
+
+# Directory used for temporary storage during image conversion (string value)
+#image_conversion_dir = $state_path/conversion
+
+# message minimum life in seconds. (integer value)
+#message_ttl = 2592000
+
+# interval between periodic task runs to clean expired messages in seconds.
+# (integer value)
+#message_reap_interval = 86400
+
+# Number of volumes allowed per project (integer value)
+#quota_volumes = 10
+
+# Number of volume snapshots allowed per project (integer value)
+#quota_snapshots = 10
+
+# Number of consistencygroups allowed per project (integer value)
+#quota_consistencygroups = 10
+
+# Number of groups allowed per project (integer value)
+#quota_groups = 10
+
+# Total amount of storage, in gigabytes, allowed for volumes and snapshots per
+# project (integer value)
+#quota_gigabytes = 1000
+
+# Number of volume backups allowed per project (integer value)
+#quota_backups = 10
+
+# Total amount of storage, in gigabytes, allowed for backups per project
+# (integer value)
+#quota_backup_gigabytes = 1000
+
+# Number of seconds until a reservation expires (integer value)
+#reservation_expire = 86400
+
+# Interval between periodic task runs to clean expired reservations in seconds.
+# (integer value)
+#reservation_clean_interval = $reservation_expire
+
+# Count of reservations until usage is refreshed (integer value)
+#until_refresh = 0
+
+# Number of seconds between subsequent usage refreshes (integer value)
+#max_age = 0
+
+# Default driver to use for quota checks (string value)
+#quota_driver = cinder.quota.DbQuotaDriver
+
+# Enables or disables use of default quota class with default quota. (boolean
+# value)
+#use_default_quota_class = true
+
+# Max size allowed per volume, in gigabytes (integer value)
+#per_volume_size_limit = -1
+
+# The scheduler host manager class to use (string value)
+#scheduler_host_manager = cinder.scheduler.host_manager.HostManager
+
+# Maximum number of attempts to schedule a volume (integer value)
+#scheduler_max_attempts = 3
+
+# Which filter class names to use for filtering hosts when not specified in the
+# request. (list value)
+#scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
+
+# Which weigher class names to use for weighing hosts. (list value)
+#scheduler_default_weighers = CapacityWeigher
+
+# Which handler to use for selecting the host/pool after weighing (string
+# value)
+#scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler
+
+# Default scheduler driver to use (string value)
+#scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
+
+# Absolute path to scheduler configuration JSON file. (string value)
+#scheduler_json_config_location =
+
+# Multiplier used for weighing free capacity. Negative numbers mean to stack vs
+# spread. (floating point value)
+#capacity_weight_multiplier = 1.0
+
+# Multiplier used for weighing allocated capacity. Positive numbers mean to
+# stack vs spread. (floating point value)
+#allocated_capacity_weight_multiplier = -1.0
+
+# Multiplier used for weighing volume number. Negative numbers mean to spread
+# vs stack. (floating point value)
+#volume_number_multiplier = -1.0
+
+# Interval, in seconds, between nodes reporting state to datastore (integer
+# value)
+#report_interval = 10
+
+# Interval, in seconds, between running periodic tasks (integer value)
+#periodic_interval = 60
+
+# Range, in seconds, to randomly delay when starting the periodic task
+# scheduler to reduce stampeding. (Disable by setting to 0) (integer value)
+#periodic_fuzzy_delay = 60
+
+# IP address on which OpenStack Volume API listens (string value)
+osapi_volume_listen = 10.167.4.56
+
+# Port on which OpenStack Volume API listens (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#osapi_volume_listen_port = 8776
+
+# Number of workers for OpenStack Volume API service. The default is equal to
+# the number of CPUs available. (integer value)
+osapi_volume_workers = 4
+
+# Wraps the socket in a SSL context if True is set. A certificate file and key
+# file must be specified. (boolean value)
+#osapi_volume_use_ssl = false
+
+# Option to enable strict host key checking.  When set to "True" Cinder will
+# only connect to systems with a host key present in the configured
+# "ssh_hosts_key_file".  When set to "False" the host key will be saved upon
+# first connection and used for subsequent connections.  Default=False (boolean
+# value)
+#strict_ssh_host_key_policy = false
+
+# File containing SSH host keys for the systems with which Cinder needs to
+# communicate.  OPTIONAL: Default=$state_path/ssh_known_hosts (string value)
+#ssh_hosts_key_file = $state_path/ssh_known_hosts
+
+# The number of characters in the salt. (integer value)
+#volume_transfer_salt_length = 8
+
+# The number of characters in the autogenerated auth key. (integer value)
+#volume_transfer_key_length = 16
+
+# Enables the Force option on upload_to_image. This enables running
+# upload_volume on in-use volumes for backends that support it. (boolean value)
+#enable_force_upload = false
+enable_force_upload = false
+
+# Create volume from snapshot at the host where snapshot resides (boolean
+# value)
+#snapshot_same_host = true
+
+# Ensure that the new volumes are the same AZ as snapshot or source volume
+# (boolean value)
+#cloned_volume_same_az = true
+
+# Cache volume availability zones in memory for the provided duration in
+# seconds (integer value)
+#az_cache_duration = 3600
+
+# Number of times to attempt to run flakey shell commands (integer value)
+#num_shell_tries = 3
+
+# The percentage of backend capacity is reserved (integer value)
+# Minimum value: 0
+# Maximum value: 100
+#reserved_percentage = 0
+
+# Prefix for iSCSI volumes (string value)
+# Deprecated group/name - [DEFAULT]/iscsi_target_prefix
+#target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSCSI daemon is listening on (string value)
+# Deprecated group/name - [DEFAULT]/iscsi_ip_address
+#target_ip_address = $my_ip
+
+# The list of secondary IP addresses of the iSCSI daemon (list value)
+#iscsi_secondary_ip_addresses =
+
+# The port that the iSCSI daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [DEFAULT]/iscsi_port
+#target_port = 3260
+
+# The maximum number of times to rescan targets to find volume (integer value)
+#num_volume_device_scan_tries = 3
+
+# The backend name for a given driver implementation (string value)
+volume_backend_name = DEFAULT
+
+# Do we attach/detach volumes in cinder using multipath for volume to image and
+# image to volume transfers? (boolean value)
+#use_multipath_for_image_xfer = false
+
+# If this is set to True, attachment of volumes for image transfer will be
+# aborted when multipathd is not running. Otherwise, it will fallback to single
+# path. (boolean value)
+#enforce_multipath_for_image_xfer = false
+
+# Method used to wipe old volumes (string value)
+# Possible values:
+# none - <No description provided>
+# zero - <No description provided>
+#volume_clear = zero
+volume_clear = none
+
+# Size in MiB to wipe at start of old volumes. 1024 MiBat max. 0 => all
+# (integer value)
+# Maximum value: 1024
+#volume_clear_size = 0
+
+# The flag to pass to ionice to alter the i/o priority of the process used to
+# zero a volume after deletion, for example "-c3" for idle only priority.
+# (string value)
+#volume_clear_ionice = <None>
+
+# Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI
+# support, scstadmin for SCST target support, ietadm for iSCSI Enterprise
+# Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, or fake
+# for testing. (string value)
+# Possible values:
+# tgtadm - <No description provided>
+# lioadm - <No description provided>
+# scstadmin - <No description provided>
+# iscsictl - <No description provided>
+# ietadm - <No description provided>
+# nvmet - <No description provided>
+# fake - <No description provided>
+# Deprecated group/name - [DEFAULT]/iscsi_helper
+target_helper = tgtadm
+
+# Volume configuration file storage directory (string value)
+volumes_dir = /var/lib/cinder/volumes
+
+# IET configuration file (string value)
+#iet_conf = /etc/iet/ietd.conf
+
+# Chiscsi (CXT) global defaults configuration file (string value)
+#chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf
+
+# Sets the behavior of the iSCSI target to either perform blockio or fileio
+# optionally, auto can be set and Cinder will autodetect type of backing device
+# (string value)
+# Possible values:
+# blockio - <No description provided>
+# fileio - <No description provided>
+# auto - <No description provided>
+#iscsi_iotype = fileio
+
+# The default block size used when copying/clearing volumes (string value)
+#volume_dd_blocksize = 1M
+
+# The blkio cgroup name to be used to limit bandwidth of volume copy (string
+# value)
+#volume_copy_blkio_cgroup_name = cinder-volume-copy
+
+# The upper limit of bandwidth of volume copy. 0 => unlimited (integer value)
+#volume_copy_bps_limit = 0
+
+# Sets the behavior of the iSCSI target to either perform write-back(on) or
+# write-through(off). This parameter is valid if target_helper is set to
+# tgtadm. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+#iscsi_write_cache = on
+
+# Sets the target-specific flags for the iSCSI target. Only used for tgtadm to
+# specify backing device flags using bsoflags option. The specified string is
+# passed as is to the underlying tool. (string value)
+#iscsi_target_flags =
+
+# Determines the target protocol for new volumes, created with tgtadm, lioadm
+# and nvmet target helpers. In order to enable RDMA, this parameter should be
+# set with the value "iser". The supported iSCSI protocol values are "iscsi"
+# and "iser", in case of nvmet target set to "nvmet_rdma". (string value)
+# Possible values:
+# iscsi - <No description provided>
+# iser - <No description provided>
+# nvmet_rdma - <No description provided>
+# Deprecated group/name - [DEFAULT]/iscsi_protocol
+#target_protocol = iscsi
+
+# The path to the client certificate key for verification, if the driver
+# supports it. (string value)
+#driver_client_cert_key = <None>
+
+# The path to the client certificate for verification, if the driver supports
+# it. (string value)
+#driver_client_cert = <None>
+
+# Tell driver to use SSL for connection to backend storage if the driver
+# supports it. (boolean value)
+#driver_use_ssl = false
+
+# Representation of the over subscription ratio when thin provisioning is
+# enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times
+# of the total physical capacity. If the ratio is 10.5, it means provisioned
+# capacity can be 10.5 times of the total physical capacity. A ratio of 1.0
+# means provisioned capacity cannot exceed the total physical capacity. If
+# ratio is 'auto', Cinder will automatically calculate the ratio based on the
+# provisioned capacity and the used space. If not set to auto, the ratio has to
+# be a minimum of 1.0. (string value)
+#max_over_subscription_ratio = 20.0
+
+# Certain ISCSI targets have predefined target names, SCST target driver uses
+# this name. (string value)
+#scst_target_iqn_name = <None>
+
+# SCST target implementation can choose from multiple SCST target drivers.
+# (string value)
+#scst_target_driver = iscsi
+
+# Option to enable/disable CHAP authentication for targets. (boolean value)
+#use_chap_auth = false
+
+# CHAP user name. (string value)
+#chap_username =
+
+# Password for specified CHAP account name. (string value)
+#chap_password =
+
+# Namespace for driver private data values to be saved in. (string value)
+#driver_data_namespace = <None>
+
+# String representation for an equation that will be used to filter hosts. Only
+# used when the driver filter is set to be used by the Cinder scheduler.
+# (string value)
+#filter_function = <None>
+
+# String representation for an equation that will be used to determine the
+# goodness of a host. Only used when using the goodness weigher is set to be
+# used by the Cinder scheduler. (string value)
+#goodness_function = <None>
+
+# If set to True the http client will validate the SSL certificate of the
+# backend endpoint. (boolean value)
+#driver_ssl_cert_verify = false
+
+# Can be used to specify a non default path to a CA_BUNDLE file or directory
+# with certificates of trusted CAs, which will be used to validate the backend
+# (string value)
+#driver_ssl_cert_path = <None>
+
+# List of options that control which trace info is written to the DEBUG log
+# level to assist developers. Valid values are method and api. (list value)
+#trace_flags = <None>
+
+# Multi opt of dictionaries to represent a replication target device.  This
+# option may be specified multiple times in a single config section to specify
+# multiple replication target devices.  Each entry takes the standard dict
+# config form: replication_device =
+# target_device_id:<required>,key1:value1,key2:value2... (dict value)
+#replication_device = <None>
+
+# If set to True, upload-to-image in raw format will create a cloned volume and
+# register its location to the image service, instead of uploading the volume
+# content. The cinder backend and locations support must be enabled in the
+# image service. (boolean value)
+#image_upload_use_cinder_backend = false
+
+# If set to True, the image volume created by upload-to-image will be placed in
+# the internal tenant. Otherwise, the image volume is created in the current
+# context's tenant. (boolean value)
+#image_upload_use_internal_tenant = false
+
+# Enable the image volume cache for this backend. (boolean value)
+#image_volume_cache_enabled = false
+
+# Max size of the image volume cache for this backend in GB. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_size_gb = 0
+
+# Max number of entries allowed in the image volume cache. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_count = 0
+
+# Report to clients of Cinder that the backend supports discard (aka.
+# trim/unmap). This will not actually change the behavior of the backend or the
+# client directly, it will only notify that it can be used. (boolean value)
+#report_discard_supported = false
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#storage_protocol = iscsi
+
+# If this is set to True, a temporary snapshot will be created for performing
+# non-disruptive backups. Otherwise a temporary volume will be cloned in order
+# to perform a backup. (boolean value)
+#backup_use_temp_snapshot = false
+
+# Set this to True when you want to allow an unsupported driver to start.
+# Drivers that haven't maintained a working CI system and testing are marked as
+# unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+# Availability zone for this volume backend. If not set, the
+# storage_availability_zone option value is used as the default for all
+# backends. (string value)
+#backend_availability_zone = <None>
+
+# The maximum number of times to rescan iSER targetto find volume (integer
+# value)
+#num_iser_scan_tries = 3
+
+# Prefix for iSER volumes (string value)
+#iser_target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSER daemon is listening on (string value)
+#iser_ip_address = $my_ip
+
+# The port that the iSER daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#iser_port = 3260
+
+# The name of the iSER target user-land tool to use (string value)
+#iser_helper = tgtadm
+
+# The port that the NVMe target is listening on. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nvmet_port_id = 1
+
+# The namespace id associated with the subsystem that will be created with the
+# path for the LVM volume. (integer value)
+#nvmet_ns_id = 10
+
+# DataCore virtual disk type (single/mirrored). Mirrored virtual disks require
+# two storage servers in the server group. (string value)
+# Possible values:
+# single - <No description provided>
+# mirrored - <No description provided>
+#datacore_disk_type = single
+
+# DataCore virtual disk storage profile. (string value)
+#datacore_storage_profile = <None>
+
+# List of DataCore disk pools that can be used by volume driver. (list value)
+#datacore_disk_pools =
+
+# Seconds to wait for a response from a DataCore API call. (integer value)
+# Minimum value: 1
+#datacore_api_timeout = 300
+
+# Seconds to wait for DataCore virtual disk to come out of the "Failed" state.
+# (integer value)
+# Minimum value: 0
+#datacore_disk_failed_delay = 15
+
+# List of iSCSI targets that cannot be used to attach volume. To prevent the
+# DataCore iSCSI volume driver from using some front-end targets in volume
+# attachment, specify this option and list the iqn and target machine for each
+# target as the value, such as <iqn:target name>, <iqn:target name>,
+# <iqn:target name>. (list value)
+#datacore_iscsi_unallowed_targets =
+
+# Configure CHAP authentication for iSCSI connections. (boolean value)
+#datacore_iscsi_chap_enabled = false
+
+# iSCSI CHAP authentication password storage file. (string value)
+#datacore_iscsi_chap_storage = <None>
+
+# Storage system autoexpand parameter for volumes (True/False) (boolean value)
+#instorage_mcs_vol_autoexpand = true
+
+# Storage system compression option for volumes (boolean value)
+#instorage_mcs_vol_compression = false
+
+# Enable InTier for volumes (boolean value)
+#instorage_mcs_vol_intier = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#instorage_mcs_allow_tenant_qos = false
+
+# Storage system grain size parameter for volumes (32/64/128/256) (integer
+# value)
+# Minimum value: 32
+# Maximum value: 256
+#instorage_mcs_vol_grainsize = 256
+
+# Storage system space-efficiency parameter for volumes (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#instorage_mcs_vol_rsize = 2
+
+# Storage system threshold for volume capacity warnings (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#instorage_mcs_vol_warning = 0
+
+# Maximum number of seconds to wait for LocalCopy to be prepared. (integer
+# value)
+# Minimum value: 1
+# Maximum value: 600
+#instorage_mcs_localcopy_timeout = 120
+
+# Specifies the InStorage LocalCopy copy rate to be used when creating a full
+# volume copy. The default is rate is 50, and the valid rates are 1-100.
+# (integer value)
+# Minimum value: 1
+# Maximum value: 100
+#instorage_mcs_localcopy_rate = 50
+
+# The I/O group in which to allocate volumes. It can be a comma-separated list
+# in which case the driver will select an io_group based on least number of
+# volumes associated with the io_group. (string value)
+#instorage_mcs_vol_iogrp = 0
+
+# Specifies secondary management IP or hostname to be used if san_ip is invalid
+# or becomes inaccessible. (string value)
+#instorage_san_secondary_ip = <None>
+
+# Comma separated list of storage system storage pools for volumes. (list
+# value)
+#instorage_mcs_volpool_name = volpool
+
+# Configure CHAP authentication for iSCSI connections (Default: Enabled)
+# (boolean value)
+#instorage_mcs_iscsi_chap_enabled = true
+
+# The StorPool template for volumes with no type. (string value)
+#storpool_template = <None>
+
+# The default StorPool chain replication value.  Used when creating a volume
+# with no specified type if storpool_template is not set.  Also used for
+# calculating the apparent free space reported in the stats. (integer value)
+#storpool_replication = 3
+
+# Create sparse Lun. (boolean value)
+#vrts_lun_sparse = true
+
+# VA config file. (string value)
+#vrts_target_config = /etc/cinder/vrts_target.xml
+
+# Timeout for creating the volume to migrate to when performing volume
+# migration (seconds) (integer value)
+#migration_create_volume_timeout_secs = 300
+
+# Offload pending volume delete during volume service startup (boolean value)
+#volume_service_inithost_offload = false
+
+# FC Zoning mode configured, only 'fabric' is supported now. (string value)
+#zoning_mode = <None>
+
+# Sets the value of TCP_KEEPALIVE (True/False) for each server socket. (boolean
+# value)
+#tcp_keepalive = true
+
+# Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not
+# supported on OS X. (integer value)
+#tcp_keepalive_interval = <None>
+
+# Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
+# (integer value)
+#tcp_keepalive_count = <None>
+
+
+[backend]
+
+#
+# From cinder
+#
+
+# Backend override of host value. (string value)
+#backend_host = <None>
+[lvm-driver]
+host=cmp002
+volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
+volume_backend_name=lvm-driver
 iscsi_helper = tgtadm
-volume_name_template = volume-%s
-volume_group = cinder-volumes
-verbose = True
-auth_strategy = keystone
-state_path = /var/lib/cinder
-lock_path = /var/lock/cinder
-volumes_dir = /var/lib/cinder/volumes
-enabled_backends = lvm
+volume_group = vgroot
+
+
+[backend_defaults]
+
+#
+# From cinder
+#
+
+# Number of times to attempt to run flakey shell commands (integer value)
+#num_shell_tries = 3
+
+# The percentage of backend capacity is reserved (integer value)
+# Minimum value: 0
+# Maximum value: 100
+#reserved_percentage = 0
+
+# Prefix for iSCSI volumes (string value)
+# Deprecated group/name - [backend_defaults]/iscsi_target_prefix
+#target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSCSI daemon is listening on (string value)
+# Deprecated group/name - [backend_defaults]/iscsi_ip_address
+#target_ip_address = $my_ip
+
+# The list of secondary IP addresses of the iSCSI daemon (list value)
+#iscsi_secondary_ip_addresses =
+
+# The port that the iSCSI daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# Deprecated group/name - [backend_defaults]/iscsi_port
+#target_port = 3260
+
+# The maximum number of times to rescan targets to find volume (integer value)
+#num_volume_device_scan_tries = 3
+
+# The backend name for a given driver implementation (string value)
+#volume_backend_name = <None>
+
+# Do we attach/detach volumes in cinder using multipath for volume to image and
+# image to volume transfers? (boolean value)
+#use_multipath_for_image_xfer = false
+
+# If this is set to True, attachment of volumes for image transfer will be
+# aborted when multipathd is not running. Otherwise, it will fallback to single
+# path. (boolean value)
+#enforce_multipath_for_image_xfer = false
+
+# Method used to wipe old volumes (string value)
+# Possible values:
+# none - <No description provided>
+# zero - <No description provided>
+#volume_clear = zero
+
+# Size in MiB to wipe at start of old volumes. 1024 MiBat max. 0 => all
+# (integer value)
+# Maximum value: 1024
+#volume_clear_size = 0
+
+# The flag to pass to ionice to alter the i/o priority of the process used to
+# zero a volume after deletion, for example "-c3" for idle only priority.
+# (string value)
+#volume_clear_ionice = <None>
+
+# Target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI
+# support, scstadmin for SCST target support, ietadm for iSCSI Enterprise
+# Target, iscsictl for Chelsio iSCSI Target, nvmet for NVMEoF support, or fake
+# for testing. (string value)
+# Possible values:
+# tgtadm - <No description provided>
+# lioadm - <No description provided>
+# scstadmin - <No description provided>
+# iscsictl - <No description provided>
+# ietadm - <No description provided>
+# nvmet - <No description provided>
+# fake - <No description provided>
+# Deprecated group/name - [backend_defaults]/iscsi_helper
+#target_helper = tgtadm
+
+# Volume configuration file storage directory (string value)
+#volumes_dir = $state_path/volumes
+
+# IET configuration file (string value)
+#iet_conf = /etc/iet/ietd.conf
+
+# Chiscsi (CXT) global defaults configuration file (string value)
+#chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf
+
+# Sets the behavior of the iSCSI target to either perform blockio or fileio
+# optionally, auto can be set and Cinder will autodetect type of backing device
+# (string value)
+# Possible values:
+# blockio - <No description provided>
+# fileio - <No description provided>
+# auto - <No description provided>
+#iscsi_iotype = fileio
+
+# The default block size used when copying/clearing volumes (string value)
+#volume_dd_blocksize = 1M
+
+# The blkio cgroup name to be used to limit bandwidth of volume copy (string
+# value)
+#volume_copy_blkio_cgroup_name = cinder-volume-copy
+
+# The upper limit of bandwidth of volume copy. 0 => unlimited (integer value)
+#volume_copy_bps_limit = 0
+
+# Sets the behavior of the iSCSI target to either perform write-back(on) or
+# write-through(off). This parameter is valid if target_helper is set to
+# tgtadm. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+#iscsi_write_cache = on
+
+# Sets the target-specific flags for the iSCSI target. Only used for tgtadm to
+# specify backing device flags using bsoflags option. The specified string is
+# passed as is to the underlying tool. (string value)
+#iscsi_target_flags =
+
+# Determines the target protocol for new volumes, created with tgtadm, lioadm
+# and nvmet target helpers. In order to enable RDMA, this parameter should be
+# set with the value "iser". The supported iSCSI protocol values are "iscsi"
+# and "iser", in case of nvmet target set to "nvmet_rdma". (string value)
+# Possible values:
+# iscsi - <No description provided>
+# iser - <No description provided>
+# nvmet_rdma - <No description provided>
+# Deprecated group/name - [backend_defaults]/iscsi_protocol
+#target_protocol = iscsi
+
+# The path to the client certificate key for verification, if the driver
+# supports it. (string value)
+#driver_client_cert_key = <None>
+
+# The path to the client certificate for verification, if the driver supports
+# it. (string value)
+#driver_client_cert = <None>
+
+# Tell driver to use SSL for connection to backend storage if the driver
+# supports it. (boolean value)
+#driver_use_ssl = false
+
+# Representation of the over subscription ratio when thin provisioning is
+# enabled. Default ratio is 20.0, meaning provisioned capacity can be 20 times
+# of the total physical capacity. If the ratio is 10.5, it means provisioned
+# capacity can be 10.5 times of the total physical capacity. A ratio of 1.0
+# means provisioned capacity cannot exceed the total physical capacity. If
+# ratio is 'auto', Cinder will automatically calculate the ratio based on the
+# provisioned capacity and the used space. If not set to auto, the ratio has to
+# be a minimum of 1.0. (string value)
+#max_over_subscription_ratio = 20.0
+
+# Certain ISCSI targets have predefined target names, SCST target driver uses
+# this name. (string value)
+#scst_target_iqn_name = <None>
+
+# SCST target implementation can choose from multiple SCST target drivers.
+# (string value)
+#scst_target_driver = iscsi
+
+# Option to enable/disable CHAP authentication for targets. (boolean value)
+#use_chap_auth = false
+
+# CHAP user name. (string value)
+#chap_username =
+
+# Password for specified CHAP account name. (string value)
+#chap_password =
+
+# Namespace for driver private data values to be saved in. (string value)
+#driver_data_namespace = <None>
+
+# String representation for an equation that will be used to filter hosts. Only
+# used when the driver filter is set to be used by the Cinder scheduler.
+# (string value)
+#filter_function = <None>
+
+# String representation for an equation that will be used to determine the
+# goodness of a host. Only used when using the goodness weigher is set to be
+# used by the Cinder scheduler. (string value)
+#goodness_function = <None>
+
+# If set to True the http client will validate the SSL certificate of the
+# backend endpoint. (boolean value)
+#driver_ssl_cert_verify = false
+
+# Can be used to specify a non default path to a CA_BUNDLE file or directory
+# with certificates of trusted CAs, which will be used to validate the backend
+# (string value)
+#driver_ssl_cert_path = <None>
+
+# List of options that control which trace info is written to the DEBUG log
+# level to assist developers. Valid values are method and api. (list value)
+#trace_flags = <None>
+
+# Multi opt of dictionaries to represent a replication target device.  This
+# option may be specified multiple times in a single config section to specify
+# multiple replication target devices.  Each entry takes the standard dict
+# config form: replication_device =
+# target_device_id:<required>,key1:value1,key2:value2... (dict value)
+#replication_device = <None>
+
+# If set to True, upload-to-image in raw format will create a cloned volume and
+# register its location to the image service, instead of uploading the volume
+# content. The cinder backend and locations support must be enabled in the
+# image service. (boolean value)
+#image_upload_use_cinder_backend = false
+
+# If set to True, the image volume created by upload-to-image will be placed in
+# the internal tenant. Otherwise, the image volume is created in the current
+# context's tenant. (boolean value)
+#image_upload_use_internal_tenant = false
+
+# Enable the image volume cache for this backend. (boolean value)
+#image_volume_cache_enabled = false
+
+# Max size of the image volume cache for this backend in GB. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_size_gb = 0
+
+# Max number of entries allowed in the image volume cache. 0 => unlimited.
+# (integer value)
+#image_volume_cache_max_count = 0
+
+# Report to clients of Cinder that the backend supports discard (aka.
+# trim/unmap). This will not actually change the behavior of the backend or the
+# client directly, it will only notify that it can be used. (boolean value)
+#report_discard_supported = false
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#storage_protocol = iscsi
+
+# If this is set to True, a temporary snapshot will be created for performing
+# non-disruptive backups. Otherwise a temporary volume will be cloned in order
+# to perform a backup. (boolean value)
+#backup_use_temp_snapshot = false
+
+# Set this to True when you want to allow an unsupported driver to start.
+# Drivers that haven't maintained a working CI system and testing are marked as
+# unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+# Availability zone for this volume backend. If not set, the
+# storage_availability_zone option value is used as the default for all
+# backends. (string value)
+#backend_availability_zone = <None>
+
+# The maximum number of times to rescan iSER targetto find volume (integer
+# value)
+#num_iser_scan_tries = 3
+
+# Prefix for iSER volumes (string value)
+#iser_target_prefix = iqn.2010-10.org.openstack:
+
+# The IP address that the iSER daemon is listening on (string value)
+#iser_ip_address = $my_ip
+
+# The port that the iSER daemon is listening on (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#iser_port = 3260
+
+# The name of the iSER target user-land tool to use (string value)
+#iser_helper = tgtadm
+
+# The port that the NVMe target is listening on. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nvmet_port_id = 1
+
+# The namespace id associated with the subsystem that will be created with the
+# path for the LVM volume. (integer value)
+#nvmet_ns_id = 10
+
+# Hostname for the CoprHD Instance (string value)
+#coprhd_hostname = <None>
+
+# Port for the CoprHD Instance (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#coprhd_port = 4443
+
+# Username for accessing the CoprHD Instance (string value)
+#coprhd_username = <None>
+
+# Password for accessing the CoprHD Instance (string value)
+#coprhd_password = <None>
+
+# Tenant to utilize within the CoprHD Instance (string value)
+#coprhd_tenant = <None>
+
+# Project to utilize within the CoprHD Instance (string value)
+#coprhd_project = <None>
+
+# Virtual Array to utilize within the CoprHD Instance (string value)
+#coprhd_varray = <None>
+
+# True | False to indicate if the storage array in CoprHD is VMAX or VPLEX
+# (boolean value)
+#coprhd_emulate_snapshot = false
+
+# Rest Gateway IP or FQDN for Scaleio (string value)
+#coprhd_scaleio_rest_gateway_host = None
+
+# Rest Gateway Port for Scaleio (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#coprhd_scaleio_rest_gateway_port = 4984
+
+# Username for Rest Gateway (string value)
+#coprhd_scaleio_rest_server_username = <None>
+
+# Rest Gateway Password (string value)
+#coprhd_scaleio_rest_server_password = <None>
+
+# verify server certificate (boolean value)
+#scaleio_verify_server_certificate = false
+
+# Server certificate path (string value)
+#scaleio_server_certificate_path = <None>
+
+# Datera API port. (string value)
+#datera_api_port = 7717
+
+# DEPRECATED: Datera API version. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#datera_api_version = 2
+
+# Timeout for HTTP 503 retry messages (integer value)
+#datera_503_timeout = 120
+
+# Interval between 503 retries (integer value)
+#datera_503_interval = 5
+
+# True to set function arg and return logging (boolean value)
+#datera_debug = false
+
+# ONLY FOR DEBUG/TESTING PURPOSES
+# True to set replica_count to 1 (boolean value)
+#datera_debug_replica_count_override = false
+
+# If set to 'Map' --> OpenStack project ID will be mapped implicitly to Datera
+# tenant ID
+# If set to 'None' --> Datera tenant ID will not be used during volume
+# provisioning
+# If set to anything else --> Datera tenant ID will be the provided value
+# (string value)
+#datera_tenant_id = <None>
+
+# Set to True to disable profiling in the Datera driver (boolean value)
+#datera_disable_profiler = false
+
+# Group name to use for creating volumes. Defaults to "group-0". (string value)
+#eqlx_group_name = group-0
+
+# Maximum retry count for reconnection. Default is 5. (integer value)
+# Minimum value: 0
+#eqlx_cli_max_retries = 5
+
+# Pool in which volumes will be created. Defaults to "default". (string value)
+#eqlx_pool = default
+
+# Storage Center System Serial Number (integer value)
+#dell_sc_ssn = 64702
+
+# Dell API port (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#dell_sc_api_port = 3033
+
+# Name of the server folder to use on the Storage Center (string value)
+#dell_sc_server_folder = openstack
+
+# Name of the volume folder to use on the Storage Center (string value)
+#dell_sc_volume_folder = openstack
+
+# Enable HTTPS SC certificate verification (boolean value)
+#dell_sc_verify_cert = false
+
+# IP address of secondary DSM volume (string value)
+#secondary_san_ip =
+
+# Secondary DSM user name (string value)
+#secondary_san_login = Admin
+
+# Secondary DSM user password name (string value)
+#secondary_san_password =
+
+# Secondary Dell API port (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#secondary_sc_api_port = 3033
+
+# Dell SC API async call default timeout in seconds. (integer value)
+#dell_api_async_rest_timeout = 15
+
+# Dell SC API sync call default timeout in seconds. (integer value)
+#dell_api_sync_rest_timeout = 30
+
+# Domain IP to be excluded from iSCSI returns. (IP address value)
+#excluded_domain_ip = <None>
+
+# Server OS type to use when creating a new server on the Storage Center.
+# (string value)
+#dell_server_os = Red Hat Linux 6.x
+
+# REST server port. (string value)
+#sio_rest_server_port = 443
+
+# Verify server certificate. (boolean value)
+#sio_verify_server_certificate = false
+
+# Server certificate path. (string value)
+#sio_server_certificate_path = <None>
+
+# Round up volume capacity. (boolean value)
+#sio_round_volume_capacity = true
+
+# Unmap volume before deletion. (boolean value)
+#sio_unmap_volume_before_deletion = false
+
+# Storage Pools. (string value)
+#sio_storage_pools = <None>
+
+# DEPRECATED: Protection Domain ID. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_protection_domain_id = <None>
+
+# DEPRECATED: Protection Domain name. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_protection_domain_name = <None>
+
+# DEPRECATED: Storage Pool name. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_storage_pool_name = <None>
+
+# DEPRECATED: Storage Pool ID. (string value)
+# This option is deprecated for removal since Pike.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by sio_storage_pools option
+#sio_storage_pool_id = <None>
+
+# ScaleIO API version. (string value)
+#sio_server_api_version = <None>
+
+# max_over_subscription_ratio setting for the ScaleIO driver. This replaces the
+# general max_over_subscription_ratio which has no effect in this
+# driver.Maximum value allowed for ScaleIO is 10.0. (floating point value)
+#sio_max_over_subscription_ratio = 10.0
+
+# Allow thick volumes to be created in Storage Pools when zero padding is
+# disabled. This option should not be enabled if multiple tenants will utilize
+# thick volumes from a shared Storage Pool. (boolean value)
+#sio_allow_non_padded_thick_volumes = false
+
+# A comma-separated list of storage pool names to be used. (list value)
+#unity_storage_pool_names = <None>
+
+# A comma-separated list of iSCSI or FC ports to be used. Each port can be
+# Unix-style glob expressions. (list value)
+#unity_io_ports = <None>
+
+# To remove the host from Unity when the last LUN is detached from it. By
+# default, it is False. (boolean value)
+#remove_empty_host = false
+
+# DEPRECATED: Use this file for cinder emc plugin config data. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#cinder_dell_emc_config_file = /etc/cinder/cinder_dell_emc_config.xml
+
+# Use this value to specify length of the interval in seconds. (integer value)
+#interval = 3
+
+# Use this value to specify number of retries. (integer value)
+#retries = 200
+
+# Use this value to enable the initiator_check. (boolean value)
+#initiator_check = false
+
+# REST server port number. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_rest_port = 8443
+
+# Serial number of the array to connect to. (string value)
+#vmax_array = <None>
+
+# Storage resource pool on array to use for provisioning. (string value)
+#vmax_srp = <None>
+
+# Service level to use for provisioning storage. (string value)
+#vmax_service_level = <None>
+
+# Workload (string value)
+#vmax_workload = <None>
+
+# List of port groups containing frontend ports configured prior for server
+# connection. (list value)
+#vmax_port_groups = <None>
+
+# VNX authentication scope type. By default, the value is global. (string
+# value)
+#storage_vnx_authentication_type = global
+
+# Directory path that contains the VNX security file. Make sure the security
+# file is generated first. (string value)
+#storage_vnx_security_file_dir = <None>
+
+# Naviseccli Path. (string value)
+#naviseccli_path = <None>
+
+# Comma-separated list of storage pool names to be used. (list value)
+#storage_vnx_pool_names = <None>
+
+# Default timeout for CLI operations in minutes. For example, LUN migration is
+# a typical long running operation, which depends on the LUN size and the load
+# of the array. An upper bound in the specific deployment can be set to avoid
+# unnecessary long wait. By default, it is 365 days long. (integer value)
+#default_timeout = 31536000
+
+# Default max number of LUNs in a storage group. By default, the value is 255.
+# (integer value)
+#max_luns_per_storage_group = 255
+
+# To destroy storage group when the last LUN is removed from it. By default,
+# the value is False. (boolean value)
+#destroy_empty_storage_group = false
+
+# Mapping between hostname and its iSCSI initiator IP addresses. (string value)
+#iscsi_initiators = <None>
+
+# Comma separated iSCSI or FC ports to be used in Nova or Cinder. (list value)
+#io_port_list = <None>
+
+# Automatically register initiators. By default, the value is False. (boolean
+# value)
+#initiator_auto_registration = false
+
+# Automatically deregister initiators after the related storage group is
+# destroyed. By default, the value is False. (boolean value)
+#initiator_auto_deregistration = false
+
+# DEPRECATED: Report free_capacity_gb as 0 when the limit to maximum number of
+# pool LUNs is reached. By default, the value is False. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#check_max_pool_luns_threshold = false
+
+# Delete a LUN even if it is in Storage Groups. By default, the value is False.
+# (boolean value)
+#force_delete_lun_in_storagegroup = false
+
+# Force LUN creation even if the full threshold of pool is reached. By default,
+# the value is False. (boolean value)
+#ignore_pool_full_threshold = false
+
+# XMS cluster id in multi-cluster environment (string value)
+#xtremio_cluster_name =
+
+# Number of retries in case array is busy (integer value)
+#xtremio_array_busy_retry_count = 5
+
+# Interval between retries in case array is busy (integer value)
+#xtremio_array_busy_retry_interval = 5
+
+# Number of volumes created from each cached glance image (integer value)
+#xtremio_volumes_per_glance_cache = 100
+
+# Should the driver remove initiator groups with no volumes after the last
+# connection was terminated. Since the behavior till now was to leave the IG
+# be, we default to False (not deleting IGs without connected volumes); setting
+# this parameter to True will remove any IG after terminating its connection to
+# the last volume. (boolean value)
+#xtremio_clean_unused_ig = false
+
+# The IP of DMS client socket server (IP address value)
+#disco_client = 127.0.0.1
+
+# The port to connect DMS client socket server (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#disco_client_port = 9898
+
+# DEPRECATED: Path to the wsdl file to communicate with DISCO request manager
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#disco_wsdl_path = /etc/cinder/DISCOService.wsdl
+
+# DEPRECATED: The IP address of the REST server (IP address value)
+# Deprecated group/name - [DEFAULT]/rest_ip
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Using san_ip later
+#disco_rest_ip = <None>
+
+# Use soap client or rest client for communicating with DISCO. Possible values
+# are "soap" or "rest". (string value)
+# Possible values:
+# soap - <No description provided>
+# rest - <No description provided>
+# Deprecated group/name - [DEFAULT]/choice_client
+#disco_choice_client = <None>
+
+# DEPRECATED: The port of DISCO source API (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Using san_api_port later
+#disco_src_api_port = 8080
+
+# Prefix before volume name to differentiate DISCO volume created through
+# openstack and the other ones (string value)
+# Deprecated group/name - [backend_defaults]/volume_name_prefix
+#disco_volume_name_prefix = openstack-
+
+# How long we check whether a snapshot is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/snapshot_check_timeout
+#disco_snapshot_check_timeout = 3600
+
+# How long we check whether a restore is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/restore_check_timeout
+#disco_restore_check_timeout = 3600
+
+# How long we check whether a clone is finished before we give up (integer
+# value)
+# Deprecated group/name - [backend_defaults]/clone_check_timeout
+#disco_clone_check_timeout = 3600
+
+# How long we wait before retrying to get an item detail (integer value)
+# Deprecated group/name - [backend_defaults]/retry_interval
+#disco_retry_interval = 1
+
+# Number of nodes that should replicate the data. (integer value)
+#drbdmanage_redundancy = 1
+
+# Resource deployment completion wait policy. (string value)
+#drbdmanage_resource_policy = {"ratio": "0.51", "timeout": "60"}
+
+# Disk options to set on new resources. See http://www.drbd.org/en/doc/users-
+# guide-90/re-drbdconf for all the details. (string value)
+#drbdmanage_disk_options = {"c-min-rate": "4M"}
+
+# Net options to set on new resources. See http://www.drbd.org/en/doc/users-
+# guide-90/re-drbdconf for all the details. (string value)
+#drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"}
+
+# Resource options to set on new resources. See
+# http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
+# (string value)
+#drbdmanage_resource_options = {"auto-promote-timeout": "300"}
+
+# Snapshot completion wait policy. (string value)
+#drbdmanage_snapshot_policy = {"count": "1", "timeout": "60"}
+
+# Volume resize completion wait policy. (string value)
+#drbdmanage_resize_policy = {"timeout": "60"}
+
+# Resource deployment completion wait plugin. (string value)
+#drbdmanage_resource_plugin = drbdmanage.plugins.plugins.wait_for.WaitForResource
+
+# Snapshot completion wait plugin. (string value)
+#drbdmanage_snapshot_plugin = drbdmanage.plugins.plugins.wait_for.WaitForSnapshot
+
+# Volume resize completion wait plugin. (string value)
+#drbdmanage_resize_plugin = drbdmanage.plugins.plugins.wait_for.WaitForVolumeSize
+
+# If set, the c-vol node will receive a useable
+#                 /dev/drbdX device, even if the actual data is stored on
+#                 other nodes only.
+#                 This is useful for debugging, maintenance, and to be
+#                 able to do the iSCSI export from the c-vol node. (boolean
+# value)
+#drbdmanage_devs_on_volume = true
+
+# config file for cinder eternus_dx volume driver (string value)
+#cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml
+
+# The flag of thin storage allocation. (boolean value)
+#dsware_isthin = false
+
+# Fusionstorage manager ip addr for cinder-volume. (string value)
+#dsware_manager =
+
+# Fusionstorage agent ip addr range. (string value)
+#fusionstorageagent =
+
+# Pool type, like sata-2copy. (string value)
+#pool_type = default
+
+# Pool id permit to use. (list value)
+#pool_id_filter =
+
+# Create clone volume timeout. (integer value)
+#clone_volume_timeout = 680
+
+# Space network name to use for data transfer (string value)
+#hgst_net = Net 1 (IPv4)
+
+# Comma separated list of Space storage servers:devices. ex:
+# os1_stor:gbd0,os2_stor:gbd0 (string value)
+#hgst_storage_servers = os:gbd0
+
+# Should spaces be redundantly stored (1/0) (string value)
+#hgst_redundancy = 0
+
+# User to own created spaces (string value)
+#hgst_space_user = root
+
+# Group to own created spaces (string value)
+#hgst_space_group = disk
+
+# UNIX mode for created spaces (string value)
+#hgst_space_mode = 0600
+
+# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1 (string value)
+#hpe3par_api_url =
+
+# 3PAR username with the 'edit' role (string value)
+#hpe3par_username =
+
+# 3PAR password for the user specified in hpe3par_username (string value)
+#hpe3par_password =
+
+# List of the CPG(s) to use for volume creation (list value)
+#hpe3par_cpg = OpenStack
+
+# The CPG to use for Snapshots for volumes. If empty the userCPG will be used.
+# (string value)
+#hpe3par_cpg_snap =
+
+# The time in hours to retain a snapshot.  You can't delete it before this
+# expires. (string value)
+#hpe3par_snapshot_retention =
+
+# The time in hours when a snapshot expires  and is deleted.  This must be
+# larger than expiration (string value)
+#hpe3par_snapshot_expiration =
+
+# Enable HTTP debugging to 3PAR (boolean value)
+#hpe3par_debug = false
+
+# List of target iSCSI addresses to use. (list value)
+#hpe3par_iscsi_ips =
+
+# Enable CHAP authentication for iSCSI connections. (boolean value)
+#hpe3par_iscsi_chap_enabled = false
+
+# HPE LeftHand WSAPI Server Url like https://<LeftHand ip>:8081/lhos (uri
+# value)
+# Deprecated group/name - [backend_defaults]/hplefthand_api_url
+#hpelefthand_api_url = <None>
+
+# HPE LeftHand Super user username (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_username
+#hpelefthand_username = <None>
+
+# HPE LeftHand Super user password (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_password
+#hpelefthand_password = <None>
+
+# HPE LeftHand cluster name (string value)
+# Deprecated group/name - [backend_defaults]/hplefthand_clustername
+#hpelefthand_clustername = <None>
+
+# Configure CHAP authentication for iSCSI connections (Default: Disabled)
+# (boolean value)
+# Deprecated group/name - [backend_defaults]/hplefthand_iscsi_chap_enabled
+#hpelefthand_iscsi_chap_enabled = false
+
+# Enable HTTP debugging to LeftHand (boolean value)
+# Deprecated group/name - [backend_defaults]/hplefthand_debug
+#hpelefthand_debug = false
+
+# Port number of SSH service. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#hpelefthand_ssh_port = 16022
+
+# The configuration file for the Cinder Huawei driver. (string value)
+#cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
+
+# The remote device hypermetro will use. (string value)
+#hypermetro_devices = <None>
+
+# The remote metro device san user. (string value)
+#metro_san_user = <None>
+
+# The remote metro device san password. (string value)
+#metro_san_password = <None>
+
+# The remote metro device domain name. (string value)
+#metro_domain_name = <None>
+
+# The remote metro device request url. (string value)
+#metro_san_address = <None>
+
+# The remote metro device pool names. (string value)
+#metro_storage_pools = <None>
+
+# Connection protocol should be FC. (Default is FC.) (string value)
+#flashsystem_connection_protocol = FC
+
+# Allows vdisk to multi host mapping. (Default is True) (boolean value)
+#flashsystem_multihostmap_enabled = true
+
+# DEPRECATED: This option no longer has any affect. It is deprecated and will
+# be removed in the next release. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#flashsystem_multipath_enabled = false
+
+# Default iSCSI Port ID of FlashSystem. (Default port is 0.) (integer value)
+#flashsystem_iscsi_portid = 0
+
+# Specifies the path of the GPFS directory where Block Storage volume and
+# snapshot files are stored. (string value)
+#gpfs_mount_point_base = <None>
+
+# Specifies the path of the Image service repository in GPFS.  Leave undefined
+# if not storing images in GPFS. (string value)
+#gpfs_images_dir = <None>
+
+# Specifies the type of image copy to be used.  Set this when the Image service
+# repository also uses GPFS so that image files can be transferred efficiently
+# from the Image service to the Block Storage service. There are two valid
+# values: "copy" specifies that a full copy of the image is made;
+# "copy_on_write" specifies that copy-on-write optimization strategy is used
+# and unmodified blocks of the image file are shared efficiently. (string
+# value)
+# Possible values:
+# copy - <No description provided>
+# copy_on_write - <No description provided>
+# <None> - <No description provided>
+#gpfs_images_share_mode = <None>
+
+# Specifies an upper limit on the number of indirections required to reach a
+# specific block due to snapshots or clones.  A lengthy chain of copy-on-write
+# snapshots or clones can have a negative impact on performance, but improves
+# space utilization.  0 indicates unlimited clone depth. (integer value)
+#gpfs_max_clone_depth = 0
+
+# Specifies that volumes are created as sparse files which initially consume no
+# space. If set to False, the volume is created as a fully allocated file, in
+# which case, creation may take a significantly longer time. (boolean value)
+#gpfs_sparse_volumes = true
+
+# Specifies the storage pool that volumes are assigned to. By default, the
+# system storage pool is used. (string value)
+#gpfs_storage_pool = system
+
+# Comma-separated list of IP address or hostnames of GPFS nodes. (list value)
+#gpfs_hosts =
+
+# Username for GPFS nodes. (string value)
+#gpfs_user_login = root
+
+# Password for GPFS node user. (string value)
+#gpfs_user_password =
+
+# Filename of private key to use for SSH authentication. (string value)
+#gpfs_private_key =
+
+# SSH port to use. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#gpfs_ssh_port = 22
+
+# File containing SSH host keys for the gpfs nodes with which driver needs to
+# communicate. Default=$state_path/ssh_known_hosts (string value)
+#gpfs_hosts_key_file = $state_path/ssh_known_hosts
+
+# Option to enable strict gpfs host key checking while connecting to gpfs
+# nodes. Default=False (boolean value)
+#gpfs_strict_host_key_policy = false
+
+# Mapping between IODevice address and unit address. (string value)
+#ds8k_devadd_unitadd_mapping =
+
+# Set the first two digits of SSID. (string value)
+#ds8k_ssid_prefix = FF
+
+# Reserve LSSs for consistency group. (string value)
+#lss_range_for_cg =
+
+# Set to zLinux if your OpenStack version is prior to Liberty and you're
+# connecting to zLinux systems. Otherwise set to auto. Valid values for this
+# parameter are: 'auto', 'AMDLinuxRHEL', 'AMDLinuxSuse', 'AppleOSX', 'Fujitsu',
+# 'Hp', 'HpTru64', 'HpVms', 'LinuxDT', 'LinuxRF', 'LinuxRHEL', 'LinuxSuse',
+# 'Novell', 'SGI', 'SVC', 'SanFsAIX', 'SanFsLinux', 'Sun', 'VMWare', 'Win2000',
+# 'Win2003', 'Win2008', 'Win2012', 'iLinux', 'nSeries', 'pLinux', 'pSeries',
+# 'pSeriesPowerswap', 'zLinux', 'iSeries'. (string value)
+#ds8k_host_type = auto
+
+# Proxy driver that connects to the IBM Storage Array (string value)
+#proxy = cinder.volume.drivers.ibm.ibm_storage.proxy.IBMStorageProxy
+
+# Connection type to the IBM Storage Array (string value)
+# Possible values:
+# fibre_channel - <No description provided>
+# iscsi - <No description provided>
+#connection_type = iscsi
+
+# CHAP authentication mode, effective only for iscsi (disabled|enabled) (string
+# value)
+# Possible values:
+# disabled - <No description provided>
+# enabled - <No description provided>
+#chap = disabled
+
+# List of Management IP addresses (separated by commas) (string value)
+#management_ips =
+
+# Comma separated list of storage system storage pools for volumes. (list
+# value)
+#storwize_svc_volpool_name = volpool
+
+# Storage system space-efficiency parameter for volumes (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#storwize_svc_vol_rsize = 2
+
+# Storage system threshold for volume capacity warnings (percentage) (integer
+# value)
+# Minimum value: -1
+# Maximum value: 100
+#storwize_svc_vol_warning = 0
+
+# Storage system autoexpand parameter for volumes (True/False) (boolean value)
+#storwize_svc_vol_autoexpand = true
+
+# Storage system grain size parameter for volumes (32/64/128/256) (integer
+# value)
+#storwize_svc_vol_grainsize = 256
+
+# Storage system compression option for volumes (boolean value)
+#storwize_svc_vol_compression = false
+
+# Enable Easy Tier for volumes (boolean value)
+#storwize_svc_vol_easytier = true
+
+# The I/O group in which to allocate volumes. It can be a comma-separated list
+# in which case the driver will select an io_group based on least number of
+# volumes associated with the io_group. (string value)
+#storwize_svc_vol_iogrp = 0
+
+# Maximum number of seconds to wait for FlashCopy to be prepared. (integer
+# value)
+# Minimum value: 1
+# Maximum value: 600
+#storwize_svc_flashcopy_timeout = 120
+
+# DEPRECATED: This option no longer has any affect. It is deprecated and will
+# be removed in the next release. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#storwize_svc_multihostmap_enabled = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#storwize_svc_allow_tenant_qos = false
+
+# If operating in stretched cluster mode, specify the name of the pool in which
+# mirrored copies are stored.Example: "pool2" (string value)
+#storwize_svc_stretched_cluster_partner = <None>
+
+# Specifies secondary management IP or hostname to be used if san_ip is invalid
+# or becomes inaccessible. (string value)
+#storwize_san_secondary_ip = <None>
+
+# Specifies that the volume not be formatted during creation. (boolean value)
+#storwize_svc_vol_nofmtdisk = false
+
+# Specifies the Storwize FlashCopy copy rate to be used when creating a full
+# volume copy. The default is rate is 50, and the valid rates are 1-150.
+# (integer value)
+# Minimum value: 1
+# Maximum value: 150
+#storwize_svc_flashcopy_rate = 50
+
+# Specifies the name of the pool in which mirrored copy is stored. Example:
+# "pool2" (string value)
+#storwize_svc_mirror_pool = <None>
+
+# Specifies the name of the peer pool for hyperswap volume, the peer pool must
+# exist on the other site. (string value)
+#storwize_peer_pool = <None>
+
+# Specifies the site information for host. One WWPN or multi WWPNs used in the
+# host can be specified. For example:
+# storwize_preferred_host_site=site1:wwpn1,site2:wwpn2&wwpn3 or
+# storwize_preferred_host_site=site1:iqn1,site2:iqn2 (dict value)
+#storwize_preferred_host_site =
+
+# This defines an optional cycle period that applies to Global Mirror
+# relationships with a cycling mode of multi. A Global Mirror relationship
+# using the multi cycling_mode performs a complete cycle at most once each
+# period. The default is 300 seconds, and the valid seconds are 60-86400.
+# (integer value)
+# Minimum value: 60
+# Maximum value: 86400
+#cycle_period_seconds = 300
+
+# Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
+# (boolean value)
+#storwize_svc_multipath_enabled = false
+
+# Configure CHAP authentication for iSCSI connections (Default: Enabled)
+# (boolean value)
+#storwize_svc_iscsi_chap_enabled = true
+
+# Name of the pool from which volumes are allocated (string value)
+#infinidat_pool_name = <None>
+
+# Protocol for transferring data between host and storage back-end. (string
+# value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+#infinidat_storage_protocol = fc
+
+# List of names of network spaces to use for iSCSI connectivity (list value)
+#infinidat_iscsi_netspaces =
+
+# Specifies whether to turn on compression for newly created volumes. (boolean
+# value)
+#infinidat_use_compression = false
+
+# K2 driver will calculate max_oversubscription_ratio on setting this option as
+# True. (boolean value)
+#auto_calc_max_oversubscription_ratio = false
+
+# Whether or not our private network has unique FQDN on each initiator or not.
+# For example networks with QA systems usually have multiple servers/VMs with
+# the same FQDN.  When true this will create host entries on K2 using the FQDN,
+# when false it will use the reversed IQN/WWNN. (boolean value)
+#unique_fqdn_network = true
+
+# Disabling iSCSI discovery (sendtargets) for multipath connections on K2
+# driver. (boolean value)
+#disable_discovery = false
+
+# Pool or Vdisk name to use for volume creation. (string value)
+#lenovo_backend_name = A
+
+# linear (for VDisk) or virtual (for Pool). (string value)
+# Possible values:
+# linear - <No description provided>
+# virtual - <No description provided>
+#lenovo_backend_type = virtual
+
+# Lenovo api interface protocol. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#lenovo_api_protocol = https
+
+# Whether to verify Lenovo array SSL certificate. (boolean value)
+#lenovo_verify_certificate = false
+
+# Lenovo array SSL certificate path. (string value)
+#lenovo_verify_certificate_path = <None>
+
+# List of comma-separated target iSCSI IP addresses. (list value)
+#lenovo_iscsi_ips =
+
+# Name for the VG that will contain exported volumes (string value)
+#volume_group = cinder-volumes
+
+# If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors
+# + 2 PVs with available space (integer value)
+#lvm_mirrors = 0
+
+# Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to
+# thin if thin is supported. (string value)
+# Possible values:
+# default - <No description provided>
+# thin - <No description provided>
+# auto - <No description provided>
+#lvm_type = auto
+
+# LVM conf file to use for the LVM driver in Cinder; this setting is ignored if
+# the specified file does not exist (You can also specify 'None' to not use a
+# conf file even if one exists). (string value)
+#lvm_conf_file = /etc/cinder/lvm.conf
+
+# Suppress leaked file descriptor warnings in LVM commands. (boolean value)
+#lvm_suppress_fd_warnings = false
+
+# The storage family type used on the storage system; valid values are
+# ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
+# (string value)
+# Possible values:
+# ontap_cluster - <No description provided>
+# eseries - <No description provided>
+#netapp_storage_family = ontap_cluster
+
+# The storage protocol to be used on the data path with the storage system.
+# (string value)
+# Possible values:
+# iscsi - <No description provided>
+# fc - <No description provided>
+# nfs - <No description provided>
+#netapp_storage_protocol = <None>
+
+# The hostname (or IP address) for the storage system or proxy server. (string
+# value)
+#netapp_server_hostname = <None>
+
+# The TCP port to use for communication with the storage system or proxy
+# server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for
+# HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. (integer value)
+#netapp_server_port = <None>
+
+# The transport protocol used when communicating with the storage system or
+# proxy server. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#netapp_transport_type = http
+
+# Administrative user account name used to access the storage system or proxy
+# server. (string value)
+#netapp_login = <None>
+
+# Password for the administrative user account specified in the netapp_login
+# option. (string value)
+#netapp_password = <None>
+
+# This option specifies the virtual storage server (Vserver) name on the
+# storage cluster on which provisioning of block storage volumes should occur.
+# (string value)
+#netapp_vserver = <None>
+
+# The quantity to be multiplied by the requested volume size to ensure enough
+# space is available on the virtual storage server (Vserver) to fulfill the
+# volume creation request.  Note: this option is deprecated and will be removed
+# in favor of "reserved_percentage" in the Mitaka release. (floating point
+# value)
+#netapp_size_multiplier = 1.2
+
+# This option determines if storage space is reserved for LUN allocation. If
+# enabled, LUNs are thick provisioned. If space reservation is disabled,
+# storage space is allocated on demand. (string value)
+# Possible values:
+# enabled - <No description provided>
+# disabled - <No description provided>
+#netapp_lun_space_reservation = enabled
+
+# If the percentage of available space for an NFS share has dropped below the
+# value specified by this option, the NFS image cache will be cleaned. (integer
+# value)
+#thres_avl_size_perc_start = 20
+
+# When the percentage of available space on an NFS share has reached the
+# percentage specified by this option, the driver will stop clearing files from
+# the NFS image cache that have not been accessed in the last M minutes, where
+# M is the value of the expiry_thres_minutes configuration option. (integer
+# value)
+#thres_avl_size_perc_stop = 60
+
+# This option specifies the threshold for last access time for images in the
+# NFS image cache. When a cache cleaning cycle begins, images in the cache that
+# have not been accessed in the last M minutes, where M is the value of this
+# parameter, will be deleted from the cache to create free space on the NFS
+# share. (integer value)
+#expiry_thres_minutes = 720
+
+# This option is used to specify the path to the E-Series proxy application on
+# a proxy server. The value is combined with the value of the
+# netapp_transport_type, netapp_server_hostname, and netapp_server_port options
+# to create the URL used by the driver to connect to the proxy application.
+# (string value)
+#netapp_webservice_path = /devmgr/v2
+
+# This option is only utilized when the storage family is configured to
+# eseries. This option is used to restrict provisioning to the specified
+# volumes. Specify the value of this option to be a comma separated list of
+# volume hostnames or IP addresses to be used for provisioning. (string
+# value)
+#netapp_volume_ips = <None>
+
+# Password for the NetApp E-Series storage array. (string value)
+#netapp_sa_password = <None>
+
+# This option specifies whether the driver should allow operations that require
+# multiple attachments to a volume. An example would be live migration of
+# servers that have volumes attached. When enabled, this backend is limited to
+# 256 total volumes in order to guarantee volumes can be accessed by more than
+# one host. (boolean value)
+#netapp_enable_multiattach = false
+
+# This option specifies the path of the NetApp copy offload tool binary. Ensure
+# that the binary has execute permissions set which allow the effective user of
+# the cinder-volume process to execute the file. (string value)
+#netapp_copyoffload_tool_path = <None>
+
+# This option defines the type of operating system that will access a LUN
+# exported from Data ONTAP; it is assigned to the LUN at the time it is
+# created. (string value)
+#netapp_lun_ostype = <None>
+
+# This option defines the type of operating system for all initiators that can
+# access a LUN. This information is used when mapping LUNs to individual hosts
+# or groups of hosts. (string value)
+#netapp_host_type = <None>
+
+# This option is used to restrict provisioning to the specified pools. Specify
+# the value of this option to be a regular expression which will be applied to
+# the names of objects from the storage backend which represent pools in
+# Cinder. This option is only utilized when the storage protocol is configured
+# to use iSCSI or FC. (string value)
+# Deprecated group/name - [backend_defaults]/netapp_volume_list
+# Deprecated group/name - [backend_defaults]/netapp_storage_pools
+#netapp_pool_name_search_pattern = (.+)
+
+# Multi opt of dictionaries to represent the aggregate mapping between source
+# and destination back ends when using whole back end replication. For every
+# source aggregate associated with a cinder pool (NetApp FlexVol), you would
+# need to specify the destination aggregate on the replication target device. A
+# replication target device is configured with the configuration option
+# replication_device. Specify this option as many times as you have replication
+# devices. Each entry takes the standard dict config form:
+# netapp_replication_aggregate_map =
+# backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
+# (dict value)
+#netapp_replication_aggregate_map = <None>
+
+# The maximum time in seconds to wait for existing SnapMirror transfers to
+# complete before aborting during a failover. (integer value)
+# Minimum value: 0
+#netapp_snapmirror_quiesce_timeout = 3600
+
+# IP address of Nexenta SA (string value)
+#nexenta_host =
+
+# HTTP(S) port to connect to Nexenta REST API server. If it is equal zero, 8443
+# for HTTPS and 8080 for HTTP is used (integer value)
+#nexenta_rest_port = 0
+
+# Use http or https for REST connection (default auto) (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+# auto - <No description provided>
+#nexenta_rest_protocol = auto
+
+# Use secure HTTP for REST connection (default True) (boolean value)
+#nexenta_use_https = true
+
+# User name to connect to Nexenta SA (string value)
+#nexenta_user = admin
+
+# Password to connect to Nexenta SA (string value)
+#nexenta_password = nexenta
+
+# Nexenta target portal port (integer value)
+#nexenta_iscsi_target_portal_port = 3260
+
+# SA Pool that holds all volumes (string value)
+#nexenta_volume = cinder
+
+# IQN prefix for iSCSI targets (string value)
+#nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder-
+
+# Prefix for iSCSI target groups on SA (string value)
+#nexenta_target_group_prefix = cinder/
+
+# Volume group for ns5 (string value)
+#nexenta_volume_group = iscsi
+
+# Compression value for new ZFS folders. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+# gzip - <No description provided>
+# gzip-1 - <No description provided>
+# gzip-2 - <No description provided>
+# gzip-3 - <No description provided>
+# gzip-4 - <No description provided>
+# gzip-5 - <No description provided>
+# gzip-6 - <No description provided>
+# gzip-7 - <No description provided>
+# gzip-8 - <No description provided>
+# gzip-9 - <No description provided>
+# lzjb - <No description provided>
+# zle - <No description provided>
+# lz4 - <No description provided>
+#nexenta_dataset_compression = on
+
+# Deduplication value for new ZFS folders. (string value)
+# Possible values:
+# on - <No description provided>
+# off - <No description provided>
+# sha256 - <No description provided>
+# verify - <No description provided>
+# sha256, verify - <No description provided>
+#nexenta_dataset_dedup = off
+
+# Human-readable description for the folder. (string value)
+#nexenta_dataset_description =
+
+# Block size for datasets (integer value)
+#nexenta_blocksize = 4096
+
+# Block size for datasets (integer value)
+#nexenta_ns5_blocksize = 32
+
+# Enables or disables the creation of sparse datasets (boolean value)
+#nexenta_sparse = false
+
+# File with the list of available nfs shares (string value)
+#nexenta_shares_config = /etc/cinder/nfs_shares
+
+# Base directory that contains NFS share mount points (string value)
+#nexenta_mount_point_base = $state_path/mnt
+
+# Enables or disables the creation of volumes as sparsed files that take no
+# space. If disabled (False), volume is created as a regular file, which takes
+# a long time. (boolean value)
+#nexenta_sparsed_volumes = true
+
+# If set True cache NexentaStor appliance volroot option value. (boolean value)
+#nexenta_nms_cache_volroot = true
+
+# Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best
+# compression. (integer value)
+#nexenta_rrmgr_compression = 0
+
+# TCP Buffer size in KiloBytes. (integer value)
+#nexenta_rrmgr_tcp_buf_size = 4096
+
+# Number of TCP connections. (integer value)
+#nexenta_rrmgr_connections = 2
+
+# NexentaEdge logical path of directory to store symbolic links to NBDs (string
+# value)
+#nexenta_nbd_symlinks_dir = /dev/disk/by-path
+
+# IP address of NexentaEdge management REST API endpoint (string value)
+#nexenta_rest_address =
+
+# User name to connect to NexentaEdge (string value)
+#nexenta_rest_user = admin
+
+# Password to connect to NexentaEdge (string value)
+#nexenta_rest_password = nexenta
+
+# NexentaEdge logical path of bucket for LUNs (string value)
+#nexenta_lun_container =
+
+# NexentaEdge iSCSI service name (string value)
+#nexenta_iscsi_service =
+
+# NexentaEdge iSCSI Gateway client address for non-VIP service (string value)
+#nexenta_client_address =
+
+# NexentaEdge iSCSI LUN object chunk size (integer value)
+#nexenta_chunksize = 32768
+
+# File with the list of available NFS shares. (string value)
+#nfs_shares_config = /etc/cinder/nfs_shares
+
+# Create volumes as sparsed files which take no space. If set to False volume
+# is created as regular file. In such case volume creation takes a lot of time.
+# (boolean value)
+#nfs_sparsed_volumes = true
+
+# Create volumes as QCOW2 files rather than raw files. (boolean value)
+#nfs_qcow2_volumes = false
+
+# Base dir containing mount points for NFS shares. (string value)
+#nfs_mount_point_base = $state_path/mnt
+
+# Mount options passed to the NFS client. See section of the NFS man page for
+# details. (string value)
+#nfs_mount_options = <None>
+
+# The number of attempts to mount NFS shares before raising an error.  At least
+# one attempt will be made to mount an NFS share, regardless of the value
+# specified. (integer value)
+#nfs_mount_attempts = 3
+
+# Enable support for snapshots on the NFS driver. Platforms using libvirt
+# <1.2.7 will encounter issues with this feature. (boolean value)
+#nfs_snapshot_support = false
+
+# Nimble Controller pool name (string value)
+#nimble_pool_name = default
+
+# Nimble Subnet Label (string value)
+#nimble_subnet_label = *
+
+# Whether to verify Nimble SSL Certificate (boolean value)
+#nimble_verify_certificate = false
+
+# Path to Nimble Array SSL certificate (string value)
+#nimble_verify_cert_path = <None>
+
+# DPL pool uuid in which DPL volumes are stored. (string value)
+#dpl_pool =
+
+# DPL port number. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#dpl_port = 8357
+
+# REST API authorization token. (string value)
+#pure_api_token = <None>
+
+# Automatically determine an oversubscription ratio based on the current total
+# data reduction values. If used this calculated value will override the
+# max_over_subscription_ratio config option. (boolean value)
+#pure_automatic_max_oversubscription_ratio = true
+
+# Snapshot replication interval in seconds. (integer value)
+#pure_replica_interval_default = 3600
+
+# Retain all snapshots on target for this time (in seconds.) (integer value)
+#pure_replica_retention_short_term_default = 14400
+
+# Retain how many snapshots for each day. (integer value)
+#pure_replica_retention_long_term_per_day_default = 3
+
+# Retain snapshots per day on target for this time (in days.) (integer value)
+#pure_replica_retention_long_term_default = 7
+
+# When enabled, all Pure volumes, snapshots, and protection groups will be
+# eradicated at the time of deletion in Cinder. Data will NOT be recoverable
+# after a delete with this set to True! When disabled, volumes and snapshots
+# will go into pending eradication state and can be recovered. (boolean value)
+#pure_eradicate_on_delete = false
+
+# The URL to management QNAP Storage. Driver does not support IPv6 address in
+# URL. (uri value)
+#qnap_management_url = <None>
+
+# The pool name in the QNAP Storage (string value)
+#qnap_poolname = <None>
+
+# Communication protocol to access QNAP storage (string value)
+#qnap_storage_protocol = iscsi
+
+# Quobyte URL to the Quobyte volume using e.g. a DNS SRV record (preferred) or
+# a host list (alternatively) like quobyte://<DIR host1>, <DIR host2>/<volume
+# name> (string value)
+#quobyte_volume_url = <None>
+
+# Path to a Quobyte Client configuration file. (string value)
+#quobyte_client_cfg = <None>
+
+# Create volumes as sparse files which take no space. If set to False, volume
+# is created as regular file. (boolean value)
+#quobyte_sparsed_volumes = true
+
+# Create volumes as QCOW2 files rather than raw files. (boolean value)
+#quobyte_qcow2_volumes = true
+
+# Base dir containing the mount point for the Quobyte volume. (string value)
+#quobyte_mount_point_base = $state_path/mnt
+
+# Create a cache of volumes from merged snapshots to speed up creation of
+# multiple volumes from a single snapshot. (boolean value)
+#quobyte_volume_from_snapshot_cache = false
+
+# The name of ceph cluster (string value)
+#rbd_cluster_name = ceph
+
+# The RADOS pool where rbd volumes are stored (string value)
+#rbd_pool = rbd
+
+# The RADOS client name for accessing rbd volumes - only set when using cephx
+# authentication (string value)
+#rbd_user = <None>
+
+# Path to the ceph configuration file (string value)
+#rbd_ceph_conf =
+
+# Path to the ceph keyring file (string value)
+#rbd_keyring_conf =
+
+# Flatten volumes created from snapshots to remove dependency from volume to
+# snapshot (boolean value)
+#rbd_flatten_volume_from_snapshot = false
+
+# The libvirt uuid of the secret for the rbd_user volumes (string value)
+#rbd_secret_uuid = <None>
+
+# Maximum number of nested volume clones that are taken before a flatten
+# occurs. Set to 0 to disable cloning. (integer value)
+#rbd_max_clone_depth = 5
+
+# Volumes will be chunked into objects of this size (in megabytes). (integer
+# value)
+#rbd_store_chunk_size = 4
+
+# Timeout value (in seconds) used when connecting to ceph cluster. If value <
+# 0, no timeout is set and default librados value is used. (integer value)
+#rados_connect_timeout = -1
+
+# Number of retries if connection to ceph cluster failed. (integer value)
+#rados_connection_retries = 3
+
+# Interval value (in seconds) between connection retries to ceph cluster.
+# (integer value)
+#rados_connection_interval = 5
+
+# Timeout value (in seconds) used when connecting to ceph cluster to do a
+# demotion/promotion of volumes. If value < 0, no timeout is set and default
+# librados value is used. (integer value)
+#replication_connect_timeout = 5
+
+# Set to True for driver to report total capacity as a dynamic value -used +
+# current free- and to False to report a static value -quota max bytes if
+# defined and global size of cluster if not. (boolean value)
+#report_dynamic_total_capacity = true
+
+# Set to True if the pool is used exclusively by Cinder. On exclusive use
+# driver won't query images' provisioned size as they will match the value
+# calculated by the Cinder core code for allocated_capacity_gb. This reduces
+# the load on the Ceph cluster as well as on the volume service. (boolean
+# value)
+#rbd_exclusive_cinder_pool = false
+
+# IP address or Hostname of NAS system. (string value)
+#nas_host =
+
+# User name to connect to NAS system. (string value)
+#nas_login = admin
+
+# Password to connect to NAS system. (string value)
+#nas_password =
+
+# SSH port to use to connect to NAS system. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#nas_ssh_port = 22
+
+# Filename of private key to use for SSH authentication. (string value)
+#nas_private_key =
+
+# Allow network-attached storage systems to operate in a secure environment
+# where root level access is not permitted. If set to False, access is as the
+# root user and insecure. If set to True, access is not as root. If set to
+# auto, a check is done to determine if this is a new installation: True is
+# used if so, otherwise False. Default is auto. (string value)
+#nas_secure_file_operations = auto
+
+# Set more secure file permissions on network-attached storage volume files to
+# restrict broad other/world access. If set to False, volumes are created with
+# open permissions. If set to True, volumes are created with permissions for
+# the cinder user and group (660). If set to auto, a check is done to determine
+# if this is a new installation: True is used if so, otherwise False. Default
+# is auto. (string value)
+#nas_secure_file_permissions = auto
+
+# Path to the share to use for storing Cinder volumes. For example:
+# "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 .
+# (string value)
+#nas_share_path =
+
+# Options used to mount the storage backend file system where Cinder volumes
+# are stored. (string value)
+#nas_mount_options = <None>
+
+# Provisioning type that will be used when creating volumes. (string value)
+# Possible values:
+# thin - <No description provided>
+# thick - <No description provided>
+#nas_volume_prov_type = thin
+
+# Pool or Vdisk name to use for volume creation. (string value)
+#hpmsa_backend_name = A
+
+# linear (for Vdisk) or virtual (for Pool). (string value)
+# Possible values:
+# linear - <No description provided>
+# virtual - <No description provided>
+#hpmsa_backend_type = virtual
+
+# HPMSA API interface protocol. (string value)
+# Possible values:
+# http - <No description provided>
+# https - <No description provided>
+#hpmsa_api_protocol = https
+
+# Whether to verify HPMSA array SSL certificate. (boolean value)
+#hpmsa_verify_certificate = false
+
+# HPMSA array SSL certificate path. (string value)
+#hpmsa_verify_certificate_path = <None>
+
+# List of comma-separated target iSCSI IP addresses. (list value)
+#hpmsa_iscsi_ips =
+
+# Use thin provisioning for SAN volumes? (boolean value)
+#san_thin_provision = true
+
+# IP address of SAN volume (string value)
+#san_ip =
+
+# Username for SAN volume (string value)
+#san_login = admin
+
+# Password for SAN volume (string value)
+#san_password =
+
+# Filename of private key to use for SSH authentication (string value)
+#san_private_key =
+
+# Cluster name to use for creating volumes (string value)
+#san_clustername =
+
+# SSH port to use with SAN (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_ssh_port = 22
+
+# Port to use to access the SAN API (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#san_api_port = <None>
+
+# Execute commands locally instead of over SSH; use if the volume service is
+# running on the SAN device (boolean value)
+#san_is_local = false
+
+# SSH connection timeout in seconds (integer value)
+#ssh_conn_timeout = 30
+
+# Minimum ssh connections in the pool (integer value)
+#ssh_min_pool_conn = 1
+
+# Maximum ssh connections in the pool (integer value)
+#ssh_max_pool_conn = 5
+
+# IP address of sheep daemon. (string value)
+#sheepdog_store_address = 127.0.0.1
+
+# Port of sheep daemon. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#sheepdog_store_port = 7000
+
+# Set 512 byte emulation on volume creation;  (boolean value)
+#sf_emulate_512 = true
+
+# Allow tenants to specify QOS on create (boolean value)
+#sf_allow_tenant_qos = false
+
+# Create SolidFire accounts with this prefix. Any string can be used here, but
+# the string "hostname" is special and will create a prefix using the cinder
+# node hostname (previous default behavior).  The default is NO prefix. (string
+# value)
+#sf_account_prefix = <None>
+
+# Create SolidFire volumes with this prefix. Volume names are of the form
+# <sf_volume_prefix><cinder-volume-id>.  The default is to use a prefix of
+# 'UUID-'. (string value)
+#sf_volume_prefix = UUID-
+
+# Account name on the SolidFire Cluster to use as owner of template/cache
+# volumes (created if does not exist). (string value)
+#sf_template_account_name = openstack-vtemplate
+
+# DEPRECATED: This option is deprecated and will be removed in the next
+# OpenStack release.  Please use the general cinder image-caching feature
+# instead. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: The Cinder caching feature should be used rather than this driver
+# specific implementation.
+#sf_allow_template_caching = false
+
+# Overrides default cluster SVIP with the one specified. This is required or
+# deployments that have implemented the use of VLANs for iSCSI networks in
+# their cloud. (string value)
+#sf_svip = <None>
+
+# SolidFire API port. Useful if the device api is behind a proxy on a different
+# port. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#sf_api_port = 443
+
+# Utilize volume access groups on a per-tenant basis. (boolean value)
+#sf_enable_vag = false
+
+# Volume on Synology storage to be used for creating lun. (string value)
+#synology_pool_name =
+
+# Management port for Synology storage. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#synology_admin_port = 5000
+
+# Administrator of Synology storage. (string value)
+#synology_username = admin
+
+# Password of administrator for logging in Synology storage. (string value)
+#synology_password =
+
+# Do certificate validation or not if $driver_use_ssl is True (boolean value)
+#synology_ssl_verify = true
+
+# One time password of administrator for logging in Synology storage if OTP is
+# enabled. (string value)
+#synology_one_time_pass = <None>
+
+# Device id for skip one time password check for logging in Synology storage if
+# OTP is enabled. (string value)
+#synology_device_id = <None>
+
+# The hostname (or IP address) for the storage system (string value)
+#tintri_server_hostname = <None>
+
+# User name for the storage system (string value)
+#tintri_server_username = <None>
+
+# Password for the storage system (string value)
+#tintri_server_password = <None>
+
+# API version for the storage system (string value)
+#tintri_api_version = v310
+
+# Delete unused image snapshots older than mentioned days (integer value)
+#tintri_image_cache_expiry_days = 30
+
+# Path to image nfs shares file (string value)
+#tintri_image_shares_config = <None>
+
+# IP address for connecting to VMware vCenter server. (string value)
+#vmware_host_ip = <None>
+
+# Port number for connecting to VMware vCenter server. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#vmware_host_port = 443
+
+# Username for authenticating with VMware vCenter server. (string value)
+#vmware_host_username = <None>
+
+# Password for authenticating with VMware vCenter server. (string value)
+#vmware_host_password = <None>
+
+# Optional VIM service WSDL Location e.g http://<server>/vimService.wsdl.
+# Optional over-ride to default location for bug work-arounds. (string value)
+#vmware_wsdl_location = <None>
+
+# Number of times VMware vCenter server API must be retried upon connection
+# related issues. (integer value)
+#vmware_api_retry_count = 10
+
+# The interval (in seconds) for polling remote tasks invoked on VMware vCenter
+# server. (floating point value)
+#vmware_task_poll_interval = 2.0
+
+# Name of the vCenter inventory folder that will contain Cinder volumes. This
+# folder will be created under "OpenStack/<project_folder>", where
+# project_folder is of format "Project (<volume_project_id>)". (string value)
+#vmware_volume_folder = Volumes
+
+# Timeout in seconds for VMDK volume transfer between Cinder and Glance.
+# (integer value)
+#vmware_image_transfer_timeout_secs = 7200
+
+# Max number of objects to be retrieved per batch. Query results will be
+# obtained in batches from the server and not in one shot. Server may still
+# limit the count to something less than the configured value. (integer value)
+#vmware_max_objects_retrieval = 100
+
+# Optional string specifying the VMware vCenter server version. The driver
+# attempts to retrieve the version from VMware vCenter server. Set this
+# configuration only if you want to override the vCenter server version.
+# (string value)
+#vmware_host_version = <None>
+
+# Directory where virtual disks are stored during volume backup and restore.
+# (string value)
+#vmware_tmp_dir = /tmp
+
+# CA bundle file to use in verifying the vCenter server certificate. (string
+# value)
+#vmware_ca_file = <None>
+
+# If true, the vCenter server certificate is not verified. If false, then the
+# default CA truststore is used for verification. This option is ignored if
+# "vmware_ca_file" is set. (boolean value)
+#vmware_insecure = false
+
+# Name of a vCenter compute cluster where volumes should be created. (multi
+# valued)
+#vmware_cluster_name =
+
+# Maximum number of connections in http connection pool. (integer value)
+#vmware_connection_pool_size = 10
+
+# Default adapter type to be used for attaching volumes. (string value)
+# Possible values:
+# lsiLogic - <No description provided>
+# busLogic - <No description provided>
+# lsiLogicsas - <No description provided>
+# paraVirtual - <No description provided>
+# ide - <No description provided>
+#vmware_adapter_type = lsiLogic
+
+# Volume snapshot format in vCenter server. (string value)
+# Possible values:
+# template - <No description provided>
+# COW - <No description provided>
+#vmware_snapshot_format = template
+
+# If true, the backend volume in vCenter server is created lazily when the
+# volume is created without any source. The backend volume is created when the
+# volume is attached, uploaded to image service or during backup. (boolean
+# value)
+#vmware_lazy_create = true
+
+# Regular expression pattern to match the name of datastores where backend
+# volumes are created. (string value)
+#vmware_datastore_regex = <None>
+
+# File with the list of available vzstorage shares. (string value)
+#vzstorage_shares_config = /etc/cinder/vzstorage_shares
+
+# Create volumes as sparsed files which take no space rather than regular files
+# when using raw format, in which case volume creation takes lot of time.
+# (boolean value)
+#vzstorage_sparsed_volumes = true
+
+# Percent of ACTUAL usage of the underlying volume before no new volumes can be
+# allocated to the volume destination. (floating point value)
+#vzstorage_used_ratio = 0.95
+
+# Base dir containing mount points for vzstorage shares. (string value)
+#vzstorage_mount_point_base = $state_path/mnt
+
+# Mount options passed to the vzstorage client. See section of the pstorage-
+# mount man page for details. (list value)
+#vzstorage_mount_options = <None>
+
+# Default format that will be used when creating volumes if no volume format is
+# specified. (string value)
+#vzstorage_default_volume_format = raw
+
+# Path to store VHD backed volumes (string value)
+#windows_iscsi_lun_path = C:\iSCSIVirtualDisks
+
+# File with the list of available smbfs shares. (string value)
+#smbfs_shares_config = C:\OpenStack\smbfs_shares.txt
+
+# Default format that will be used when creating volumes if no volume format is
+# specified. (string value)
+# Possible values:
+# vhd - <No description provided>
+# vhdx - <No description provided>
+#smbfs_default_volume_format = vhd
+
+# Base dir containing mount points for smbfs shares. (string value)
+#smbfs_mount_point_base = C:\OpenStack\_mnt
+
+# Mappings between share locations and pool names. If not specified, the share
+# names will be used as pool names. Example:
+# //addr/share:pool_name,//addr/share2:pool_name2 (dict value)
+#smbfs_pool_mappings =
+
+# VPSA - Use ISER instead of iSCSI (boolean value)
+#zadara_use_iser = true
+
+# VPSA - Management Host name or IP address (string value)
+#zadara_vpsa_host = <None>
+
+# VPSA - Port number (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#zadara_vpsa_port = <None>
+
+# VPSA - Use SSL connection (boolean value)
+#zadara_vpsa_use_ssl = false
+
+# If set to True the http client will validate the SSL certificate of the VPSA
+# endpoint. (boolean value)
+#zadara_ssl_cert_verify = true
+
+# VPSA - Username (string value)
+#zadara_user = <None>
+
+# VPSA - Password (string value)
+#zadara_password = <None>
+
+# VPSA - Storage Pool assigned for volumes (string value)
+#zadara_vpsa_poolname = <None>
+
+# VPSA - Default encryption policy for volumes (boolean value)
+#zadara_vol_encrypt = false
+
+# VPSA - Default template for VPSA volume names (string value)
+#zadara_vol_name_template = OS_%s
+
+# VPSA - Attach snapshot policy for volumes (boolean value)
+#zadara_default_snap_policy = false
+
+# Storage pool name. (string value)
+#zfssa_pool = <None>
+
+# Project name. (string value)
+#zfssa_project = <None>
+
+# Block size. (string value)
+# Possible values:
+# 512 - <No description provided>
+# 1k - <No description provided>
+# 2k - <No description provided>
+# 4k - <No description provided>
+# 8k - <No description provided>
+# 16k - <No description provided>
+# 32k - <No description provided>
+# 64k - <No description provided>
+# 128k - <No description provided>
+#zfssa_lun_volblocksize = 8k
+
+# Flag to enable sparse (thin-provisioned): True, False. (boolean value)
+#zfssa_lun_sparse = false
+
+# Data compression. (string value)
+# Possible values:
+# off - <No description provided>
+# lzjb - <No description provided>
+# gzip-2 - <No description provided>
+# gzip - <No description provided>
+# gzip-9 - <No description provided>
+#zfssa_lun_compression = off
+
+# Synchronous write bias. (string value)
+# Possible values:
+# latency - <No description provided>
+# throughput - <No description provided>
+#zfssa_lun_logbias = latency
+
+# iSCSI initiator group. (string value)
+#zfssa_initiator_group =
+
+# iSCSI initiator IQNs. (comma separated) (string value)
+#zfssa_initiator =
+
+# iSCSI initiator CHAP user (name). (string value)
+#zfssa_initiator_user =
+
+# Secret of the iSCSI initiator CHAP user. (string value)
+#zfssa_initiator_password =
+
+# iSCSI initiators configuration. (string value)
+#zfssa_initiator_config =
+
+# iSCSI target group name. (string value)
+#zfssa_target_group = tgt-grp
+
+# iSCSI target CHAP user (name). (string value)
+#zfssa_target_user =
+
+# Secret of the iSCSI target CHAP user. (string value)
+#zfssa_target_password =
+
+# iSCSI target portal (Data-IP:Port, w.x.y.z:3260). (string value)
+#zfssa_target_portal = <None>
+
+# Network interfaces of iSCSI targets. (comma separated) (string value)
+#zfssa_target_interfaces = <None>
+
+# REST connection timeout. (seconds) (integer value)
+#zfssa_rest_timeout = <None>
+
+# IP address used for replication data. (maybe the same as data ip) (string
+# value)
+#zfssa_replication_ip =
+
+# Flag to enable local caching: True, False. (boolean value)
+#zfssa_enable_local_cache = true
+
+# Name of ZFSSA project where cache volumes are stored. (string value)
+#zfssa_cache_project = os-cinder-cache
+
+# Driver policy for volume manage. (string value)
+# Possible values:
+# loose - <No description provided>
+# strict - <No description provided>
+#zfssa_manage_policy = loose
+
+# Data path IP address (string value)
+#zfssa_data_ip = <None>
+
+# HTTPS port number (string value)
+#zfssa_https_port = 443
+
+# Options to be passed while mounting share over nfs (string value)
+#zfssa_nfs_mount_options =
+
+# Storage pool name. (string value)
+#zfssa_nfs_pool =
+
+# Project name. (string value)
+#zfssa_nfs_project = NFSProject
+
+# Share name. (string value)
+#zfssa_nfs_share = nfs_share
+
+# Data compression. (string value)
+# Possible values:
+# off - <No description provided>
+# lzjb - <No description provided>
+# gzip-2 - <No description provided>
+# gzip - <No description provided>
+# gzip-9 - <No description provided>
+#zfssa_nfs_share_compression = off
+
+# Synchronous write bias-latency, throughput. (string value)
+# Possible values:
+# latency - <No description provided>
+# throughput - <No description provided>
+#zfssa_nfs_share_logbias = latency
+
+# Name of directory inside zfssa_nfs_share where cache volumes are stored.
+# (string value)
+#zfssa_cache_directory = os-cinder-cache
+
+# Driver to use for volume creation (string value)
+#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
+
+# User defined capabilities, a JSON formatted string specifying key/value
+# pairs. The key/value pairs can be used by the CapabilitiesFilter to select
+# between backends when requests specify volume types. For example, specifying
+# a service level or the geographical location of a backend, then creating a
+# volume type to allow the user to select by these different properties.
+# (string value)
+#extra_capabilities = {}
+
+# Suppress requests library SSL certificate warnings. (boolean value)
+#suppress_requests_ssl_warnings = false
+
+# Size of the native threads pool for the backend.  Increase for backends that
+# heavily rely on this, like the RBD driver. (integer value)
+# Minimum value: 20
+#backend_native_threads_pool_size = 20
+
+
+[coordination]
+
+#
+# From cinder
+#
+
+# The backend URL to use for distributed coordination. (string value)
+#backend_url = file://$state_path
+
+
+[fc-zone-manager]
+
+#
+# From cinder
+#
+
+# South bound connector for zoning operation (string value)
+#brcd_sb_connector = HTTP
+
+# Southbound connector for zoning operation (string value)
+#cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
+
+# FC Zone Driver responsible for zone management (string value)
+#zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
+
+# Zoning policy configured by user; valid values include "initiator-target" or
+# "initiator" (string value)
+#zoning_policy = initiator-target
+
+# Comma separated list of Fibre Channel fabric names. This list of names is
+# used to retrieve other SAN credentials for connecting to each SAN fabric
+# (string value)
+#fc_fabric_names = <None>
+
+# FC SAN Lookup Service (string value)
+#fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
+
+# Set this to True when you want to allow an unsupported zone manager driver to
+# start.  Drivers that haven't maintained a working CI system and testing are
+# marked as unsupported until CI is working again.  This also marks a driver as
+# deprecated and may be removed in the next release. (boolean value)
+#enable_unsupported_driver = false
+
+
+[nova]
+
+#
+# From cinder
+#
+
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#interface = public
+
+# The authentication URL for the nova connection when using the current users
+# token (string value)
+#token_auth_url = <None>
+
+# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# (string value)
+#cafile = <None>
+
+# PEM encoded client certificate cert file (string value)
+#certfile = <None>
+
+# PEM encoded client certificate key file (string value)
+#keyfile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section = <None>
+
+
+[service_user]
+
+#
+# From cinder
+#
+
+#
+# When True, if sending a user token to an REST API, also send a service token.
+#  (boolean value)
+#send_service_user_token = false
+[barbican]
+#
+# From castellan.config
+#
+
+# Use this endpoint to connect to Barbican, for example:
+# "http://localhost:9311/" (string value)
+#barbican_endpoint = <None>
+
+# Version of the Barbican API, for example: "v1" (string value)
+#barbican_api_version = <None>
+
+# Use this endpoint to connect to Keystone (string value)
+# Deprecated group/name - [key_manager]/auth_url
+#auth_endpoint = http://localhost/identity/v3
+auth_endpoint = http://10.167.4.35:35357/v3
+
+# Number of seconds to wait before retrying poll for key creation completion
+# (integer value)
+#retry_delay = 1
+
+# Number of times to retry poll for key creation completion (integer value)
+#number_of_retries = 60
+
+# Specifies if insecure TLS (https) requests. If False, the server's
+# certificate will not be validated (boolean value)
+#verify_ssl = true
+
+# Specifies the type of endpoint.  Allowed values are: public, internal, and
+# admin (string value)
+# Possible values:
+# public - <No description provided>
+# internal - <No description provided>
+# admin - <No description provided>
+#barbican_endpoint_type = public
+barbican_endpoint_type = internal
+
+
+[key_manager]
+#
+# From castellan.config
+#
+
+# Specify the key manager implementation. Options are "barbican" and "vault".
+# Default is  "barbican". Will support the  values earlier set using
+# [key_manager]/api_class for some time. (string value)
+# Deprecated group/name - [key_manager]/api_class
+#backend = barbican
+backend = barbican
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = cinder
+
+
+[keystone_authtoken]
+
+#
+# From keystonemiddleware.auth_token
+#
+
+# Complete "public" Identity API endpoint. This endpoint should not be an
+# "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. (string
+# value)
+# Deprecated group/name - [keystone_authtoken]/auth_uri
+#www_authenticate_uri = <None>
+www_authenticate_uri = http://10.167.4.35:5000
+
+# DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
+# be an "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. This option
+# is deprecated in favor of www_authenticate_uri and will be removed in the S
+# release. (string value)
+# This option is deprecated for removal since Queens.
+# Its value may be silently ignored in the future.
+# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
+# and will be removed in the S  release.
+#auth_uri = <None>
+auth_uri = http://10.167.4.35:5000
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+# Do not handle authorization requests within the middleware, but delegate the
+# authorization decision to downstream WSGI components. (boolean value)
+#delay_auth_decision = false
+
+# Request timeout value for communicating with Identity API server. (integer
+# value)
+#http_connect_timeout = <None>
+
+# How many times are we trying to reconnect when communicating with Identity
+# API Server. (integer value)
+#http_request_max_retries = 3
+
+# Request environment key where the Swift cache object is stored. When
+# auth_token middleware is deployed with a Swift cache, use this option to have
+# the middleware share a caching backend with swift. Otherwise, use the
+# ``memcached_servers`` option instead. (string value)
+#cache = <None>
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# The region in which the identity server can be found. (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# DEPRECATED: Directory used to cache files related to PKI tokens. This option
+# has been deprecated in the Ocata release and will be removed in the P
+# release. (string value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#signing_dir = <None>
+
+# Optionally specify a list of memcached server(s) to use for caching. If left
+# undefined, tokens will instead be cached in-process. (list value)
+# Deprecated group/name - [keystone_authtoken]/memcache_servers
+#memcached_servers = <None>
+memcached_servers=10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
+
+# In order to prevent excessive effort spent validating tokens, the middleware
+# caches previously-seen tokens for a configurable duration (in seconds). Set
+# to -1 to disable caching completely. (integer value)
+#token_cache_time = 300
+
+# DEPRECATED: Determines the frequency at which the list of revoked tokens is
+# retrieved from the Identity service (in seconds). A high number of revocation
+# events combined with a low cache duration may significantly reduce
+# performance. Only valid for PKI tokens. This option has been deprecated in
+# the Ocata release and will be removed in the P release. (integer value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#revocation_cache_time = 10
+
+# (Optional) Number of seconds memcached server is considered dead before it is
+# tried again. (integer value)
+#memcache_pool_dead_retry = 300
+
+# (Optional) Maximum total number of open connections to every memcached
+# server. (integer value)
+#memcache_pool_maxsize = 10
+
+# (Optional) Socket timeout in seconds for communicating with a memcached
+# server. (integer value)
+#memcache_pool_socket_timeout = 3
+
+# (Optional) Number of seconds a connection to memcached is held unused in the
+# pool before it is closed. (integer value)
+#memcache_pool_unused_timeout = 60
+
+# (Optional) Number of seconds that an operation will wait to get a memcached
+# client connection from the pool. (integer value)
+#memcache_pool_conn_get_timeout = 10
+
+# (Optional) Use the advanced (eventlet safe) memcached client pool. The
+# advanced pool will only work under python 2.x. (boolean value)
+#memcache_use_advanced_pool = false
+
+# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
+# middleware will not ask for service catalog on token validation and will not
+# set the X-Service-Catalog header. (boolean value)
+#include_service_catalog = true
+
+# Used to control the use and type of token binding. Can be set to: "disabled"
+# to not check token binding. "permissive" (default) to validate binding
+# information if the bind type is of a form known to the server and ignore it
+# if not. "strict" like "permissive" but if the bind type is unknown the token
+# will be rejected. "required" any form of token binding is needed to be
+# allowed. Finally the name of a binding method that must be present in tokens.
+# (string value)
+#enforce_token_bind = permissive
+
+# DEPRECATED: If true, the revocation list will be checked for cached tokens.
+# This requires that PKI tokens are configured on the identity server. (boolean
+# value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#check_revocations_for_cached = false
+
+# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
+# single algorithm or multiple. The algorithms are those supported by Python
+# standard hashlib.new(). The hashes will be tried in the order given, so put
+# the preferred one first for performance. The result of the first hash will be
+# stored in the cache. This will typically be set to multiple values only while
+# migrating from a less secure algorithm to a more secure one. Once all the old
+# tokens are expired this option should be set to a single value for better
+# performance. (list value)
+# This option is deprecated for removal since Ocata.
+# Its value may be silently ignored in the future.
+# Reason: PKI token format is no longer supported.
+#hash_algorithms = md5
+
+# A choice of roles that must be present in a service token. Service tokens are
+# allowed to request that an expired token can be used and so this check should
+# tightly control that only actual services should be sending this token. Roles
+# here are applied as an ANY check so any role in this list must be present.
+# For backwards compatibility reasons this currently only affects the
+# allow_expired check. (list value)
+#service_token_roles = service
+
+# For backwards compatibility reasons we must let valid service tokens pass
+# that don't pass the service_token_roles check as valid. Setting this true
+# will become the default in a future release and should be enabled if
+# possible. (boolean value)
+#service_token_roles_required = false
+
+# Authentication type to load (string value)
+# Deprecated group/name - [keystone_authtoken]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Config Section from which to load plugin specific options (string value)
+#auth_section = <None>
+
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = cinder
+
+[profiler]
+
+[oslo_concurrency]
 
 [database]
-connection = sqlite:////var/lib/cinder/cinder.sqlite
+#
+# From oslo.db
+#
+
+# If True, SQLite uses synchronous mode. (boolean value)
+#sqlite_synchronous = true
+
+# The back end to use for the database. (string value)
+# Deprecated group/name - [DEFAULT]/db_backend
+#backend = sqlalchemy
+
+# The SQLAlchemy connection string to use to connect to the database.
+# (string value)
+# Deprecated group/name - [DEFAULT]/sql_connection
+# Deprecated group/name - [DATABASE]/sql_connection
+# Deprecated group/name - [sql]/connection
+#connection = <None>
+connection = mysql+pymysql://cinder:opnfv_secret@10.167.4.23/cinder?charset=utf8
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
+#slave_connection = <None>
+
+# The SQL mode to be used for MySQL sessions. This option, including
+# the default, overrides any server-set SQL mode. To use whatever SQL
+# mode is set by the server configuration, set this to no value.
+# Example: mysql_sql_mode= (string value)
+#mysql_sql_mode = TRADITIONAL
+
+# If True, transparently enables support for handling MySQL Cluster
+# (NDB). (boolean value)
+#mysql_enable_ndb = false
+
+# Connections which have been present in the connection pool longer
+# than this number of seconds will be replaced with a new one the next
+# time they are checked out from the pool. (integer value)
+# Deprecated group/name - [DATABASE]/idle_timeout
+# Deprecated group/name - [database]/idle_timeout
+# Deprecated group/name - [DEFAULT]/sql_idle_timeout
+# Deprecated group/name - [DATABASE]/sql_idle_timeout
+# Deprecated group/name - [sql]/idle_timeout
+#connection_recycle_time = 3600
+# (obryndzii) we change default connection_recycle_time to 280 in order to fix numerous
+# DBConnection errors in services until we implement this setting in reclass-system
+connection_recycle_time = 300
+
+# Minimum number of SQL connections to keep open in a pool. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_min_pool_size
+# Deprecated group/name - [DATABASE]/sql_min_pool_size
+#min_pool_size = 1
+
+# Maximum number of SQL connections to keep open in a pool. Setting a
+# value of 0 indicates no limit. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_pool_size
+# Deprecated group/name - [DATABASE]/sql_max_pool_size
+#max_pool_size = 5
+max_pool_size = 10
+
+# Maximum number of database connection retries during startup. Set to
+# -1 to specify an infinite retry count. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_retries
+# Deprecated group/name - [DATABASE]/sql_max_retries
+#max_retries = 10
+max_retries = -1
+
+# Interval between retries of opening a SQL connection. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_retry_interval
+# Deprecated group/name - [DATABASE]/reconnect_interval
+#retry_interval = 10
+
+# If set, use this value for max_overflow with SQLAlchemy. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_max_overflow
+# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
+#max_overflow = 50
+max_overflow = 30
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything.
+# (integer value)
+# Minimum value: 0
+# Maximum value: 100
+# Deprecated group/name - [DEFAULT]/sql_connection_debug
+#connection_debug = 0
+
+# Add Python stack traces to SQL as comment strings. (boolean value)
+# Deprecated group/name - [DEFAULT]/sql_connection_trace
+#connection_trace = false
+
+# If set, use this value for pool_timeout with SQLAlchemy. (integer
+# value)
+# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
+#pool_timeout = <None>
+
+# Enable the experimental use of database reconnect on connection
+# lost. (boolean value)
+#use_db_reconnect = false
+
+# Seconds between retries of a database transaction. (integer value)
+#db_retry_interval = 1
+
+# If True, increases the interval between retries of a database
+# operation up to db_max_retry_interval. (boolean value)
+#db_inc_retry_interval = true
+
+# If db_inc_retry_interval is set, the maximum seconds between retries
+# of a database operation. (integer value)
+#db_max_retry_interval = 10
+
+# Maximum retries in case of connection error or deadlock error before
+# error is raised. Set to -1 to specify an infinite retry count.
+# (integer value)
+#db_max_retries = 20
+
+#
+# From oslo.db.concurrency
+#
+
+# Enable the experimental use of thread pooling for all DB API calls
+# (boolean value)
+# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
+#use_tpool = false
+
+[oslo_messaging_notifications]
+#
+# From oslo.messaging
+#
+
+# The Drivers(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+
+# A URL representing the messaging driver to use for notifications. If
+# not set, we fall back to the same configuration used for RPC.
+# (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+
+# The maximum number of attempts to re-send a notification message
+# which failed to be delivered due to a recoverable error. 0 - No
+# retry, -1 - indefinite (integer value)
+#retry = -1
+[oslo_messaging_rabbit]
+#
+# From oslo.messaging
+#
+
+# Use durable queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/amqp_durable_queues
+# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
+#amqp_durable_queues = false
+
+# Auto-delete queues in AMQP. (boolean value)
+#amqp_auto_delete = false
+
+# Enable SSL (boolean value)
+#ssl = <None>
+
+
+# How long to wait before reconnecting in response to an AMQP consumer
+# cancel notification. (floating point value)
+#kombu_reconnect_delay = 1.0
+
+# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression
+# will not be used. This option may not be available in future
+# versions. (string value)
+#kombu_compression = <None>
+
+# How long to wait a missing client before abandoning to send it its
+# replies. This value should not be longer than rpc_response_timeout.
+# (integer value)
+# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
+#kombu_missing_consumer_retry_timeout = 60
+
+# Determines how the next RabbitMQ node is chosen in case the one we
+# are currently connected to becomes unavailable. Takes effect only if
+# more than one RabbitMQ node is provided in config. (string value)
+# Possible values:
+# round-robin - <No description provided>
+# shuffle - <No description provided>
+#kombu_failover_strategy = round-robin
+
+# DEPRECATED: The RabbitMQ broker address where a single node is used.
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_host = localhost
+
+# DEPRECATED: The RabbitMQ broker port where a single node is used.
+# (port value)
+# Minimum value: 0
+# Maximum value: 65535
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_port = 5672
+
+# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_hosts = $rabbit_host:$rabbit_port
+
+# DEPRECATED: The RabbitMQ userid. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_userid = guest
+
+# DEPRECATED: The RabbitMQ password. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_password = guest
+
+# The RabbitMQ login method. (string value)
+# Possible values:
+# PLAIN - <No description provided>
+# AMQPLAIN - <No description provided>
+# RABBIT-CR-DEMO - <No description provided>
+#rabbit_login_method = AMQPLAIN
+
+# DEPRECATED: The RabbitMQ virtual host. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_virtual_host = /
+
+# How frequently to retry connecting with RabbitMQ. (integer value)
+#rabbit_retry_interval = 1
+
+# How long to backoff for between retries when connecting to RabbitMQ.
+# (integer value)
+#rabbit_retry_backoff = 2
+
+# Maximum interval of RabbitMQ connection retries. Default is 30
+# seconds. (integer value)
+#rabbit_interval_max = 30
+
+# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
+# is 0 (infinite retry count). (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#rabbit_max_retries = 0
+
+# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
+# queue mirroring is no longer controlled by the x-ha-policy argument
+# when declaring a queue. If you just want to make sure that all
+# queues (except those with auto-generated names) are mirrored across
+# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
+# mode": "all"}' " (boolean value)
+#rabbit_ha_queues = false
+
+# Positive integer representing duration in seconds for queue TTL
+# (x-expires). Queues which are unused for the duration of the TTL are
+# automatically deleted. The parameter affects only reply and fanout
+# queues. (integer value)
+# Minimum value: 1
+#rabbit_transient_queues_ttl = 1800
+
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
+
+# NOTE(dmescheryakov) hardcoding to >0 by default
+# Having no prefetch limit makes oslo.messaging consume all available
+# messages from the queue. That can lead to a situation when several
+# server processes hog all the messages leaving others out of business.
+# That leads to artificial high message processing latency and at the
+# extrime to MessagingTimeout errors.
+rabbit_qos_prefetch_count = 64
+
+# Number of seconds after which the Rabbit broker is considered down
+# if heartbeat's keep-alive fails (0 disable the heartbeat).
+# EXPERIMENTAL (integer value)
+#heartbeat_timeout_threshold = 60
+
+# How often times during the heartbeat_timeout_threshold we check the
+# heartbeat. (integer value)
+#heartbeat_rate = 2
+
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
+#fake_rabbit = false
+
+# Maximum number of channels to allow (integer value)
+#channel_max = <None>
+
+
+# The maximum byte size for an AMQP frame (integer value)
+#frame_max = <None>
+
+# How often to send heartbeats for consumer's connections (integer
+# value)
+#heartbeat_interval = 3
+
+# Arguments passed to ssl.wrap_socket (dict value)
+#ssl_options = <None>
+
+# Set socket timeout in seconds for connection's socket (floating
+# point value)
+#socket_timeout = 0.25
+
+# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
+# point value)
+#tcp_user_timeout = 0.25
+
+# Set delay for reconnection to some host which has connection error
+# (floating point value)
+#host_connection_reconnect_delay = 0.25
+
+# Connection factory implementation (string value)
+# Possible values:
+# new - <No description provided>
+# single - <No description provided>
+# read_write - <No description provided>
+#connection_factory = single
+
+# Maximum number of connections to keep queued. (integer value)
+#pool_max_size = 30
+
+# Maximum number of connections to create above `pool_max_size`.
+# (integer value)
+#pool_max_overflow = 0
+
+# Default number of seconds to wait for a connections to available
+# (integer value)
+#pool_timeout = 30
+
+# Lifetime of a connection (since creation) in seconds or None for no
+# recycling. Expired connections are closed on acquire. (integer
+# value)
+#pool_recycle = 600
+
+# Threshold at which inactive (since release) connections are
+# considered stale in seconds or None for no staleness. Stale
+# connections are closed on acquire. (integer value)
+#pool_stale = 60
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#default_serializer_type = json
+
+# Persist notification messages. (boolean value)
+#notification_persistence = false
+
+# Exchange name for sending notifications (string value)
+#default_notification_exchange = ${control_exchange}_notification
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# notification listener. (integer value)
+#notification_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending notification, -1 means infinite retry. (integer value)
+#default_notification_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending notification message (floating point value)
+#notification_retry_delay = 0.25
+
+# Time to live for rpc queues without consumers in seconds. (integer
+# value)
+#rpc_queue_expiration = 60
+
+# Exchange name for sending RPC messages (string value)
+#default_rpc_exchange = ${control_exchange}_rpc
+
+# Exchange name for receiving RPC replies (string value)
+#rpc_reply_exchange = ${control_exchange}_rpc_reply
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# rpc listener. (integer value)
+#rpc_listener_prefetch_count = 100
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# rpc reply listener. (integer value)
+#rpc_reply_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending reply. -1 means infinite retry during rpc_timeout (integer
+# value)
+#rpc_reply_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending reply. (floating point value)
+#rpc_reply_retry_delay = 0.25
+
+# Reconnecting retry count in case of connectivity problem during
+# sending RPC message, -1 means infinite retry. If actual retry
+# attempts in not 0 the rpc request could be processed more than one
+# time (integer value)
+#default_rpc_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending RPC message (floating point value)
+#rpc_retry_delay = 0.25
+
+[oslo_middleware]
+#
+# From oslo.middleware
+#
+
+# The maximum body size for each  request, in bytes. (integer value)
+# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
+# Deprecated group/name - [DEFAULT]/max_request_body_size
+#max_request_body_size = 114688
+
+# DEPRECATED: The HTTP Header that will be used to determine what the
+# original request protocol scheme was, even if it was hidden by a SSL
+# termination proxy. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#secure_proxy_ssl_header = X-Forwarded-Proto
+
+# Whether the application is behind a proxy or not. This determines if
+# the middleware should parse the headers or not. (boolean value)
+enable_proxy_headers_parsing = True
+
+[oslo_policy]
+
+[oslo_reports]

2019-04-30 22:25:15,332 [salt.state       :1951][INFO    ][11317] Completed state [/etc/cinder/cinder.conf] at time 22:25:15.332329 duration_in_ms=315.18
2019-04-30 22:25:15,332 [salt.state       :1780][INFO    ][11317] Running state [/etc/cinder/api-paste.ini] at time 22:25:15.332753
2019-04-30 22:25:15,332 [salt.state       :1813][INFO    ][11317] Executing state file.managed for [/etc/cinder/api-paste.ini]
2019-04-30 22:25:15,347 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/files/rocky/api-paste.ini.volume.Debian'
2019-04-30 22:25:15,391 [salt.state       :300 ][INFO    ][11317] {'mode': '0640'}
2019-04-30 22:25:15,391 [salt.state       :1951][INFO    ][11317] Completed state [/etc/cinder/api-paste.ini] at time 22:25:15.391704 duration_in_ms=58.952
2019-04-30 22:25:15,392 [salt.state       :1780][INFO    ][11317] Running state [/etc/default/cinder-volume] at time 22:25:15.392043
2019-04-30 22:25:15,392 [salt.state       :1813][INFO    ][11317] Executing state file.managed for [/etc/default/cinder-volume]
2019-04-30 22:25:15,405 [salt.fileclient  :1219][INFO    ][11317] Fetching file from saltenv 'base', ** done ** 'cinder/files/default'
2019-04-30 22:25:15,411 [salt.state       :300 ][INFO    ][11317] File changed:
New file
2019-04-30 22:25:15,411 [salt.state       :1951][INFO    ][11317] Completed state [/etc/default/cinder-volume] at time 22:25:15.411136 duration_in_ms=19.093
2019-04-30 22:25:15,412 [salt.state       :1780][INFO    ][11317] Running state [cinder-volume] at time 22:25:15.412809
2019-04-30 22:25:15,413 [salt.state       :1813][INFO    ][11317] Executing state service.running for [cinder-volume]
2019-04-30 22:25:15,413 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'status', 'cinder-volume.service', '-n', '0'] in directory '/root'
2019-04-30 22:25:15,422 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:25:15,430 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:25:15,436 [salt.state       :300 ][INFO    ][11317] The service cinder-volume is already running
2019-04-30 22:25:15,436 [salt.state       :1951][INFO    ][11317] Completed state [cinder-volume] at time 22:25:15.436854 duration_in_ms=24.045
2019-04-30 22:25:15,437 [salt.state       :1780][INFO    ][11317] Running state [cinder-volume] at time 22:25:15.437000
2019-04-30 22:25:15,437 [salt.state       :1813][INFO    ][11317] Executing state service.mod_watch for [cinder-volume]
2019-04-30 22:25:15,437 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:25:15,444 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:25:15,460 [salt.state       :300 ][INFO    ][11317] {'cinder-volume': True}
2019-04-30 22:25:15,461 [salt.state       :1951][INFO    ][11317] Completed state [cinder-volume] at time 22:25:15.461047 duration_in_ms=24.046
2019-04-30 22:25:15,461 [salt.state       :1780][INFO    ][11317] Running state [open-iscsi] at time 22:25:15.461664
2019-04-30 22:25:15,461 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [open-iscsi]
2019-04-30 22:25:15,469 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:15,469 [salt.state       :1951][INFO    ][11317] Completed state [open-iscsi] at time 22:25:15.469748 duration_in_ms=8.084
2019-04-30 22:25:15,470 [salt.state       :1780][INFO    ][11317] Running state [tgt] at time 22:25:15.470026
2019-04-30 22:25:15,470 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [tgt]
2019-04-30 22:25:15,474 [salt.state       :300 ][INFO    ][11317] All specified packages are already installed
2019-04-30 22:25:15,474 [salt.state       :1951][INFO    ][11317] Completed state [tgt] at time 22:25:15.474791 duration_in_ms=4.765
2019-04-30 22:25:15,475 [salt.state       :1780][INFO    ][11317] Running state [thin-provisioning-tools] at time 22:25:15.475053
2019-04-30 22:25:15,475 [salt.state       :1813][INFO    ][11317] Executing state pkg.installed for [thin-provisioning-tools]
2019-04-30 22:25:15,488 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:25:15,508 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'thin-provisioning-tools'] in directory '/root'
2019-04-30 22:25:17,277 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:25:17,303 [salt.state       :300 ][INFO    ][11317] Made the following changes:
'thin-provisioning-tools' changed from 'absent' to '0.5.6-1ubuntu1'

2019-04-30 22:25:17,316 [salt.state       :915 ][INFO    ][11317] Loading fresh modules for state activity
2019-04-30 22:25:17,336 [salt.state       :1951][INFO    ][11317] Completed state [thin-provisioning-tools] at time 22:25:17.336860 duration_in_ms=1861.807
2019-04-30 22:25:17,652 [salt.state       :1780][INFO    ][11317] Running state [open-iscsi] at time 22:25:17.652441
2019-04-30 22:25:17,652 [salt.state       :1813][INFO    ][11317] Executing state service.running for [open-iscsi]
2019-04-30 22:25:17,653 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'status', 'open-iscsi.service', '-n', '0'] in directory '/root'
2019-04-30 22:25:17,661 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-active', 'open-iscsi.service'] in directory '/root'
2019-04-30 22:25:17,667 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-enabled', 'open-iscsi.service'] in directory '/root'
2019-04-30 22:25:17,673 [salt.state       :300 ][INFO    ][11317] The service open-iscsi is already running
2019-04-30 22:25:17,673 [salt.state       :1951][INFO    ][11317] Completed state [open-iscsi] at time 22:25:17.673480 duration_in_ms=21.038
2019-04-30 22:25:17,673 [salt.state       :1780][INFO    ][11317] Running state [tgt] at time 22:25:17.673857
2019-04-30 22:25:17,674 [salt.state       :1813][INFO    ][11317] Executing state service.running for [tgt]
2019-04-30 22:25:17,674 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'status', 'tgt.service', '-n', '0'] in directory '/root'
2019-04-30 22:25:17,680 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-active', 'tgt.service'] in directory '/root'
2019-04-30 22:25:17,687 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-enabled', 'tgt.service'] in directory '/root'
2019-04-30 22:25:17,693 [salt.state       :300 ][INFO    ][11317] The service tgt is already running
2019-04-30 22:25:17,693 [salt.state       :1951][INFO    ][11317] Completed state [tgt] at time 22:25:17.693464 duration_in_ms=19.607
2019-04-30 22:25:17,693 [salt.state       :1780][INFO    ][11317] Running state [iscsid] at time 22:25:17.693863
2019-04-30 22:25:17,694 [salt.state       :1813][INFO    ][11317] Executing state service.running for [iscsid]
2019-04-30 22:25:17,694 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'status', 'iscsid.service', '-n', '0'] in directory '/root'
2019-04-30 22:25:17,701 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-active', 'iscsid.service'] in directory '/root'
2019-04-30 22:25:17,707 [salt.loaded.int.module.cmdmod:395 ][INFO    ][11317] Executing command ['systemctl', 'is-enabled', 'iscsid.service'] in directory '/root'
2019-04-30 22:25:17,712 [salt.state       :300 ][INFO    ][11317] The service iscsid is already running
2019-04-30 22:25:17,713 [salt.state       :1951][INFO    ][11317] Completed state [iscsid] at time 22:25:17.712965 duration_in_ms=19.101
2019-04-30 22:25:17,714 [salt.minion      :1711][INFO    ][11317] Returning information for job: 20190430222407792286
2019-04-30 22:26:23,732 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.sls with jid 20190430222623724024
2019-04-30 22:26:23,741 [salt.minion      :1432][INFO    ][17198] Starting a new job with PID 17198
2019-04-30 22:26:27,453 [salt.state       :915 ][INFO    ][17198] Loading fresh modules for state activity
2019-04-30 22:26:28,814 [salt.state       :1780][INFO    ][17198] Running state [cinder-volume] at time 22:26:28.814312
2019-04-30 22:26:28,820 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [cinder-volume]
2019-04-30 22:26:28,821 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:29,093 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,093 [salt.state       :1951][INFO    ][17198] Completed state [cinder-volume] at time 22:26:29.093480 duration_in_ms=279.167
2019-04-30 22:26:29,093 [salt.state       :1780][INFO    ][17198] Running state [lvm2] at time 22:26:29.093752
2019-04-30 22:26:29,093 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [lvm2]
2019-04-30 22:26:29,157 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,157 [salt.state       :1951][INFO    ][17198] Completed state [lvm2] at time 22:26:29.157574 duration_in_ms=63.821
2019-04-30 22:26:29,157 [salt.state       :1780][INFO    ][17198] Running state [sysfsutils] at time 22:26:29.157732
2019-04-30 22:26:29,157 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [sysfsutils]
2019-04-30 22:26:29,162 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,162 [salt.state       :1951][INFO    ][17198] Completed state [sysfsutils] at time 22:26:29.162555 duration_in_ms=4.822
2019-04-30 22:26:29,162 [salt.state       :1780][INFO    ][17198] Running state [sg3-utils] at time 22:26:29.162699
2019-04-30 22:26:29,162 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [sg3-utils]
2019-04-30 22:26:29,167 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,167 [salt.state       :1951][INFO    ][17198] Completed state [sg3-utils] at time 22:26:29.167440 duration_in_ms=4.741
2019-04-30 22:26:29,167 [salt.state       :1780][INFO    ][17198] Running state [python-cinder] at time 22:26:29.167583
2019-04-30 22:26:29,167 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [python-cinder]
2019-04-30 22:26:29,172 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,172 [salt.state       :1951][INFO    ][17198] Completed state [python-cinder] at time 22:26:29.172175 duration_in_ms=4.591
2019-04-30 22:26:29,172 [salt.state       :1780][INFO    ][17198] Running state [python-mysqldb] at time 22:26:29.172314
2019-04-30 22:26:29,172 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [python-mysqldb]
2019-04-30 22:26:29,176 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,177 [salt.state       :1951][INFO    ][17198] Completed state [python-mysqldb] at time 22:26:29.176986 duration_in_ms=4.672
2019-04-30 22:26:29,177 [salt.state       :1780][INFO    ][17198] Running state [p7zip] at time 22:26:29.177127
2019-04-30 22:26:29,177 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [p7zip]
2019-04-30 22:26:29,182 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,182 [salt.state       :1951][INFO    ][17198] Completed state [p7zip] at time 22:26:29.182257 duration_in_ms=5.131
2019-04-30 22:26:29,182 [salt.state       :1780][INFO    ][17198] Running state [gettext-base] at time 22:26:29.182391
2019-04-30 22:26:29,182 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [gettext-base]
2019-04-30 22:26:29,187 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,187 [salt.state       :1951][INFO    ][17198] Completed state [gettext-base] at time 22:26:29.187134 duration_in_ms=4.742
2019-04-30 22:26:29,187 [salt.state       :1780][INFO    ][17198] Running state [python-memcache] at time 22:26:29.187275
2019-04-30 22:26:29,187 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [python-memcache]
2019-04-30 22:26:29,191 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,191 [salt.state       :1951][INFO    ][17198] Completed state [python-memcache] at time 22:26:29.191950 duration_in_ms=4.675
2019-04-30 22:26:29,192 [salt.state       :1780][INFO    ][17198] Running state [python-pycadf] at time 22:26:29.192085
2019-04-30 22:26:29,192 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [python-pycadf]
2019-04-30 22:26:29,196 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,196 [salt.state       :1951][INFO    ][17198] Completed state [python-pycadf] at time 22:26:29.196786 duration_in_ms=4.7
2019-04-30 22:26:29,197 [salt.state       :1780][INFO    ][17198] Running state [cinder_volume_ssl_mysql] at time 22:26:29.197052
2019-04-30 22:26:29,197 [salt.state       :1813][INFO    ][17198] Executing state test.show_notification for [cinder_volume_ssl_mysql]
2019-04-30 22:26:29,197 [salt.state       :300 ][INFO    ][17198] Running cinder._ssl.volume_mysql
2019-04-30 22:26:29,197 [salt.state       :1951][INFO    ][17198] Completed state [cinder_volume_ssl_mysql] at time 22:26:29.197514 duration_in_ms=0.462
2019-04-30 22:26:29,197 [salt.state       :1780][INFO    ][17198] Running state [cinder_volume_ssl_rabbitmq] at time 22:26:29.197750
2019-04-30 22:26:29,197 [salt.state       :1813][INFO    ][17198] Executing state test.show_notification for [cinder_volume_ssl_rabbitmq]
2019-04-30 22:26:29,198 [salt.state       :300 ][INFO    ][17198] Running cinder._ssl.rabbitmq
2019-04-30 22:26:29,198 [salt.state       :1951][INFO    ][17198] Completed state [cinder_volume_ssl_rabbitmq] at time 22:26:29.198158 duration_in_ms=0.408
2019-04-30 22:26:29,199 [salt.state       :1780][INFO    ][17198] Running state [/var/lock/cinder] at time 22:26:29.199768
2019-04-30 22:26:29,199 [salt.state       :1813][INFO    ][17198] Executing state file.directory for [/var/lock/cinder]
2019-04-30 22:26:29,200 [salt.state       :300 ][INFO    ][17198] Directory /var/lock/cinder is in the correct state
Directory /var/lock/cinder updated
2019-04-30 22:26:29,200 [salt.state       :1951][INFO    ][17198] Completed state [/var/lock/cinder] at time 22:26:29.200622 duration_in_ms=0.853
2019-04-30 22:26:29,200 [salt.state       :1780][INFO    ][17198] Running state [/etc/cinder/cinder.conf] at time 22:26:29.200960
2019-04-30 22:26:29,201 [salt.state       :1813][INFO    ][17198] Executing state file.managed for [/etc/cinder/cinder.conf]
2019-04-30 22:26:29,429 [salt.state       :300 ][INFO    ][17198] File /etc/cinder/cinder.conf is in the correct state
2019-04-30 22:26:29,430 [salt.state       :1951][INFO    ][17198] Completed state [/etc/cinder/cinder.conf] at time 22:26:29.430022 duration_in_ms=229.062
2019-04-30 22:26:29,430 [salt.state       :1780][INFO    ][17198] Running state [/etc/cinder/api-paste.ini] at time 22:26:29.430302
2019-04-30 22:26:29,430 [salt.state       :1813][INFO    ][17198] Executing state file.managed for [/etc/cinder/api-paste.ini]
2019-04-30 22:26:29,473 [salt.state       :300 ][INFO    ][17198] File /etc/cinder/api-paste.ini is in the correct state
2019-04-30 22:26:29,473 [salt.state       :1951][INFO    ][17198] Completed state [/etc/cinder/api-paste.ini] at time 22:26:29.473379 duration_in_ms=43.077
2019-04-30 22:26:29,473 [salt.state       :1780][INFO    ][17198] Running state [/etc/default/cinder-volume] at time 22:26:29.473634
2019-04-30 22:26:29,473 [salt.state       :1813][INFO    ][17198] Executing state file.managed for [/etc/default/cinder-volume]
2019-04-30 22:26:29,486 [salt.state       :300 ][INFO    ][17198] File /etc/default/cinder-volume is in the correct state
2019-04-30 22:26:29,486 [salt.state       :1951][INFO    ][17198] Completed state [/etc/default/cinder-volume] at time 22:26:29.486670 duration_in_ms=13.036
2019-04-30 22:26:29,487 [salt.state       :1780][INFO    ][17198] Running state [cinder-volume] at time 22:26:29.487874
2019-04-30 22:26:29,488 [salt.state       :1813][INFO    ][17198] Executing state service.running for [cinder-volume]
2019-04-30 22:26:29,488 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'status', 'cinder-volume.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:29,504 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'is-active', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:29,509 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'is-enabled', 'cinder-volume.service'] in directory '/root'
2019-04-30 22:26:29,517 [salt.state       :300 ][INFO    ][17198] The service cinder-volume is already running
2019-04-30 22:26:29,517 [salt.state       :1951][INFO    ][17198] Completed state [cinder-volume] at time 22:26:29.517530 duration_in_ms=29.655
2019-04-30 22:26:29,517 [salt.state       :1780][INFO    ][17198] Running state [open-iscsi] at time 22:26:29.517869
2019-04-30 22:26:29,518 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [open-iscsi]
2019-04-30 22:26:29,523 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,523 [salt.state       :1951][INFO    ][17198] Completed state [open-iscsi] at time 22:26:29.523853 duration_in_ms=5.985
2019-04-30 22:26:29,524 [salt.state       :1780][INFO    ][17198] Running state [tgt] at time 22:26:29.524104
2019-04-30 22:26:29,524 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [tgt]
2019-04-30 22:26:29,528 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,528 [salt.state       :1951][INFO    ][17198] Completed state [tgt] at time 22:26:29.528861 duration_in_ms=4.757
2019-04-30 22:26:29,529 [salt.state       :1780][INFO    ][17198] Running state [thin-provisioning-tools] at time 22:26:29.529113
2019-04-30 22:26:29,529 [salt.state       :1813][INFO    ][17198] Executing state pkg.installed for [thin-provisioning-tools]
2019-04-30 22:26:29,533 [salt.state       :300 ][INFO    ][17198] All specified packages are already installed
2019-04-30 22:26:29,533 [salt.state       :1951][INFO    ][17198] Completed state [thin-provisioning-tools] at time 22:26:29.533705 duration_in_ms=4.592
2019-04-30 22:26:29,534 [salt.state       :1780][INFO    ][17198] Running state [open-iscsi] at time 22:26:29.534099
2019-04-30 22:26:29,534 [salt.state       :1813][INFO    ][17198] Executing state service.running for [open-iscsi]
2019-04-30 22:26:29,534 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'status', 'open-iscsi.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:29,542 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'is-active', 'open-iscsi.service'] in directory '/root'
2019-04-30 22:26:29,549 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'is-enabled', 'open-iscsi.service'] in directory '/root'
2019-04-30 22:26:29,557 [salt.state       :300 ][INFO    ][17198] The service open-iscsi is already running
2019-04-30 22:26:29,557 [salt.state       :1951][INFO    ][17198] Completed state [open-iscsi] at time 22:26:29.557397 duration_in_ms=23.297
2019-04-30 22:26:29,557 [salt.state       :1780][INFO    ][17198] Running state [tgt] at time 22:26:29.557730
2019-04-30 22:26:29,557 [salt.state       :1813][INFO    ][17198] Executing state service.running for [tgt]
2019-04-30 22:26:29,558 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'status', 'tgt.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:29,566 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'is-active', 'tgt.service'] in directory '/root'
2019-04-30 22:26:29,574 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'is-enabled', 'tgt.service'] in directory '/root'
2019-04-30 22:26:29,581 [salt.state       :300 ][INFO    ][17198] The service tgt is already running
2019-04-30 22:26:29,582 [salt.state       :1951][INFO    ][17198] Completed state [tgt] at time 22:26:29.582078 duration_in_ms=24.347
2019-04-30 22:26:29,582 [salt.state       :1780][INFO    ][17198] Running state [iscsid] at time 22:26:29.582408
2019-04-30 22:26:29,582 [salt.state       :1813][INFO    ][17198] Executing state service.running for [iscsid]
2019-04-30 22:26:29,582 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'status', 'iscsid.service', '-n', '0'] in directory '/root'
2019-04-30 22:26:29,591 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'is-active', 'iscsid.service'] in directory '/root'
2019-04-30 22:26:29,598 [salt.loaded.int.module.cmdmod:395 ][INFO    ][17198] Executing command ['systemctl', 'is-enabled', 'iscsid.service'] in directory '/root'
2019-04-30 22:26:29,605 [salt.state       :300 ][INFO    ][17198] The service iscsid is already running
2019-04-30 22:26:29,605 [salt.state       :1951][INFO    ][17198] Completed state [iscsid] at time 22:26:29.605586 duration_in_ms=23.177
2019-04-30 22:26:29,606 [salt.minion      :1711][INFO    ][17198] Returning information for job: 20190430222623724024
2019-04-30 22:26:42,036 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command pkg.install with jid 20190430222642027084
2019-04-30 22:26:42,046 [salt.minion      :1432][INFO    ][17264] Starting a new job with PID 17264
2019-04-30 22:26:42,889 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][17264] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:43,138 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][17264] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:26:43,156 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][17264] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-neutron'] in directory '/root'
2019-04-30 22:26:56,441 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][17264] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:26:56,465 [salt.minion      :1711][INFO    ][17264] Returning information for job: 20190430222642027084
2019-04-30 22:27:04,921 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command file.patch with jid 20190430222704913147
2019-04-30 22:27:04,932 [salt.minion      :1432][INFO    ][18094] Starting a new job with PID 18094
2019-04-30 22:27:04,942 [salt.loader.192.168.11.2.int.module.cmdmod:395 ][INFO    ][18094] Executing command ['/usr/bin/patch', '--forward', '--reject-file=-', '-i', '/var/tmp/dhcp_agent.patch', '/usr/lib/python2.7/dist-packages/neutron/agent/dhcp/agent.py'] in directory '/root'
2019-04-30 22:27:04,977 [salt.minion      :1711][INFO    ][18094] Returning information for job: 20190430222704913147
2019-04-30 22:29:16,337 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.sls with jid 20190430222916329301
2019-04-30 22:29:16,347 [salt.minion      :1432][INFO    ][18140] Starting a new job with PID 18140
2019-04-30 22:29:19,990 [salt.state       :915 ][INFO    ][18140] Loading fresh modules for state activity
2019-04-30 22:29:20,021 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/gateway.sls'
2019-04-30 22:29:20,055 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/map.jinja'
2019-04-30 22:29:20,111 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/agents/_vpp.sls'
2019-04-30 22:29:20,171 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/_ssl/rabbitmq.sls'
2019-04-30 22:29:21,198 [salt.state       :1780][INFO    ][18140] Running state [neutron-dhcp-agent] at time 22:29:21.198109
2019-04-30 22:29:21,198 [salt.state       :1813][INFO    ][18140] Executing state pkg.installed for [neutron-dhcp-agent]
2019-04-30 22:29:21,198 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:21,481 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['apt-cache', '-q', 'policy', 'neutron-dhcp-agent'] in directory '/root'
2019-04-30 22:29:21,523 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:29:23,061 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:23,076 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-dhcp-agent'] in directory '/root'
2019-04-30 22:29:28,001 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:28,027 [salt.state       :300 ][INFO    ][18140] Made the following changes:
'haproxy' changed from 'absent' to '1.6.3-1ubuntu0.2'
'neutron-dhcp-agent' changed from 'absent' to '2:13.0.3-2~u16.04+mcp88'
'neutron-metadata-agent' changed from 'absent' to '2:13.0.3-2~u16.04+mcp88'
'dnsmasq-utils' changed from 'absent' to '2.80-1~u16.04+mcp1'
'liblua5.3-0' changed from 'absent' to '5.3.1-1ubuntu2.1'

2019-04-30 22:29:28,041 [salt.state       :915 ][INFO    ][18140] Loading fresh modules for state activity
2019-04-30 22:29:28,158 [salt.state       :1951][INFO    ][18140] Completed state [neutron-dhcp-agent] at time 22:29:28.158596 duration_in_ms=6960.487
2019-04-30 22:29:28,161 [salt.state       :1780][INFO    ][18140] Running state [neutron-metadata-agent] at time 22:29:28.161929
2019-04-30 22:29:28,162 [salt.state       :1813][INFO    ][18140] Executing state pkg.installed for [neutron-metadata-agent]
2019-04-30 22:29:28,550 [salt.state       :300 ][INFO    ][18140] All specified packages are already installed
2019-04-30 22:29:28,550 [salt.state       :1951][INFO    ][18140] Completed state [neutron-metadata-agent] at time 22:29:28.550875 duration_in_ms=388.945
2019-04-30 22:29:28,551 [salt.state       :1780][INFO    ][18140] Running state [neutron-openvswitch-agent] at time 22:29:28.551155
2019-04-30 22:29:28,551 [salt.state       :1813][INFO    ][18140] Executing state pkg.installed for [neutron-openvswitch-agent]
2019-04-30 22:29:28,564 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:28,578 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-openvswitch-agent'] in directory '/root'
2019-04-30 22:29:31,358 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430222931347222
2019-04-30 22:29:31,365 [salt.minion      :1432][INFO    ][19858] Starting a new job with PID 19858
2019-04-30 22:29:31,376 [salt.minion      :1711][INFO    ][19858] Returning information for job: 20190430222931347222
2019-04-30 22:29:32,868 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:32,898 [salt.state       :300 ][INFO    ][18140] Made the following changes:
'neutron-openvswitch-agent' changed from 'absent' to '2:13.0.3-2~u16.04+mcp88'
'conntrack' changed from 'absent' to '1:1.4.3-3'

2019-04-30 22:29:32,918 [salt.state       :915 ][INFO    ][18140] Loading fresh modules for state activity
2019-04-30 22:29:32,939 [salt.state       :1951][INFO    ][18140] Completed state [neutron-openvswitch-agent] at time 22:29:32.939042 duration_in_ms=4387.887
2019-04-30 22:29:32,942 [salt.state       :1780][INFO    ][18140] Running state [neutron-l3-agent] at time 22:29:32.942485
2019-04-30 22:29:32,942 [salt.state       :1813][INFO    ][18140] Executing state pkg.installed for [neutron-l3-agent]
2019-04-30 22:29:33,349 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:33,364 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'neutron-l3-agent'] in directory '/root'
2019-04-30 22:29:39,541 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:39,569 [salt.state       :300 ][INFO    ][18140] Made the following changes:
'libsnmp-base' changed from 'absent' to '5.7.3+dfsg-1ubuntu4.2'
'keepalived' changed from 'absent' to '1:1.2.24-1ubuntu0.16.04.1'
'neutron-l3-agent' changed from 'absent' to '2:13.0.3-2~u16.04+mcp88'
'neutron-fwaas-common' changed from 'absent' to '2:13.0.1-2~u16.04+mcp21'
'ipvsadm' changed from 'absent' to '1:1.28-3'
'python-neutron-fwaas' changed from 'absent' to '2:13.0.1-2~u16.04+mcp21'
'libsnmp30' changed from 'absent' to '5.7.3+dfsg-1ubuntu4.2'
'iputils-arping' changed from 'absent' to '3:20121221-5ubuntu2'
'libsensors4' changed from 'absent' to '1:3.4.0-2'
'libnl-route-3-200' changed from 'absent' to '3.2.27-1ubuntu0.16.04.1'
'radvd' changed from 'absent' to '1:2.11-1'

2019-04-30 22:29:39,582 [salt.state       :915 ][INFO    ][18140] Loading fresh modules for state activity
2019-04-30 22:29:39,602 [salt.state       :1951][INFO    ][18140] Completed state [neutron-l3-agent] at time 22:29:39.601966 duration_in_ms=6659.481
2019-04-30 22:29:39,603 [salt.state       :1780][INFO    ][18140] Running state [neutron_gateway_ssl_rabbitmq] at time 22:29:39.603577
2019-04-30 22:29:39,603 [salt.state       :1813][INFO    ][18140] Executing state test.show_notification for [neutron_gateway_ssl_rabbitmq]
2019-04-30 22:29:39,603 [salt.state       :300 ][INFO    ][18140] Running neutron._ssl.rabbitmq
2019-04-30 22:29:39,604 [salt.state       :1951][INFO    ][18140] Completed state [neutron_gateway_ssl_rabbitmq] at time 22:29:39.604063 duration_in_ms=0.485
2019-04-30 22:29:39,986 [salt.state       :1780][INFO    ][18140] Running state [haproxy] at time 22:29:39.986527
2019-04-30 22:29:39,986 [salt.state       :1813][INFO    ][18140] Executing state service.dead for [haproxy]
2019-04-30 22:29:39,987 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'status', 'haproxy.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:40,009 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:40,018 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:40,024 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'systemctl', 'stop', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:40,035 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:40,041 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:40,047 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:40,055 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'systemctl', 'disable', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:40,334 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-enabled', 'haproxy.service'] in directory '/root'
2019-04-30 22:29:40,344 [salt.state       :300 ][INFO    ][18140] {'haproxy': True}
2019-04-30 22:29:40,344 [salt.state       :1951][INFO    ][18140] Completed state [haproxy] at time 22:29:40.344829 duration_in_ms=358.301
2019-04-30 22:29:40,347 [salt.state       :1780][INFO    ][18140] Running state [/etc/neutron/neutron.conf] at time 22:29:40.347147
2019-04-30 22:29:40,347 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/neutron/neutron.conf]
2019-04-30 22:29:40,367 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/neutron-generic.conf'
2019-04-30 22:29:40,468 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_log.conf'
2019-04-30 22:29:40,485 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/messaging/_default.conf'
2019-04-30 22:29:40,510 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/service/_wsgi_default.conf'
2019-04-30 22:29:40,522 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_concurrency.conf'
2019-04-30 22:29:40,534 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/messaging/_rabbit.conf'
2019-04-30 22:29:40,556 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/messaging/_notifications.conf'
2019-04-30 22:29:40,569 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_middleware.conf'
2019-04-30 22:29:40,580 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/service/_ssl.conf'
2019-04-30 22:29:40,589 [salt.state       :300 ][INFO    ][18140] File changed:
--- 
+++ 
@@ -1,5 +1,5 @@
+
 [DEFAULT]
-core_plugin = ml2
 
 #
 # From neutron
@@ -26,12 +26,11 @@
 
 # The type of authentication to use (string value)
 #auth_strategy = keystone
-
 # The core plugin Neutron will use (string value)
-#core_plugin = <None>
+core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
 
 # The service plugins Neutron will use (list value)
-#service_plugins =
+service_plugins = router,metering
 
 # The base MAC address Neutron will use for VIFs. The first 3 octets will
 # remain unchanged. If the 4th octet is not 00, it will also be used. The
@@ -43,7 +42,7 @@
 
 # The maximum number of items returned in a single response, value was
 # 'infinite' or negative integer means no limit (string value)
-#pagination_max_limit = -1
+pagination_max_limit = -1
 
 # Default value of availability zone hints. The availability zone aware
 # schedulers use this when the resources availability_zone_hints is empty.
@@ -69,7 +68,7 @@
 
 # DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite
 # lease times. (integer value)
-#dhcp_lease_duration = 86400
+dhcp_lease_duration = 3600
 
 # Domain to use for building the hostnames (string value)
 #dns_domain = openstacklocal
@@ -83,7 +82,7 @@
 # Allow overlapping IP support in Neutron. Attention: the following parameter
 # MUST be set to False if Neutron is being used in conjunction with Nova
 # security groups. (boolean value)
-#allow_overlapping_ips = false
+allow_overlapping_ips = true
 
 # Hostname to be used by the Neutron server, agents and services running on
 # this machine. All the agents and services running on this machine must use
@@ -96,11 +95,11 @@
 #network_link_prefix = <None>
 
 # Send notification to nova when port status changes (boolean value)
-#notify_nova_on_port_status_changes = true
+notify_nova_on_port_status_changes = true
 
 # Send notification to nova when port data (fixed_ips/floatingip) changes so
 # nova can update its cache. (boolean value)
-#notify_nova_on_port_data_changes = true
+notify_nova_on_port_data_changes = true
 
 # Number of seconds between sending events to nova if there are any events to
 # send. (integer value)
@@ -125,7 +124,7 @@
 # neutron automatically subtracts the overlay protocol overhead from this
 # value. Defaults to 1500, the standard value for Ethernet. (integer value)
 # Deprecated group/name - [ml2]/segment_mtu
-#global_physnet_mtu = 1500
+global_physnet_mtu = 1500
 
 # Number of backlog requests to configure the socket with (integer value)
 #backlog = 4096
@@ -145,11 +144,11 @@
 #api_workers = <None>
 
 # Number of RPC worker processes for service. (integer value)
-#rpc_workers = 1
+rpc_workers = 16
 
 # Number of RPC worker processes dedicated to state reports queue. (integer
 # value)
-#rpc_state_report_workers = 1
+rpc_state_report_workers = 4
 
 # Range of seconds to randomly delay when starting the periodic task scheduler
 # to reduce stampeding. (Disable by setting to 0) (integer value)
@@ -176,10 +175,6 @@
 #
 # From neutron.db
 #
-
-# Seconds to regard the agent is down; should be at least twice
-# report_interval, to be sure the agent is down for good. (integer value)
-#agent_down_time = 75
 
 # Representing the resource type whose load is being reported by the agent.
 # This can be "networks", "subnets" or "ports". When specified (Default is
@@ -222,7 +217,7 @@
 # greater than 1, the scheduler automatically assigns multiple DHCP agents for
 # a given tenant network, providing high availability for DHCP service.
 # (integer value)
-#dhcp_agents_per_network = 1
+dhcp_agents_per_network = 2
 
 # Enable services on an agent with admin_state_up False. If this option is
 # False, when admin_state_up of an agent is turned False, services on it will
@@ -241,28 +236,28 @@
 
 # System-wide flag to determine the type of router that tenants can create.
 # Only admin can override. (boolean value)
-#router_distributed = false
+router_distributed = False
 
 # Determine if setup is configured for DVR. If False, DVR API extension will be
 # disabled. (boolean value)
-#enable_dvr = true
+enable_dvr = False
 
 # Driver to use for scheduling router to a default L3 agent (string value)
-#router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
+router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
 
 # Allow auto scheduling of routers to L3 agent. (boolean value)
 #router_auto_schedule = true
 
 # Automatically reschedule routers from offline L3 agents to online L3 agents.
 # (boolean value)
-#allow_automatic_l3agent_failover = false
+allow_automatic_l3agent_failover = true
 
 # Enable HA mode for virtual routers. (boolean value)
-#l3_ha = false
+l3_ha = false
 
 # Maximum number of L3 agents which a HA router will be scheduled on. If it is
 # set to 0 then the router will be scheduled on every agent. (integer value)
-#max_l3_agents_per_router = 3
+max_l3_agents_per_router = 0
 
 # Subnet used for the l3 HA admin network. (string value)
 #l3_ha_net_cidr = 169.254.192.0/18
@@ -283,7 +278,6 @@
 
 # Maximum number of allowed address pairs (integer value)
 #max_allowed_address_pair = 10
-
 #
 # From oslo.log
 #
@@ -447,6 +441,7 @@
 # exception when timeout expired. (integer value)
 #rpc_poll_timeout = 1
 
+
 # Expiration timeout in seconds of a name service record about existing target
 # ( < 0 means no timeout). (integer value)
 #zmq_target_expire = 300
@@ -574,6 +569,7 @@
 # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
 # (string value)
 #transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -584,8 +580,7 @@
 
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
-#control_exchange = neutron
-
+#control_exchange = keystone
 #
 # From oslo.service.wsgi
 #
@@ -621,7 +616,6 @@
 
 
 [agent]
-root_helper = "sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf"
 
 #
 # From neutron.agent
@@ -631,7 +625,7 @@
 # /etc/neutron/rootwrap.conf' to use the real root filter facility. Change to
 # 'sudo' to skip the filtering and just run the command directly. (string
 # value)
-#root_helper = sudo
+root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
 
 # Use the root helper when listing the namespaces on a system. This may not be
 # required depending on the security configuration. If the root helper is not
@@ -656,7 +650,7 @@
 # Seconds between nodes reporting state to server; should be less than
 # agent_down_time, best if it is half or less than agent_down_time. (floating
 # point value)
-#report_interval = 30
+report_interval = 120
 
 # Log agent heartbeats (boolean value)
 #log_agent_heartbeats = false
@@ -689,509 +683,11 @@
 
 [cors]
 
-#
-# From oslo.middleware.cors
-#
-
-# Indicate whether this resource may be shared with the domain received in the
-# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
-# slash. Example: https://horizon.example.com (list value)
-#allowed_origin = <None>
-
-# Indicate that the actual request can include user credentials (boolean value)
-#allow_credentials = true
-
-# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
-# Headers. (list value)
-#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-Volume-microversion
-
-# Maximum cache age of CORS preflight requests. (integer value)
-#max_age = 3600
-
-# Indicate which methods can be used during the actual request. (list value)
-#allow_methods = GET,PUT,POST,DELETE,PATCH
-
-# Indicate which header field names may be used during the actual request.
-# (list value)
-#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
-
-
-[database]
-connection = sqlite:////var/lib/neutron/neutron.sqlite
-
-#
-# From neutron.db
-#
-
-# Database engine for which script will be generated when using offline
-# migration. (string value)
-#engine =
-
-#
-# From oslo.db
-#
-
-# If True, SQLite uses synchronous mode. (boolean value)
-#sqlite_synchronous = true
-
-# The back end to use for the database. (string value)
-# Deprecated group/name - [DEFAULT]/db_backend
-#backend = sqlalchemy
-
-# The SQLAlchemy connection string to use to connect to the database. (string
-# value)
-# Deprecated group/name - [DEFAULT]/sql_connection
-# Deprecated group/name - [DATABASE]/sql_connection
-# Deprecated group/name - [sql]/connection
-#connection = <None>
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
-#slave_connection = <None>
-
-# The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set
-# by the server configuration, set this to no value. Example: mysql_sql_mode=
-# (string value)
-#mysql_sql_mode = TRADITIONAL
-
-# If True, transparently enables support for handling MySQL Cluster (NDB).
-# (boolean value)
-#mysql_enable_ndb = false
-
-# Connections which have been present in the connection pool longer than this
-# number of seconds will be replaced with a new one the next time they are
-# checked out from the pool. (integer value)
-# Deprecated group/name - [DATABASE]/idle_timeout
-# Deprecated group/name - [database]/idle_timeout
-# Deprecated group/name - [DEFAULT]/sql_idle_timeout
-# Deprecated group/name - [DATABASE]/sql_idle_timeout
-# Deprecated group/name - [sql]/idle_timeout
-#connection_recycle_time = 3600
-
-# DEPRECATED: Minimum number of SQL connections to keep open in a pool.
-# (integer value)
-# Deprecated group/name - [DEFAULT]/sql_min_pool_size
-# Deprecated group/name - [DATABASE]/sql_min_pool_size
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: The option to set the minimum pool size is not supported by
-# sqlalchemy.
-#min_pool_size = 1
-
-# Maximum number of SQL connections to keep open in a pool. Setting a value of
-# 0 indicates no limit. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_pool_size
-# Deprecated group/name - [DATABASE]/sql_max_pool_size
-#max_pool_size = 5
-
-# Maximum number of database connection retries during startup. Set to -1 to
-# specify an infinite retry count. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_retries
-# Deprecated group/name - [DATABASE]/sql_max_retries
-#max_retries = 10
-
-# Interval between retries of opening a SQL connection. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_retry_interval
-# Deprecated group/name - [DATABASE]/reconnect_interval
-#retry_interval = 10
-
-# If set, use this value for max_overflow with SQLAlchemy. (integer value)
-# Deprecated group/name - [DEFAULT]/sql_max_overflow
-# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
-#max_overflow = 50
-
-# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
-# value)
-# Minimum value: 0
-# Maximum value: 100
-# Deprecated group/name - [DEFAULT]/sql_connection_debug
-#connection_debug = 0
-
-# Add Python stack traces to SQL as comment strings. (boolean value)
-# Deprecated group/name - [DEFAULT]/sql_connection_trace
-#connection_trace = false
-
-# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
-# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
-#pool_timeout = <None>
-
-# Enable the experimental use of database reconnect on connection lost.
-# (boolean value)
-#use_db_reconnect = false
-
-# Seconds between retries of a database transaction. (integer value)
-#db_retry_interval = 1
-
-# If True, increases the interval between retries of a database operation up to
-# db_max_retry_interval. (boolean value)
-#db_inc_retry_interval = true
-
-# If db_inc_retry_interval is set, the maximum seconds between retries of a
-# database operation. (integer value)
-#db_max_retry_interval = 10
-
-# Maximum retries in case of connection error or deadlock error before error is
-# raised. Set to -1 to specify an infinite retry count. (integer value)
-#db_max_retries = 20
-
-# Optional URL parameters to append onto the connection URL at connect time;
-# specify as param1=value1&param2=value2&... (string value)
-#connection_parameters =
-
 
 [keystone_authtoken]
 
-#
-# From keystonemiddleware.auth_token
-#
-
-# Complete "public" Identity API endpoint. This endpoint should not be an
-# "admin" endpoint, as it should be accessible by all end users.
-# Unauthenticated clients are redirected to this endpoint to authenticate.
-# Although this endpoint should ideally be unversioned, client support in the
-# wild varies. If you're using a versioned v2 endpoint here, then this should
-# *not* be the same endpoint the service user utilizes for validating tokens,
-# because normal end users may not be able to reach that endpoint. (string
-# value)
-# Deprecated group/name - [keystone_authtoken]/auth_uri
-#www_authenticate_uri = <None>
-
-# DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
-# be an "admin" endpoint, as it should be accessible by all end users.
-# Unauthenticated clients are redirected to this endpoint to authenticate.
-# Although this endpoint should ideally be unversioned, client support in the
-# wild varies. If you're using a versioned v2 endpoint here, then this should
-# *not* be the same endpoint the service user utilizes for validating tokens,
-# because normal end users may not be able to reach that endpoint. This option
-# is deprecated in favor of www_authenticate_uri and will be removed in the S
-# release. (string value)
-# This option is deprecated for removal since Queens.
-# Its value may be silently ignored in the future.
-# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
-# and will be removed in the S  release.
-#auth_uri = <None>
-
-# API version of the admin Identity API endpoint. (string value)
-#auth_version = <None>
-
-# Do not handle authorization requests within the middleware, but delegate the
-# authorization decision to downstream WSGI components. (boolean value)
-#delay_auth_decision = false
-
-# Request timeout value for communicating with Identity API server. (integer
-# value)
-#http_connect_timeout = <None>
-
-# How many times are we trying to reconnect when communicating with Identity
-# API Server. (integer value)
-#http_request_max_retries = 3
-
-# Request environment key where the Swift cache object is stored. When
-# auth_token middleware is deployed with a Swift cache, use this option to have
-# the middleware share a caching backend with swift. Otherwise, use the
-# ``memcached_servers`` option instead. (string value)
-#cache = <None>
-
-# Required if identity server requires client certificate (string value)
-#certfile = <None>
-
-# Required if identity server requires client certificate (string value)
-#keyfile = <None>
-
-# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# Defaults to system CAs. (string value)
-#cafile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# The region in which the identity server can be found. (string value)
-#region_name = <None>
-
-# DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P
-# release. (string value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#signing_dir = <None>
-
-# Optionally specify a list of memcached server(s) to use for caching. If left
-# undefined, tokens will instead be cached in-process. (list value)
-# Deprecated group/name - [keystone_authtoken]/memcache_servers
-#memcached_servers = <None>
-
-# In order to prevent excessive effort spent validating tokens, the middleware
-# caches previously-seen tokens for a configurable duration (in seconds). Set
-# to -1 to disable caching completely. (integer value)
-#token_cache_time = 300
-
-# DEPRECATED: Determines the frequency at which the list of revoked tokens is
-# retrieved from the Identity service (in seconds). A high number of revocation
-# events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in
-# the Ocata release and will be removed in the P release. (integer value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#revocation_cache_time = 10
-
-# (Optional) If defined, indicate whether token data should be authenticated or
-# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
-# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
-# cache. If the value is not one of these options or empty, auth_token will
-# raise an exception on initialization. (string value)
-# Possible values:
-# None - <No description provided>
-# MAC - <No description provided>
-# ENCRYPT - <No description provided>
-#memcache_security_strategy = None
-
-# (Optional, mandatory if memcache_security_strategy is defined) This string is
-# used for key derivation. (string value)
-#memcache_secret_key = <None>
-
-# (Optional) Number of seconds memcached server is considered dead before it is
-# tried again. (integer value)
-#memcache_pool_dead_retry = 300
-
-# (Optional) Maximum total number of open connections to every memcached
-# server. (integer value)
-#memcache_pool_maxsize = 10
-
-# (Optional) Socket timeout in seconds for communicating with a memcached
-# server. (integer value)
-#memcache_pool_socket_timeout = 3
-
-# (Optional) Number of seconds a connection to memcached is held unused in the
-# pool before it is closed. (integer value)
-#memcache_pool_unused_timeout = 60
-
-# (Optional) Number of seconds that an operation will wait to get a memcached
-# client connection from the pool. (integer value)
-#memcache_pool_conn_get_timeout = 10
-
-# (Optional) Use the advanced (eventlet safe) memcached client pool. The
-# advanced pool will only work under python 2.x. (boolean value)
-#memcache_use_advanced_pool = false
-
-# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
-# middleware will not ask for service catalog on token validation and will not
-# set the X-Service-Catalog header. (boolean value)
-#include_service_catalog = true
-
-# Used to control the use and type of token binding. Can be set to: "disabled"
-# to not check token binding. "permissive" (default) to validate binding
-# information if the bind type is of a form known to the server and ignore it
-# if not. "strict" like "permissive" but if the bind type is unknown the token
-# will be rejected. "required" any form of token binding is needed to be
-# allowed. Finally the name of a binding method that must be present in tokens.
-# (string value)
-#enforce_token_bind = permissive
-
-# DEPRECATED: If true, the revocation list will be checked for cached tokens.
-# This requires that PKI tokens are configured on the identity server. (boolean
-# value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#check_revocations_for_cached = false
-
-# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
-# single algorithm or multiple. The algorithms are those supported by Python
-# standard hashlib.new(). The hashes will be tried in the order given, so put
-# the preferred one first for performance. The result of the first hash will be
-# stored in the cache. This will typically be set to multiple values only while
-# migrating from a less secure algorithm to a more secure one. Once all the old
-# tokens are expired this option should be set to a single value for better
-# performance. (list value)
-# This option is deprecated for removal since Ocata.
-# Its value may be silently ignored in the future.
-# Reason: PKI token format is no longer supported.
-#hash_algorithms = md5
-
-# A choice of roles that must be present in a service token. Service tokens are
-# allowed to request that an expired token can be used and so this check should
-# tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present.
-# For backwards compatibility reasons this currently only affects the
-# allow_expired check. (list value)
-#service_token_roles = service
-
-# For backwards compatibility reasons we must let valid service tokens pass
-# that don't pass the service_token_roles check as valid. Setting this true
-# will become the default in a future release and should be enabled if
-# possible. (boolean value)
-#service_token_roles_required = false
-
-# Authentication type to load (string value)
-# Deprecated group/name - [keystone_authtoken]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
-
-
-[matchmaker_redis]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Host to locate redis. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
-
-# DEPRECATED: Use this port to connect to redis host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
-
-# DEPRECATED: Password for Redis server (optional). (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#password =
-
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
-# [host:port, host1:port ... ] (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#sentinel_hosts =
-
-# Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
-
-# Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
-
-# Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
-
-# Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
-
-
-[nova]
-
-#
-# From neutron
-#
-
-# Name of nova region to use. Useful if keystone manages more than one region.
-# (string value)
-#region_name = <None>
-
-# Type of the nova endpoint to use.  This endpoint will be looked up in the
-# keystone catalog and should be one of public, internal or admin. (string
-# value)
-# Possible values:
-# public - <No description provided>
-# admin - <No description provided>
-# internal - <No description provided>
-#endpoint_type = public
-
-#
-# From nova.auth
-#
-
-# Authentication URL (string value)
-#auth_url = <None>
-
-# Authentication type to load (string value)
-# Deprecated group/name - [nova]/auth_plugin
-#auth_type = <None>
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# Collect per-API call timing information. (boolean value)
-#collect_timing = false
-
-# Optional domain ID to use with v3 and v2 parameters. It will be used for both
-# the user and project domain in v3 and ignored in v2 authentication. (string
-# value)
-#default_domain_id = <None>
-
-# Optional domain name to use with v3 API and v2 parameters. It will be used
-# for both the user and project domain in v3 and ignored in v2 authentication.
-# (string value)
-#default_domain_name = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# User's password (string value)
-#password = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Project ID to scope to (string value)
-# Deprecated group/name - [nova]/tenant_id
-#project_id = <None>
-
-# Project name to scope to (string value)
-# Deprecated group/name - [nova]/tenant_name
-#project_name = <None>
-
-# Log requests to multiple loggers. (boolean value)
-#split_loggers = false
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Tenant ID (string value)
-#tenant_id = <None>
-
-# Tenant Name (string value)
-#tenant_name = <None>
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User id (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [nova]/user_name
-#username = <None>
-
 
 [oslo_concurrency]
-
 #
 # From oslo.concurrency
 #
@@ -1199,326 +695,15 @@
 # Enables or disables inter-process locks. (boolean value)
 #disable_process_locking = false
 
-# Directory to use for lock files.  For security, the specified directory
-# should only be writable by the user running the processes that need locking.
-# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
-# a lock path must be set. (string value)
-#lock_path = <None>
-
-
-[oslo_messaging_amqp]
-
-#
-# From oslo.messaging
-#
-
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string
-# value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string
-# value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional)
-# (string value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-
-# robin fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating
-# point value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-# Enable asynchronous consumer commits (boolean value)
-#enable_auto_commit = false
-
-# The maximum number of records returned in a poll call (integer value)
-#max_poll_records = 500
-
-# Protocol used to communicate with brokers (string value)
-# Possible values:
-# PLAINTEXT - <No description provided>
-# SASL_PLAINTEXT - <No description provided>
-# SSL - <No description provided>
-# SASL_SSL - <No description provided>
-#security_protocol = PLAINTEXT
-
-# Mechanism when security protocol is SASL (string value)
-#sasl_mechanism = PLAIN
-
-# CA certificate PEM file used to verify the server certificate (string value)
-#ssl_cafile =
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are
-# messaging, messagingv2, routing, log, test, noop (multi valued)
-# Deprecated group/name - [DEFAULT]/notification_driver
-#driver =
-
-# A URL representing the messaging driver to use for notifications. If not set,
-# we fall back to the same configuration used for RPC. (string value)
-# Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
-
-# AMQP topic used for OpenStack notifications. (list value)
-# Deprecated group/name - [rpc_notifier2]/topics
-# Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
-
+# Directory to use for lock files.  For security, the specified
+# directory should only be writable by the user running the processes
+# that need locking. Defaults to environment variable OSLO_LOCK_PATH.
+# If OSLO_LOCK_PATH is not set in the environment, use the Python
+# tempfile.gettempdir function to find a suitable location. If
+# external locks are used, a lock path must be set. (string value)
+#lock_path = /tmp
+lock_path = /var/lock/neutron
 [oslo_messaging_rabbit]
-
 #
 # From oslo.messaging
 #
@@ -1534,24 +719,6 @@
 # Connect over SSL. (boolean value)
 # Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl
 #ssl = false
-
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
 
 # How long to wait before reconnecting in response to an AMQP consumer cancel
 # notification. (floating point value)
@@ -1666,213 +833,59 @@
 #heartbeat_rate = 2
 
 
-[oslo_messaging_zmq]
-
+
+[oslo_messaging_notifications]
 #
 # From oslo.messaging
 #
 
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
+# The Drivers(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+driver = messagingv2
+
+# A URL representing the messaging driver to use for notifications. If not set,
+# we fall back to the same configuration used for RPC. (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+
+# The maximum number of attempts to re-send a notification message which failed
+# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
+# (integer value)
+#retry = -1
 
 
 [oslo_middleware]
-
-#
-# From oslo.middleware.http_proxy_to_wsgi
-#
+#
+# From oslo.middleware
+#
+
+# The maximum body size for each  request, in bytes. (integer value)
+# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
+# Deprecated group/name - [DEFAULT]/max_request_body_size
+#max_request_body_size = 114688
+
+# DEPRECATED: The HTTP Header that will be used to determine what the original
+# request protocol scheme was, even if it was hidden by a SSL termination
+# proxy. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#secure_proxy_ssl_header = X-Forwarded-Proto
 
 # Whether the application is behind a proxy or not. This determines if the
 # middleware should parse the headers or not. (boolean value)
 #enable_proxy_headers_parsing = false
+enable_proxy_headers_parsing = True
+
 
 
 [oslo_policy]
-
-#
-# From oslo.policy
-#
-
-# This option controls whether or not to enforce scope when evaluating
-# policies. If ``True``, the scope of the token used in the request is compared
-# to the ``scope_types`` of the policy being enforced. If the scopes do not
-# match, an ``InvalidScope`` exception will be raised. If ``False``, a message
-# will be logged informing operators that policies are being invoked with
-# mismatching scope. (boolean value)
-#enforce_scope = false
-
-# The file that defines policies. (string value)
-#policy_file = policy.json
-
-# Default rule. Enforced when a requested rule is not found. (string value)
-#policy_default_rule = default
-
-# Directories where policy configuration files are stored. They can be relative
-# to any directory in the search path defined by the config_dir option, or
-# absolute paths. The file defined by policy_file must exist for these
-# directories to be searched.  Missing or empty directories are ignored. (multi
-# valued)
-#policy_dirs = policy.d
-
-# Content Type to send and receive data for REST based policy check (string
-# value)
-# Possible values:
-# application/x-www-form-urlencoded - <No description provided>
-# application/json - <No description provided>
-#remote_content_type = application/x-www-form-urlencoded
-
-# server identity verification for REST based policy check (boolean value)
-#remote_ssl_verify_server_crt = false
-
-# Absolute path to ca cert file for REST based policy check (string value)
-#remote_ssl_ca_crt_file = <None>
-
-# Absolute path to client cert for REST based policy check (string value)
-#remote_ssl_client_crt_file = <None>
-
-# Absolute path client key file REST based policy check (string value)
-#remote_ssl_client_key_file = <None>
 
 
 [quotas]
@@ -1927,7 +940,6 @@
 
 
 [ssl]
-
 #
 # From oslo.service.sslutils
 #
@@ -1952,3 +964,6 @@
 # Sets the list of available ciphers. value should be a string in the OpenSSL
 # cipher list format. (string value)
 #ciphers = <None>
+
+
+[ovs]

2019-04-30 22:29:40,590 [salt.state       :1951][INFO    ][18140] Completed state [/etc/neutron/neutron.conf] at time 22:29:40.590626 duration_in_ms=243.472
2019-04-30 22:29:40,591 [salt.state       :1780][INFO    ][18140] Running state [/etc/neutron/l3_agent.ini] at time 22:29:40.591117
2019-04-30 22:29:40,591 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/neutron/l3_agent.ini]
2019-04-30 22:29:40,605 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/l3_agent.ini'
2019-04-30 22:29:40,674 [salt.state       :300 ][INFO    ][18140] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -13,7 +14,7 @@
 #ovs_use_veth = false
 
 # The driver used to manage the virtual interface. (string value)
-#interface_driver = <None>
+interface_driver = openvswitch
 
 #
 # From neutron.l3.agent
@@ -37,12 +38,12 @@
 # dvr_snat - <No description provided>
 # legacy - <No description provided>
 # dvr_no_external - <No description provided>
-#agent_mode = legacy
+agent_mode = legacy
 
 # TCP Port used by Neutron metadata namespace proxy. (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#metadata_port = 9697
+metadata_port = 8775
 
 # Indicates that this L3 agent should also handle routers that do not have an
 # external network gateway configured. This option should be True only for a
@@ -162,7 +163,6 @@
 
 # MaxRtrAdvInterval setting for radvd.conf (integer value)
 #max_rtr_adv_interval = 100
-
 #
 # From oslo.log
 #
@@ -277,6 +277,7 @@
 #fatal_deprecations = false
 
 
+
 [agent]
 
 #
@@ -314,8 +315,8 @@
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native

2019-04-30 22:29:40,675 [salt.state       :1951][INFO    ][18140] Completed state [/etc/neutron/l3_agent.ini] at time 22:29:40.675115 duration_in_ms=83.998
2019-04-30 22:29:40,675 [salt.state       :1780][INFO    ][18140] Running state [/etc/neutron/plugins/ml2/openvswitch_agent.ini] at time 22:29:40.675411
2019-04-30 22:29:40,675 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/neutron/plugins/ml2/openvswitch_agent.ini]
2019-04-30 22:29:40,689 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/openvswitch_agent.ini'
2019-04-30 22:29:40,765 [salt.state       :300 ][INFO    ][18140] File changed:
--- 
+++ 
@@ -1,5 +1,5 @@
+
 [DEFAULT]
-
 #
 # From oslo.log
 #
@@ -114,6 +114,7 @@
 #fatal_deprecations = false
 
 
+
 [agent]
 
 #
@@ -126,38 +127,37 @@
 # The number of seconds to wait before respawning the ovsdb monitor after
 # losing communication with it. (integer value)
 #ovsdb_monitor_respawn_interval = 30
-
 # Network types supported by the agent (gre, vxlan and/or geneve). (list value)
-#tunnel_types =
+tunnel_types = vxlan
 
 # The UDP port to use for VXLAN tunnels. (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#vxlan_udp_port = 4789
+vxlan_udp_port = 4789
 
 # MTU size of veth interfaces (integer value)
 #veth_mtu = 9000
 
 # Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve
 # tunnel scalability. (boolean value)
-#l2_population = false
+l2_population = true
 
 # Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2
 # l2population driver. Allows the switch (when supporting an overlay) to
 # respond to an ARP request locally without performing a costly ARP broadcast
 # into the overlay. (boolean value)
-#arp_responder = false
+arp_responder = true
 
 # Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying
 # GRE/VXLAN tunnel. (boolean value)
 #dont_fragment = true
 
 # Make the l2 agent run in DVR mode. (boolean value)
-#enable_distributed_routing = false
+enable_distributed_routing = False
 
 # Reset flow table on start. Setting this to True will cause brief traffic
 # interruption. (boolean value)
-#drop_flows_on_start = false
+drop_flows_on_start = false
 
 # Set or un-set the tunnel header checksum  on outgoing IP packet carrying
 # GRE/VXLAN tunnel. (boolean value)
@@ -169,7 +169,7 @@
 #agent_type = Open vSwitch agent
 
 # Extensions list to use (list value)
-#extensions =
+extensions = 
 
 
 [network_log]
@@ -218,6 +218,7 @@
 # in the ML2 plug-in configuration file on the neutron server node(s). (IP
 # address value)
 #local_ip = <None>
+local_ip = 10.1.0.6
 
 # Comma-separated list of <physical_network>:<bridge> tuples mapping physical
 # network names to the agent's node-specific Open vSwitch bridge names to be
@@ -227,7 +228,8 @@
 # have mappings to appropriate bridges on each agent. Note: If you remove a
 # bridge from this mapping, make sure to disconnect it from the integration
 # bridge as it won't be managed by the agent anymore. (list value)
-#bridge_mappings =
+
+bridge_mappings = physnet1:br-floating
 
 # Use veths instead of patch ports to interconnect the integration bridge to
 # physical networks. Support kernel without Open vSwitch patch port support so
@@ -265,16 +267,16 @@
 
 # Timeout in seconds to wait for the local switch connecting the controller.
 # Used only for 'native' driver. (integer value)
-#of_connect_timeout = 300
+#of_connect_timeout = 30
 
 # Timeout in seconds to wait for a single OpenFlow request. Used only for
 # 'native' driver. (integer value)
-#of_request_timeout = 300
+#of_request_timeout = 10
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native
@@ -308,12 +310,12 @@
 #
 
 # Driver for security groups firewall in the L2 agent (string value)
-#firewall_driver = <None>
+firewall_driver = openvswitch
 
 # Controls whether the neutron security group API is enabled in the server. It
 # should be false when using no security groups or using the nova security
 # group API. (boolean value)
-#enable_security_group = true
+enable_security_group = True
 
 # Use ipset to speed-up the iptables based security groups. Enabling ipset
 # support requires that ipset is installed on L2 agent node. (boolean value)

2019-04-30 22:29:40,765 [salt.state       :1951][INFO    ][18140] Completed state [/etc/neutron/plugins/ml2/openvswitch_agent.ini] at time 22:29:40.765773 duration_in_ms=90.361
2019-04-30 22:29:40,766 [salt.state       :1780][INFO    ][18140] Running state [/etc/neutron/dhcp_agent.ini] at time 22:29:40.766010
2019-04-30 22:29:40,766 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/neutron/dhcp_agent.ini]
2019-04-30 22:29:40,779 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/dhcp_agent.ini'
2019-04-30 22:29:40,850 [salt.state       :300 ][INFO    ][18140] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -13,7 +14,7 @@
 #ovs_use_veth = false
 
 # The driver used to manage the virtual interface. (string value)
-#interface_driver = <None>
+interface_driver = openvswitch
 
 #
 # From neutron.dhcp.agent
@@ -22,7 +23,7 @@
 # The DHCP agent will resync its state with Neutron to recover from any
 # transient notification or RPC errors. The interval is number of seconds
 # between attempts. (integer value)
-#resync_interval = 5
+resync_interval = 30
 
 # The driver used to manage the DHCP server. (string value)
 #dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
@@ -34,7 +35,7 @@
 # instance must be configured to request host routes via DHCP (Option 121).
 # This option doesn't have any effect when force_metadata is set to True.
 # (boolean value)
-#enable_isolated_metadata = false
+enable_isolated_metadata = true
 
 # In some cases the Neutron router is not present to provide the metadata IP
 # but the DHCP server can be used to provide this info. Setting this value will
@@ -49,7 +50,7 @@
 # DHCP Option 121 will not be injected in VMs, as they will be able to reach
 # 169.254.169.254 through a router. This option requires
 # enable_isolated_metadata = True. (boolean value)
-#enable_metadata_network = false
+enable_metadata_network = false
 
 # Number of threads to use during sync process. Should not exceed connection
 # pool size configured on server. (integer value)
@@ -90,7 +91,6 @@
 # DHCP rebinding time T2 (in seconds). If set to 0, it will default to 7/8 of
 # the lease time. (integer value)
 #dhcp_rebinding_time = 0
-
 #
 # From oslo.log
 #
@@ -205,6 +205,7 @@
 #fatal_deprecations = false
 
 
+
 [agent]
 
 #
@@ -235,8 +236,8 @@
 
 # DEPRECATED: The interface for interacting with the OVSDB (string value)
 # Possible values:
+# native - <No description provided>
 # vsctl - <No description provided>
-# native - <No description provided>
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #ovsdb_interface = native

2019-04-30 22:29:40,850 [salt.state       :1951][INFO    ][18140] Completed state [/etc/neutron/dhcp_agent.ini] at time 22:29:40.850219 duration_in_ms=84.208
2019-04-30 22:29:40,850 [salt.state       :1780][INFO    ][18140] Running state [/etc/neutron/metadata_agent.ini] at time 22:29:40.850453
2019-04-30 22:29:40,850 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/neutron/metadata_agent.ini]
2019-04-30 22:29:40,864 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/files/rocky/metadata_agent.ini'
2019-04-30 22:29:40,926 [salt.state       :300 ][INFO    ][18140] File changed:
--- 
+++ 
@@ -1,3 +1,4 @@
+
 [DEFAULT]
 
 #
@@ -19,7 +20,7 @@
 #auth_ca_cert = <None>
 
 # IP address or DNS name of Nova metadata server. (host address value)
-#nova_metadata_host = 127.0.0.1
+nova_metadata_host = 10.167.4.35
 
 # TCP Port used by Nova metadata server. (port value)
 # Minimum value: 0
@@ -31,13 +32,13 @@
 # but it must match here and in the configuration used by the Nova Metadata
 # Server. NOTE: Nova uses the same config key, but in [neutron] section.
 # (string value)
-#metadata_proxy_shared_secret =
+metadata_proxy_shared_secret = opnfv_secret
 
 # Protocol to access nova metadata, http or https (string value)
 # Possible values:
 # http - <No description provided>
 # https - <No description provided>
-#nova_metadata_protocol = http
+nova_metadata_protocol = http
 
 # Allow to perform insecure SSL (https) requests to nova metadata (boolean
 # value)
@@ -69,7 +70,6 @@
 # Number of backlog requests to configure the metadata server socket with
 # (integer value)
 #metadata_backlog = 4096
-
 #
 # From oslo.log
 #
@@ -184,6 +184,7 @@
 #fatal_deprecations = false
 
 
+
 [agent]
 
 #
@@ -200,82 +201,3 @@
 
 
 [cache]
-
-#
-# From oslo.cache
-#
-
-# Prefix for building the configuration dictionary for the cache region. This
-# should not need to be changed unless there is another dogpile.cache region
-# with the same configuration name. (string value)
-#config_prefix = cache.oslo
-
-# Default TTL, in seconds, for any cached item in the dogpile.cache region.
-# This applies to any cached method that doesn't have an explicit cache
-# expiration time defined for it. (integer value)
-#expiration_time = 600
-
-# Cache backend module. For eventlet-based or environments with hundreds of
-# threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is
-# recommended. For environments with less than 100 threaded servers, Memcached
-# (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test
-# environments with a single instance of the server can use the
-# dogpile.cache.memory backend. (string value)
-# Possible values:
-# oslo_cache.memcache_pool - <No description provided>
-# oslo_cache.dict - <No description provided>
-# oslo_cache.mongo - <No description provided>
-# oslo_cache.etcd3gw - <No description provided>
-# dogpile.cache.memcached - <No description provided>
-# dogpile.cache.pylibmc - <No description provided>
-# dogpile.cache.bmemcached - <No description provided>
-# dogpile.cache.dbm - <No description provided>
-# dogpile.cache.redis - <No description provided>
-# dogpile.cache.memory - <No description provided>
-# dogpile.cache.memory_pickle - <No description provided>
-# dogpile.cache.null - <No description provided>
-#backend = dogpile.cache.null
-
-# Arguments supplied to the backend module. Specify this option once per
-# argument to be passed to the dogpile.cache backend. Example format:
-# "<argname>:<value>". (multi valued)
-#backend_argument =
-
-# Proxy classes to import that will affect the way the dogpile.cache backend
-# functions. See the dogpile.cache documentation on changing-backend-behavior.
-# (list value)
-#proxies =
-
-# Global toggle for caching. (boolean value)
-#enabled = false
-
-# Extra debugging from the cache backend (cache keys, get/set/delete/etc
-# calls). This is only really useful if you need to see the specific cache-
-# backend get/set/delete calls with the keys/values.  Typically this should be
-# left set to false. (boolean value)
-#debug_cache_backend = false
-
-# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
-# oslo_cache.memcache_pool backends only). (list value)
-#memcache_servers = localhost:11211
-
-# Number of seconds memcached server is considered dead before it is tried
-# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
-# (integer value)
-#memcache_dead_retry = 300
-
-# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
-# oslo_cache.memcache_pool backends only). (floating point value)
-#memcache_socket_timeout = 3.0
-
-# Max total number of open connections to every memcached server.
-# (oslo_cache.memcache_pool backend only). (integer value)
-#memcache_pool_maxsize = 10
-
-# Number of seconds a connection to memcached is held unused in the pool before
-# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
-#memcache_pool_unused_timeout = 60
-
-# Number of seconds that an operation will wait to get a memcache client
-# connection. (integer value)
-#memcache_pool_connection_get_timeout = 10

2019-04-30 22:29:40,926 [salt.state       :1951][INFO    ][18140] Completed state [/etc/neutron/metadata_agent.ini] at time 22:29:40.926734 duration_in_ms=76.281
2019-04-30 22:29:40,927 [salt.state       :1780][INFO    ][18140] Running state [/etc/default/neutron-metadata-agent] at time 22:29:40.927012
2019-04-30 22:29:40,927 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/default/neutron-metadata-agent]
2019-04-30 22:29:40,940 [salt.fileclient  :1219][INFO    ][18140] Fetching file from saltenv 'base', ** done ** 'neutron/files/default'
2019-04-30 22:29:40,944 [salt.state       :300 ][INFO    ][18140] File changed:
New file
2019-04-30 22:29:40,944 [salt.state       :1951][INFO    ][18140] Completed state [/etc/default/neutron-metadata-agent] at time 22:29:40.944465 duration_in_ms=17.453
2019-04-30 22:29:40,944 [salt.state       :1780][INFO    ][18140] Running state [/etc/default/neutron-dhcp-agent] at time 22:29:40.944756
2019-04-30 22:29:40,944 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/default/neutron-dhcp-agent]
2019-04-30 22:29:40,956 [salt.state       :300 ][INFO    ][18140] File changed:
New file
2019-04-30 22:29:40,957 [salt.state       :1951][INFO    ][18140] Completed state [/etc/default/neutron-dhcp-agent] at time 22:29:40.956994 duration_in_ms=12.238
2019-04-30 22:29:40,957 [salt.state       :1780][INFO    ][18140] Running state [/etc/default/neutron-openvswitch-agent] at time 22:29:40.957302
2019-04-30 22:29:40,957 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/default/neutron-openvswitch-agent]
2019-04-30 22:29:40,969 [salt.state       :300 ][INFO    ][18140] File changed:
New file
2019-04-30 22:29:40,969 [salt.state       :1951][INFO    ][18140] Completed state [/etc/default/neutron-openvswitch-agent] at time 22:29:40.969862 duration_in_ms=12.56
2019-04-30 22:29:40,970 [salt.state       :1780][INFO    ][18140] Running state [/etc/default/neutron-l3-agent] at time 22:29:40.970145
2019-04-30 22:29:40,970 [salt.state       :1813][INFO    ][18140] Executing state file.managed for [/etc/default/neutron-l3-agent]
2019-04-30 22:29:40,982 [salt.state       :300 ][INFO    ][18140] File changed:
New file
2019-04-30 22:29:40,982 [salt.state       :1951][INFO    ][18140] Completed state [/etc/default/neutron-l3-agent] at time 22:29:40.982638 duration_in_ms=12.492
2019-04-30 22:29:40,984 [salt.state       :1780][INFO    ][18140] Running state [neutron-metadata-agent] at time 22:29:40.984490
2019-04-30 22:29:40,984 [salt.state       :1813][INFO    ][18140] Executing state service.running for [neutron-metadata-agent]
2019-04-30 22:29:40,985 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'status', 'neutron-metadata-agent.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:40,994 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'neutron-metadata-agent.service'] in directory '/root'
2019-04-30 22:29:41,000 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-enabled', 'neutron-metadata-agent.service'] in directory '/root'
2019-04-30 22:29:41,005 [salt.state       :300 ][INFO    ][18140] The service neutron-metadata-agent is already running
2019-04-30 22:29:41,006 [salt.state       :1951][INFO    ][18140] Completed state [neutron-metadata-agent] at time 22:29:41.006123 duration_in_ms=21.633
2019-04-30 22:29:41,006 [salt.state       :1780][INFO    ][18140] Running state [neutron-metadata-agent] at time 22:29:41.006279
2019-04-30 22:29:41,006 [salt.state       :1813][INFO    ][18140] Executing state service.mod_watch for [neutron-metadata-agent]
2019-04-30 22:29:41,006 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'neutron-metadata-agent.service'] in directory '/root'
2019-04-30 22:29:41,012 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-metadata-agent.service'] in directory '/root'
2019-04-30 22:29:42,034 [salt.state       :300 ][INFO    ][18140] {'neutron-metadata-agent': True}
2019-04-30 22:29:42,035 [salt.state       :1951][INFO    ][18140] Completed state [neutron-metadata-agent] at time 22:29:42.035024 duration_in_ms=1028.744
2019-04-30 22:29:42,036 [salt.state       :1780][INFO    ][18140] Running state [neutron-dhcp-agent] at time 22:29:42.036273
2019-04-30 22:29:42,036 [salt.state       :1813][INFO    ][18140] Executing state service.running for [neutron-dhcp-agent]
2019-04-30 22:29:42,037 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'status', 'neutron-dhcp-agent.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:42,046 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'neutron-dhcp-agent.service'] in directory '/root'
2019-04-30 22:29:42,055 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-enabled', 'neutron-dhcp-agent.service'] in directory '/root'
2019-04-30 22:29:42,063 [salt.state       :300 ][INFO    ][18140] The service neutron-dhcp-agent is already running
2019-04-30 22:29:42,064 [salt.state       :1951][INFO    ][18140] Completed state [neutron-dhcp-agent] at time 22:29:42.064211 duration_in_ms=27.936
2019-04-30 22:29:42,064 [salt.state       :1780][INFO    ][18140] Running state [neutron-dhcp-agent] at time 22:29:42.064443
2019-04-30 22:29:42,064 [salt.state       :1813][INFO    ][18140] Executing state service.mod_watch for [neutron-dhcp-agent]
2019-04-30 22:29:42,065 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'neutron-dhcp-agent.service'] in directory '/root'
2019-04-30 22:29:42,074 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-dhcp-agent.service'] in directory '/root'
2019-04-30 22:29:42,181 [salt.state       :300 ][INFO    ][18140] {'neutron-dhcp-agent': True}
2019-04-30 22:29:42,182 [salt.state       :1951][INFO    ][18140] Completed state [neutron-dhcp-agent] at time 22:29:42.182193 duration_in_ms=117.749
2019-04-30 22:29:42,183 [salt.state       :1780][INFO    ][18140] Running state [neutron-openvswitch-agent] at time 22:29:42.183820
2019-04-30 22:29:42,184 [salt.state       :1813][INFO    ][18140] Executing state service.running for [neutron-openvswitch-agent]
2019-04-30 22:29:42,184 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'status', 'neutron-openvswitch-agent.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:42,197 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'neutron-openvswitch-agent.service'] in directory '/root'
2019-04-30 22:29:42,207 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-enabled', 'neutron-openvswitch-agent.service'] in directory '/root'
2019-04-30 22:29:42,217 [salt.state       :300 ][INFO    ][18140] The service neutron-openvswitch-agent is already running
2019-04-30 22:29:42,217 [salt.state       :1951][INFO    ][18140] Completed state [neutron-openvswitch-agent] at time 22:29:42.217666 duration_in_ms=33.844
2019-04-30 22:29:42,217 [salt.state       :1780][INFO    ][18140] Running state [neutron-openvswitch-agent] at time 22:29:42.217909
2019-04-30 22:29:42,218 [salt.state       :1813][INFO    ][18140] Executing state service.mod_watch for [neutron-openvswitch-agent]
2019-04-30 22:29:42,218 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'neutron-openvswitch-agent.service'] in directory '/root'
2019-04-30 22:29:42,229 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-openvswitch-agent.service'] in directory '/root'
2019-04-30 22:29:42,253 [salt.state       :300 ][INFO    ][18140] {'neutron-openvswitch-agent': True}
2019-04-30 22:29:42,253 [salt.state       :1951][INFO    ][18140] Completed state [neutron-openvswitch-agent] at time 22:29:42.253602 duration_in_ms=35.691
2019-04-30 22:29:42,254 [salt.state       :1780][INFO    ][18140] Running state [neutron-l3-agent] at time 22:29:42.254538
2019-04-30 22:29:42,254 [salt.state       :1813][INFO    ][18140] Executing state service.running for [neutron-l3-agent]
2019-04-30 22:29:42,255 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'status', 'neutron-l3-agent.service', '-n', '0'] in directory '/root'
2019-04-30 22:29:42,265 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'neutron-l3-agent.service'] in directory '/root'
2019-04-30 22:29:42,276 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-enabled', 'neutron-l3-agent.service'] in directory '/root'
2019-04-30 22:29:42,286 [salt.state       :300 ][INFO    ][18140] The service neutron-l3-agent is already running
2019-04-30 22:29:42,286 [salt.state       :1951][INFO    ][18140] Completed state [neutron-l3-agent] at time 22:29:42.286732 duration_in_ms=32.192
2019-04-30 22:29:42,286 [salt.state       :1780][INFO    ][18140] Running state [neutron-l3-agent] at time 22:29:42.286951
2019-04-30 22:29:42,287 [salt.state       :1813][INFO    ][18140] Executing state service.mod_watch for [neutron-l3-agent]
2019-04-30 22:29:42,287 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemctl', 'is-active', 'neutron-l3-agent.service'] in directory '/root'
2019-04-30 22:29:42,298 [salt.loaded.int.module.cmdmod:395 ][INFO    ][18140] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'neutron-l3-agent.service'] in directory '/root'
2019-04-30 22:29:42,790 [salt.state       :300 ][INFO    ][18140] {'neutron-l3-agent': True}
2019-04-30 22:29:42,790 [salt.state       :1951][INFO    ][18140] Completed state [neutron-l3-agent] at time 22:29:42.790274 duration_in_ms=503.322
2019-04-30 22:29:42,792 [salt.minion      :1711][INFO    ][18140] Returning information for job: 20190430222916329301
2019-04-30 22:29:49,257 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command test.ping with jid 20190430222949245471
2019-04-30 22:29:49,265 [salt.minion      :1432][INFO    ][22008] Starting a new job with PID 22008
2019-04-30 22:29:49,275 [salt.minion      :1711][INFO    ][22008] Returning information for job: 20190430222949245471
2019-04-30 22:29:49,439 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command match.pillar with jid 20190430222949429505
2019-04-30 22:29:49,449 [salt.minion      :1432][INFO    ][22013] Starting a new job with PID 22013
2019-04-30 22:29:49,453 [salt.minion      :1711][INFO    ][22013] Returning information for job: 20190430222949429505
2019-04-30 22:29:50,091 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.sls with jid 20190430222950081629
2019-04-30 22:29:50,099 [salt.minion      :1432][INFO    ][22018] Starting a new job with PID 22018
2019-04-30 22:29:53,758 [salt.state       :915 ][INFO    ][22018] Loading fresh modules for state activity
2019-04-30 22:29:53,794 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'nova/init.sls'
2019-04-30 22:29:53,817 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'nova/compute.sls'
2019-04-30 22:29:54,022 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'nova/_ssl/rabbitmq.sls'
2019-04-30 22:29:54,164 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'armband/init.sls'
2019-04-30 22:29:54,175 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:54,450 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'armband/qemu_efi.sls'
2019-04-30 22:29:54,467 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'armband/vgabios.sls'
2019-04-30 22:29:54,478 [salt.state       :1780][INFO    ][22018] Running state [libvirtd] at time 22:29:54.478612
2019-04-30 22:29:54,478 [salt.state       :1813][INFO    ][22018] Executing state group.present for [libvirtd]
2019-04-30 22:29:54,479 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['groupadd', '-r', 'libvirtd'] in directory '/root'
2019-04-30 22:29:54,542 [salt.state       :300 ][INFO    ][22018] {'passwd': 'x', 'gid': 999, 'name': 'libvirtd', 'members': []}
2019-04-30 22:29:54,542 [salt.state       :1951][INFO    ][22018] Completed state [libvirtd] at time 22:29:54.542874 duration_in_ms=64.261
2019-04-30 22:29:54,543 [salt.state       :1780][INFO    ][22018] Running state [nova] at time 22:29:54.543098
2019-04-30 22:29:54,543 [salt.state       :1813][INFO    ][22018] Executing state group.present for [nova]
2019-04-30 22:29:54,543 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['groupadd', '-g 303', '-r', 'nova'] in directory '/root'
2019-04-30 22:29:54,587 [salt.state       :300 ][INFO    ][22018] {'passwd': 'x', 'gid': 303, 'name': 'nova', 'members': []}
2019-04-30 22:29:54,587 [salt.state       :1951][INFO    ][22018] Completed state [nova] at time 22:29:54.587569 duration_in_ms=44.471
2019-04-30 22:29:54,587 [salt.state       :1780][INFO    ][22018] Running state [nova] at time 22:29:54.587958
2019-04-30 22:29:54,588 [salt.state       :1813][INFO    ][22018] Executing state user.present for [nova]
2019-04-30 22:29:54,589 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['useradd', '-s', '/bin/bash', '-u', '303', '-g', '303', '-m', '-d', '/var/lib/nova', '-r', 'nova'] in directory '/root'
2019-04-30 22:29:54,635 [salt.state       :300 ][INFO    ][22018] {'shell': '/bin/bash', 'workphone': '', 'uid': 303, 'passwd': 'x', 'roomnumber': '', 'groups': ['nova'], 'home': '/var/lib/nova', 'name': 'nova', 'gid': 303, 'fullname': '', 'homephone': ''}
2019-04-30 22:29:54,635 [salt.state       :1951][INFO    ][22018] Completed state [nova] at time 22:29:54.635676 duration_in_ms=47.717
2019-04-30 22:29:54,636 [salt.state       :1780][INFO    ][22018] Running state [nova_compute_ssl_rabbitmq] at time 22:29:54.636054
2019-04-30 22:29:54,636 [salt.state       :1813][INFO    ][22018] Executing state test.show_notification for [nova_compute_ssl_rabbitmq]
2019-04-30 22:29:54,636 [salt.state       :300 ][INFO    ][22018] Running nova._ssl.rabbitmq
2019-04-30 22:29:54,636 [salt.state       :1951][INFO    ][22018] Completed state [nova_compute_ssl_rabbitmq] at time 22:29:54.636560 duration_in_ms=0.517
2019-04-30 22:29:55,590 [salt.state       :1780][INFO    ][22018] Running state [nova-common] at time 22:29:55.590542
2019-04-30 22:29:55,590 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [nova-common]
2019-04-30 22:29:55,606 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['apt-cache', '-q', 'policy', 'nova-common'] in directory '/root'
2019-04-30 22:29:55,650 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:29:57,782 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:57,797 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-common'] in directory '/root'
2019-04-30 22:29:59,286 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:29:59,314 [salt.state       :300 ][INFO    ][22018] Made the following changes:
'nova-common' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'

2019-04-30 22:29:59,328 [salt.state       :915 ][INFO    ][22018] Loading fresh modules for state activity
2019-04-30 22:29:59,448 [salt.state       :1951][INFO    ][22018] Completed state [nova-common] at time 22:29:59.448161 duration_in_ms=3857.62
2019-04-30 22:29:59,452 [salt.state       :1780][INFO    ][22018] Running state [nova-compute-kvm] at time 22:29:59.452014
2019-04-30 22:29:59,452 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [nova-compute-kvm]
2019-04-30 22:29:59,911 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:29:59,925 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nova-compute-kvm'] in directory '/root'
2019-04-30 22:30:05,189 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430223005180009
2019-04-30 22:30:05,197 [salt.minion      :1432][INFO    ][22858] Starting a new job with PID 22858
2019-04-30 22:30:05,207 [salt.minion      :1711][INFO    ][22858] Returning information for job: 20190430223005180009
2019-04-30 22:30:28,497 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:30:28,523 [salt.state       :300 ][INFO    ][22018] Made the following changes:
'libbluetooth3' changed from 'absent' to '5.37-0ubuntu5.1'
'python-pypowervm' changed from 'absent' to '1.1.16-1~u16.04+mcp'
'zvmcloudconnector-common' changed from 'absent' to '1.2.3-0ubuntu3~u16.04+mcp'
'libavahi-common3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.3'
'qemu-keymaps' changed from 'absent' to '1'
'xmlsec1' changed from 'absent' to '1.2.20-2ubuntu4'
'python-microversion-parse' changed from 'absent' to '0.2.1-0.1~u16.04+mcp'
'libyajl2' changed from 'absent' to '2.1.0-2'
'python-click' changed from 'absent' to '6.2-2ubuntu1'
'nova-compute-hypervisor' changed from 'absent' to '1'
'python2.7-alabaster' changed from 'absent' to '1'
'python-ldap' changed from 'absent' to '3.0.0-1~u16.04+mcp1'
'python-sphinx' changed from 'absent' to '1.6.4-2~u16.04+mcp3'
'libasyncns0' changed from 'absent' to '0.8-5build1'
'libogg0' changed from 'absent' to '1.3.2-1'
'python-scrypt' changed from 'absent' to '0.8.0-1~u16.04+mcp2'
'sphinx-common' changed from 'absent' to '1.6.4-2~u16.04+mcp3'
'libpciaccess0' changed from 'absent' to '0.13.4-1'
'libasound2-data' changed from 'absent' to '1.1.0-0ubuntu1'
'libfdt1' changed from 'absent' to '1.4.2-1.2~u16.04+mcp2'
'python-nova' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'
'python-typing' changed from 'absent' to '3.6.2-1~u16.04+mcp2'
'libpixman-1-0' changed from 'absent' to '0.33.6-1'
'python-werkzeug' changed from 'absent' to '0.14.1+dfsg1-1~u16.04+mcp'
'augeas-lenses' changed from 'absent' to '1.4.0-0ubuntu1.1'
'libvirt-daemon-system' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'qemu-system-x86' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'python-aniso8601' changed from 'absent' to '0.83-1'
'libvirt-clients' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'ipxe-qemu' changed from 'absent' to '1.0.0+git-20180124.fbe8c52d-0.2.1~u16.04+mcp1'
'libsndfile1' changed from 'absent' to '1.0.25-10ubuntu0.16.04.1'
'qemu-kvm' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'libxml2-utils' changed from 'absent' to '2.9.3+dfsg1-1ubuntu0.6'
'python-bcrypt' changed from 'absent' to '3.1.3-1~u16.04+mcp3'
'mkisofs' changed from 'absent' to '1'
'libvorbis0a' changed from 'absent' to '1.3.5-3ubuntu0.2'
'libspice-server1' changed from 'absent' to '0.12.6-4ubuntu0.4'
'keystone-common' changed from 'absent' to '2:14.1.0-1~u16.04+mcp22'
'genisoimage' changed from 'absent' to '9:1.1.11-3ubuntu1'
'libcaca0' changed from 'absent' to '0.99.beta19-2ubuntu0.16.04.1'
'python-keystone' changed from 'absent' to '2:14.1.0-1~u16.04+mcp22'
'libvirt0' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'qemu-kvm-spice' changed from 'absent' to '1'
'qemu-system-common' changed from 'absent' to '1:2.11+dfsg-1.7.12~u16.04+mcp'
'qemu-system-i386' changed from 'absent' to '1'
'python-flask' changed from 'absent' to '1.0.2-1~u16.04+mcp'
'qemu-system-x86-64' changed from 'absent' to '1'
'libvirt-daemon-driver-storage-rbd' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'python-libvirt' changed from 'absent' to '3.5.0-1.1~u16.04+mcp3'
'libusbredirparser1' changed from 'absent' to '0.7.1-1'
'python-itsdangerous' changed from 'absent' to '0.24+dfsg1-1'
'python-alabaster' changed from 'absent' to '0.7.7-1'
'python-imagesize' changed from 'absent' to '0.7.1-1.1~u16.04+mcp2'
'python-os-vif' changed from 'absent' to '1.9.1-1.0~u16.04+mcp6'
'libopus0' changed from 'absent' to '1.1.2-1ubuntu1'
'seabios' changed from 'absent' to '1.10.2-1.1~u16.04+mcp2'
'libavahi-client3' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.3'
'python2.7-ldap' changed from 'absent' to '1'
'libcacard0' changed from 'absent' to '1:2.5.0-2'
'libasound2' changed from 'absent' to '1.1.0-0ubuntu1'
'libxen-4.6' changed from 'absent' to '4.6.5-0ubuntu1.4'
'libxenstore3.0' changed from 'absent' to '4.6.5-0ubuntu1.4'
'python-zvmcloudconnector' changed from 'absent' to '1.2.3-0ubuntu3~u16.04+mcp'
'msr-tools' changed from 'absent' to '1.3-2'
'python-repoze.who' changed from 'absent' to '2.2-3'
'libsdl1.2debian' changed from 'absent' to '1.2.15+dfsg1-3'
'libvirt-daemon' changed from 'absent' to '4.0.0-1.8.5~u16.04+mcp1'
'python-colorama' changed from 'absent' to '0.3.7-1'
'python-responses' changed from 'absent' to '0.3.0-1'
'libvorbisenc2' changed from 'absent' to '1.3.5-3ubuntu0.2'
'kpartx' changed from 'absent' to '0.5.0+git1.656f8865-5ubuntu2.5'
'nova-compute-libvirt' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'
'python-pyldap' changed from 'absent' to '1'
'nova-compute-kvm' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'
'python2.7-cinderclient' changed from 'absent' to '1'
'python-flask-restful' changed from 'absent' to '0.3.5-1~u16.04+mcp0'
'python-os-traits' changed from 'absent' to '0.5.0-1.0~u16.04+mcp5'
'ebtables' changed from 'absent' to '2.0.10.4-3.4ubuntu2.16.04.2'
'python-passlib' changed from 'absent' to '1.7.1-2.1~u16.04+mcp2'
'ipxe-qemu-256k-compat-efi-roms' changed from 'absent' to '1.0.0+git-20150424.a25a16d-0.2~u16.04+mcp1'
'kvm' changed from 'absent' to '1'
'python-pyasn1-modules' changed from 'absent' to '0.0.7-0.1'
'libaugeas0' changed from 'absent' to '1.4.0-0ubuntu1.1'
'python2.7-nova' changed from 'absent' to '1'
'python2.7-keystone' changed from 'absent' to '1'
'libbrlapi0.6' changed from 'absent' to '5.3.1-2ubuntu2.1'
'libpulse0' changed from 'absent' to '1:8.0-0ubuntu3.10'
'nova-compute' changed from 'absent' to '2:18.2.0-2~u16.04+mcp110'
'libnetcf1' changed from 'absent' to '1:0.2.8-1ubuntu1'
'python-pysaml2' changed from 'absent' to '4.5.0-1~u16.04+mcp'
'libavahi-common-data' changed from 'absent' to '0.6.32~rc+dfsg-1ubuntu2.3'
'cpu-checker' changed from 'absent' to '0.7-0ubuntu7'
'libflac8' changed from 'absent' to '1.3.1-4'
'python-cinderclient' changed from 'absent' to '1:4.0.1-1~u16.04+mcp9'
'python-future' changed from 'absent' to '0.15.2-1'

2019-04-30 22:30:28,535 [salt.state       :915 ][INFO    ][22018] Loading fresh modules for state activity
2019-04-30 22:30:28,556 [salt.state       :1951][INFO    ][22018] Completed state [nova-compute-kvm] at time 22:30:28.556065 duration_in_ms=29104.05
2019-04-30 22:30:28,559 [salt.state       :1780][INFO    ][22018] Running state [python-novaclient] at time 22:30:28.559402
2019-04-30 22:30:28,559 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [python-novaclient]
2019-04-30 22:30:29,107 [salt.state       :300 ][INFO    ][22018] All specified packages are already installed
2019-04-30 22:30:29,107 [salt.state       :1951][INFO    ][22018] Completed state [python-novaclient] at time 22:30:29.107273 duration_in_ms=547.871
2019-04-30 22:30:29,107 [salt.state       :1780][INFO    ][22018] Running state [pm-utils] at time 22:30:29.107572
2019-04-30 22:30:29,107 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [pm-utils]
2019-04-30 22:30:29,121 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:30:29,136 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'pm-utils'] in directory '/root'
2019-04-30 22:30:31,289 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:30:31,317 [salt.state       :300 ][INFO    ][22018] Made the following changes:
'pm-utils' changed from 'absent' to '1.4.1-16'
'libx86-1' changed from 'absent' to '1.1+ds1-10'
'vbetool' changed from 'absent' to '1.1-3'

2019-04-30 22:30:31,330 [salt.state       :915 ][INFO    ][22018] Loading fresh modules for state activity
2019-04-30 22:30:31,351 [salt.state       :1951][INFO    ][22018] Completed state [pm-utils] at time 22:30:31.351542 duration_in_ms=2243.968
2019-04-30 22:30:31,355 [salt.state       :1780][INFO    ][22018] Running state [sysfsutils] at time 22:30:31.355030
2019-04-30 22:30:31,355 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [sysfsutils]
2019-04-30 22:30:31,769 [salt.state       :300 ][INFO    ][22018] All specified packages are already installed
2019-04-30 22:30:31,770 [salt.state       :1951][INFO    ][22018] Completed state [sysfsutils] at time 22:30:31.770255 duration_in_ms=415.225
2019-04-30 22:30:31,770 [salt.state       :1780][INFO    ][22018] Running state [sg3-utils] at time 22:30:31.770727
2019-04-30 22:30:31,770 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [sg3-utils]
2019-04-30 22:30:31,775 [salt.state       :300 ][INFO    ][22018] All specified packages are already installed
2019-04-30 22:30:31,775 [salt.state       :1951][INFO    ][22018] Completed state [sg3-utils] at time 22:30:31.775913 duration_in_ms=5.186
2019-04-30 22:30:31,776 [salt.state       :1780][INFO    ][22018] Running state [python-memcache] at time 22:30:31.776147
2019-04-30 22:30:31,776 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [python-memcache]
2019-04-30 22:30:31,780 [salt.state       :300 ][INFO    ][22018] All specified packages are already installed
2019-04-30 22:30:31,780 [salt.state       :1951][INFO    ][22018] Completed state [python-memcache] at time 22:30:31.780745 duration_in_ms=4.597
2019-04-30 22:30:31,781 [salt.state       :1780][INFO    ][22018] Running state [python-guestfs] at time 22:30:31.780987
2019-04-30 22:30:31,781 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [python-guestfs]
2019-04-30 22:30:31,794 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:30:31,810 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'python-guestfs'] in directory '/root'
2019-04-30 22:30:35,322 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430223035311796
2019-04-30 22:30:35,333 [salt.minion      :1432][INFO    ][26435] Starting a new job with PID 26435
2019-04-30 22:30:35,343 [salt.minion      :1711][INFO    ][26435] Returning information for job: 20190430223035311796
2019-04-30 22:30:58,184 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:30:58,214 [salt.state       :300 ][INFO    ][22018] Made the following changes:
'hfsplus' changed from 'absent' to '1.0.4-13'
'scrub' changed from 'absent' to '2.6.1-1'
'syslinux-common' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libguestfs0' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'libguestfs-hfsplus' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'lzop' changed from 'absent' to '1.03-3.2'
'libhfsp0' changed from 'absent' to '1.0.4-13'
'binutils-gold' changed from 'absent' to '1'
'elf-binutils' changed from 'absent' to '1'
'reiserfsprogs' changed from 'absent' to '1:3.6.24-3.1'
'lsscsi' changed from 'absent' to '0.27-3'
'python-guestfs' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'syslinux' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libguestfs-xfs' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'libguestfs-reiserfs' changed from 'absent' to '1:1.32.2-4ubuntu2.2'
'python-libguestfs' changed from 'absent' to '1'
'mtools' changed from 'absent' to '4.0.18-2ubuntu0.16.04'
'supermin' changed from 'absent' to '5.1.14-2ubuntu1.1'
'extlinux' changed from 'absent' to '3:6.03+dfsg-11ubuntu1'
'libhivex0' changed from 'absent' to '1.3.13-1build3'
'binutils' changed from 'absent' to '2.26.1-1ubuntu1~16.04.8'

2019-04-30 22:30:58,232 [salt.state       :915 ][INFO    ][22018] Loading fresh modules for state activity
2019-04-30 22:30:58,253 [salt.state       :1951][INFO    ][22018] Completed state [python-guestfs] at time 22:30:58.253388 duration_in_ms=26472.401
2019-04-30 22:30:58,256 [salt.state       :1780][INFO    ][22018] Running state [gettext-base] at time 22:30:58.256928
2019-04-30 22:30:58,257 [salt.state       :1813][INFO    ][22018] Executing state pkg.installed for [gettext-base]
2019-04-30 22:30:58,747 [salt.state       :300 ][INFO    ][22018] All specified packages are already installed
2019-04-30 22:30:58,747 [salt.state       :1951][INFO    ][22018] Completed state [gettext-base] at time 22:30:58.747650 duration_in_ms=490.721
2019-04-30 22:30:58,749 [salt.state       :1780][INFO    ][22018] Running state [/var/log/nova] at time 22:30:58.749562
2019-04-30 22:30:58,749 [salt.state       :1813][INFO    ][22018] Executing state file.directory for [/var/log/nova]
2019-04-30 22:30:58,750 [salt.state       :300 ][INFO    ][22018] {'group': 'nova'}
2019-04-30 22:30:58,750 [salt.state       :1951][INFO    ][22018] Completed state [/var/log/nova] at time 22:30:58.750549 duration_in_ms=0.987
2019-04-30 22:30:58,750 [salt.state       :1780][INFO    ][22018] Running state [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com] at time 22:30:58.750948
2019-04-30 22:30:58,751 [salt.state       :1813][INFO    ][22018] Executing state ssh_auth.present for [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com]
2019-04-30 22:30:58,754 [salt.state       :300 ][INFO    ][22018] {'AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ': 'New'}
2019-04-30 22:30:58,755 [salt.state       :1951][INFO    ][22018] Completed state [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCltIn93BcTMzNK/n2eBze6PyTkmIgdDkeXNR9X4DqE48Va80ojv2pq8xuaBxiNITJzyl+4p4UvTTXo+HmuX8qbHvqgMGXvuPUCpndEfb2r67f6vpMqPwMgBrUg2ZKgN4OsSDHU+H0dia0cEaTjz5pvbUy9lIsSyhrqOUVF9reJq+boAvVEedm8fUqiZuiejAw2D27+rRtdEPgsKMnh3626YEsr963q4rjU/JssV/iKMNu7mk2a+koOrJ+aHvcVU8zJjfA0YghoeVT/I3GLU/MB/4tD/RyR8GM+UYbI4sgAC7ZOCdQyHdJgnEzx3SJIwcS65U0T2XYvn2qXHXqJ9iGZ root@mirantis.com] at time 22:30:58.755046 duration_in_ms=4.097
2019-04-30 22:30:58,755 [salt.state       :1780][INFO    ][22018] Running state [nova] at time 22:30:58.755525
2019-04-30 22:30:58,755 [salt.state       :1813][INFO    ][22018] Executing state user.present for [nova]
2019-04-30 22:30:58,757 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['usermod', '-G', 'libvirtd', 'nova'] in directory '/root'
2019-04-30 22:30:58,802 [salt.state       :300 ][INFO    ][22018] {'groups': ['libvirtd', 'nova']}
2019-04-30 22:30:58,802 [salt.state       :1951][INFO    ][22018] Completed state [nova] at time 22:30:58.802462 duration_in_ms=46.937
2019-04-30 22:30:58,802 [salt.state       :1780][INFO    ][22018] Running state [libvirt-qemu] at time 22:30:58.802652
2019-04-30 22:30:58,802 [salt.state       :1813][INFO    ][22018] Executing state user.present for [libvirt-qemu]
2019-04-30 22:30:58,804 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['usermod', '-G', 'nova', 'libvirt-qemu'] in directory '/root'
2019-04-30 22:30:58,849 [salt.state       :300 ][INFO    ][22018] {'groups': ['kvm', 'nova']}
2019-04-30 22:30:58,850 [salt.state       :1951][INFO    ][22018] Completed state [libvirt-qemu] at time 22:30:58.850039 duration_in_ms=47.386
2019-04-30 22:30:58,850 [salt.state       :1780][INFO    ][22018] Running state [/var/lib/nova] at time 22:30:58.850216
2019-04-30 22:30:58,850 [salt.state       :1813][INFO    ][22018] Executing state file.directory for [/var/lib/nova]
2019-04-30 22:30:58,851 [salt.state       :300 ][INFO    ][22018] {'mode': '0750'}
2019-04-30 22:30:58,851 [salt.state       :1951][INFO    ][22018] Completed state [/var/lib/nova] at time 22:30:58.851173 duration_in_ms=0.958
2019-04-30 22:30:58,851 [salt.state       :1780][INFO    ][22018] Running state [/var/lib/nova/.ssh/id_rsa] at time 22:30:58.851582
2019-04-30 22:30:58,851 [salt.state       :1813][INFO    ][22018] Executing state file.managed for [/var/lib/nova/.ssh/id_rsa]
2019-04-30 22:30:58,855 [salt.state       :300 ][INFO    ][22018] File changed:
New file
2019-04-30 22:30:58,855 [salt.state       :1951][INFO    ][22018] Completed state [/var/lib/nova/.ssh/id_rsa] at time 22:30:58.855640 duration_in_ms=4.057
2019-04-30 22:30:58,856 [salt.state       :1780][INFO    ][22018] Running state [/var/lib/nova/.ssh/config] at time 22:30:58.855979
2019-04-30 22:30:58,856 [salt.state       :1813][INFO    ][22018] Executing state file.managed for [/var/lib/nova/.ssh/config]
2019-04-30 22:30:58,857 [salt.state       :300 ][INFO    ][22018] File changed:
New file
2019-04-30 22:30:58,857 [salt.state       :1951][INFO    ][22018] Completed state [/var/lib/nova/.ssh/config] at time 22:30:58.857430 duration_in_ms=1.451
2019-04-30 22:30:58,857 [salt.state       :1780][INFO    ][22018] Running state [/etc/nova/nova.conf] at time 22:30:58.857800
2019-04-30 22:30:58,858 [salt.state       :1813][INFO    ][22018] Executing state file.managed for [/etc/nova/nova.conf]
2019-04-30 22:30:58,892 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'nova/files/rocky/nova-compute.conf.Debian'
2019-04-30 22:30:59,176 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_database.conf'
2019-04-30 22:30:59,197 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/castellan/_barbican.conf'
2019-04-30 22:30:59,211 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/oslo/_cache.conf'
2019-04-30 22:30:59,228 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/keystoneauth/_type_password.conf'
2019-04-30 22:30:59,253 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'oslo_templates/files/rocky/keystonemiddleware/_auth_token.conf'
2019-04-30 22:30:59,370 [salt.state       :300 ][INFO    ][22018] File changed:
--- 
+++ 
@@ -1,7 +1,5 @@
+
 [DEFAULT]
-log_dir = /var/log/nova
-lock_path = /var/lock/nova
-state_path = /var/lib/nova
 
 #
 # From nova.conf
@@ -63,7 +61,7 @@
 # *  period with offset, example: ``month@15`` will result in monthly audits
 #    starting on 15th day of month.
 #  (string value)
-#instance_usage_audit_period = month
+instance_usage_audit_period = hour
 
 #
 # Start and use a daemon that can run the commands that need to be run with
@@ -99,7 +97,7 @@
 # * ``powervm.PowerVMDriver``
 # * ``zvm.ZVMDriver``
 #  (string value)
-#compute_driver = <None>
+compute_driver = libvirt.LibvirtDriver
 
 #
 # Allow destination machine to match source for resize. Useful when
@@ -108,7 +106,7 @@
 # the same host to the destination options. Also set to true
 # if you allow the ServerGroupAffinityFilter and need to resize.
 #  (boolean value)
-#allow_resize_to_same_host = false
+allow_resize_to_same_host = true
 
 #
 # Image properties that should not be inherited from the instance
@@ -204,7 +202,7 @@
 # * True: Instances should fail after VIF plugging timeout
 # * False: Instances should continue booting after VIF plugging timeout
 #  (boolean value)
-#vif_plugging_is_fatal = true
+vif_plugging_is_fatal = true
 
 #
 # Timeout for Neutron VIF plugging event message arrival.
@@ -223,7 +221,7 @@
 #   arrive at all.
 #  (integer value)
 # Minimum value: 0
-#vif_plugging_timeout = 300
+vif_plugging_timeout = 300
 
 # Path to '/etc/network/interfaces' template.
 #
@@ -272,19 +270,13 @@
 # none - <No description provided>
 # space - <No description provided>
 #preallocate_images = none
+preallocate_images = space
 
 #
 # Enable use of copy-on-write (cow) images.
 #
 # QEMU/KVM allow the use of qcow2 as backing files. By disabling this,
-# backing files will not be used. This option is also used by image backends.
-# If the value is False, images are flattened after fetching or cloning.
-# This makes instance images completely independent from parent images.
-#
-# Related options:
-#
-# * ``images_type``: setting ``use_cow_images`` option to False is not supported
-#   when ``images_type=qcow2`` is being used.
+# backing files will not be used.
 #  (boolean value)
 #use_cow_images = true
 
@@ -300,7 +292,7 @@
 #
 # * ``compute_driver``: Only the libvirt driver uses this option.
 #  (boolean value)
-#force_raw_images = true
+force_raw_images = true
 
 #
 # Name of the mkfs commands for ephemeral device.
@@ -426,7 +418,7 @@
 #   for the host.
 #  (integer value)
 # Minimum value: 0
-#reserved_host_memory_mb = 512
+reserved_host_memory_mb = 512
 
 #
 # Number of physical CPUs to reserve for the host. The host resources usage is
@@ -568,12 +560,8 @@
 # * $state_path/instances where state_path is a config option that specifies
 #   the top-level directory for maintaining nova's state. (default) or
 #   Any string representing directory path.
-#
-# Related options:
-#
-# * ``[workarounds]/ensure_libvirt_rbd_instance_dir_cleanup``
-#  (string value)
-#instances_path = $state_path/instances
+#  (string value)
+instances_path = $state_path/instances
 
 #
 # This option enables periodic compute.instance.exists notifications. Each
@@ -582,6 +570,7 @@
 #  (boolean value)
 #instance_usage_audit = false
 
+
 #
 # Maximum number of 1 second retries in live_migration. It specifies number
 # of retries to iptables when it complains. It happens when an user continuously
@@ -600,7 +589,7 @@
 # host rebooted. It ensures that all of the instances on a Nova compute node
 # resume their state each time the compute node boots or restarts.
 #  (boolean value)
-#resume_guests_state_on_host_boot = false
+resume_guests_state_on_host_boot = True
 
 #
 # Number of times to retry network allocation. It is required to attempt network
@@ -656,7 +645,7 @@
 # * Any negative value is treated as 0.
 # * For any value > 0, total attempts are (value + 1)
 #  (integer value)
-#block_device_allocate_retries = 60
+block_device_allocate_retries = 600
 
 #
 # Number of greenthreads available for use to sync power states.
@@ -737,7 +726,7 @@
 # * Any positive integer in seconds.
 # * Any value <=0 will disable the sync. This is not recommended.
 #  (integer value)
-#heal_instance_info_cache_interval = 60
+heal_instance_info_cache_interval = 300
 
 #
 # Interval for reclaiming deleted instances.
@@ -857,7 +846,7 @@
 # * ``block_device_allocate_retries`` in compute_manager_opts group.
 #  (integer value)
 # Minimum value: 0
-#block_device_allocate_retries_interval = 3
+block_device_allocate_retries_interval = 10
 
 #
 # Interval between sending the scheduler a list of current instance UUIDs to
@@ -1171,7 +1160,7 @@
 # Possible values:
 # iso9660 - <No description provided>
 # vfat - <No description provided>
-#config_drive_format = iso9660
+config_drive_format = vfat
 
 #
 # Force injection to take place on a config drive
@@ -1198,7 +1187,7 @@
 #   configuration section to the full path to an qemu-img command
 #   installation.
 #  (boolean value)
-#force_config_drive = false
+force_config_drive = true
 
 #
 # Name or path of the tool used for ISO image creation
@@ -1251,6 +1240,7 @@
 # * vpn_ip
 #  (string value)
 #my_ip = <host_ipv4>
+my_ip = 10.167.4.56
 
 #
 # The IP address which is used to connect to the block storage network.
@@ -2097,7 +2087,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#force_dhcp_release = true
+force_dhcp_release = true
 
 # DEPRECATED:
 # When this option is True, whenever a DNS entry must be updated, a fanout cast
@@ -2149,7 +2139,7 @@
 # Its value may be silently ignored in the future.
 # Reason:
 # nova-network is deprecated, as are any related configuration options.
-#dhcp_domain = novalocal
+dhcp_domain = novalocal
 
 # DEPRECATED:
 # This option allows you to specify the L3 management library to be used.
@@ -2650,7 +2640,7 @@
 #
 # * The full path to a directory.
 #  (string value)
-#bindir = /usr/local/bin
+#bindir = /tmp/nova/.tox/shared/local/bin
 
 #
 # The top-level directory for maintaining Nova's state.
@@ -2666,7 +2656,7 @@
 #
 # * The full path to a directory. Defaults to value provided in ``pybasedir``.
 #  (string value)
-#state_path = $pybasedir
+state_path = /var/lib/nova
 
 #
 # This option allows setting an alternate timeout value for RPC calls
@@ -2677,7 +2667,6 @@
 # Operations with RPC calls that utilize this value:
 #
 # * live migration
-# * scheduling
 #
 # Related options:
 #
@@ -2697,7 +2686,7 @@
 #   is less than report_interval, services will routinely be considered down,
 #   because they report in too rarely.
 #  (integer value)
-#report_interval = 10
+report_interval = 60
 
 #
 # Maximum time in seconds since last check-in for up service
@@ -2711,7 +2700,7 @@
 # * report_interval (service_down_time should not be less than report_interval)
 # * scheduler.periodic_task_interval
 #  (integer value)
-#service_down_time = 60
+service_down_time = 90
 
 #
 # Enable periodic tasks.
@@ -2849,7 +2838,6 @@
 # db - <No description provided>
 # mc - <No description provided>
 #servicegroup_driver = db
-
 #
 # From oslo.log
 #
@@ -2887,19 +2875,19 @@
 
 # Uses logging handler designed to watch file system. When log file is moved or
 # removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and Linux
-# platform is used. This option is ignored if log_config_append is set. (boolean
-# value)
+# instantaneously. It makes sense only if log_file option is specified and
+# Linux platform is used. This option is ignored if log_config_append is set.
+# (boolean value)
 #watch_log_file = false
 
 # Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append is
-# set. (boolean value)
+# changed later to honor RFC5424. This option is ignored if log_config_append
+# is set. (boolean value)
 #use_syslog = false
 
 # Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol which
-# includes structured metadata in addition to log messages.This option is
+# to enable journal support. Doing so will use the journal native protocol
+# which includes structured metadata in addition to log messages.This option is
 # ignored if log_config_append is set. (boolean value)
 #use_journal = false
 
@@ -2922,8 +2910,8 @@
 # value)
 #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
 
-# Additional data to append to log message when logging level for the message is
-# DEBUG. (string value)
+# Additional data to append to log message when logging level for the message
+# is DEBUG. (string value)
 #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
 
 # Prefix each line of exception output with this format. (string value)
@@ -2940,7 +2928,8 @@
 # Enables or disables publication of error events. (boolean value)
 #publish_errors = false
 
-# The format for an instance that is passed with the log message. (string value)
+# The format for an instance that is passed with the log message. (string
+# value)
 #instance_format = "[instance: %(uuid)s] "
 
 # The format for an instance UUID that is passed with the log message. (string
@@ -2953,9 +2942,9 @@
 # Maximum number of logged messages per rate_limit_interval. (integer value)
 #rate_limit_burst = 0
 
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
-# empty string. Logs with level greater or equal to rate_limit_except_level are
-# not filtered. An empty string means that all levels are filtered. (string
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
+# or empty string. Logs with level greater or equal to rate_limit_except_level
+# are not filtered. An empty string means that all levels are filtered. (string
 # value)
 #rate_limit_except_level = CRITICAL
 
@@ -3001,10 +2990,10 @@
 #rpc_zmq_host = localhost
 
 # Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
+# closing a socket. The default value of -1 specifies an infinite linger
+# period. The value of 0 specifies no linger period. Pending messages shall be
+# discarded immediately when the socket is closed. Positive values specify an
+# upper bound for the linger period. (integer value)
 # Deprecated group/name - [DEFAULT]/rpc_cast_timeout
 #zmq_linger = -1
 
@@ -3012,8 +3001,9 @@
 # exception when timeout expired. (integer value)
 #rpc_poll_timeout = 1
 
-# Expiration timeout in seconds of a name service record about existing target (
-# < 0 means no timeout). (integer value)
+
+# Expiration timeout in seconds of a name service record about existing target
+# ( < 0 means no timeout). (integer value)
 #zmq_target_expire = 300
 
 # Update period in seconds of a name service record about existing target.
@@ -3071,27 +3061,28 @@
 
 # The duration between two keepalive transmissions in idle condition. The unit
 # is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
+# etc. The default value of -1 (or any other negative value and 0) means to
+# skip any overrides and leave it to OS default. (integer value)
 #zmq_tcp_keepalive_idle = -1
 
 # The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
+# end is not available. The default value of -1 (or any other negative value
+# and 0) means to skip any overrides and leave it to OS default. (integer
+# value)
 #zmq_tcp_keepalive_cnt = -1
 
 # The duration between two successive keepalive retransmissions, if
 # acknowledgement to the previous keepalive transmission is not received. The
 # unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
+# Windows etc. The default value of -1 (or any other negative value and 0)
+# means to skip any overrides and leave it to OS default. (integer value)
 #zmq_tcp_keepalive_intvl = -1
 
 # Maximum number of (green) threads to work concurrently. (integer value)
 #rpc_thread_pool_size = 100
 
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
+# Expiration timeout in seconds of a sent/received message after which it is
+# not tracked anymore by a client/server. (integer value)
 #rpc_message_ttl = 300
 
 # Wait for message acknowledgements from receivers. This mechanism works only
@@ -3125,6 +3116,7 @@
 
 # Seconds to wait for a response from a call. (integer value)
 #rpc_response_timeout = 60
+rpc_response_timeout = 30
 
 # The network address and optional user credentials for connecting to the
 # messaging backend, in URL format. The expected format is:
@@ -3138,6 +3130,7 @@
 # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
 # (string value)
 #transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
 
 # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
 # include amqp and zmq. (string value)
@@ -3148,7 +3141,8 @@
 
 # The default exchange under which topics are scoped. May be overridden by an
 # exchange name specified in the transport_url option. (string value)
-#control_exchange = openstack
+#control_exchange = keystone
+
 
 #
 # From oslo.service.periodic_task
@@ -3479,72 +3473,126 @@
 
 
 [api_database]
-connection = sqlite:////var/lib/nova/nova_api.sqlite
-#
-# The *Nova API Database* is a separate database which is used for information
-# which is used across *cells*. This database is mandatory since the Mitaka
-# release (13.0.0).
-
-#
-# From nova.conf
-#
+#
+# From oslo.db
+#
+
+# If True, SQLite uses synchronous mode. (boolean value)
+#sqlite_synchronous = true
+
+# The back end to use for the database. (string value)
+# Deprecated group/name - [DEFAULT]/db_backend
+#backend = sqlalchemy
 
 # The SQLAlchemy connection string to use to connect to the database. (string
 # value)
+# Deprecated group/name - [DEFAULT]/sql_connection
+# Deprecated group/name - [DATABASE]/sql_connection
+# Deprecated group/name - [sql]/connection
 #connection = <None>
+connection = mysql+pymysql://nova:opnfv_secret@10.167.4.23/nova_api?charset=utf8
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
+#slave_connection = <None>
+
+# The SQL mode to be used for MySQL sessions. This option, including the
+# default, overrides any server-set SQL mode. To use whatever SQL mode is set
+# by the server configuration, set this to no value. Example: mysql_sql_mode=
+# (string value)
+#mysql_sql_mode = TRADITIONAL
+
+# If True, transparently enables support for handling MySQL Cluster (NDB).
+# (boolean value)
+#mysql_enable_ndb = false
+
+# Connections which have been present in the connection pool longer than this
+# number of seconds will be replaced with a new one the next time they are
+# checked out from the pool. (integer value)
+# Deprecated group/name - [DATABASE]/idle_timeout
+# Deprecated group/name - [database]/idle_timeout
+# Deprecated group/name - [DEFAULT]/sql_idle_timeout
+# Deprecated group/name - [DATABASE]/sql_idle_timeout
+# Deprecated group/name - [sql]/idle_timeout
+#connection_recycle_time = 3600
+
+# DEPRECATED: Minimum number of SQL connections to keep open in a pool.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/sql_min_pool_size
+# Deprecated group/name - [DATABASE]/sql_min_pool_size
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: The option to set the minimum pool size is not supported by
+# sqlalchemy.
+#min_pool_size = 1
+
+# Maximum number of SQL connections to keep open in a pool. Setting a value of
+# 0 indicates no limit. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_pool_size
+# Deprecated group/name - [DATABASE]/sql_max_pool_size
+#max_pool_size = 5
+max_pool_size = 10
+
+# Maximum number of database connection retries during startup. Set to -1 to
+# specify an infinite retry count. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_retries
+# Deprecated group/name - [DATABASE]/sql_max_retries
+#max_retries = 10
+max_retries = -1
+
+# Interval between retries of opening a SQL connection. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_retry_interval
+# Deprecated group/name - [DATABASE]/reconnect_interval
+#retry_interval = 10
+
+# If set, use this value for max_overflow with SQLAlchemy. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_overflow
+# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
+#max_overflow = 50
+max_overflow = 30
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
+# value)
+# Minimum value: 0
+# Maximum value: 100
+# Deprecated group/name - [DEFAULT]/sql_connection_debug
+#connection_debug = 0
+
+# Add Python stack traces to SQL as comment strings. (boolean value)
+# Deprecated group/name - [DEFAULT]/sql_connection_trace
+#connection_trace = false
+
+# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
+# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
+#pool_timeout = <None>
+
+# Enable the experimental use of database reconnect on connection lost.
+# (boolean value)
+#use_db_reconnect = false
+
+# Seconds between retries of a database transaction. (integer value)
+#db_retry_interval = 1
+
+# If True, increases the interval between retries of a database operation up to
+# db_max_retry_interval. (boolean value)
+#db_inc_retry_interval = true
+
+# If db_inc_retry_interval is set, the maximum seconds between retries of a
+# database operation. (integer value)
+#db_max_retry_interval = 10
+
+# Maximum retries in case of connection error or deadlock error before error is
+# raised. Set to -1 to specify an infinite retry count. (integer value)
+#db_max_retries = 20
 
 # Optional URL parameters to append onto the connection URL at connect time;
 # specify as param1=value1&param2=value2&... (string value)
 #connection_parameters =
 
-# If True, SQLite uses synchronous mode. (boolean value)
-#sqlite_synchronous = true
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
-#slave_connection = <None>
-
-# The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
-# the server configuration, set this to no value. Example: mysql_sql_mode=
-# (string value)
-#mysql_sql_mode = TRADITIONAL
-
-# Connections which have been present in the connection pool longer than this
-# number of seconds will be replaced with a new one the next time they are
-# checked out from the pool. (integer value)
-# Deprecated group/name - [api_database]/idle_timeout
-#connection_recycle_time = 3600
-
-# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
-# indicates no limit. (integer value)
-#max_pool_size = <None>
-
-# Maximum number of database connection retries during startup. Set to -1 to
-# specify an infinite retry count. (integer value)
-#max_retries = 10
-
-# Interval between retries of opening a SQL connection. (integer value)
-#retry_interval = 10
-
-# If set, use this value for max_overflow with SQLAlchemy. (integer value)
-#max_overflow = <None>
-
-# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
-# value)
-#connection_debug = 0
-
-# Add Python stack traces to SQL as comment strings. (boolean value)
-#connection_trace = false
-
-# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
-#pool_timeout = <None>
 
 
 [barbican]
-
-#
-# From nova.conf
+#
+# From castellan.config
 #
 
 # Use this endpoint to connect to Barbican, for example:
@@ -3557,6 +3605,7 @@
 # Use this endpoint to connect to Keystone (string value)
 # Deprecated group/name - [key_manager]/auth_url
 #auth_endpoint = http://localhost/identity/v3
+auth_endpoint = http://10.167.4.35:35357/v3
 
 # Number of seconds to wait before retrying poll for key creation completion
 # (integer value)
@@ -3565,23 +3614,23 @@
 # Number of times to retry poll for key creation completion (integer value)
 #number_of_retries = 60
 
-# Specifies if insecure TLS (https) requests. If False, the server's certificate
-# will not be validated (boolean value)
+# Specifies if insecure TLS (https) requests. If False, the server's
+# certificate will not be validated (boolean value)
 #verify_ssl = true
 
-# Specifies the type of endpoint.  Allowed values are: public, private, and
+# Specifies the type of endpoint.  Allowed values are: public, internal, and
 # admin (string value)
 # Possible values:
 # public - <No description provided>
 # internal - <No description provided>
 # admin - <No description provided>
 #barbican_endpoint_type = public
+barbican_endpoint_type = internal
 
 
 [cache]
-
-#
-# From nova.conf
+#
+# From oslo.cache
 #
 
 # Prefix for building the configuration dictionary for the cache region. This
@@ -3589,9 +3638,9 @@
 # with the same configuration name. (string value)
 #config_prefix = cache.oslo
 
-# Default TTL, in seconds, for any cached item in the dogpile.cache region. This
-# applies to any cached method that doesn't have an explicit cache expiration
-# time defined for it. (integer value)
+# Default TTL, in seconds, for any cached item in the dogpile.cache region.
+# This applies to any cached method that doesn't have an explicit cache
+# expiration time defined for it. (integer value)
 #expiration_time = 600
 
 # Cache backend module. For eventlet-based or environments with hundreds of
@@ -3614,6 +3663,7 @@
 # dogpile.cache.memory_pickle - <No description provided>
 # dogpile.cache.null - <No description provided>
 #backend = dogpile.cache.null
+backend = oslo_cache.memcache_pool
 
 # Arguments supplied to the backend module. Specify this option once per
 # argument to be passed to the dogpile.cache backend. Example format:
@@ -3626,17 +3676,19 @@
 #proxies =
 
 # Global toggle for caching. (boolean value)
-#enabled = false
-
-# Extra debugging from the cache backend (cache keys, get/set/delete/etc calls).
-# This is only really useful if you need to see the specific cache-backend
-# get/set/delete calls with the keys/values.  Typically this should be left set
-# to false. (boolean value)
+#enabled = true
+enabled = True
+
+# Extra debugging from the cache backend (cache keys, get/set/delete/etc
+# calls). This is only really useful if you need to see the specific cache-
+# backend get/set/delete calls with the keys/values.  Typically this should be
+# left set to false. (boolean value)
 #debug_cache_backend = false
 
 # Memcache servers in the format of "host:port". (dogpile.cache.memcache and
 # oslo_cache.memcache_pool backends only). (list value)
 #memcache_servers = localhost:11211
+memcache_servers =10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
 
 # Number of seconds memcached server is considered dead before it is tried
 # again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
@@ -3660,8 +3712,8 @@
 #memcache_pool_connection_get_timeout = 10
 
 
+
 [cells]
-enable = False
 #
 # DEPRECATED: Cells options allow you to use cells v1 functionality in an
 # OpenStack deployment.
@@ -4199,7 +4251,7 @@
 #
 # * endpoint_template - Setting this option will override catalog_info
 #  (string value)
-#catalog_info = volumev3:cinderv3:publicURL
+catalog_info = volumev3:cinderv3:internalURL
 
 #
 # If this option is set then it will override service catalog lookup with
@@ -4227,7 +4279,7 @@
 #
 # * Any string representing region name
 #  (string value)
-#os_region_name = <None>
+os_region_name = RegionOne
 
 #
 # Number of times cinderclient should retry on any failed http call.
@@ -4257,62 +4309,40 @@
 # By default there is no availability zone restriction on volume attach.
 #  (boolean value)
 #cross_az_attach = true
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Name of nova region to use. Useful if keystone manages more than one region.
 # (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Collect per-API call timing information. (boolean value)
-#collect_timing = false
-
-# Log requests to multiple loggers. (boolean value)
-#split_loggers = false
-
-# Authentication type to load (string value)
-# Deprecated group/name - [cinder]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internal
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
 
 # Authentication URL (string value)
 #auth_url = <None>
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-#project_id = <None>
-
-# Project name to scope to (string value)
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
 
 # Optional domain ID to use with v3 and v2 parameters. It will be used for both
 # the user and project domain in v3 and ignored in v2 authentication. (string
@@ -4324,27 +4354,65 @@
 # (string value)
 #default_domain_name = <None>
 
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
 # User ID (string value)
 #user_id = <None>
 
 # Username (string value)
-# Deprecated group/name - [cinder]/user_name
+# Deprecated group/name - [neutron]/user_name
 #username = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User's password (string value)
-#password = <None>
-
-# Tenant ID (string value)
-#tenant_id = <None>
-
-# Tenant Name (string value)
-#tenant_name = <None>
+username = nova
 
 
 [compute]
@@ -4534,6 +4602,7 @@
 #token_ttl = 600
 
 
+
 [cors]
 
 #
@@ -4564,8 +4633,6 @@
 
 
 [database]
-connection = sqlite:////var/lib/nova/nova.sqlite
-
 #
 # From oslo.db
 #
@@ -4583,14 +4650,14 @@
 # Deprecated group/name - [DATABASE]/sql_connection
 # Deprecated group/name - [sql]/connection
 #connection = <None>
-
-# The SQLAlchemy connection string to use to connect to the slave database.
-# (string value)
+connection = mysql+pymysql://nova:opnfv_secret@10.167.4.23/nova?charset=utf8
+# The SQLAlchemy connection string to use to connect to the slave
+# database. (string value)
 #slave_connection = <None>
 
 # The SQL mode to be used for MySQL sessions. This option, including the
-# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
-# the server configuration, set this to no value. Example: mysql_sql_mode=
+# default, overrides any server-set SQL mode. To use whatever SQL mode is set
+# by the server configuration, set this to no value. Example: mysql_sql_mode=
 # (string value)
 #mysql_sql_mode = TRADITIONAL
 
@@ -4608,8 +4675,8 @@
 # Deprecated group/name - [sql]/idle_timeout
 #connection_recycle_time = 3600
 
-# DEPRECATED: Minimum number of SQL connections to keep open in a pool. (integer
-# value)
+# DEPRECATED: Minimum number of SQL connections to keep open in a pool.
+# (integer value)
 # Deprecated group/name - [DEFAULT]/sql_min_pool_size
 # Deprecated group/name - [DATABASE]/sql_min_pool_size
 # This option is deprecated for removal.
@@ -4618,17 +4685,19 @@
 # sqlalchemy.
 #min_pool_size = 1
 
-# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
-# indicates no limit. (integer value)
+# Maximum number of SQL connections to keep open in a pool. Setting a value of
+# 0 indicates no limit. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
 # Deprecated group/name - [DATABASE]/sql_max_pool_size
 #max_pool_size = 5
+max_pool_size = 10
 
 # Maximum number of database connection retries during startup. Set to -1 to
 # specify an infinite retry count. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_max_retries
 # Deprecated group/name - [DATABASE]/sql_max_retries
 #max_retries = 10
+max_retries = -1
 
 # Interval between retries of opening a SQL connection. (integer value)
 # Deprecated group/name - [DEFAULT]/sql_retry_interval
@@ -4639,6 +4708,7 @@
 # Deprecated group/name - [DEFAULT]/sql_max_overflow
 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
 #max_overflow = 50
+max_overflow = 30
 
 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
 # value)
@@ -4655,8 +4725,8 @@
 # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
 #pool_timeout = <None>
 
-# Enable the experimental use of database reconnect on connection lost. (boolean
-# value)
+# Enable the experimental use of database reconnect on connection lost.
+# (boolean value)
 #use_db_reconnect = false
 
 # Seconds between retries of a database transaction. (integer value)
@@ -4678,14 +4748,6 @@
 # specify as param1=value1&param2=value2&... (string value)
 #connection_parameters =
 
-#
-# From oslo.db.concurrency
-#
-
-# Enable the experimental use of thread pooling for all DB API calls (boolean
-# value)
-# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
-#use_tpool = false
 
 
 [devices]
@@ -5267,6 +5329,8 @@
 #   (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image").
 #  (list value)
 #api_servers = <None>
+api_servers = http://10.167.4.35:9292
+
 
 #
 # Enable glance operation retries.
@@ -5316,7 +5380,7 @@
 # * Both enable_certificate_validation and default_trusted_certificate_ids
 #   below depend on this option being enabled.
 #  (boolean value)
-#verify_glance_signatures = false
+verify_glance_signatures = False
 
 # DEPRECATED:
 # Enable certificate validation for image signature verification.
@@ -5691,7 +5755,7 @@
 # * You can configure the Compute service to always create a configuration
 #   drive by setting the force_config_drive option to 'True'.
 #  (boolean value)
-#config_drive_cdrom = false
+config_drive_cdrom = false
 
 #
 # Configuration drive inject password
@@ -5704,7 +5768,7 @@
 #   configuration drive usage with Hyper-V, such as force_config_drive.
 # * Currently, the only accepted config_drive_format is 'iso9660'.
 #  (boolean value)
-#config_drive_inject_password = false
+config_drive_inject_password = false
 
 #
 # Volume attach retry count
@@ -5788,147 +5852,6 @@
 #iscsi_initiator_list =
 
 
-[ironic]
-#
-# Configuration options for Ironic driver (Bare Metal).
-# If using the Ironic driver following options must be set:
-# * auth_type
-# * auth_url
-# * project_name
-# * username
-# * password
-# * project_domain_id or project_domain_name
-# * user_domain_id or user_domain_name
-
-#
-# From nova.conf
-#
-
-# DEPRECATED: URL override for the Ironic API endpoint. (uri value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Endpoint lookup uses the service catalog via common keystoneauth1
-# Adapter configuration options. In the current release, api_endpoint will
-# override this behavior, but will be ignored and/or removed in a future
-# release. To achieve the same result, use the endpoint_override option instead.
-#api_endpoint = http://ironic.example.org:6385/
-
-#
-# The number of times to retry when a request conflicts.
-# If set to 0, only try once, no retries.
-#
-# Related options:
-#
-# * api_retry_interval
-#  (integer value)
-# Minimum value: 0
-#api_max_retries = 60
-
-#
-# The number of seconds to wait before retrying the request.
-#
-# Related options:
-#
-# * api_max_retries
-#  (integer value)
-# Minimum value: 0
-#api_retry_interval = 2
-
-# Timeout (seconds) to wait for node serial console state changed. Set to 0 to
-# disable timeout. (integer value)
-# Minimum value: 0
-#serial_console_state_timeout = 10
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
-# (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Collect per-API call timing information. (boolean value)
-#collect_timing = false
-
-# Log requests to multiple loggers. (boolean value)
-#split_loggers = false
-
-# Authentication type to load (string value)
-# Deprecated group/name - [ironic]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
-
-# Authentication URL (string value)
-#auth_url = <None>
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-#project_id = <None>
-
-# Project name to scope to (string value)
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
-
-# User ID (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [ironic]/user_name
-#username = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User's password (string value)
-#password = <None>
-
-# The default service_type for endpoint URL discovery. (string value)
-#service_type = baremetal
-
-# The default service_name for endpoint URL discovery. (string value)
-#service_name = <None>
-
-# List of interfaces, in order of preference, for endpoint URL. (list value)
-#valid_interfaces = internal,public
-
-# The default region_name for endpoint URL discovery. (string value)
-#region_name = <None>
-
-# Always use this endpoint URL for requests for this client. NOTE: The
-# unversioned endpoint should be specified here; to request a particular API
-# version, use the `version`, `min-version`, and/or `max-version` options.
-# (string value)
-# Deprecated group/name - [ironic]/api_endpoint
-#endpoint_override = <None>
 
 
 [key_manager]
@@ -6072,14 +5995,16 @@
 #
 
 # Complete "public" Identity API endpoint. This endpoint should not be an
-# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
-# clients are redirected to this endpoint to authenticate. Although this
-# endpoint should ideally be unversioned, client support in the wild varies. If
-# you're using a versioned v2 endpoint here, then this should *not* be the same
-# endpoint the service user utilizes for validating tokens, because normal end
-# users may not be able to reach that endpoint. (string value)
+# "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. (string
+# value)
 # Deprecated group/name - [keystone_authtoken]/auth_uri
 #www_authenticate_uri = <None>
+www_authenticate_uri = http://10.167.4.35:5000
 
 # DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
 # be an "admin" endpoint, as it should be accessible by all end users.
@@ -6092,9 +6017,10 @@
 # release. (string value)
 # This option is deprecated for removal since Queens.
 # Its value may be silently ignored in the future.
-# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and
-# will be removed in the S  release.
+# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
+# and will be removed in the S  release.
 #auth_uri = <None>
+auth_uri = http://10.167.4.35:5000
 
 # API version of the admin Identity API endpoint. (string value)
 #auth_version = <None>
@@ -6107,8 +6033,8 @@
 # value)
 #http_connect_timeout = <None>
 
-# How many times are we trying to reconnect when communicating with Identity API
-# Server. (integer value)
+# How many times are we trying to reconnect when communicating with Identity
+# API Server. (integer value)
 #http_request_max_retries = 3
 
 # Request environment key where the Swift cache object is stored. When
@@ -6132,10 +6058,11 @@
 
 # The region in which the identity server can be found. (string value)
 #region_name = <None>
+region_name = RegionOne
 
 # DEPRECATED: Directory used to cache files related to PKI tokens. This option
-# has been deprecated in the Ocata release and will be removed in the P release.
-# (string value)
+# has been deprecated in the Ocata release and will be removed in the P
+# release. (string value)
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
@@ -6145,43 +6072,29 @@
 # undefined, tokens will instead be cached in-process. (list value)
 # Deprecated group/name - [keystone_authtoken]/memcache_servers
 #memcached_servers = <None>
+memcached_servers=10.167.4.36:11211,10.167.4.37:11211,10.167.4.38:11211
 
 # In order to prevent excessive effort spent validating tokens, the middleware
-# caches previously-seen tokens for a configurable duration (in seconds). Set to
-# -1 to disable caching completely. (integer value)
+# caches previously-seen tokens for a configurable duration (in seconds). Set
+# to -1 to disable caching completely. (integer value)
 #token_cache_time = 300
 
 # DEPRECATED: Determines the frequency at which the list of revoked tokens is
 # retrieved from the Identity service (in seconds). A high number of revocation
 # events combined with a low cache duration may significantly reduce
-# performance. Only valid for PKI tokens. This option has been deprecated in the
-# Ocata release and will be removed in the P release. (integer value)
+# performance. Only valid for PKI tokens. This option has been deprecated in
+# the Ocata release and will be removed in the P release. (integer value)
 # This option is deprecated for removal since Ocata.
 # Its value may be silently ignored in the future.
 # Reason: PKI token format is no longer supported.
 #revocation_cache_time = 10
-
-# (Optional) If defined, indicate whether token data should be authenticated or
-# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
-# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
-# cache. If the value is not one of these options or empty, auth_token will
-# raise an exception on initialization. (string value)
-# Possible values:
-# None - <No description provided>
-# MAC - <No description provided>
-# ENCRYPT - <No description provided>
-#memcache_security_strategy = None
-
-# (Optional, mandatory if memcache_security_strategy is defined) This string is
-# used for key derivation. (string value)
-#memcache_secret_key = <None>
 
 # (Optional) Number of seconds memcached server is considered dead before it is
 # tried again. (integer value)
 #memcache_pool_dead_retry = 300
 
-# (Optional) Maximum total number of open connections to every memcached server.
-# (integer value)
+# (Optional) Maximum total number of open connections to every memcached
+# server. (integer value)
 #memcache_pool_maxsize = 10
 
 # (Optional) Socket timeout in seconds for communicating with a memcached
@@ -6207,11 +6120,11 @@
 
 # Used to control the use and type of token binding. Can be set to: "disabled"
 # to not check token binding. "permissive" (default) to validate binding
-# information if the bind type is of a form known to the server and ignore it if
-# not. "strict" like "permissive" but if the bind type is unknown the token will
-# be rejected. "required" any form of token binding is needed to be allowed.
-# Finally the name of a binding method that must be present in tokens. (string
-# value)
+# information if the bind type is of a form known to the server and ignore it
+# if not. "strict" like "permissive" but if the bind type is unknown the token
+# will be rejected. "required" any form of token binding is needed to be
+# allowed. Finally the name of a binding method that must be present in tokens.
+# (string value)
 #enforce_token_bind = permissive
 
 # DEPRECATED: If true, the revocation list will be checked for cached tokens.
@@ -6238,23 +6151,129 @@
 # A choice of roles that must be present in a service token. Service tokens are
 # allowed to request that an expired token can be used and so this check should
 # tightly control that only actual services should be sending this token. Roles
-# here are applied as an ANY check so any role in this list must be present. For
-# backwards compatibility reasons this currently only affects the allow_expired
-# check. (list value)
+# here are applied as an ANY check so any role in this list must be present.
+# For backwards compatibility reasons this currently only affects the
+# allow_expired check. (list value)
 #service_token_roles = service
 
-# For backwards compatibility reasons we must let valid service tokens pass that
-# don't pass the service_token_roles check as valid. Setting this true will
-# become the default in a future release and should be enabled if possible.
-# (boolean value)
+# For backwards compatibility reasons we must let valid service tokens pass
+# that don't pass the service_token_roles check as valid. Setting this true
+# will become the default in a future release and should be enabled if
+# possible. (boolean value)
 #service_token_roles_required = false
 
 # Authentication type to load (string value)
 # Deprecated group/name - [keystone_authtoken]/auth_plugin
 #auth_type = <None>
+auth_type = password
 
 # Config Section from which to load plugin specific options (string value)
 #auth_section = <None>
+
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internal
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
+
+# Authentication URL (string value)
+#auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = nova
 
 
 [libvirt]
@@ -6360,7 +6379,7 @@
 # uml - <No description provided>
 # xen - <No description provided>
 # parallels - <No description provided>
-#virt_type = kvm
+virt_type = kvm
 
 #
 # Overrides the default libvirt URI of the chosen virtualization type.
@@ -6378,76 +6397,6 @@
 # * ``virt_type``: Influences what is used as default value here.
 #  (string value)
 #connection_uri =
-
-#
-# Algorithm used to hash the injected password.
-# Note that it must be supported by libc on the compute host
-# _and_ by libc inside *any guest image* that will be booted by this compute
-# host whith requested password injection.
-# In case the specified algorithm is not supported by libc on the compute host,
-# a fallback to DES algorithm will be performed.
-#
-# Related options:
-#
-# * ``inject_password``
-# * ``inject_partition``
-#  (string value)
-# Possible values:
-# SHA-512 - <No description provided>
-# SHA-256 - <No description provided>
-# MD5 - <No description provided>
-#inject_password_algorithm = MD5
-
-#
-# Allow the injection of an admin password for instance only at ``create`` and
-# ``rebuild`` process.
-#
-# There is no agent needed within the image to do this. If *libguestfs* is
-# available on the host, it will be used. Otherwise *nbd* is used. The file
-# system of the image will be mounted and the admin password, which is provided
-# in the REST API call will be injected as password for the root user. If no
-# root user is available, the instance won't be launched and an error is thrown.
-# Be aware that the injection is *not* possible when the instance gets launched
-# from a volume.
-#
-# *Linux* distribution guest only.
-#
-# Possible values:
-#
-# * True: Allows the injection.
-# * False: Disallows the injection. Any via the REST API provided admin password
-#   will be silently ignored.
-#
-# Related options:
-#
-# * ``inject_partition``: That option will decide about the discovery and usage
-#   of the file system. It also can disable the injection at all.
-#  (boolean value)
-#inject_password = false
-
-#
-# Allow the injection of an SSH key at boot time.
-#
-# There is no agent needed within the image to do this. If *libguestfs* is
-# available on the host, it will be used. Otherwise *nbd* is used. The file
-# system of the image will be mounted and the SSH key, which is provided
-# in the REST API call will be injected as SSH key for the root user and
-# appended to the ``authorized_keys`` of that user. The SELinux context will
-# be set if necessary. Be aware that the injection is *not* possible when the
-# instance gets launched from a volume.
-#
-# This config option will enable directly modifying the instance disk and does
-# not affect what cloud-init may do using data from config_drive option or the
-# metadata service.
-#
-# *Linux* distribution guest only.
-#
-# Related options:
-#
-# * ``inject_partition``: That option will decide about the discovery and usage
-#   of the file system. It also can disable the injection at all.
-#  (boolean value)
-#inject_key = false
 
 #
 # Determines the way how the file system is chosen to inject data into it.
@@ -6480,7 +6429,7 @@
 #   single partition image
 #  (integer value)
 # Minimum value: -2
-#inject_partition = -2
+inject_partition = -2
 
 # DEPRECATED:
 # Enable a mouse cursor within a graphical VNC or SPICE sessions.
@@ -6500,6 +6449,56 @@
 # Its value may be silently ignored in the future.
 # Reason: This option is being replaced by the 'pointer_model' option.
 #use_usb_tablet = true
+#
+# Allow the injection of an admin password for instance only at ``create`` and
+# ``rebuild`` process.
+#
+# There is no agent needed within the image to do this. If *libguestfs* is
+# available on the host, it will be used. Otherwise *nbd* is used. The file
+# system of the image will be mounted and the admin password, which is provided
+# in the REST API call will be injected as password for the root user. If no
+# root user is available, the instance won't be launched and an error is thrown.
+# Be aware that the injection is *not* possible when the instance gets launched
+# from a volume.
+#
+# *Linux* distribution guest only.
+#
+# Possible values:
+#
+# * True: Allows the injection.
+# * False: Disallows the injection. Any via the REST API provided admin password
+#   will be silently ignored.
+#
+# Related options:
+#
+# * ``inject_partition``: That option will decide about the discovery and usage
+#   of the file system. It also can disable the injection at all.
+#  (boolean value)
+inject_password = false
+
+#
+# Allow the injection of an SSH key at boot time.
+#
+# There is no agent needed within the image to do this. If *libguestfs* is
+# available on the host, it will be used. Otherwise *nbd* is used. The file
+# system of the image will be mounted and the SSH key, which is provided
+# in the REST API call will be injected as SSH key for the root user and
+# appended to the ``authorized_keys`` of that user. The SELinux context will
+# be set if necessary. Be aware that the injection is *not* possible when the
+# instance gets launched from a volume.
+#
+# This config option will enable directly modifying the instance disk and does
+# not affect what cloud-init may do using data from config_drive option or the
+# metadata service.
+#
+# *Linux* distribution guest only.
+#
+# Related options:
+#
+# * ``inject_partition``: That option will decide about the discovery and usage
+#   of the file system. It also can disable the injection at all.
+#  (boolean value)
+inject_key = true
 
 #
 # The IP address or hostname to be used as the target for live migration
@@ -6523,6 +6522,7 @@
 #   ignored if tunneling is enabled.
 #  (string value)
 #live_migration_inbound_addr = <None>
+live_migration_inbound_addr = 10.167.4.56
 
 # DEPRECATED:
 # Live migration target URI to use.
@@ -6575,7 +6575,6 @@
 # * ``live_migration_uri``: If ``live_migration_uri`` value is not None, the
 #   scheme used for live migration is taken from ``live_migration_uri`` instead.
 #  (string value)
-#live_migration_scheme = <None>
 
 #
 # Enable tunnelled migration.
@@ -6785,7 +6784,7 @@
 # host-passthrough - <No description provided>
 # custom - <No description provided>
 # none - <No description provided>
-#cpu_mode = <None>
+cpu_mode = host-passthrough
 
 #
 # Set the name of the libvirt CPU model the instance should use.
@@ -6930,7 +6929,7 @@
 #   speeding up guest installations, but you should switch to another caching
 #   mode in production environments.
 #  (list value)
-#disk_cachemodes =
+disk_cachemodes = "file=directsync,block=none"
 
 #
 # The path to an RNG (Random Number Generator) device that will be used as
@@ -7061,7 +7060,6 @@
 #
 # * virt.use_cow_images
 # * images_volume_group
-# * [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup
 #  (string value)
 # Possible values:
 # raw - <No description provided>
@@ -7073,15 +7071,6 @@
 # default - <No description provided>
 #images_type = default
 
-#
-# LVM Volume Group that is used for VM images, when you specify images_type=lvm
-#
-# Related options:
-#
-# * images_type
-#  (string value)
-#images_volume_group = <None>
-
 # DEPRECATED:
 # Create sparse logical volumes (with virtualsize) if this flag is set to True.
 #  (boolean value)
@@ -7094,12 +7083,6 @@
 # use Cinder thin-provisioned volumes.
 #sparse_logical_volumes = false
 
-# The RADOS pool in which rbd volumes are stored (string value)
-#images_rbd_pool = rbd
-
-# Path to the ceph configuration file to use (string value)
-#images_rbd_ceph_conf =
-
 #
 # Discard option for nova managed disks.
 #
@@ -7138,45 +7121,6 @@
 # Reason: The image cache no longer periodically calculates checksums of stored
 # images. Data integrity can be checked at the block or filesystem level.
 #checksum_interval_seconds = 3600
-
-#
-# Method used to wipe ephemeral disks when they are deleted. Only takes effect
-# if LVM is set as backing storage.
-#
-# Possible values:
-#
-# * none - do not wipe deleted volumes
-# * zero - overwrite volumes with zeroes
-# * shred - overwrite volume repeatedly
-#
-# Related options:
-#
-# * images_type - must be set to ``lvm``
-# * volume_clear_size
-#  (string value)
-# Possible values:
-# none - <No description provided>
-# zero - <No description provided>
-# shred - <No description provided>
-#volume_clear = zero
-
-#
-# Size of area in MiB, counting from the beginning of the allocated volume,
-# that will be cleared using method set in ``volume_clear`` option.
-#
-# Possible values:
-#
-# * 0 - clear whole volume
-# * >0 - clear specified amount of MiB
-#
-# Related options:
-#
-# * images_type - must be set to ``lvm``
-# * volume_clear - must be set and the value must be different than ``none``
-#   for this option to have any impact
-#  (integer value)
-# Minimum value: 0
-#volume_clear_size = 0
 
 #
 # Enable snapshot compression for ``qcow2`` images.
@@ -7248,19 +7192,6 @@
 # availability and fault tolerance.
 #  (boolean value)
 #iser_use_multipath = false
-
-#
-# The RADOS client name for accessing rbd(RADOS Block Devices) volumes.
-#
-# Libvirt will refer to this user when connecting and authenticating with
-# the Ceph RBD server.
-#  (string value)
-#rbd_user = <None>
-
-#
-# The libvirt UUID of the secret for the rbd_user volumes.
-#  (string value)
-#rbd_secret_uuid = <None>
 
 #
 # Directory where the NFS volume is mounted on the compute node.
@@ -7723,7 +7654,7 @@
 # extensions with no wait.
 #  (integer value)
 # Minimum value: 0
-#extension_sync_interval = 600
+extension_sync_interval = 600
 
 #
 # List of physnets present on this host.
@@ -7798,7 +7729,7 @@
 #insecure = false
 
 # Timeout value for http requests (integer value)
-#timeout = <None>
+timeout = 300
 
 # Collect per-API call timing information. (boolean value)
 #collect_timing = false
@@ -7808,13 +7739,13 @@
 
 # Authentication type to load (string value)
 # Deprecated group/name - [neutron]/auth_plugin
-#auth_type = <None>
+auth_type = v3password
 
 # Config Section from which to load plugin specific options (string value)
 #auth_section = <None>
 
 # Authentication URL (string value)
-#auth_url = <None>
+auth_url = http://10.167.4.35:35357/v3
 
 # Scope for system operations (string value)
 #system_scope = <None>
@@ -7829,13 +7760,13 @@
 #project_id = <None>
 
 # Project name to scope to (string value)
-#project_name = <None>
+project_name = service
 
 # Domain ID containing project (string value)
 #project_domain_id = <None>
 
 # Domain name containing project (string value)
-#project_domain_name = <None>
+project_domain_name = Default
 
 # Trust ID (string value)
 #trust_id = <None>
@@ -7855,16 +7786,16 @@
 
 # Username (string value)
 # Deprecated group/name - [neutron]/user_name
-#username = <None>
+username = neutron
 
 # User's domain id (string value)
 #user_domain_id = <None>
 
 # User's domain name (string value)
-#user_domain_name = <None>
+user_domain_name = Default
 
 # User's password (string value)
-#password = <None>
+password = opnfv_secret
 
 # Tenant ID (string value)
 #tenant_id = <None>
@@ -7882,7 +7813,7 @@
 #valid_interfaces = internal,public
 
 # The default region_name for endpoint URL discovery. (string value)
-#region_name = <None>
+region_name= RegionOne
 
 # Always use this endpoint URL for requests for this client. NOTE: The
 # unversioned endpoint should be specified here; to request a particular API
@@ -7925,6 +7856,7 @@
 # vm_state - <No description provided>
 # vm_and_task_state - <No description provided>
 #notify_on_state_change = <None>
+notify_on_state_change = vm_and_task_state
 
 # Default notification level for outgoing notifications. (string value)
 # Possible values:
@@ -8027,296 +7959,16 @@
 #lock_path = <None>
 
 
-[oslo_messaging_amqp]
-
+[oslo_messaging_notifications]
 #
 # From oslo.messaging
 #
 
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional) (string
-# value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-robin
-# fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating point
-# value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-# Enable asynchronous consumer commits (boolean value)
-#enable_auto_commit = false
-
-# The maximum number of records returned in a poll call (integer value)
-#max_poll_records = 500
-
-# Protocol used to communicate with brokers (string value)
-# Possible values:
-# PLAINTEXT - <No description provided>
-# SASL_PLAINTEXT - <No description provided>
-# SSL - <No description provided>
-# SASL_SSL - <No description provided>
-#security_protocol = PLAINTEXT
-
-# Mechanism when security protocol is SASL (string value)
-#sasl_mechanism = PLAIN
-
-# CA certificate PEM file used to verify the server certificate (string value)
-#ssl_cafile =
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are messaging,
-# messagingv2, routing, log, test, noop (multi valued)
+# The Drivers(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
 # Deprecated group/name - [DEFAULT]/notification_driver
 #driver =
+driver = messagingv2
 
 # A URL representing the messaging driver to use for notifications. If not set,
 # we fall back to the same configuration used for RPC. (string value)
@@ -8332,10 +7984,7 @@
 # to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
 # (integer value)
 #retry = -1
-
-
 [oslo_messaging_rabbit]
-
 #
 # From oslo.messaging
 #
@@ -8352,24 +8001,6 @@
 # Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl
 #ssl = false
 
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
-
 # How long to wait before reconnecting in response to an AMQP consumer cancel
 # notification. (floating point value)
 #kombu_reconnect_delay = 1.0
@@ -8384,8 +8015,8 @@
 #kombu_missing_consumer_retry_timeout = 60
 
 # Determines how the next RabbitMQ node is chosen in case the one we are
-# currently connected to becomes unavailable. Takes effect only if more than one
-# RabbitMQ node is provided in config. (string value)
+# currently connected to becomes unavailable. Takes effect only if more than
+# one RabbitMQ node is provided in config. (string value)
 # Possible values:
 # round-robin - <No description provided>
 # shuffle - <No description provided>
@@ -8398,7 +8029,8 @@
 # Reason: Replaced by [DEFAULT]/transport_url
 #rabbit_host = localhost
 
-# DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
+# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
+# value)
 # Minimum value: 0
 # Maximum value: 65535
 # This option is deprecated for removal.
@@ -8456,20 +8088,20 @@
 
 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
 # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
-# is no longer controlled by the x-ha-policy argument when declaring a queue. If
-# you just want to make sure that all queues (except those with auto-generated
-# names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
-# '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
+# is no longer controlled by the x-ha-policy argument when declaring a queue.
+# If you just want to make sure that all queues (except those with auto-
+# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
+# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
 #rabbit_ha_queues = false
 
 # Positive integer representing duration in seconds for queue TTL (x-expires).
-# Queues which are unused for the duration of the TTL are automatically deleted.
-# The parameter affects only reply and fanout queues. (integer value)
+# Queues which are unused for the duration of the TTL are automatically
+# deleted. The parameter affects only reply and fanout queues. (integer value)
 # Minimum value: 1
 #rabbit_transient_queues_ttl = 1800
 
-# Specifies the number of messages to prefetch. Setting to zero allows unlimited
-# messages. (integer value)
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
 #rabbit_qos_prefetch_count = 0
 
 # Number of seconds after which the Rabbit broker is considered down if
@@ -8477,163 +8109,13 @@
 # value)
 #heartbeat_timeout_threshold = 60
 
-# How often times during the heartbeat_timeout_threshold we check the heartbeat.
-# (integer value)
+# How often times during the heartbeat_timeout_threshold we check the
+# heartbeat. (integer value)
 #heartbeat_rate = 2
 
 
-[oslo_messaging_zmq]
-
-#
-# From oslo.messaging
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger period.
-# The value of 0 specifies no linger period. Pending messages shall be discarded
-# immediately when the socket is closed. Positive values specify an upper bound
-# for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target (
-# < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to skip
-# any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value and
-# 0) means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0) means
-# to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is not
-# tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
 
 [oslo_middleware]
-
 #
 # From oslo.middleware
 #
@@ -8644,8 +8126,8 @@
 #max_request_body_size = 114688
 
 # DEPRECATED: The HTTP Header that will be used to determine what the original
-# request protocol scheme was, even if it was hidden by a SSL termination proxy.
-# (string value)
+# request protocol scheme was, even if it was hidden by a SSL termination
+# proxy. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
 #secure_proxy_ssl_header = X-Forwarded-Proto
@@ -8653,53 +8135,11 @@
 # Whether the application is behind a proxy or not. This determines if the
 # middleware should parse the headers or not. (boolean value)
 #enable_proxy_headers_parsing = false
+enable_proxy_headers_parsing = True
+
 
 
 [oslo_policy]
-
-#
-# From oslo.policy
-#
-
-# This option controls whether or not to enforce scope when evaluating policies.
-# If ``True``, the scope of the token used in the request is compared to the
-# ``scope_types`` of the policy being enforced. If the scopes do not match, an
-# ``InvalidScope`` exception will be raised. If ``False``, a message will be
-# logged informing operators that policies are being invoked with mismatching
-# scope. (boolean value)
-#enforce_scope = false
-
-# The file that defines policies. (string value)
-#policy_file = policy.json
-
-# Default rule. Enforced when a requested rule is not found. (string value)
-#policy_default_rule = default
-
-# Directories where policy configuration files are stored. They can be relative
-# to any directory in the search path defined by the config_dir option, or
-# absolute paths. The file defined by policy_file must exist for these
-# directories to be searched.  Missing or empty directories are ignored. (multi
-# valued)
-#policy_dirs = policy.d
-
-# Content Type to send and receive data for REST based policy check (string
-# value)
-# Possible values:
-# application/x-www-form-urlencoded - <No description provided>
-# application/json - <No description provided>
-#remote_content_type = application/x-www-form-urlencoded
-
-# server identity verification for REST based policy check (boolean value)
-#remote_ssl_verify_server_crt = false
-
-# Absolute path to ca cert file for REST based policy check (string value)
-#remote_ssl_ca_crt_file = <None>
-
-# Absolute path to client cert for REST based policy check (string value)
-#remote_ssl_client_crt_file = <None>
-
-# Absolute path client key file REST based policy check (string value)
-#remote_ssl_client_key_file = <None>
 
 
 [pci]
@@ -8831,7 +8271,6 @@
 
 
 [placement]
-os_region_name = openstack
 
 #
 # From nova.conf
@@ -8893,13 +8332,13 @@
 
 # Authentication type to load (string value)
 # Deprecated group/name - [placement]/auth_plugin
-#auth_type = <None>
+auth_type = password
 
 # Config Section from which to load plugin specific options (string value)
 #auth_section = <None>
 
 # Authentication URL (string value)
-#auth_url = <None>
+auth_url = http://10.167.4.35:35357/v3
 
 # Scope for system operations (string value)
 #system_scope = <None>
@@ -8914,10 +8353,10 @@
 #project_id = <None>
 
 # Project name to scope to (string value)
-#project_name = <None>
+project_name = service
 
 # Domain ID containing project (string value)
-#project_domain_id = <None>
+project_domain_id = default
 
 # Domain name containing project (string value)
 #project_domain_name = <None>
@@ -8940,16 +8379,16 @@
 
 # Username (string value)
 # Deprecated group/name - [placement]/user_name
-#username = <None>
+username = nova
 
 # User's domain id (string value)
-#user_domain_id = <None>
+user_domain_id = default
 
 # User's domain name (string value)
 #user_domain_name = <None>
 
 # User's password (string value)
-#password = <None>
+password = opnfv_secret
 
 # Tenant ID (string value)
 #tenant_id = <None>
@@ -8964,10 +8403,10 @@
 #service_name = <None>
 
 # List of interfaces, in order of preference, for endpoint URL. (list value)
-#valid_interfaces = internal,public
+valid_interfaces = internal
 
 # The default region_name for endpoint URL discovery. (string value)
-#region_name = <None>
+region_name = RegionOne
 
 # Always use this endpoint URL for requests for this client. NOTE: The
 # unversioned endpoint should be specified here; to request a particular API
@@ -9891,62 +9330,41 @@
 # middleware.
 #  (boolean value)
 #send_service_user_token = false
-
-# PEM encoded Certificate Authority to use when verifying HTTPs connections.
+send_service_user_token = True
+# Name of nova region to use. Useful if keystone manages more than one region.
 # (string value)
-#cafile = <None>
-
-# PEM encoded client certificate cert file (string value)
-#certfile = <None>
-
-# PEM encoded client certificate key file (string value)
-#keyfile = <None>
-
-# Verify HTTPS connections. (boolean value)
-#insecure = false
-
-# Timeout value for http requests (integer value)
-#timeout = <None>
-
-# Collect per-API call timing information. (boolean value)
-#collect_timing = false
-
-# Log requests to multiple loggers. (boolean value)
-#split_loggers = false
-
-# Authentication type to load (string value)
-# Deprecated group/name - [service_user]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internal
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
 
 # Authentication URL (string value)
 #auth_url = <None>
-
-# Scope for system operations (string value)
-#system_scope = <None>
-
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-#project_id = <None>
-
-# Project name to scope to (string value)
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
-
-# Trust ID (string value)
-#trust_id = <None>
+auth_url = http://10.167.4.35:5000
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
 
 # Optional domain ID to use with v3 and v2 parameters. It will be used for both
 # the user and project domain in v3 and ignored in v2 authentication. (string
@@ -9958,27 +9376,65 @@
 # (string value)
 #default_domain_name = <None>
 
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
+
+# Scope for system operations (string value)
+#system_scope = <None>
+
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
+
+# Trust ID (string value)
+#trust_id = <None>
+
+# User's domain id (string value)
+#user_domain_id = <None>
+user_domain_id = default
+
+# User's domain name (string value)
+#user_domain_name = <None>
+
 # User ID (string value)
 #user_id = <None>
 
 # Username (string value)
-# Deprecated group/name - [service_user]/user_name
+# Deprecated group/name - [neutron]/user_name
 #username = <None>
-
-# User's domain id (string value)
-#user_domain_id = <None>
-
-# User's domain name (string value)
-#user_domain_name = <None>
-
-# User's password (string value)
-#password = <None>
-
-# Tenant ID (string value)
-#tenant_id = <None>
-
-# Tenant Name (string value)
-#tenant_name = <None>
+username = nova
 
 
 [spice]
@@ -10049,6 +9505,7 @@
 #   and port where the ``nova-spicehtml5proxy`` service is listening.
 #  (uri value)
 #html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html
+html5proxy_base_url = https://172.30.10.101:6080/spice_auto.html
 
 #
 # The  address where the SPICE server running on the instances should listen.
@@ -10317,15 +9774,6 @@
 # root token for vault (string value)
 #root_token_id = <None>
 
-# AppRole role_id for authentication with vault (string value)
-#approle_role_id = <None>
-
-# AppRole secret_id for authentication with vault (string value)
-#approle_secret_id = <None>
-
-# Mountpoint of KV store in Vault to use, for example: secret (string value)
-#kv_mountpoint = secret
-
 # Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200"
 # (string value)
 #vault_url = http://127.0.0.1:8200
@@ -10433,287 +9881,6 @@
 
 # Tenant Name (string value)
 #tenant_name = <None>
-
-
-[vmware]
-#
-# Related options:
-# Following options must be set in order to launch VMware-based
-# virtual machines.
-#
-# * compute_driver: Must use vmwareapi.VMwareVCDriver.
-# * vmware.host_username
-# * vmware.host_password
-# * vmware.cluster_name
-
-#
-# From nova.conf
-#
-
-#
-# This option specifies the physical ethernet adapter name for VLAN
-# networking.
-#
-# Set the vlan_interface configuration option to match the ESX host
-# interface that handles VLAN-tagged VM traffic.
-#
-# Possible values:
-#
-# * Any valid string representing VLAN interface name
-#  (string value)
-#vlan_interface = vmnic0
-
-#
-# This option should be configured only when using the NSX-MH Neutron
-# plugin. This is the name of the integration bridge on the ESXi server
-# or host. This should not be set for any other Neutron plugin. Hence
-# the default value is not set.
-#
-# Possible values:
-#
-# * Any valid string representing the name of the integration bridge
-#  (string value)
-#integration_bridge = <None>
-
-#
-# Set this value if affected by an increased network latency causing
-# repeated characters when typing in a remote console.
-#  (integer value)
-# Minimum value: 0
-#console_delay_seconds = <None>
-
-#
-# Identifies the remote system where the serial port traffic will
-# be sent.
-#
-# This option adds a virtual serial port which sends console output to
-# a configurable service URI. At the service URI address there will be
-# virtual serial port concentrator that will collect console logs.
-# If this is not set, no serial ports will be added to the created VMs.
-#
-# Possible values:
-#
-# * Any valid URI
-#  (string value)
-#serial_port_service_uri = <None>
-
-#
-# Identifies a proxy service that provides network access to the
-# serial_port_service_uri.
-#
-# Possible values:
-#
-# * Any valid URI (The scheme is 'telnet' or 'telnets'.)
-#
-# Related options:
-# This option is ignored if serial_port_service_uri is not specified.
-# * serial_port_service_uri
-#  (uri value)
-#serial_port_proxy_uri = <None>
-
-#
-# Specifies the directory where the Virtual Serial Port Concentrator is
-# storing console log files. It should match the 'serial_log_dir' config
-# value of VSPC.
-#  (string value)
-#serial_log_dir = /opt/vmware/vspc
-
-#
-# Hostname or IP address for connection to VMware vCenter host. (host address
-# value)
-#host_ip = <None>
-
-# Port for connection to VMware vCenter host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#host_port = 443
-
-# Username for connection to VMware vCenter host. (string value)
-#host_username = <None>
-
-# Password for connection to VMware vCenter host. (string value)
-#host_password = <None>
-
-#
-# Specifies the CA bundle file to be used in verifying the vCenter
-# server certificate.
-#  (string value)
-#ca_file = <None>
-
-#
-# If true, the vCenter server certificate is not verified. If false,
-# then the default CA truststore is used for verification.
-#
-# Related options:
-# * ca_file: This option is ignored if "ca_file" is set.
-#  (boolean value)
-#insecure = false
-
-# Name of a VMware Cluster ComputeResource. (string value)
-#cluster_name = <None>
-
-#
-# Regular expression pattern to match the name of datastore.
-#
-# The datastore_regex setting specifies the datastores to use with
-# Compute. For example, datastore_regex="nas.*" selects all the data
-# stores that have a name starting with "nas".
-#
-# NOTE: If no regex is given, it just picks the datastore with the
-# most freespace.
-#
-# Possible values:
-#
-# * Any matching regular expression to a datastore must be given
-#  (string value)
-#datastore_regex = <None>
-
-#
-# Time interval in seconds to poll remote tasks invoked on
-# VMware VC server.
-#  (floating point value)
-#task_poll_interval = 0.5
-
-#
-# Number of times VMware vCenter server API must be retried on connection
-# failures, e.g. socket error, etc.
-#  (integer value)
-# Minimum value: 0
-#api_retry_count = 10
-
-#
-# This option specifies VNC starting port.
-#
-# Every VM created by ESX host has an option of enabling VNC client
-# for remote connection. Above option 'vnc_port' helps you to set
-# default starting port for the VNC client.
-#
-# Possible values:
-#
-# * Any valid port number within 5900 -(5900 + vnc_port_total)
-#
-# Related options:
-# Below options should be set to enable VNC client.
-# * vnc.enabled = True
-# * vnc_port_total
-#  (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#vnc_port = 5900
-
-#
-# Total number of VNC ports.
-#  (integer value)
-# Minimum value: 0
-#vnc_port_total = 10000
-
-#
-# Keymap for VNC.
-#
-# The keyboard mapping (keymap) determines which keyboard layout a VNC
-# session should use by default.
-#
-# Possible values:
-#
-# * A keyboard layout which is supported by the underlying hypervisor on
-#   this node. This is usually an 'IETF language tag' (for example
-#   'en-us').
-#  (string value)
-#vnc_keymap = en-us
-
-#
-# This option enables/disables the use of linked clone.
-#
-# The ESX hypervisor requires a copy of the VMDK file in order to boot
-# up a virtual machine. The compute driver must download the VMDK via
-# HTTP from the OpenStack Image service to a datastore that is visible
-# to the hypervisor and cache it. Subsequent virtual machines that need
-# the VMDK use the cached version and don't have to copy the file again
-# from the OpenStack Image service.
-#
-# If set to false, even with a cached VMDK, there is still a copy
-# operation from the cache location to the hypervisor file directory
-# in the shared datastore. If set to true, the above copy operation
-# is avoided as it creates copy of the virtual machine that shares
-# virtual disks with its parent VM.
-#  (boolean value)
-#use_linked_clone = true
-
-#
-# This option sets the http connection pool size
-#
-# The connection pool size is the maximum number of connections from nova to
-# vSphere.  It should only be increased if there are warnings indicating that
-# the connection pool is full, otherwise, the default should suffice.
-#  (integer value)
-# Minimum value: 10
-#connection_pool_size = 10
-
-#
-# This option enables or disables storage policy based placement
-# of instances.
-#
-# Related options:
-#
-# * pbm_default_policy
-#  (boolean value)
-#pbm_enabled = false
-
-#
-# This option specifies the PBM service WSDL file location URL.
-#
-# Setting this will disable storage policy based placement
-# of instances.
-#
-# Possible values:
-#
-# * Any valid file path
-#   e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl
-#  (string value)
-#pbm_wsdl_location = <None>
-
-#
-# This option specifies the default policy to be used.
-#
-# If pbm_enabled is set and there is no defined storage policy for the
-# specific request, then this policy will be used.
-#
-# Possible values:
-#
-# * Any valid storage policy such as VSAN default storage policy
-#
-# Related options:
-#
-# * pbm_enabled
-#  (string value)
-#pbm_default_policy = <None>
-
-#
-# This option specifies the limit on the maximum number of objects to
-# return in a single result.
-#
-# A positive value will cause the operation to suspend the retrieval
-# when the count of objects reaches the specified limit. The server may
-# still limit the count to something less than the configured value.
-# Any remaining objects may be retrieved with additional requests.
-#  (integer value)
-# Minimum value: 0
-#maximum_objects = 100
-
-#
-# This option adds a prefix to the folder where cached images are stored
-#
-# This is not the full path - just a folder prefix. This should only be
-# used when a datastore cache is shared between compute nodes.
-#
-# Note: This should only be used when the compute nodes are running on same
-# host or they have a shared file system.
-#
-# Possible values:
-#
-# * Any string representing the cache prefix to the folder
-#  (string value)
-#cache_prefix = <None>
 
 
 [vnc]
@@ -10757,7 +9924,7 @@
 # keyboards. You should instead use a VNC client that supports Extended Key
 # Event
 # messages, such as noVNC 1.0.0. Refer to bug #1682020 for more information.
-#keymap = <None>
+keymap = en-us
 
 #
 # The IP address or hostname on which an instance should listen to for
@@ -10766,6 +9933,7 @@
 # Deprecated group/name - [DEFAULT]/vncserver_listen
 # Deprecated group/name - [vnc]/vncserver_listen
 #server_listen = 127.0.0.1
+server_listen = 10.167.4.56
 
 #
 # Private, internal IP address or hostname of VNC console proxy.
@@ -10778,7 +9946,7 @@
 #  (host address value)
 # Deprecated group/name - [DEFAULT]/vncserver_proxyclient_address
 # Deprecated group/name - [vnc]/vncserver_proxyclient_address
-#server_proxyclient_address = 127.0.0.1
+server_proxyclient_address = 10.167.4.56
 
 #
 # Public address of noVNC VNC console proxy.
@@ -10800,6 +9968,7 @@
 # * novncproxy_port
 #  (uri value)
 #novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html
+novncproxy_base_url = https://172.30.10.101:6080/vnc_auto.html
 
 #
 # IP address or hostname that the XVP VNC console proxy should bind to.
@@ -10896,6 +10065,7 @@
 # Minimum value: 0
 # Maximum value: 65535
 #novncproxy_port = 6080
+novncproxy_port = 6080
 
 #
 # The authentication schemes to use with the compute node.
@@ -11009,7 +10179,7 @@
 # * False: Live snapshots are always used when snapshotting (as long as
 #   there is a new enough libvirt and the backend storage supports it)
 #  (boolean value)
-#disable_libvirt_livesnapshot = false
+disable_libvirt_livesnapshot = true
 
 #
 # Enable handling of events emitted from compute drivers.
@@ -11135,60 +10305,6 @@
 # See related bug https://bugs.launchpad.net/nova/+bug/1796920 for more details.
 #  (boolean value)
 #report_ironic_standard_resource_class_inventory = true
-
-#
-# Enable live migration of instances with NUMA topologies.
-#
-# Live migration of instances with NUMA topologies is disabled by default
-# when using the libvirt driver. This includes live migration of instances with
-# CPU pinning or hugepages. CPU pinning and huge page information for such
-# instances is not currently re-calculated, as noted in `bug #1289064`_.  This
-# means that if instances were already present on the destination host, the
-# migrated instance could be placed on the same dedicated cores as these
-# instances or use hugepages allocated for another instance. Alternately, if the
-# host platforms were not homogeneous, the instance could be assigned to
-# non-existent cores or be inadvertently split across host NUMA nodes.
-#
-# Despite these known issues, there may be cases where live migration is
-# necessary. By enabling this option, operators that are aware of the issues and
-# are willing to manually work around them can enable live migration support for
-# these instances.
-#
-# Related options:
-#
-# * ``compute_driver``: Only the libvirt driver is affected.
-#
-# .. _bug #1289064: https://bugs.launchpad.net/nova/+bug/1289064
-#  (boolean value)
-#enable_numa_live_migration = false
-
-#
-# Ensure the instance directory is removed during clean up when using rbd.
-#
-# When enabled this workaround will ensure that the instance directory is always
-# removed during cleanup on hosts using ``[libvirt]/images_type=rbd``. This
-# avoids the following bugs with evacuation and revert resize clean up that lead
-# to the instance directory remaining on the host:
-#
-# https://bugs.launchpad.net/nova/+bug/1414895
-#
-# https://bugs.launchpad.net/nova/+bug/1761062
-#
-# Both of these bugs can then result in ``DestinationDiskExists`` errors being
-# raised if the instances ever attempt to return to the host.
-#
-# .. warning:: Operators will need to ensure that the instance directory itself,
-#   specified by ``[DEFAULT]/instances_path``, is not shared between computes
-#   before enabling this workaround otherwise the console.log, kernels, ramdisks
-#   and any additional files being used by the running instance will be lost.
-#
-# Related options:
-#
-# * ``compute_driver`` (libvirt)
-# * ``[libvirt]/images_type`` (rbd)
-# * ``instances_path``
-#  (boolean value)
-#ensure_libvirt_rbd_instance_dir_cleanup = false
 
 
 [wsgi]

2019-04-30 22:30:59,371 [salt.state       :1951][INFO    ][22018] Completed state [/etc/nova/nova.conf] at time 22:30:59.371259 duration_in_ms=513.457
2019-04-30 22:30:59,371 [salt.state       :1780][INFO    ][22018] Running state [/etc/default/nova-compute] at time 22:30:59.371820
2019-04-30 22:30:59,372 [salt.state       :1813][INFO    ][22018] Executing state file.managed for [/etc/default/nova-compute]
2019-04-30 22:30:59,385 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'nova/files/default'
2019-04-30 22:30:59,391 [salt.state       :300 ][INFO    ][22018] File changed:
New file
2019-04-30 22:30:59,391 [salt.state       :1951][INFO    ][22018] Completed state [/etc/default/nova-compute] at time 22:30:59.391336 duration_in_ms=19.515
2019-04-30 22:30:59,396 [salt.state       :1780][INFO    ][22018] Running state [virsh net-destroy default; virsh net-undefine default] at time 22:30:59.395997
2019-04-30 22:30:59,396 [salt.state       :1813][INFO    ][22018] Executing state cmd.run for [virsh net-destroy default; virsh net-undefine default]
2019-04-30 22:30:59,396 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command 'virsh net-list --all --name |grep -w default' in directory '/root'
2019-04-30 22:30:59,416 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command 'virsh net-destroy default; virsh net-undefine default' in directory '/root'
2019-04-30 22:30:59,666 [salt.state       :300 ][INFO    ][22018] {'pid': 2834, 'retcode': 0, 'stderr': '', 'stdout': 'Network default destroyed\n\nNetwork default has been undefined'}
2019-04-30 22:30:59,666 [salt.state       :1951][INFO    ][22018] Completed state [virsh net-destroy default; virsh net-undefine default] at time 22:30:59.666387 duration_in_ms=270.39
2019-04-30 22:30:59,667 [salt.state       :1780][INFO    ][22018] Running state [/etc/default/libvirtd] at time 22:30:59.667824
2019-04-30 22:30:59,668 [salt.state       :1813][INFO    ][22018] Executing state file.managed for [/etc/default/libvirtd]
2019-04-30 22:30:59,683 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'nova/files/rocky/libvirt.Debian'
2019-04-30 22:30:59,690 [salt.state       :300 ][INFO    ][22018] File changed:
--- 
+++ 
@@ -1,17 +1,13 @@
-# Defaults for libvirtd initscript (/etc/init.d/libvirtd)
+# Defaults for libvirt-bin initscript (/etc/init.d/libvirt-bin)
 # This is a POSIX shell fragment
 
 # Start libvirtd to handle qemu/kvm:
 start_libvirtd="yes"
 
 # options passed to libvirtd, add "-l" to listen on tcp
-#libvirtd_opts=""
-
+# Don't use "-d" option with systemd
+libvirtd_opts="-l"
+LIBVIRTD_ARGS="--listen"
 # pass in location of kerberos keytab
 #export KRB5_KTNAME=/etc/libvirt/libvirt.keytab
 
-# Whether to mount a systemd like cgroup layout (only
-# useful when not running systemd)
-#mount_cgroups=yes
-# Which cgroups to mount
-#cgroups="memory devices"

2019-04-30 22:30:59,690 [salt.state       :1951][INFO    ][22018] Completed state [/etc/default/libvirtd] at time 22:30:59.690146 duration_in_ms=22.322
2019-04-30 22:30:59,690 [salt.state       :1780][INFO    ][22018] Running state [service.systemctl_reload] at time 22:30:59.690443
2019-04-30 22:30:59,690 [salt.state       :1813][INFO    ][22018] Executing state module.wait for [service.systemctl_reload]
2019-04-30 22:30:59,690 [salt.state       :300 ][INFO    ][22018] No changes made for service.systemctl_reload
2019-04-30 22:30:59,690 [salt.state       :1951][INFO    ][22018] Completed state [service.systemctl_reload] at time 22:30:59.690910 duration_in_ms=0.467
2019-04-30 22:30:59,691 [salt.state       :1780][INFO    ][22018] Running state [service.systemctl_reload] at time 22:30:59.691033
2019-04-30 22:30:59,691 [salt.state       :1813][INFO    ][22018] Executing state module.mod_watch for [service.systemctl_reload]
2019-04-30 22:30:59,691 [salt.utils.decorators:613 ][WARNING ][22018] The function "module.run" is using its deprecated version and will expire in version "Sodium".
2019-04-30 22:30:59,691 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', '--system', 'daemon-reload'] in directory '/root'
2019-04-30 22:30:59,771 [salt.state       :300 ][INFO    ][22018] {'ret': True}
2019-04-30 22:30:59,772 [salt.state       :1951][INFO    ][22018] Completed state [service.systemctl_reload] at time 22:30:59.772244 duration_in_ms=81.21
2019-04-30 22:30:59,772 [salt.state       :1780][INFO    ][22018] Running state [/etc/libvirt/libvirtd.conf] at time 22:30:59.772751
2019-04-30 22:30:59,772 [salt.state       :1813][INFO    ][22018] Executing state file.managed for [/etc/libvirt/libvirtd.conf]
2019-04-30 22:30:59,790 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'nova/files/rocky/libvirtd.conf.Debian'
2019-04-30 22:30:59,896 [salt.state       :300 ][INFO    ][22018] File changed:
--- 
+++ 
@@ -1,6 +1,7 @@
+
 # Master libvirt daemon configuration file
 #
-# For further information consult https://libvirt.org/format.html
+# For further information consult http://libvirt.org/format.html
 #
 # NOTE: the tests/daemon-conf regression test script requires
 # that each "PARAMETER = VALUE" line in this file have the parameter
@@ -18,8 +19,9 @@
 # It is necessary to setup a CA and issue server certificates before
 # using this capability.
 #
+
 # This is enabled by default, uncomment this to disable it
-#listen_tls = 0
+listen_tls = 0
 
 # Listen for unencrypted TCP connections on the public TCP/IP port.
 # NB, must pass the --listen flag to the libvirtd process for this to
@@ -33,6 +35,7 @@
 #listen_tcp = 1
 
 
+listen_tcp = 1
 
 # Override the port for accepting secure TLS connections
 # This can be a port number, or service name
@@ -48,10 +51,6 @@
 # Override the default configuration which binds to all network
 # interfaces. This can be a numeric IPv4/6 address, or hostname
 #
-# If the libvirtd service is started in parallel with network
-# startup (e.g. with systemd), binding to addresses other than
-# the wildcards (0.0.0.0/::) might not be available yet.
-#
 #listen_addr = "192.168.0.1"
 
 
@@ -67,7 +66,7 @@
 # unique on the immediate broadcast network.
 #
 # The default is "Virtualization Host HOSTNAME", where HOSTNAME
-# is substituted for the short hostname of the machine (without domain)
+# is subsituted for the short hostname of the machine (without domain)
 #
 #mdns_name = "Virtualization Host Joe Demo"
 
@@ -82,13 +81,14 @@
 # without becoming root.
 #
 # This is restricted to 'root' by default.
-unix_sock_group = "libvirt"
+unix_sock_group = "libvirtd"
 
 # Set the UNIX socket permissions for the R/O socket. This is used
 # for monitoring VM status only
 #
-# Default allows any user. If setting group ownership, you may want to
-# restrict this too.
+# Default allows any user. If setting group ownership may want to
+# restrict this to:
+#unix_sock_ro_perms = "0777"
 unix_sock_ro_perms = "0777"
 
 # Set the UNIX socket permissions for the R/W socket. This is used
@@ -98,19 +98,11 @@
 # the default will change to allow everyone (eg, 0777)
 #
 # If not using PolicyKit and setting group ownership for access
-# control, then you may want to relax this too.
+# control then you may want to relax this to:
 unix_sock_rw_perms = "0770"
-
-# Set the UNIX socket permissions for the admin interface socket.
-#
-# Default allows only owner (root), do not change it unless you are
-# sure to whom you are exposing the access to.
-#unix_sock_admin_perms = "0700"
 
 # Set the name of the directory in which sockets will be found/created.
 #unix_sock_dir = "/var/run/libvirt"
-
-
 
 #################################################################
 #
@@ -125,7 +117,7 @@
 #  - sasl: use SASL infrastructure. The actual auth scheme is then
 #          controlled from /etc/sasl2/libvirt.conf. For the TCP
 #          socket only GSSAPI & DIGEST-MD5 mechanisms will be used.
-#          For non-TCP or TLS sockets, any scheme is allowed.
+#          For non-TCP or TLS sockets,  any scheme is allowed.
 #
 #  - polkit: use PolicyKit to authenticate. This is only suitable
 #            for use on the UNIX sockets. The default policy will
@@ -156,6 +148,8 @@
 # use, always enable SASL and use the GSSAPI or DIGEST-MD5
 # mechanism in /etc/sasl2/libvirt.conf
 #auth_tcp = "sasl"
+#auth_tcp = "none"
+auth_tcp = "none"
 
 # Change the authentication scheme for TLS sockets.
 #
@@ -167,15 +161,6 @@
 #auth_tls = "none"
 
 
-# Change the API access control scheme
-#
-# By default an authenticated user is allowed access
-# to all APIs. Access drivers can place restrictions
-# on this. By default the 'nop' driver is enabled,
-# meaning no access control checks are done once a
-# client has authenticated with libvirtd
-#
-#access_drivers = [ "polkit" ]
 
 #################################################################
 #
@@ -228,7 +213,7 @@
 #tls_no_verify_certificate = 1
 
 
-# A whitelist of allowed x509 Distinguished Names
+# A whitelist of allowed x509  Distinguished Names
 # This list may contain wildcards such as
 #
 #    "C=GB,ST=London,L=London,O=Red Hat,CN=*"
@@ -241,8 +226,7 @@
 # By default, no DN's are checked
 #tls_allowed_dn_list = ["DN1", "DN2"]
 
-
-# A whitelist of allowed SASL usernames. The format for username
+# A whitelist of allowed SASL usernames. The format for usernames
 # depends on the SASL authentication mechanism. Kerberos usernames
 # look like username@REALM
 #
@@ -259,13 +243,6 @@
 #sasl_allowed_username_list = ["joe@EXAMPLE.COM", "fred@EXAMPLE.COM" ]
 
 
-# Override the compile time default TLS priority string. The
-# default is usually "NORMAL" unless overridden at build time.
-# Only set this is it is desired for libvirt to deviate from
-# the global default settings.
-#
-#tls_priority="NORMAL"
-
 
 #################################################################
 #
@@ -274,22 +251,12 @@
 
 # The maximum number of concurrent client connections to allow
 # over all sockets combined.
-#max_clients = 5000
-
-# The maximum length of queue of connections waiting to be
-# accepted by the daemon. Note, that some protocols supporting
-# retransmission may obey this so that a later reattempt at
-# connection succeeds.
-#max_queued_clients = 1000
-
-# The maximum length of queue of accepted but not yet
-# authenticated clients. The default value is 20. Set this to
-# zero to turn this feature off.
-#max_anonymous_clients = 20
+#max_clients = 20
+
 
 # The minimum limit sets the number of workers to start up
 # initially. If the number of active clients exceeds this,
-# then more threads are spawned, up to max_workers limit.
+# then more threads are spawned, upto max_workers limit.
 # Typically you'd want max_workers to equal maximum number
 # of clients allowed
 #min_workers = 5
@@ -297,25 +264,25 @@
 
 
 # The number of priority workers. If all workers from above
-# pool are stuck, some calls marked as high priority
+# pool will stuck, some calls marked as high priority
 # (notably domainDestroy) can be executed in this pool.
 #prio_workers = 5
 
+# Total global limit on concurrent RPC calls. Should be
+# at least as large as max_workers. Beyond this, RPC requests
+# will be read into memory and queued. This directly impact
+# memory usage, currently each request requires 256 KB of
+# memory. So by default upto 5 MB of memory is used
+#
+# XXX this isn't actually enforced yet, only the per-client
+# limit is used so far
+#max_requests = 20
+
 # Limit on concurrent requests from a single client
 # connection. To avoid one client monopolizing the server
-# this should be a small fraction of the global max_workers
-# parameter.
+# this should be a small fraction of the global max_requests
+# and max_workers parameter
 #max_client_requests = 5
-
-# Same processing controls, but this time for the admin interface.
-# For description of each option, be so kind to scroll few lines
-# upwards.
-
-#admin_min_workers = 1
-#admin_max_workers = 5
-#admin_max_clients = 5
-#admin_max_queued_clients = 5
-#admin_max_client_requests = 5
 
 #################################################################
 #
@@ -324,34 +291,23 @@
 
 # Logging level: 4 errors, 3 warnings, 2 information, 1 debug
 # basically 1 will log everything possible
-# Note: Journald may employ rate limiting of the messages logged
-# and thus lock up the libvirt daemon. To use the debug level with
-# journald you have to specify it explicitly in 'log_outputs', otherwise
-# only information level messages will be logged.
 #log_level = 3
-
 # Logging filters:
 # A filter allows to select a different logging level for a given category
 # of logs
 # The format for a filter is one of:
 #    x:name
 #    x:+name
-
-#      where name is a string which is matched against the category
-#      given in the VIR_LOG_INIT() at the top of each libvirt source
-#      file, e.g., "remote", "qemu", or "util.json" (the name in the
-#      filter can be a substring of the full category name, in order
-#      to match multiple similar categories), the optional "+" prefix
-#      tells libvirt to log stack trace for each message matching
-#      name, and x is the minimal level where matching messages should
-#      be logged:
-
+#      where name is a string which is matched against source file name,
+#      e.g., "remote", "qemu", or "util/json", the optional "+" prefix
+#      tells libvirt to log stack trace for each message matching name,
+#      and x is the minimal level where matching messages should be logged:
 #    1: DEBUG
 #    2: INFO
 #    3: WARNING
 #    4: ERROR
 #
-# Multiple filters can be defined in a single @filters, they just need to be
+# Multiple filter can be defined in a single @filters, they just need to be
 # separated by spaces.
 #
 # e.g. to only get warning or errors from the remote layer and only errors
@@ -367,26 +323,23 @@
 #      use syslog for the output and use the given name as the ident
 #    x:file:file_path
 #      output to a file, with the given filepath
-#    x:journald
-#      output to journald logging system
 # In all case the x prefix is the minimal level, acting as a filter
 #    1: DEBUG
 #    2: INFO
 #    3: WARNING
 #    4: ERROR
 #
-# Multiple outputs can be defined, they just need to be separated by spaces.
+# Multiple output can be defined, they just need to be separated by spaces.
 # e.g. to log all warnings and errors to syslog under the libvirtd ident:
 #log_outputs="3:syslog:libvirtd"
 #
 
-# Log debug buffer size:
-#
-# This configuration option is no longer used, since the global
-# log buffer functionality has been removed. Please configure
-# suitable log_outputs/log_filters settings to obtain logs.
+# Log debug buffer size: default 64
+# The daemon keeps an internal debug log buffer which will be dumped in case
+# of crash or upon receiving a SIGUSR2 signal. This setting allows to override
+# the default buffer size in kilobytes.
+# If value is 0 or less the debug log buffer is deactivated
 #log_buffer_size = 64
-
 
 ##################################################################
 #
@@ -407,16 +360,10 @@
 
 ###################################################################
 # UUID of the host:
-# Host UUID is read from one of the sources specified in host_uuid_source.
-#
-# - 'smbios': fetch the UUID from 'dmidecode -s system-uuid'
-# - 'machine-id': fetch the UUID from /etc/machine-id
-#
-# The host_uuid_source default is 'smbios'. If 'dmidecode' does not provide
-# a valid UUID a temporary UUID will be generated.
-#
-# Another option is to specify host UUID in host_uuid.
-#
+# Provide the UUID of the host here in case the command
+# 'dmidecode -s system-uuid' does not provide a valid uuid. In case
+# 'dmidecode' does not provide a valid UUID and none is provided here, a
+# temporary UUID will be generated.
 # Keep the format of the example UUID below. UUID must not have all digits
 # be the same.
 
@@ -424,12 +371,11 @@
 # it with the output of the 'uuidgen' command and then
 # uncomment this entry
 #host_uuid = "00000000-0000-0000-0000-000000000000"
-#host_uuid_source = "smbios"
 
 ###################################################################
 # Keepalive protocol:
 # This allows libvirtd to detect broken client connections or even
-# dead clients.  A keepalive message is sent to a client after
+# dead client.  A keepalive message is sent to a client after
 # keepalive_interval seconds of inactivity to check if the client is
 # still responding; keepalive_count is a maximum number of keepalive
 # messages that are allowed to be sent to the client without getting
@@ -438,31 +384,15 @@
 # keepalive_interval * (keepalive_count + 1) seconds since the last
 # message received from the client.  If keepalive_interval is set to
 # -1, libvirtd will never send keepalive requests; however clients
-# can still send them and the daemon will send responses.  When
+# can still send them and the deamon will send responses.  When
 # keepalive_count is set to 0, connections will be automatically
 # closed after keepalive_interval seconds of inactivity without
 # sending any keepalive messages.
 #
 #keepalive_interval = 5
 #keepalive_count = 5
-
-#
-# These configuration options are no longer used.  There is no way to
-# restrict such clients from connecting since they first need to
-# connect in order to ask for keepalive.
+#
+# If set to 1, libvirtd will refuse to talk to clients that do not
+# support keepalive protocol.  Defaults to 0.
 #
 #keepalive_required = 1
-#admin_keepalive_required = 1
-
-# Keepalive settings for the admin interface
-#admin_keepalive_interval = 5
-#admin_keepalive_count = 5
-
-###################################################################
-# Open vSwitch:
-# This allows to specify a timeout for openvswitch calls made by
-# libvirt. The ovs-vsctl utility is used for the configuration and
-# its timeout option is set by default to 5 seconds to avoid
-# potential infinite waits blocking libvirt.
-#
-#ovs_timeout = 5

2019-04-30 22:30:59,896 [salt.state       :1951][INFO    ][22018] Completed state [/etc/libvirt/libvirtd.conf] at time 22:30:59.896802 duration_in_ms=124.051
2019-04-30 22:30:59,897 [salt.state       :1780][INFO    ][22018] Running state [/etc/libvirt/qemu.conf] at time 22:30:59.897131
2019-04-30 22:30:59,897 [salt.state       :1813][INFO    ][22018] Executing state file.managed for [/etc/libvirt/qemu.conf]
2019-04-30 22:30:59,912 [salt.fileclient  :1219][INFO    ][22018] Fetching file from saltenv 'base', ** done ** 'nova/files/rocky/qemu.conf.Debian'
2019-04-30 22:31:00,008 [salt.state       :300 ][INFO    ][22018] File changed:
--- 
+++ 
@@ -1,61 +1,8 @@
+
 # Master configuration file for the QEMU driver.
 # All settings described here are optional - if omitted, sensible
 # defaults are used.
 
-# Use of TLS requires that x509 certificates be issued. The default is
-# to keep them in /etc/pki/qemu. This directory must contain
-#
-#  ca-cert.pem - the CA master certificate
-#  server-cert.pem - the server certificate signed with ca-cert.pem
-#  server-key.pem  - the server private key
-#
-# and optionally may contain
-#
-#  dh-params.pem - the DH params configuration file
-#
-# If the directory does not exist, libvirtd will fail to start. If the
-# directory doesn't contain the necessary files, QEMU domains will fail
-# to start if they are configured to use TLS.
-#
-# In order to overwrite the default path alter the following. This path
-# definition will be used as the default path for other *_tls_x509_cert_dir
-# configuration settings if their default path does not exist or is not
-# specifically set.
-#
-#default_tls_x509_cert_dir = "/etc/pki/qemu"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client who does not have a
-# certificate signed by the CA in /etc/pki/qemu/ca-cert.pem
-#
-# The default_tls_x509_cert_dir directory must also contain
-#
-#  client-cert.pem - the client certificate signed with the ca-cert.pem
-#  client-key.pem - the client private key
-#
-#default_tls_x509_verify = 1
-
-#
-# Libvirt assumes the server-key.pem file is unencrypted by default.
-# To use an encrypted server-key.pem file, the password to decrypt
-# the PEM file is required. This can be provided by creating a secret
-# object in libvirt and then to uncomment this setting to set the UUID
-# of the secret.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#default_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
-
-
 # VNC is configured to listen on 127.0.0.1 by default.
 # To make it listen on all public interfaces, uncomment
 # this next option.
@@ -69,9 +16,9 @@
 # unix socket. This prevents unprivileged access from users on the
 # host machine, though most VNC clients do not support it.
 #
-# This will only be enabled for VNC configurations that have listen
-# type=address but without any address specified. This setting takes
-# preference over vnc_listen.
+# This will only be enabled for VNC configurations that do not have
+# a hardcoded 'listen' or 'socket' value. This setting takes preference
+# over vnc_listen.
 #
 #vnc_auto_unix_socket = 1
 
@@ -85,12 +32,15 @@
 #
 #vnc_tls = 1
 
-
-# In order to override the default TLS certificate location for
-# vnc certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but vnc_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
+# Use of TLS requires that x509 certificates be issued. The
+# default it to keep them in /etc/pki/libvirt-vnc. This directory
+# must contain
+#
+#  ca-cert.pem - the CA master certificate
+#  server-cert.pem - the server certificate signed with ca-cert.pem
+#  server-key.pem  - the server private key
+#
+# This option allows the certificate directory to be changed
 #
 #vnc_tls_x509_cert_dir = "/etc/pki/libvirt-vnc"
 
@@ -100,15 +50,10 @@
 # an encrypted channel.
 #
 # It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the vnc_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
+# issuing a x509 certificate to every client who needs to connect.
+#
+# Enabling this option will reject any client who does not have a
+# certificate signed by the CA in /etc/pki/libvirt-vnc/ca-cert.pem
 #
 #vnc_tls_x509_verify = 1
 
@@ -172,24 +117,17 @@
 #spice_tls = 1
 
 
-# In order to override the default TLS certificate location for
-# spice certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but spice_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
+# Use of TLS requires that x509 certificates be issued. The
+# default it to keep them in /etc/pki/libvirt-spice. This directory
+# must contain
+#
+#  ca-cert.pem - the CA master certificate
+#  server-cert.pem - the server certificate signed with ca-cert.pem
+#  server-key.pem  - the server private key
+#
+# This option allows the certificate directory to be changed.
 #
 #spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
-
-
-# Enable this option to have SPICE served over an automatically created
-# unix socket. This prevents unprivileged access from users on the
-# host machine.
-#
-# This will only be enabled for SPICE configurations that have listen
-# type=address but without any address specified. This setting takes
-# preference over spice_listen.
-#
-#spice_auto_unix_socket = 1
 
 
 # The default SPICE password. This parameter is only used if the
@@ -216,123 +154,6 @@
 # point to the directory, and create a qemu.conf in that location
 #
 #spice_sasl_dir = "/some/directory/sasl2"
-
-# Enable use of TLS encryption on the chardev TCP transports.
-#
-# It is necessary to setup CA and issue a server certificate
-# before enabling this.
-#
-#chardev_tls = 1
-
-
-# In order to override the default TLS certificate location for character
-# device TCP certificates, supply a valid path to the certificate directory.
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but chardev_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
-#
-#chardev_tls_x509_cert_dir = "/etc/pki/libvirt-chardev"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the chardev_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
-#
-#chardev_tls_x509_verify = 1
-
-
-# Uncomment and use the following option to override the default secret
-# UUID provided in the default_tls_x509_secret_uuid parameter.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#chardev_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
-
-
-# Enable use of TLS encryption for all VxHS network block devices that
-# don't specifically disable.
-#
-# When the VxHS network block device server is set up appropriately,
-# x509 certificates are required for authentication between the clients
-# (qemu processes) and the remote VxHS server.
-#
-# It is necessary to setup CA and issue the client certificate before
-# enabling this.
-#
-#vxhs_tls = 1
-
-
-# In order to override the default TLS certificate location for VxHS
-# backed storage, supply a valid path to the certificate directory.
-# This is used to authenticate the VxHS block device clients to the VxHS
-# server.
-#
-# If the provided path does not exist, libvirtd will fail to start.
-# If the path is not provided, but vxhs_tls = 1, then the
-# default_tls_x509_cert_dir path will be used.
-#
-# VxHS block device clients expect the client certificate and key to be
-# present in the certificate directory along with the CA master certificate.
-# If using the default environment, default_tls_x509_verify must be configured.
-# Since this is only a client the server-key.pem certificate is not needed.
-# Thus a VxHS directory must contain the following:
-#
-#  ca-cert.pem - the CA master certificate
-#  client-cert.pem - the client certificate signed with the ca-cert.pem
-#  client-key.pem - the client private key
-#
-#vxhs_tls_x509_cert_dir = "/etc/pki/libvirt-vxhs"
-
-
-# In order to override the default TLS certificate location for migration
-# certificates, supply a valid path to the certificate directory. If the
-# provided path does not exist, libvirtd will fail to start. If the path is
-# not provided, but migrate_tls = 1, then the default_tls_x509_cert_dir path
-# will be used. Once/if a default certificate is enabled/defined, migration
-# will then be able to use the certificate via migration API flags.
-#
-#migrate_tls_x509_cert_dir = "/etc/pki/libvirt-migrate"
-
-
-# The default TLS configuration only uses certificates for the server
-# allowing the client to verify the server's identity and establish
-# an encrypted channel.
-#
-# It is possible to use x509 certificates for authentication too, by
-# issuing an x509 certificate to every client who needs to connect.
-#
-# Enabling this option will reject any client that does not have a
-# ca-cert.pem certificate signed by the CA in the migrate_tls_x509_cert_dir
-# (or default_tls_x509_cert_dir) as well as the corresponding client-*.pem
-# files described in default_tls_x509_cert_dir.
-#
-# If this option is not supplied, it will be set to the value of
-# "default_tls_x509_verify".
-#
-#migrate_tls_x509_verify = 1
-
-
-# Uncomment and use the following option to override the default secret
-# UUID provided in the default_tls_x509_secret_uuid parameter.
-#
-# NB This default all-zeros UUID will not work. Replace it with the
-# output from the UUID for the TLS secret from a 'virsh secret-list'
-# command and then uncomment the entry
-#
-#migrate_tls_x509_secret_uuid = "00000000-0000-0000-0000-000000000000"
 
 
 # By default, if no graphical front end is configured, libvirt will disable
@@ -416,10 +237,9 @@
 # Set to 0 to disable file ownership changes.
 #dynamic_ownership = 1
 
-
 # What cgroup controllers to make use of with QEMU guests
 #
-#  - 'cpu' - use for scheduler tunables
+#  - 'cpu' - use for schedular tunables
 #  - 'devices' - use for device whitelisting
 #  - 'memory' - use for memory tunables
 #  - 'blkio' - use for block devices I/O tunables
@@ -451,19 +271,11 @@
 #    "/dev/null", "/dev/full", "/dev/zero",
 #    "/dev/random", "/dev/urandom",
 #    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
-#    "/dev/rtc","/dev/hpet"
+#    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio"
 #]
-#
-# RDMA migration requires the following extra files to be added to the list:
-#   "/dev/infiniband/rdma_cm",
-#   "/dev/infiniband/issm0",
-#   "/dev/infiniband/issm1",
-#   "/dev/infiniband/umad0",
-#   "/dev/infiniband/umad1",
-#   "/dev/infiniband/uverbs0"
-
-
-# The default format for QEMU/KVM guest save images is raw; that is, the
+
+
+# The default format for Qemu/KVM guest save images is raw; that is, the
 # memory from the domain is dumped out directly to a file.  If you have
 # guests with a large amount of memory, however, this can take up quite
 # a bit of space.  If you would like to compress the images while they
@@ -517,20 +329,15 @@
 # unspecified here, determination of a host mount point in /proc/mounts
 # will be attempted.  Specifying an explicit mount overrides detection
 # of the same in /proc/mounts.  Setting the mount point to "" will
-# disable guest hugepage backing. If desired, multiple mount points can
-# be specified at once, separated by comma and enclosed in square
-# brackets, for example:
-#
-#     hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]
-#
-# The size of huge page served by specific mount point is determined by
-# libvirt at the daemon startup.
-#
-# NB, within these mount points, guests will create memory backing
-# files in a location of $MOUNTPOINT/libvirt/qemu
+# disable guest hugepage backing.
+#
+# NB, within this mount point, guests will create memory backing files
+# in a location of $MOUNTPOINT/libvirt/qemu
 #
 #hugetlbfs_mount = "/dev/hugepages"
-
+#hugetlbfs_mount = ["/run/hugepages/kvm", "/mnt/hugepages_1GB"]
+hugetlbfs_mount = ["/mnt/hugepages_1G"]
+security_driver="none"
 
 # Path to the setuid helper for creating tap devices.  This executable
 # is used to create <source type='bridge'> interfaces when libvirtd is
@@ -566,42 +373,6 @@
 # The same applies to max_files which sets the limit on the maximum
 # number of opened files.
 #
-#max_processes = 0
-#max_files = 0
-
-# If max_core is set to a non-zero integer, then QEMU will be
-# permitted to create core dumps when it crashes, provided its
-# RAM size is smaller than the limit set.
-#
-# Be warned that the core dump will include a full copy of the
-# guest RAM, if the 'dump_guest_core' setting has been enabled,
-# or if the guest XML contains
-#
-#   <memory dumpcore="on">...guest ram...</memory>
-#
-# If guest RAM is to be included, ensure the max_core limit
-# is set to at least the size of the largest expected guest
-# plus another 1GB for any QEMU host side memory mappings.
-#
-# As a special case it can be set to the string "unlimited" to
-# to allow arbitrarily sized core dumps.
-#
-# By default the core dump size is set to 0 disabling all dumps
-#
-# Size is a positive integer specifying bytes or the
-# string "unlimited"
-#
-#max_core = "unlimited"
-
-# Determine if guest RAM is included in QEMU core dumps. By
-# default guest RAM will be excluded if a new enough QEMU is
-# present. Setting this to '1' will force guest RAM to always
-# be included in QEMU core dumps.
-#
-# This setting will be ignored if the guest XML has set the
-# dumpcore attribute on the <memory> element.
-#
-#dump_guest_core = 1
 
 # mac_filter enables MAC addressed based filtering on bridge ports.
 # This currently requires ebtables to be installed.
@@ -628,13 +399,11 @@
 #allow_disk_format_probing = 1
 
 
-# In order to prevent accidentally starting two domains that
-# share one writable disk, libvirt offers two approaches for
-# locking files. The first one is sanlock, the other one,
-# virtlockd, is then our own implementation. Accepted values
-# are "sanlock" and "lockd".
-#
-#lock_manager = "lockd"
+# To enable 'Sanlock' project based locking of the file
+# content (to prevent two VMs writing to the same
+# disk), uncomment this
+#
+#lock_manager = "sanlock"
 
 
 
@@ -676,17 +445,10 @@
 #seccomp_sandbox = 1
 
 
+
 # Override the listen address for all incoming migrations. Defaults to
 # 0.0.0.0, or :: if both host and qemu are capable of IPv6.
-#migration_address = "0.0.0.0"
-
-
-# The default hostname or IP address which will be used by a migration
-# source for transferring migration data to this host.  The migration
-# source has to be able to resolve this hostname and connect to it so
-# setting "localhost" will not work.  By default, the host's configured
-# hostname is used.
-#migration_host = "host.example.com"
+#migration_address = "127.0.0.1"
 
 
 # Override the port range used for incoming migrations.
@@ -698,36 +460,12 @@
 #
 #migration_port_min = 49152
 #migration_port_max = 49215
-
-
-
-# Timestamp QEMU's log messages (if QEMU supports it)
-#
-# Defaults to 1.
-#
-#log_timestamp = 0
-
-
-# Location of master nvram file
-#
-# When a domain is configured to use UEFI instead of standard
-# BIOS it may use a separate storage for UEFI variables. If
-# that's the case libvirt creates the variable store per domain
-# using this master file as image. Each UEFI firmware can,
-# however, have different variables store. Therefore the nvram is
-# a list of strings when a single item is in form of:
-#   ${PATH_TO_UEFI_FW}:${PATH_TO_UEFI_VARS}.
-# Later, when libvirt creates per domain variable store, this list is
-# searched for the master image. The UEFI firmware can be called
-# differently for different guest architectures. For instance, it's OVMF
-# for x86_64 and i686, but it's AAVMF for aarch64. The libvirt default
-# follows this scheme.
-#nvram = [
-#   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
-#   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
-#   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
-#   "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
-#]
+cgroup_device_acl = [
+    "/dev/null", "/dev/full", "/dev/zero",
+    "/dev/random", "/dev/urandom",
+    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
+    "/dev/rtc", "/dev/hpet","/dev/net/tun",
+]
 
 # The backend to use for handling stdout/stderr output from
 # QEMU processes.
@@ -743,41 +481,3 @@
 #          rollover when a size limit is hit.
 #
 #stdio_handler = "logd"
-
-# QEMU gluster libgfapi log level, debug levels are 0-9, with 9 being the
-# most verbose, and 0 representing no debugging output.
-#
-# The current logging levels defined in the gluster GFAPI are:
-#
-#    0 - None
-#    1 - Emergency
-#    2 - Alert
-#    3 - Critical
-#    4 - Error
-#    5 - Warning
-#    6 - Notice
-#    7 - Info
-#    8 - Debug
-#    9 - Trace
-#
-# Defaults to 4
-#
-#gluster_debug_level = 9
-
-# To enhance security, QEMU driver is capable of creating private namespaces
-# for each domain started. Well, so far only "mount" namespace is supported. If
-# enabled it means qemu process is unable to see all the devices on the system,
-# only those configured for the domain in question. Libvirt then manages
-# devices entries throughout the domain lifetime. This namespace is turned on
-# by default.
-#namespaces = [ "mount" ]
-
-# This directory is used for memoryBacking source if configured as file.
-# NOTE: big files will be stored here
-#memory_backing_dir = "/var/lib/libvirt/qemu/ram"
-
-# The following two values set the default RX/TX ring buffer size for virtio
-# interfaces. These values are taken unless overridden in domain XML. For more
-# info consult docs to corresponding attributes from domain XML.
-#rx_queue_size = 1024
-#tx_queue_size = 1024

2019-04-30 22:31:00,009 [salt.state       :1951][INFO    ][22018] Completed state [/etc/libvirt/qemu.conf] at time 22:31:00.009105 duration_in_ms=111.973
2019-04-30 22:31:00,009 [salt.state       :1780][INFO    ][22018] Running state [libvirtd] at time 22:31:00.009825
2019-04-30 22:31:00,010 [salt.state       :1813][INFO    ][22018] Executing state service.running for [libvirtd]
2019-04-30 22:31:00,010 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', 'status', 'libvirtd.service', '-n', '0'] in directory '/root'
2019-04-30 22:31:00,022 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', 'is-active', 'libvirtd.service'] in directory '/root'
2019-04-30 22:31:00,032 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', 'is-enabled', 'libvirtd.service'] in directory '/root'
2019-04-30 22:31:00,042 [salt.state       :300 ][INFO    ][22018] The service libvirtd is already running
2019-04-30 22:31:00,042 [salt.state       :1951][INFO    ][22018] Completed state [libvirtd] at time 22:31:00.042420 duration_in_ms=32.594
2019-04-30 22:31:00,042 [salt.state       :1780][INFO    ][22018] Running state [libvirtd] at time 22:31:00.042625
2019-04-30 22:31:00,042 [salt.state       :1813][INFO    ][22018] Executing state service.mod_watch for [libvirtd]
2019-04-30 22:31:00,043 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', 'is-active', 'libvirtd.service'] in directory '/root'
2019-04-30 22:31:00,054 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'libvirtd.service'] in directory '/root'
2019-04-30 22:31:00,111 [salt.state       :300 ][INFO    ][22018] {'libvirtd': True}
2019-04-30 22:31:00,112 [salt.state       :1951][INFO    ][22018] Completed state [libvirtd] at time 22:31:00.111931 duration_in_ms=69.306
2019-04-30 22:31:00,113 [salt.state       :1780][INFO    ][22018] Running state [nova-compute] at time 22:31:00.113271
2019-04-30 22:31:00,113 [salt.state       :1813][INFO    ][22018] Executing state service.running for [nova-compute]
2019-04-30 22:31:00,114 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', 'status', 'nova-compute.service', '-n', '0'] in directory '/root'
2019-04-30 22:31:00,126 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', 'is-active', 'nova-compute.service'] in directory '/root'
2019-04-30 22:31:00,135 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', 'is-enabled', 'nova-compute.service'] in directory '/root'
2019-04-30 22:31:00,144 [salt.state       :300 ][INFO    ][22018] The service nova-compute is already running
2019-04-30 22:31:00,144 [salt.state       :1951][INFO    ][22018] Completed state [nova-compute] at time 22:31:00.144708 duration_in_ms=31.435
2019-04-30 22:31:00,144 [salt.state       :1780][INFO    ][22018] Running state [nova-compute] at time 22:31:00.144898
2019-04-30 22:31:00,145 [salt.state       :1813][INFO    ][22018] Executing state service.mod_watch for [nova-compute]
2019-04-30 22:31:00,145 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemctl', 'is-active', 'nova-compute.service'] in directory '/root'
2019-04-30 22:31:00,154 [salt.loaded.int.module.cmdmod:395 ][INFO    ][22018] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'nova-compute.service'] in directory '/root'
2019-04-30 22:31:00,174 [salt.state       :300 ][INFO    ][22018] {'nova-compute': True}
2019-04-30 22:31:00,175 [salt.state       :1951][INFO    ][22018] Completed state [nova-compute] at time 22:31:00.174978 duration_in_ms=30.077
2019-04-30 22:31:00,178 [salt.minion      :1711][INFO    ][22018] Returning information for job: 20190430222950081629
2019-04-30 22:33:22,443 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.sls with jid 20190430223322434574
2019-04-30 22:33:22,454 [salt.minion      :1432][INFO    ][3141] Starting a new job with PID 3141
2019-04-30 22:33:26,142 [salt.state       :915 ][INFO    ][3141] Loading fresh modules for state activity
2019-04-30 22:33:26,175 [salt.fileclient  :1219][INFO    ][3141] Fetching file from saltenv 'base', ** done ** 'barbican/init.sls'
2019-04-30 22:33:26,203 [salt.fileclient  :1219][INFO    ][3141] Fetching file from saltenv 'base', ** done ** 'barbican/client/init.sls'
2019-04-30 22:33:26,220 [salt.fileclient  :1219][INFO    ][3141] Fetching file from saltenv 'base', ** done ** 'barbican/client/service.sls'
2019-04-30 22:33:26,234 [salt.fileclient  :1219][INFO    ][3141] Fetching file from saltenv 'base', ** done ** 'barbican/map.jinja'
2019-04-30 22:33:26,260 [salt.fileclient  :1219][INFO    ][3141] Fetching file from saltenv 'base', ** done ** 'barbican/client/resources/init.sls'
2019-04-30 22:33:26,274 [salt.fileclient  :1219][INFO    ][3141] Fetching file from saltenv 'base', ** done ** 'barbican/client/resources/v1.sls'
2019-04-30 22:33:27,413 [salt.state       :1780][INFO    ][3141] Running state [python-barbicanclient] at time 22:33:27.413411
2019-04-30 22:33:27,413 [salt.state       :1813][INFO    ][3141] Executing state pkg.installed for [python-barbicanclient]
2019-04-30 22:33:27,414 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3141] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:33:27,702 [salt.state       :300 ][INFO    ][3141] All specified packages are already installed
2019-04-30 22:33:27,702 [salt.state       :1951][INFO    ][3141] Completed state [python-barbicanclient] at time 22:33:27.702591 duration_in_ms=289.179
2019-04-30 22:33:27,720 [salt.minion      :1711][INFO    ][3141] Returning information for job: 20190430223322434574
2019-04-30 22:36:25,771 [salt.utils.schedule:1377][INFO    ][3337] Running scheduled job: __mine_interval
2019-04-30 22:44:51,110 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command state.sls with jid 20190430224451101774
2019-04-30 22:44:51,122 [salt.minion      :1432][INFO    ][3445] Starting a new job with PID 3445
2019-04-30 22:44:54,867 [salt.state       :915 ][INFO    ][3445] Loading fresh modules for state activity
2019-04-30 22:44:54,902 [salt.fileclient  :1219][INFO    ][3445] Fetching file from saltenv 'base', ** done ** 'ceilometer/init.sls'
2019-04-30 22:44:54,923 [salt.fileclient  :1219][INFO    ][3445] Fetching file from saltenv 'base', ** done ** 'ceilometer/agent.sls'
2019-04-30 22:44:54,999 [salt.fileclient  :1219][INFO    ][3445] Fetching file from saltenv 'base', ** done ** 'ceilometer/_ssl/rabbitmq.sls'
2019-04-30 22:44:56,159 [salt.state       :1780][INFO    ][3445] Running state [ceilometer-agent-compute] at time 22:44:56.159601
2019-04-30 22:44:56,160 [salt.state       :1813][INFO    ][3445] Executing state pkg.installed for [ceilometer-agent-compute]
2019-04-30 22:44:56,160 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:44:56,467 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['apt-cache', '-q', 'policy', 'ceilometer-agent-compute'] in directory '/root'
2019-04-30 22:44:56,513 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['apt-get', '-q', 'update'] in directory '/root'
2019-04-30 22:44:58,343 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
2019-04-30 22:44:58,362 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'ceilometer-agent-compute'] in directory '/root'
2019-04-30 22:45:06,167 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command saltutil.find_job with jid 20190430224506155146
2019-04-30 22:45:06,175 [salt.minion      :1432][INFO    ][4653] Starting a new job with PID 4653
2019-04-30 22:45:06,185 [salt.minion      :1711][INFO    ][4653] Returning information for job: 20190430224506155146
2019-04-30 22:45:08,794 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
2019-04-30 22:45:08,823 [salt.state       :300 ][INFO    ][3445] Made the following changes:
'python-pysnmp4' changed from 'absent' to '4.4.3-1~u16.04+mcp'
'python-pysnmp4-mibs' changed from 'absent' to '0.1.3-1'
'python-pysnmp2' changed from 'absent' to '1'
'python-pysnmp-common' changed from 'absent' to '1'
'ceilometer-agent-compute' changed from 'absent' to '1:11.0.1-1~u16.04+mcp21'
'python-pam' changed from 'absent' to '0.4.2-13.2ubuntu2'
'python-cotyledon' changed from 'absent' to '1.7.1-1~u16.04+mcp'
'python2.7-twisted-core' changed from 'absent' to '1'
'python-twisted' changed from 'absent' to '16.0.0-1ubuntu0.2'
'python-ceilometer' changed from 'absent' to '1:11.0.1-1~u16.04+mcp21'
'ceilometer-common' changed from 'absent' to '1:11.0.1-1~u16.04+mcp21'
'libsmi2ldbl' changed from 'absent' to '0.4.8+dfsg2-11'
'python-pysmi' changed from 'absent' to '0.2.2-2~u16.04+mcp'
'python2.7-twisted' changed from 'absent' to '1'
'python-croniter' changed from 'absent' to '0.3.8-1'
'python-setproctitle' changed from 'absent' to '1.1.8-1build2'
'python-twisted-core' changed from 'absent' to '16.0.0-1ubuntu0.2'
'python-jsonpath-rw' changed from 'absent' to '1.4.0-1'
'python-attr' changed from 'absent' to '15.2.0-1'
'python-service-identity' changed from 'absent' to '16.0.0-2'
'python-serial' changed from 'absent' to '3.4-4~u16.04+mcp'
'smitools' changed from 'absent' to '0.4.8+dfsg2-11'
'python-pycryptodome' changed from 'absent' to '3.4.7-1.1~u16.04+mcp'
'python-jsonpath-rw-ext' changed from 'absent' to '0.1.9-1'
'python2.7-twisted-bin' changed from 'absent' to '1'
'python-pysnmp4-apps' changed from 'absent' to '0.3.2-1'
'python-twisted-bin' changed from 'absent' to '16.0.0-1ubuntu0.2'

2019-04-30 22:45:08,837 [salt.state       :915 ][INFO    ][3445] Loading fresh modules for state activity
2019-04-30 22:45:08,951 [salt.state       :1951][INFO    ][3445] Completed state [ceilometer-agent-compute] at time 22:45:08.951144 duration_in_ms=12791.543
2019-04-30 22:45:08,952 [salt.state       :1780][INFO    ][3445] Running state [ceilometer_ssl_rabbitmq] at time 22:45:08.952682
2019-04-30 22:45:08,952 [salt.state       :1813][INFO    ][3445] Executing state test.show_notification for [ceilometer_ssl_rabbitmq]
2019-04-30 22:45:08,953 [salt.state       :300 ][INFO    ][3445] Running ceilometer._ssl.rabbitmq
2019-04-30 22:45:08,953 [salt.state       :1951][INFO    ][3445] Completed state [ceilometer_ssl_rabbitmq] at time 22:45:08.953156 duration_in_ms=0.474
2019-04-30 22:45:08,954 [salt.state       :1780][INFO    ][3445] Running state [/etc/ceilometer/ceilometer.conf] at time 22:45:08.954691
2019-04-30 22:45:08,954 [salt.state       :1813][INFO    ][3445] Executing state file.managed for [/etc/ceilometer/ceilometer.conf]
2019-04-30 22:45:08,976 [salt.fileclient  :1219][INFO    ][3445] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/rocky/ceilometer-agent.conf.Debian'
2019-04-30 22:45:09,115 [salt.state       :300 ][INFO    ][3445] File changed:
--- 
+++ 
@@ -1,15 +1,330 @@
 [DEFAULT]
-
-#
-# From ceilometer
-#
-
-# DEPRECATED: To reduce polling agent load, samples are sent to the
-# notification agent in a batch. To gain higher throughput at the cost of load
-# set this to False. This option is deprecated, to disable batching set
-# batch_size = 0 in the polling group. (boolean value)
+#
+# From oslo.messaging
+#
+
+# Size of RPC connection pool. (integer value)
+#rpc_conn_pool_size = 30
+
+# The pool size limit for connections expiration policy (integer
+# value)
+#conn_pool_min_size = 2
+
+# The time-to-live in sec of idle connections in the pool (integer
+# value)
+#conn_pool_ttl = 1200
+
+# ZeroMQ bind address. Should be a wildcard (*), an ethernet
+# interface, or IP. The "host" option should point or resolve to this
+# address. (string value)
+#rpc_zmq_bind_address = *
+
+# MatchMaker driver. (string value)
+# Possible values:
+# redis - <No description provided>
+# sentinel - <No description provided>
+# dummy - <No description provided>
+#rpc_zmq_matchmaker = redis
+
+# Number of ZeroMQ contexts, defaults to 1. (integer value)
+#rpc_zmq_contexts = 1
+
+# Maximum number of ingress messages to locally buffer per topic.
+# Default is unlimited. (integer value)
+#rpc_zmq_topic_backlog = <None>
+
+# Directory for holding IPC sockets. (string value)
+#rpc_zmq_ipc_dir = /var/run/openstack
+
+# Name of this node. Must be a valid hostname, FQDN, or IP address.
+# Must match "host" option, if running Nova. (string value)
+#rpc_zmq_host = localhost
+
+# Number of seconds to wait before all pending messages will be sent
+# after closing a socket. The default value of -1 specifies an
+# infinite linger period. The value of 0 specifies no linger period.
+# Pending messages shall be discarded immediately when the socket is
+# closed. Positive values specify an upper bound for the linger
+# period. (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
+#zmq_linger = -1
+
+# The default number of seconds that poll should wait. Poll raises
+# timeout exception when timeout expired. (integer value)
+#rpc_poll_timeout = 1
+
+
+# Expiration timeout in seconds of a name service record about
+# existing target ( < 0 means no timeout). (integer value)
+#zmq_target_expire = 300
+
+# Update period in seconds of a name service record about existing
+# target. (integer value)
+#zmq_target_update = 180
+
+# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
+# (boolean value)
+#use_pub_sub = false
+
+# Use ROUTER remote proxy. (boolean value)
+#use_router_proxy = false
+
+# This option makes direct connections dynamic or static. It makes
+# sense only with use_router_proxy=False which means to use direct
+# connections for direct message types (ignored otherwise). (boolean
+# value)
+#use_dynamic_connections = false
+
+# How many additional connections to a host will be made for failover
+# reasons. This option is actual only in dynamic connections mode.
+# (integer value)
+#zmq_failover_connections = 2
+
+# Minimal port number for random ports range. (port value)
+# Minimum value: 0
+# Maximum value: 65535
+#rpc_zmq_min_port = 49153
+
+# Maximal port number for random ports range. (integer value)
+# Minimum value: 1
+# Maximum value: 65536
+#rpc_zmq_max_port = 65536
+
+# Number of retries to find free port number before fail with
+# ZMQBindError. (integer value)
+#rpc_zmq_bind_port_retries = 100
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#rpc_zmq_serialization = json
+
+# This option configures round-robin mode in zmq socket. True means
+# not keeping a queue when server side disconnects. False means to
+# keep queue and messages even if server is disconnected, when the
+# server appears we send all accumulated messages to it. (boolean
+# value)
+#zmq_immediate = true
+
+# Enable/disable TCP keepalive (KA) mechanism. The default value of -1
+# (or any other negative value) means to skip any overrides and leave
+# it to OS default; 0 and 1 (or any other positive value) mean to
+# disable and enable the option respectively. (integer value)
+#zmq_tcp_keepalive = -1
+
+# The duration between two keepalive transmissions in idle condition.
+# The unit is platform dependent, for example, seconds in Linux,
+# milliseconds in Windows etc. The default value of -1 (or any other
+# negative value and 0) means to skip any overrides and leave it to OS
+# default. (integer value)
+#zmq_tcp_keepalive_idle = -1
+
+# The number of retransmissions to be carried out before declaring
+# that remote end is not available. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_cnt = -1
+
+# The duration between two successive keepalive retransmissions, if
+# acknowledgement to the previous keepalive transmission is not
+# received. The unit is platform dependent, for example, seconds in
+# Linux, milliseconds in Windows etc. The default value of -1 (or any
+# other negative value and 0) means to skip any overrides and leave it
+# to OS default. (integer value)
+#zmq_tcp_keepalive_intvl = -1
+
+# Maximum number of (green) threads to work concurrently. (integer
+# value)
+#rpc_thread_pool_size = 100
+
+# Expiration timeout in seconds of a sent/received message after which
+# it is not tracked anymore by a client/server. (integer value)
+#rpc_message_ttl = 300
+
+# Wait for message acknowledgements from receivers. This mechanism
+# works only via proxy without PUB/SUB. (boolean value)
+#rpc_use_acks = false
+
+# Number of seconds to wait for an ack from a cast/call. After each
+# retry attempt this timeout is multiplied by some specified
+# multiplier. (integer value)
+#rpc_ack_timeout_base = 15
+
+# Number to multiply base ack timeout by after each retry attempt.
+# (integer value)
+#rpc_ack_timeout_multiplier = 2
+
+# Default number of message sending attempts in case of any problems
+# occurred: positive value N means at most N retries, 0 means no
+# retries, None or -1 (or any other negative values) mean to retry
+# forever. This option is used only if acknowledgments are enabled.
+# (integer value)
+#rpc_retry_attempts = 3
+
+# List of publisher hosts SubConsumer can subscribe on. This option
+# has higher priority then the default publishers list taken from the
+# matchmaker. (list value)
+#subscribe_on =
+
+# Size of executor thread pool when executor is threading or eventlet.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
+#executor_thread_pool_size = 64
+
+# Seconds to wait for a response from a call. (integer value)
+#rpc_response_timeout = 60
+
+# The network address and optional user credentials for connecting to
+# the messaging backend, in URL format. The expected format is:
+#
+# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
+#
+# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
+#
+# For full details on the fields in the URL see the documentation of
+# oslo_messaging.TransportURL at
+# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
+# (string value)
+#transport_url = <None>
+transport_url = rabbit://openstack:opnfv_secret@10.167.4.28:5672,openstack:opnfv_secret@10.167.4.29:5672,openstack:opnfv_secret@10.167.4.30:5672//openstack
+
+# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
+# drivers include amqp and zmq. (string value)
 # This option is deprecated for removal.
 # Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rpc_backend = rabbit
+
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the transport_url
+# option. (string value)
+#control_exchange = openstack
+
+#
+# From oslo.log
+#
+
+# If set to true, the logging level will be set to DEBUG instead of
+# the default INFO level. (boolean value)
+# Note: This option can be changed without restarting.
+#debug = false
+
+# The name of a logging configuration file. This file is appended to
+# any existing logging configuration files. For details about logging
+# configuration files, see the Python logging module documentation.
+# Note that when logging configuration files are used then all logging
+# configuration is set in the configuration file and other logging
+# configuration options are ignored (for example,
+# logging_context_format_string). (string value)
+# Note: This option can be changed without restarting.
+# Deprecated group/name - [DEFAULT]/log_config
+
+# Defines the format string for %%(asctime)s in log records. Default:
+# %(default)s . This option is ignored if log_config_append is set.
+# (string value)
+#log_date_format = %Y-%m-%d %H:%M:%S
+
+# (Optional) Name of log file to send logging output to. If no default
+# is set, logging will go to stderr as defined by use_stderr. This
+# option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logfile
+#log_file = <None>
+
+# (Optional) The base directory used for relative log_file  paths.
+# This option is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logdir
+#log_dir = <None>
+
+# Uses logging handler designed to watch file system. When log file is
+# moved or removed this handler will open a new log file with
+# specified path instantaneously. It makes sense only if log_file
+# option is specified and Linux platform is used. This option is
+# ignored if log_config_append is set. (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and
+# will be changed later to honor RFC5424. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_syslog = false
+
+# Enable journald for logging. If running in a systemd environment you
+# may wish to enable journal support. Doing so will use the journal
+# native protocol which includes structured metadata in addition to
+# log messages.This option is ignored if log_config_append is set.
+# (boolean value)
+#use_journal = false
+
+# Syslog facility to receive log lines. This option is ignored if
+# log_config_append is set. (string value)
+#syslog_log_facility = LOG_USER
+
+# Use JSON formatting for logging. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_json = false
+
+# Log output to standard error. This option is ignored if
+# log_config_append is set. (boolean value)
+#use_stderr = false
+
+# Format string to use for log messages with context. (string value)
+#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+# Format string to use for log messages when context is undefined.
+# (string value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the
+# message is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string
+# value)
+#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+
+# Defines the format string for %(user_identity)s that is used in
+# logging_context_format_string. (string value)
+#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+
+# List of package logging levels in logger=LEVEL pairs. This option is
+# ignored if log_config_append is set. (list value)
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+
+# Enables or disables publication of error events. (boolean value)
+#publish_errors = false
+
+# The format for an instance that is passed with the log message.
+# (string value)
+#instance_format = "[instance: %(uuid)s] "
+
+# The format for an instance UUID that is passed with the log message.
+# (string value)
+#instance_uuid_format = "[instance: %(uuid)s] "
+
+# Interval, number of seconds, of log rate limiting. (integer value)
+#rate_limit_interval = 0
+
+# Maximum number of logged messages per rate_limit_interval. (integer
+# value)
+#rate_limit_burst = 0
+
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
+# WARNING, DEBUG or empty string. Logs with level greater or equal to
+# rate_limit_except_level are not filtered. An empty string means that
+# all levels are filtered. (string value)
+#rate_limit_except_level = CRITICAL
+
+# Enables or disables fatal status of deprecations. (boolean value)
+#fatal_deprecations = false
+
+#
+# From ceilometer
+#
+
+# To reduce polling agent load, samples are sent to the notification agent in a
+# batch. To gain higher throughput at the cost of load set this to False.
+# (boolean value)
 #batch_polled_samples = true
 
 # Inspector to use for inspecting the hypervisor layer. Known inspectors are
@@ -30,7 +345,7 @@
 #libvirt_uri =
 
 # Swift reseller prefix. Must be on par with reseller_prefix in proxy-
-# server.conf. (string value)
+# agent.conf. (string value)
 #reseller_prefix = AUTH_
 
 # Configuration file for pipeline definition. (string value)
@@ -69,334 +384,6 @@
 # (integer value)
 # Minimum value: 1
 #max_parallel_requests = 64
-
-#
-# From oslo.log
-#
-
-# If set to true, the logging level will be set to DEBUG instead of the default
-# INFO level. (boolean value)
-# Note: This option can be changed without restarting.
-#debug = false
-
-# The name of a logging configuration file. This file is appended to any
-# existing logging configuration files. For details about logging configuration
-# files, see the Python logging module documentation. Note that when logging
-# configuration files are used then all logging configuration is set in the
-# configuration file and other logging configuration options are ignored (for
-# example, logging_context_format_string). (string value)
-# Note: This option can be changed without restarting.
-# Deprecated group/name - [DEFAULT]/log_config
-#log_config_append = <None>
-
-# Defines the format string for %%(asctime)s in log records. Default:
-# %(default)s . This option is ignored if log_config_append is set. (string
-# value)
-#log_date_format = %Y-%m-%d %H:%M:%S
-
-# (Optional) Name of log file to send logging output to. If no default is set,
-# logging will go to stderr as defined by use_stderr. This option is ignored if
-# log_config_append is set. (string value)
-# Deprecated group/name - [DEFAULT]/logfile
-#log_file = <None>
-
-# (Optional) The base directory used for relative log_file  paths. This option
-# is ignored if log_config_append is set. (string value)
-# Deprecated group/name - [DEFAULT]/logdir
-#log_dir = <None>
-
-# Uses logging handler designed to watch file system. When log file is moved or
-# removed this handler will open a new log file with specified path
-# instantaneously. It makes sense only if log_file option is specified and
-# Linux platform is used. This option is ignored if log_config_append is set.
-# (boolean value)
-#watch_log_file = false
-
-# Use syslog for logging. Existing syslog format is DEPRECATED and will be
-# changed later to honor RFC5424. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_syslog = false
-
-# Enable journald for logging. If running in a systemd environment you may wish
-# to enable journal support. Doing so will use the journal native protocol
-# which includes structured metadata in addition to log messages.This option is
-# ignored if log_config_append is set. (boolean value)
-#use_journal = false
-
-# Syslog facility to receive log lines. This option is ignored if
-# log_config_append is set. (string value)
-#syslog_log_facility = LOG_USER
-
-# Use JSON formatting for logging. This option is ignored if log_config_append
-# is set. (boolean value)
-#use_json = false
-
-# Log output to standard error. This option is ignored if log_config_append is
-# set. (boolean value)
-#use_stderr = false
-
-# Format string to use for log messages with context. (string value)
-#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
-
-# Format string to use for log messages when context is undefined. (string
-# value)
-#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# Additional data to append to log message when logging level for the message
-# is DEBUG. (string value)
-#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
-
-# Prefix each line of exception output with this format. (string value)
-#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
-
-# Defines the format string for %(user_identity)s that is used in
-# logging_context_format_string. (string value)
-#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
-
-# List of package logging levels in logger=LEVEL pairs. This option is ignored
-# if log_config_append is set. (list value)
-#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
-
-# Enables or disables publication of error events. (boolean value)
-#publish_errors = false
-
-# The format for an instance that is passed with the log message. (string
-# value)
-#instance_format = "[instance: %(uuid)s] "
-
-# The format for an instance UUID that is passed with the log message. (string
-# value)
-#instance_uuid_format = "[instance: %(uuid)s] "
-
-# Interval, number of seconds, of log rate limiting. (integer value)
-#rate_limit_interval = 0
-
-# Maximum number of logged messages per rate_limit_interval. (integer value)
-#rate_limit_burst = 0
-
-# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
-# or empty string. Logs with level greater or equal to rate_limit_except_level
-# are not filtered. An empty string means that all levels are filtered. (string
-# value)
-#rate_limit_except_level = CRITICAL
-
-# Enables or disables fatal status of deprecations. (boolean value)
-#fatal_deprecations = false
-
-#
-# From oslo.messaging
-#
-
-# Size of RPC connection pool. (integer value)
-#rpc_conn_pool_size = 30
-
-# The pool size limit for connections expiration policy (integer value)
-#conn_pool_min_size = 2
-
-# The time-to-live in sec of idle connections in the pool (integer value)
-#conn_pool_ttl = 1200
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-# Size of executor thread pool when executor is threading or eventlet. (integer
-# value)
-# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
-#executor_thread_pool_size = 64
-
-# Seconds to wait for a response from a call. (integer value)
-#rpc_response_timeout = 60
-
-# The network address and optional user credentials for connecting to the
-# messaging backend, in URL format. The expected format is:
-#
-# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
-#
-# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
-#
-# For full details on the fields in the URL see the documentation of
-# oslo_messaging.TransportURL at
-# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
-# (string value)
-#transport_url = <None>
-
-# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
-# include amqp and zmq. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rpc_backend = rabbit
-
-# The default exchange under which topics are scoped. May be overridden by an
-# exchange name specified in the transport_url option. (string value)
-#control_exchange = openstack
-
-#
-# From oslo.service.service
-#
-
-# Enable eventlet backdoor.  Acceptable values are 0, <port>, and
-# <start>:<end>, where 0 results in listening on a random tcp port number;
-# <port> results in listening on the specified port number (and not enabling
-# backdoor if that port is in use); and <start>:<end> results in listening on
-# the smallest unused port number within the specified range of port numbers.
-# The chosen port is displayed in the service's log file. (string value)
-#backdoor_port = <None>
-
-# Enable eventlet backdoor, using the provided path as a unix socket that can
-# receive connections. This option is mutually exclusive with 'backdoor_port'
-# in that only one should be provided. If both are provided then the existence
-# of this option overrides the usage of that option. (string value)
-#backdoor_socket = <None>
-
-# Enables or disables logging values of all registered options when starting a
-# service (at DEBUG level). (boolean value)
-#log_options = true
-
-# Specify a timeout after which a gracefully shutdown server will exit. Zero
-# value means endless wait. (integer value)
-#graceful_shutdown_timeout = 60
 
 
 [compute]
@@ -416,6 +403,7 @@
 # workload_partitioning - <No description provided>
 # libvirt_metadata - <No description provided>
 #instance_discovery_method = libvirt_metadata
+instance_discovery_method = libvirt_metadata
 
 # New instances will be discovered periodically based on this option (in
 # seconds). By default, the agent discovers instances according to pipeline
@@ -437,23 +425,6 @@
 #resource_cache_expiry = 3600
 
 
-[coordination]
-
-#
-# From ceilometer
-#
-
-# The backend URL to use for distributed coordination. If left empty, per-
-# deployment central agent and per-host compute agent won't do workload
-# partitioning and will only function correctly if a single instance of that
-# service is running. (string value)
-#backend_url = <None>
-
-# Number of seconds between checks to see if group membership has changed
-# (floating point value)
-#check_watchers = 10.0
-
-
 [event]
 
 #
@@ -529,52 +500,6 @@
 # Tolerance of IPMI/NM polling failures before disable this pollster. Negative
 # indicates retrying forever. (integer value)
 #polling_retry = 3
-
-
-[matchmaker_redis]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Host to locate redis. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#host = 127.0.0.1
-
-# DEPRECATED: Use this port to connect to redis host. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#port = 6379
-
-# DEPRECATED: Password for Redis server (optional). (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#password =
-
-# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
-# [host:port, host1:port ... ] (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#sentinel_hosts =
-
-# Redis replica set name. (string value)
-#sentinel_group_name = oslo-messaging-zeromq
-
-# Time in ms to wait between connection attempts. (integer value)
-#wait_timeout = 2000
-
-# Time in ms to wait before the transaction is killed. (integer value)
-#check_timeout = 20000
-
-# Timeout in ms on blocking socket operations. (integer value)
-#socket_timeout = 10000
 
 
 [meter]
@@ -593,7 +518,7 @@
 
 # List directory to find files of defining meter notifications. (multi valued)
 #meter_definitions_dirs = /etc/ceilometer/meters.d
-#meter_definitions_dirs = /build/ceilometer-mfW0lD/ceilometer-11.0.1/ceilometer/data/meters.d
+#meter_definitions_dirs = /usr/src/git/ceilometer/ceilometer/data/meters.d
 
 
 [notification]
@@ -666,633 +591,6 @@
 #notification_control_exchanges = aodh
 
 
-[oslo_concurrency]
-
-#
-# From oslo.concurrency
-#
-
-# Enables or disables inter-process locks. (boolean value)
-#disable_process_locking = false
-
-# Directory to use for lock files.  For security, the specified directory
-# should only be writable by the user running the processes that need locking.
-# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
-# a lock path must be set. (string value)
-#lock_path = <None>
-
-
-[oslo_messaging_amqp]
-
-#
-# From oslo.messaging
-#
-
-# Name for the AMQP container. must be globally unique. Defaults to a generated
-# UUID (string value)
-#container_name = <None>
-
-# Timeout for inactive connections (in seconds) (integer value)
-#idle_timeout = 0
-
-# Debug: dump AMQP frames to stdout (boolean value)
-#trace = false
-
-# Attempt to connect via SSL. If no other ssl-related parameters are given, it
-# will use the system's CA-bundle to verify the server's certificate. (boolean
-# value)
-#ssl = false
-
-# CA certificate PEM file used to verify the server's certificate (string
-# value)
-#ssl_ca_file =
-
-# Self-identifying certificate PEM file for client authentication (string
-# value)
-#ssl_cert_file =
-
-# Private key PEM file used to sign ssl_cert_file certificate (optional)
-# (string value)
-#ssl_key_file =
-
-# Password for decrypting ssl_key_file (if encrypted) (string value)
-#ssl_key_password = <None>
-
-# By default SSL checks that the name in the server's certificate matches the
-# hostname in the transport_url. In some configurations it may be preferable to
-# use the virtual hostname instead, for example if the server uses the Server
-# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
-# host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
-# virtual host name instead of the DNS name. (boolean value)
-#ssl_verify_vhost = false
-
-# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Not applicable - not a SSL server
-#allow_insecure_clients = false
-
-# Space separated list of acceptable SASL mechanisms (string value)
-#sasl_mechanisms =
-
-# Path to directory that contains the SASL configuration (string value)
-#sasl_config_dir =
-
-# Name of configuration file (without .conf suffix) (string value)
-#sasl_config_name =
-
-# SASL realm to use if no realm present in username (string value)
-#sasl_default_realm =
-
-# DEPRECATED: User name for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# username.
-#username =
-
-# DEPRECATED: Password for message broker authentication (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Should use configuration option transport_url to provide the
-# password.
-#password =
-
-# Seconds to pause before attempting to re-connect. (integer value)
-# Minimum value: 1
-#connection_retry_interval = 1
-
-# Increase the connection_retry_interval by this many seconds after each
-# unsuccessful failover attempt. (integer value)
-# Minimum value: 0
-#connection_retry_backoff = 2
-
-# Maximum limit for connection_retry_interval + connection_retry_backoff
-# (integer value)
-# Minimum value: 1
-#connection_retry_interval_max = 30
-
-# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
-# recoverable error. (integer value)
-# Minimum value: 1
-#link_retry_delay = 10
-
-# The maximum number of attempts to re-send a reply message which failed due to
-# a recoverable error. (integer value)
-# Minimum value: -1
-#default_reply_retry = 0
-
-# The deadline for an rpc reply message delivery. (integer value)
-# Minimum value: 5
-#default_reply_timeout = 30
-
-# The deadline for an rpc cast or call message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_send_timeout = 30
-
-# The deadline for a sent notification message delivery. Only used when caller
-# does not provide a timeout expiry. (integer value)
-# Minimum value: 5
-#default_notify_timeout = 30
-
-# The duration to schedule a purge of idle sender links. Detach link after
-# expiry. (integer value)
-# Minimum value: 1
-#default_sender_link_timeout = 600
-
-# Indicates the addressing mode used by the driver.
-# Permitted values:
-# 'legacy'   - use legacy non-routable addressing
-# 'routable' - use routable addresses
-# 'dynamic'  - use legacy addresses if the message bus does not support routing
-# otherwise use routable addressing (string value)
-#addressing_mode = dynamic
-
-# Enable virtual host support for those message buses that do not natively
-# support virtual hosting (such as qpidd). When set to true the virtual host
-# name will be added to all message bus addresses, effectively creating a
-# private 'subnet' per virtual host. Set to False if the message bus supports
-# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
-# as the name of the virtual host. (boolean value)
-#pseudo_vhost = true
-
-# address prefix used when sending to a specific server (string value)
-#server_request_prefix = exclusive
-
-# address prefix used when broadcasting to all servers (string value)
-#broadcast_prefix = broadcast
-
-# address prefix when sending to any server in group (string value)
-#group_request_prefix = unicast
-
-# Address prefix for all generated RPC addresses (string value)
-#rpc_address_prefix = openstack.org/om/rpc
-
-# Address prefix for all generated Notification addresses (string value)
-#notify_address_prefix = openstack.org/om/notify
-
-# Appended to the address prefix when sending a fanout message. Used by the
-# message bus to identify fanout messages. (string value)
-#multicast_address = multicast
-
-# Appended to the address prefix when sending to a particular RPC/Notification
-# server. Used by the message bus to identify messages sent to a single
-# destination. (string value)
-#unicast_address = unicast
-
-# Appended to the address prefix when sending to a group of consumers. Used by
-# the message bus to identify messages that should be delivered in a round-
-# robin fashion across consumers. (string value)
-#anycast_address = anycast
-
-# Exchange name used in notification addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_notification_exchange if set
-# else control_exchange if set
-# else 'notify' (string value)
-#default_notification_exchange = <None>
-
-# Exchange name used in RPC addresses.
-# Exchange name resolution precedence:
-# Target.exchange if set
-# else default_rpc_exchange if set
-# else control_exchange if set
-# else 'rpc' (string value)
-#default_rpc_exchange = <None>
-
-# Window size for incoming RPC Reply messages. (integer value)
-# Minimum value: 1
-#reply_link_credit = 200
-
-# Window size for incoming RPC Request messages (integer value)
-# Minimum value: 1
-#rpc_server_credit = 100
-
-# Window size for incoming Notification messages (integer value)
-# Minimum value: 1
-#notify_server_credit = 100
-
-# Send messages of this type pre-settled.
-# Pre-settled messages will not receive acknowledgement
-# from the peer. Note well: pre-settled messages may be
-# silently discarded if the delivery fails.
-# Permitted values:
-# 'rpc-call' - send RPC Calls pre-settled
-# 'rpc-reply'- send RPC Replies pre-settled
-# 'rpc-cast' - Send RPC Casts pre-settled
-# 'notify'   - Send Notifications pre-settled
-#  (multi valued)
-#pre_settled = rpc-cast
-#pre_settled = rpc-reply
-
-
-[oslo_messaging_kafka]
-
-#
-# From oslo.messaging
-#
-
-# DEPRECATED: Default Kafka broker Host (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_host = localhost
-
-# DEPRECATED: Default Kafka broker Port (port value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#kafka_default_port = 9092
-
-# Max fetch bytes of Kafka consumer (integer value)
-#kafka_max_fetch_bytes = 1048576
-
-# Default timeout(s) for Kafka consumers (floating point value)
-#kafka_consumer_timeout = 1.0
-
-# DEPRECATED: Pool Size for Kafka Consumers (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#pool_size = 10
-
-# DEPRECATED: The pool size limit for connections expiration policy (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_min_size = 2
-
-# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Driver no longer uses connection pool.
-#conn_pool_ttl = 1200
-
-# Group id for Kafka consumer. Consumers in one group will coordinate message
-# consumption (string value)
-#consumer_group = oslo_messaging_consumer
-
-# Upper bound on the delay for KafkaProducer batching in seconds (floating
-# point value)
-#producer_batch_timeout = 0.0
-
-# Size of batch for the producer async send (integer value)
-#producer_batch_size = 16384
-
-# Enable asynchronous consumer commits (boolean value)
-#enable_auto_commit = false
-
-# The maximum number of records returned in a poll call (integer value)
-#max_poll_records = 500
-
-# Protocol used to communicate with brokers (string value)
-# Possible values:
-# PLAINTEXT - <No description provided>
-# SASL_PLAINTEXT - <No description provided>
-# SSL - <No description provided>
-# SASL_SSL - <No description provided>
-#security_protocol = PLAINTEXT
-
-# Mechanism when security protocol is SASL (string value)
-#sasl_mechanism = PLAIN
-
-# CA certificate PEM file used to verify the server certificate (string value)
-#ssl_cafile =
-
-
-[oslo_messaging_notifications]
-
-#
-# From oslo.messaging
-#
-
-# The Drivers(s) to handle sending notifications. Possible values are
-# messaging, messagingv2, routing, log, test, noop (multi valued)
-# Deprecated group/name - [DEFAULT]/notification_driver
-#driver =
-
-# A URL representing the messaging driver to use for notifications. If not set,
-# we fall back to the same configuration used for RPC. (string value)
-# Deprecated group/name - [DEFAULT]/notification_transport_url
-#transport_url = <None>
-
-# AMQP topic used for OpenStack notifications. (list value)
-# Deprecated group/name - [rpc_notifier2]/topics
-# Deprecated group/name - [DEFAULT]/notification_topics
-#topics = notifications
-
-# The maximum number of attempts to re-send a notification message which failed
-# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
-# (integer value)
-#retry = -1
-
-
-[oslo_messaging_rabbit]
-
-#
-# From oslo.messaging
-#
-
-# Use durable queues in AMQP. (boolean value)
-# Deprecated group/name - [DEFAULT]/amqp_durable_queues
-# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
-#amqp_durable_queues = false
-
-# Auto-delete queues in AMQP. (boolean value)
-#amqp_auto_delete = false
-
-# Connect over SSL. (boolean value)
-# Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl
-#ssl = false
-
-# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
-# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
-# distributions. (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
-#ssl_version =
-
-# SSL key file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
-#ssl_key_file =
-
-# SSL cert file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
-#ssl_cert_file =
-
-# SSL certification authority file (valid only if SSL enabled). (string value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
-#ssl_ca_file =
-
-# How long to wait before reconnecting in response to an AMQP consumer cancel
-# notification. (floating point value)
-#kombu_reconnect_delay = 1.0
-
-# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
-# be used. This option may not be available in future versions. (string value)
-#kombu_compression = <None>
-
-# How long to wait a missing client before abandoning to send it its replies.
-# This value should not be longer than rpc_response_timeout. (integer value)
-# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
-#kombu_missing_consumer_retry_timeout = 60
-
-# Determines how the next RabbitMQ node is chosen in case the one we are
-# currently connected to becomes unavailable. Takes effect only if more than
-# one RabbitMQ node is provided in config. (string value)
-# Possible values:
-# round-robin - <No description provided>
-# shuffle - <No description provided>
-#kombu_failover_strategy = round-robin
-
-# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
-# value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_host = localhost
-
-# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
-# value)
-# Minimum value: 0
-# Maximum value: 65535
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_port = 5672
-
-# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_hosts = $rabbit_host:$rabbit_port
-
-# DEPRECATED: The RabbitMQ userid. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_userid = guest
-
-# DEPRECATED: The RabbitMQ password. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_password = guest
-
-# The RabbitMQ login method. (string value)
-# Possible values:
-# PLAIN - <No description provided>
-# AMQPLAIN - <No description provided>
-# RABBIT-CR-DEMO - <No description provided>
-#rabbit_login_method = AMQPLAIN
-
-# DEPRECATED: The RabbitMQ virtual host. (string value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-# Reason: Replaced by [DEFAULT]/transport_url
-#rabbit_virtual_host = /
-
-# How frequently to retry connecting with RabbitMQ. (integer value)
-#rabbit_retry_interval = 1
-
-# How long to backoff for between retries when connecting to RabbitMQ. (integer
-# value)
-#rabbit_retry_backoff = 2
-
-# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
-# (integer value)
-#rabbit_interval_max = 30
-
-# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
-# (infinite retry count). (integer value)
-# This option is deprecated for removal.
-# Its value may be silently ignored in the future.
-#rabbit_max_retries = 0
-
-# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
-# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
-# is no longer controlled by the x-ha-policy argument when declaring a queue.
-# If you just want to make sure that all queues (except those with auto-
-# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
-# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
-#rabbit_ha_queues = false
-
-# Positive integer representing duration in seconds for queue TTL (x-expires).
-# Queues which are unused for the duration of the TTL are automatically
-# deleted. The parameter affects only reply and fanout queues. (integer value)
-# Minimum value: 1
-#rabbit_transient_queues_ttl = 1800
-
-# Specifies the number of messages to prefetch. Setting to zero allows
-# unlimited messages. (integer value)
-#rabbit_qos_prefetch_count = 0
-
-# Number of seconds after which the Rabbit broker is considered down if
-# heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
-# value)
-#heartbeat_timeout_threshold = 60
-
-# How often times during the heartbeat_timeout_threshold we check the
-# heartbeat. (integer value)
-#heartbeat_rate = 2
-
-
-[oslo_messaging_zmq]
-
-#
-# From oslo.messaging
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
-# The "host" option should point or resolve to this address. (string value)
-#rpc_zmq_bind_address = *
-
-# MatchMaker driver. (string value)
-# Possible values:
-# redis - <No description provided>
-# sentinel - <No description provided>
-# dummy - <No description provided>
-#rpc_zmq_matchmaker = redis
-
-# Number of ZeroMQ contexts, defaults to 1. (integer value)
-#rpc_zmq_contexts = 1
-
-# Maximum number of ingress messages to locally buffer per topic. Default is
-# unlimited. (integer value)
-#rpc_zmq_topic_backlog = <None>
-
-# Directory for holding IPC sockets. (string value)
-#rpc_zmq_ipc_dir = /var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
-# "host" option, if running Nova. (string value)
-#rpc_zmq_host = localhost
-
-# Number of seconds to wait before all pending messages will be sent after
-# closing a socket. The default value of -1 specifies an infinite linger
-# period. The value of 0 specifies no linger period. Pending messages shall be
-# discarded immediately when the socket is closed. Positive values specify an
-# upper bound for the linger period. (integer value)
-# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
-#zmq_linger = -1
-
-# The default number of seconds that poll should wait. Poll raises timeout
-# exception when timeout expired. (integer value)
-#rpc_poll_timeout = 1
-
-# Expiration timeout in seconds of a name service record about existing target
-# ( < 0 means no timeout). (integer value)
-#zmq_target_expire = 300
-
-# Update period in seconds of a name service record about existing target.
-# (integer value)
-#zmq_target_update = 180
-
-# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
-# value)
-#use_pub_sub = false
-
-# Use ROUTER remote proxy. (boolean value)
-#use_router_proxy = false
-
-# This option makes direct connections dynamic or static. It makes sense only
-# with use_router_proxy=False which means to use direct connections for direct
-# message types (ignored otherwise). (boolean value)
-#use_dynamic_connections = false
-
-# How many additional connections to a host will be made for failover reasons.
-# This option is actual only in dynamic connections mode. (integer value)
-#zmq_failover_connections = 2
-
-# Minimal port number for random ports range. (port value)
-# Minimum value: 0
-# Maximum value: 65535
-#rpc_zmq_min_port = 49153
-
-# Maximal port number for random ports range. (integer value)
-# Minimum value: 1
-# Maximum value: 65536
-#rpc_zmq_max_port = 65536
-
-# Number of retries to find free port number before fail with ZMQBindError.
-# (integer value)
-#rpc_zmq_bind_port_retries = 100
-
-# Default serialization mechanism for serializing/deserializing
-# outgoing/incoming messages (string value)
-# Possible values:
-# json - <No description provided>
-# msgpack - <No description provided>
-#rpc_zmq_serialization = json
-
-# This option configures round-robin mode in zmq socket. True means not keeping
-# a queue when server side disconnects. False means to keep queue and messages
-# even if server is disconnected, when the server appears we send all
-# accumulated messages to it. (boolean value)
-#zmq_immediate = true
-
-# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
-# other negative value) means to skip any overrides and leave it to OS default;
-# 0 and 1 (or any other positive value) mean to disable and enable the option
-# respectively. (integer value)
-#zmq_tcp_keepalive = -1
-
-# The duration between two keepalive transmissions in idle condition. The unit
-# is platform dependent, for example, seconds in Linux, milliseconds in Windows
-# etc. The default value of -1 (or any other negative value and 0) means to
-# skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_idle = -1
-
-# The number of retransmissions to be carried out before declaring that remote
-# end is not available. The default value of -1 (or any other negative value
-# and 0) means to skip any overrides and leave it to OS default. (integer
-# value)
-#zmq_tcp_keepalive_cnt = -1
-
-# The duration between two successive keepalive retransmissions, if
-# acknowledgement to the previous keepalive transmission is not received. The
-# unit is platform dependent, for example, seconds in Linux, milliseconds in
-# Windows etc. The default value of -1 (or any other negative value and 0)
-# means to skip any overrides and leave it to OS default. (integer value)
-#zmq_tcp_keepalive_intvl = -1
-
-# Maximum number of (green) threads to work concurrently. (integer value)
-#rpc_thread_pool_size = 100
-
-# Expiration timeout in seconds of a sent/received message after which it is
-# not tracked anymore by a client/server. (integer value)
-#rpc_message_ttl = 300
-
-# Wait for message acknowledgements from receivers. This mechanism works only
-# via proxy without PUB/SUB. (boolean value)
-#rpc_use_acks = false
-
-# Number of seconds to wait for an ack from a cast/call. After each retry
-# attempt this timeout is multiplied by some specified multiplier. (integer
-# value)
-#rpc_ack_timeout_base = 15
-
-# Number to multiply base ack timeout by after each retry attempt. (integer
-# value)
-#rpc_ack_timeout_multiplier = 2
-
-# Default number of message sending attempts in case of any problems occurred:
-# positive value N means at most N retries, 0 means no retries, None or -1 (or
-# any other negative values) mean to retry forever. This option is used only if
-# acknowledgments are enabled. (integer value)
-#rpc_retry_attempts = 3
-
-# List of publisher hosts SubConsumer can subscribe on. This option has higher
-# priority then the default publishers list taken from the matchmaker. (list
-# value)
-#subscribe_on =
-
-
 [polling]
 
 #
@@ -1307,10 +605,6 @@
 # pool with the same partitioning_group_prefix a disjoint subset of pollsters
 # should be loaded. (string value)
 #partitioning_group_prefix = <None>
-
-# Batch size of samples to send to notification agent, Set to 0 to disable
-# (integer value)
-#batch_size = 50
 
 
 [publisher]
@@ -1325,6 +619,7 @@
 # Deprecated group/name - [publisher_rpc]/metering_secret
 # Deprecated group/name - [publisher]/metering_secret
 #telemetry_secret = change this for valid signing
+telemetry_secret=opnfv_secret
 
 
 [publisher_notifier]
@@ -1357,87 +652,115 @@
 #secret_key = <None>
 
 
-[rgw_client]
-
-#
-# From ceilometer
-#
-
-# Whether RGW uses implicit tenants or not. (boolean value)
-#implicit_tenants = false
-
-
 [service_credentials]
 
 #
 # From ceilometer-auth
 #
-
-# Authentication type to load (string value)
-# Deprecated group/name - [service_credentials]/auth_plugin
-#auth_type = <None>
-
-# Config Section from which to load plugin specific options (string value)
-#auth_section = <None>
+# Name of nova region to use. Useful if keystone manages more than one region.
+# (string value)
+#region_name = <None>
+region_name = RegionOne
+
+# Type of the nova endpoint to use.  This endpoint will be looked up in the
+# keystone catalog and should be one of public, internal or admin. (string
+# value)
+# Possible values:
+# public - <No description provided>
+# admin - <No description provided>
+# internal - <No description provided>
+#endpoint_type = public
+endpoint_type = internalURL
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version = <None>
+
 
 # Authentication URL (string value)
 #auth_url = <None>
+auth_url = http://10.167.4.35:35357
+
+# Authentication type to load (string value)
+# Deprecated group/name - [nova]/auth_plugin
+#auth_type = <None>
+auth_type = password
+
+# Required if identity server requires client certificate (string value)
+#certfile = <None>
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile = <None>
+
+# Optional domain ID to use with v3 and v2 parameters. It will be used for both
+# the user and project domain in v3 and ignored in v2 authentication. (string
+# value)
+#default_domain_id = <None>
+
+# Optional domain name to use with v3 API and v2 parameters. It will be used for
+# both the user and project domain in v3 and ignored in v2 authentication.
+# (string value)
+#default_domain_name = <None>
+
+# Domain ID to scope to (string value)
+#domain_id = <None>
+
+# Domain name to scope to (string value)
+#domain_name = <None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# Required if identity server requires client certificate (string value)
+#keyfile = <None>
+
+# User's password (string value)
+#password = <None>
+password = opnfv_secret
+
+# Domain ID containing project (string value)
+#project_domain_id = <None>
+project_domain_id = default
+
+# Domain name containing project (string value)
+#project_domain_name = <None>
+
+# Project ID to scope to (string value)
+#project_id = <None>
+
+# Project name to scope to (string value)
+#project_name = <None>
+project_name = service
 
 # Scope for system operations (string value)
 #system_scope = <None>
 
-# Domain ID to scope to (string value)
-#domain_id = <None>
-
-# Domain name to scope to (string value)
-#domain_name = <None>
-
-# Project ID to scope to (string value)
-# Deprecated group/name - [service_credentials]/tenant_id
-#project_id = <None>
-
-# Project name to scope to (string value)
-# Deprecated group/name - [service_credentials]/tenant_name
-#project_name = <None>
-
-# Domain ID containing project (string value)
-#project_domain_id = <None>
-
-# Domain name containing project (string value)
-#project_domain_name = <None>
+# Tenant ID (string value)
+#tenant_id = <None>
+
+# Tenant Name (string value)
+#tenant_name = <None>
+
+# Timeout value for http requests (integer value)
+#timeout = <None>
 
 # Trust ID (string value)
 #trust_id = <None>
 
-# Optional domain ID to use with v3 and v2 parameters. It will be used for both
-# the user and project domain in v3 and ignored in v2 authentication. (string
-# value)
-#default_domain_id = <None>
-
-# Optional domain name to use with v3 API and v2 parameters. It will be used
-# for both the user and project domain in v3 and ignored in v2 authentication.
-# (string value)
-#default_domain_name = <None>
-
-# User id (string value)
-#user_id = <None>
-
-# Username (string value)
-# Deprecated group/name - [service_credentials]/user_name
-#username = <None>
-
 # User's domain id (string value)
 #user_domain_id = <None>
+user_domain_id = default
 
 # User's domain name (string value)
 #user_domain_name = <None>
 
-# User's password (string value)
-#password = <None>
-
-# Region name to use for OpenStack service endpoints. (string value)
-# Deprecated group/name - [DEFAULT]/os_region_name
-#region_name = <None>
+# User ID (string value)
+#user_id = <None>
+
+# Username (string value)
+# Deprecated group/name - [neutron]/user_name
+#username = <None>
+username = ceilometer
 
 # Type of endpoint in Identity service catalog to use for communication with
 # OpenStack services. (string value)
@@ -1450,7 +773,7 @@
 # internalURL - <No description provided>
 # adminURL - <No description provided>
 # Deprecated group/name - [service_credentials]/os_endpoint_type
-#interface = public
+interface = internal
 
 
 [service_types]
@@ -1484,59 +807,306 @@
 # Deprecated group/name - [service_types]/cinderv2
 #cinder = volumev3
 
-
-[vmware]
-
-#
-# From ceilometer
-#
-
-# IP address of the VMware vSphere host. (host address value)
-#host_ip = 127.0.0.1
-
-# Port of the VMware vSphere host. (port value)
+[xenapi]
+
+#
+# From ceilometer
+#
+
+# URL for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_url = <None>
+
+# Username for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_username = root
+
+# Password for connection to XenServer/Xen Cloud Platform. (string value)
+#connection_password = <None>
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+#
+# From oslo.messaging
+#
+
+# The Drivers(s) to handle sending notifications. Possible values are
+# messaging, messagingv2, routing, log, test, noop (multi valued)
+# Deprecated group/name - [DEFAULT]/notification_driver
+#driver =
+driver = messagingv2
+
+# A URL representing the messaging driver to use for notifications. If
+# not set, we fall back to the same configuration used for RPC.
+# (string value)
+# Deprecated group/name - [DEFAULT]/notification_transport_url
+#transport_url = <None>
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+# Deprecated group/name - [DEFAULT]/notification_topics
+#topics = notifications
+topics = notifications
+
+# The maximum number of attempts to re-send a notification message
+# which failed to be delivered due to a recoverable error. 0 - No
+# retry, -1 - indefinite (integer value)
+#retry = -1
+[oslo_messaging_rabbit]
+#
+# From oslo.messaging
+#
+
+# Use durable queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/amqp_durable_queues
+# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
+#amqp_durable_queues = false
+
+# Auto-delete queues in AMQP. (boolean value)
+#amqp_auto_delete = false
+
+# Enable SSL (boolean value)
+#ssl = <None>
+
+
+# How long to wait before reconnecting in response to an AMQP consumer
+# cancel notification. (floating point value)
+#kombu_reconnect_delay = 1.0
+
+# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression
+# will not be used. This option may not be available in future
+# versions. (string value)
+#kombu_compression = <None>
+
+# How long to wait a missing client before abandoning to send it its
+# replies. This value should not be longer than rpc_response_timeout.
+# (integer value)
+# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
+#kombu_missing_consumer_retry_timeout = 60
+
+# Determines how the next RabbitMQ node is chosen in case the one we
+# are currently connected to becomes unavailable. Takes effect only if
+# more than one RabbitMQ node is provided in config. (string value)
+# Possible values:
+# round-robin - <No description provided>
+# shuffle - <No description provided>
+#kombu_failover_strategy = round-robin
+
+# DEPRECATED: The RabbitMQ broker address where a single node is used.
+# (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_host = localhost
+
+# DEPRECATED: The RabbitMQ broker port where a single node is used.
+# (port value)
 # Minimum value: 0
 # Maximum value: 65535
-#host_port = 443
-
-# Username of VMware vSphere. (string value)
-#host_username =
-
-# Password of VMware vSphere. (string value)
-#host_password =
-
-# CA bundle file to use in verifying the vCenter server certificate. (string
-# value)
-#ca_file = <None>
-
-# If true, the vCenter server certificate is not verified. If false, then the
-# default CA truststore is used for verification. This option is ignored if
-# "ca_file" is set. (boolean value)
-#insecure = false
-
-# Number of times a VMware vSphere API may be retried. (integer value)
-#api_retry_count = 10
-
-# Sleep time in seconds for polling an ongoing async task. (floating point
-# value)
-#task_poll_interval = 0.5
-
-# Optional vim service WSDL location e.g http://<server>/vimService.wsdl.
-# Optional over-ride to default location for bug work-arounds. (string value)
-#wsdl_location = <None>
-
-
-[xenapi]
-
-#
-# From ceilometer
-#
-
-# URL for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_url = <None>
-
-# Username for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_username = root
-
-# Password for connection to XenServer/Xen Cloud Platform. (string value)
-#connection_password = <None>
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_port = 5672
+
+# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_hosts = $rabbit_host:$rabbit_port
+
+# DEPRECATED: The RabbitMQ userid. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_userid = guest
+
+# DEPRECATED: The RabbitMQ password. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_password = guest
+
+# The RabbitMQ login method. (string value)
+# Possible values:
+# PLAIN - <No description provided>
+# AMQPLAIN - <No description provided>
+# RABBIT-CR-DEMO - <No description provided>
+#rabbit_login_method = AMQPLAIN
+
+# DEPRECATED: The RabbitMQ virtual host. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Replaced by [DEFAULT]/transport_url
+#rabbit_virtual_host = /
+
+# How frequently to retry connecting with RabbitMQ. (integer value)
+#rabbit_retry_interval = 1
+
+# How long to backoff for between retries when connecting to RabbitMQ.
+# (integer value)
+#rabbit_retry_backoff = 2
+
+# Maximum interval of RabbitMQ connection retries. Default is 30
+# seconds. (integer value)
+#rabbit_interval_max = 30
+
+# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
+# is 0 (infinite retry count). (integer value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#rabbit_max_retries = 0
+
+# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
+# queue mirroring is no longer controlled by the x-ha-policy argument
+# when declaring a queue. If you just want to make sure that all
+# queues (except those with auto-generated names) are mirrored across
+# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
+# mode": "all"}' " (boolean value)
+#rabbit_ha_queues = false
+
+# Positive integer representing duration in seconds for queue TTL
+# (x-expires). Queues which are unused for the duration of the TTL are
+# automatically deleted. The parameter affects only reply and fanout
+# queues. (integer value)
+# Minimum value: 1
+#rabbit_transient_queues_ttl = 1800
+
+# Specifies the number of messages to prefetch. Setting to zero allows
+# unlimited messages. (integer value)
+
+# NOTE(dmescheryakov) hardcoding to >0 by default
+# Having no prefetch limit makes oslo.messaging consume all available
+# messages from the queue. That can lead to a situation when several
+# server processes hog all the messages leaving others out of business.
+# That leads to artificial high message processing latency and at the
+# extrime to MessagingTimeout errors.
+rabbit_qos_prefetch_count = 64
+
+# Number of seconds after which the Rabbit broker is considered down
+# if heartbeat's keep-alive fails (0 disable the heartbeat).
+# EXPERIMENTAL (integer value)
+#heartbeat_timeout_threshold = 60
+
+# How often times during the heartbeat_timeout_threshold we check the
+# heartbeat. (integer value)
+#heartbeat_rate = 2
+
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
+#fake_rabbit = false
+
+# Maximum number of channels to allow (integer value)
+#channel_max = <None>
+
+
+# The maximum byte size for an AMQP frame (integer value)
+#frame_max = <None>
+
+# How often to send heartbeats for consumer's connections (integer
+# value)
+#heartbeat_interval = 3
+
+# Arguments passed to ssl.wrap_socket (dict value)
+#ssl_options = <None>
+
+# Set socket timeout in seconds for connection's socket (floating
+# point value)
+#socket_timeout = 0.25
+
+# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
+# point value)
+#tcp_user_timeout = 0.25
+
+# Set delay for reconnection to some host which has connection error
+# (floating point value)
+#host_connection_reconnect_delay = 0.25
+
+# Connection factory implementation (string value)
+# Possible values:
+# new - <No description provided>
+# single - <No description provided>
+# read_write - <No description provided>
+#connection_factory = single
+
+# Maximum number of connections to keep queued. (integer value)
+#pool_max_size = 30
+
+# Maximum number of connections to create above `pool_max_size`.
+# (integer value)
+#pool_max_overflow = 0
+
+# Default number of seconds to wait for a connections to available
+# (integer value)
+#pool_timeout = 30
+
+# Lifetime of a connection (since creation) in seconds or None for no
+# recycling. Expired connections are closed on acquire. (integer
+# value)
+#pool_recycle = 600
+
+# Threshold at which inactive (since release) connections are
+# considered stale in seconds or None for no staleness. Stale
+# connections are closed on acquire. (integer value)
+#pool_stale = 60
+
+# Default serialization mechanism for serializing/deserializing
+# outgoing/incoming messages (string value)
+# Possible values:
+# json - <No description provided>
+# msgpack - <No description provided>
+#default_serializer_type = json
+
+# Persist notification messages. (boolean value)
+#notification_persistence = false
+
+# Exchange name for sending notifications (string value)
+#default_notification_exchange = ${control_exchange}_notification
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# notification listener. (integer value)
+#notification_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending notification, -1 means infinite retry. (integer value)
+#default_notification_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending notification message (floating point value)
+#notification_retry_delay = 0.25
+
+# Time to live for rpc queues without consumers in seconds. (integer
+# value)
+#rpc_queue_expiration = 60
+
+# Exchange name for sending RPC messages (string value)
+#default_rpc_exchange = ${control_exchange}_rpc
+
+# Exchange name for receiving RPC replies (string value)
+#rpc_reply_exchange = ${control_exchange}_rpc_reply
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# rpc listener. (integer value)
+#rpc_listener_prefetch_count = 100
+
+# Max number of not acknowledged message which RabbitMQ can send to
+# rpc reply listener. (integer value)
+#rpc_reply_listener_prefetch_count = 100
+
+# Reconnecting retry count in case of connectivity problem during
+# sending reply. -1 means infinite retry during rpc_timeout (integer
+# value)
+#rpc_reply_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending reply. (floating point value)
+#rpc_reply_retry_delay = 0.25
+
+# Reconnecting retry count in case of connectivity problem during
+# sending RPC message, -1 means infinite retry. If actual retry
+# attempts in not 0 the rpc request could be processed more than one
+# time (integer value)
+#default_rpc_retry_attempts = -1
+
+# Reconnecting retry delay in case of connectivity problem during
+# sending RPC message (floating point value)
+#rpc_retry_delay = 0.25

2019-04-30 22:45:09,116 [salt.state       :1951][INFO    ][3445] Completed state [/etc/ceilometer/ceilometer.conf] at time 22:45:09.116718 duration_in_ms=162.026
2019-04-30 22:45:09,116 [salt.state       :1780][INFO    ][3445] Running state [/etc/default/ceilometer-agent-compute] at time 22:45:09.116953
2019-04-30 22:45:09,117 [salt.state       :1813][INFO    ][3445] Executing state file.managed for [/etc/default/ceilometer-agent-compute]
2019-04-30 22:45:09,130 [salt.fileclient  :1219][INFO    ][3445] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/default'
2019-04-30 22:45:09,135 [salt.state       :300 ][INFO    ][3445] File changed:
New file
2019-04-30 22:45:09,135 [salt.state       :1951][INFO    ][3445] Completed state [/etc/default/ceilometer-agent-compute] at time 22:45:09.135772 duration_in_ms=18.818
2019-04-30 22:45:09,136 [salt.state       :1780][INFO    ][3445] Running state [/etc/ceilometer/pipeline.yaml] at time 22:45:09.135988
2019-04-30 22:45:09,136 [salt.state       :1813][INFO    ][3445] Executing state file.managed for [/etc/ceilometer/pipeline.yaml]
2019-04-30 22:45:09,149 [salt.fileclient  :1219][INFO    ][3445] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/rocky/pipeline.yaml'
2019-04-30 22:45:09,187 [salt.state       :300 ][INFO    ][3445] File changed:
New file
2019-04-30 22:45:09,187 [salt.state       :1951][INFO    ][3445] Completed state [/etc/ceilometer/pipeline.yaml] at time 22:45:09.187356 duration_in_ms=51.367
2019-04-30 22:45:09,187 [salt.state       :1780][INFO    ][3445] Running state [/etc/ceilometer/event_pipeline.yaml] at time 22:45:09.187571
2019-04-30 22:45:09,187 [salt.state       :1813][INFO    ][3445] Executing state file.managed for [/etc/ceilometer/event_pipeline.yaml]
2019-04-30 22:45:09,200 [salt.fileclient  :1219][INFO    ][3445] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/rocky/event_pipeline.yaml'
2019-04-30 22:45:09,233 [salt.state       :300 ][INFO    ][3445] File changed:
New file
2019-04-30 22:45:09,234 [salt.state       :1951][INFO    ][3445] Completed state [/etc/ceilometer/event_pipeline.yaml] at time 22:45:09.234037 duration_in_ms=46.466
2019-04-30 22:45:09,234 [salt.state       :1780][INFO    ][3445] Running state [/etc/ceilometer/polling.yaml] at time 22:45:09.234231
2019-04-30 22:45:09,234 [salt.state       :1813][INFO    ][3445] Executing state file.managed for [/etc/ceilometer/polling.yaml]
2019-04-30 22:45:09,247 [salt.fileclient  :1219][INFO    ][3445] Fetching file from saltenv 'base', ** done ** 'ceilometer/files/rocky/polling.yaml'
2019-04-30 22:45:09,283 [salt.state       :300 ][INFO    ][3445] File changed:
--- 
+++ 
@@ -1,27 +1,7 @@
+
 ---
 sources:
-    - name: some_pollsters
-      interval: 300
+    - name: default_pollsters
+      interval: 180
       meters:
-        - cpu
-        - cpu_l3_cache
-        - memory.usage
-        - network.incoming.bytes
-        - network.incoming.packets
-        - network.outgoing.bytes
-        - network.outgoing.packets
-        - disk.device.read.bytes
-        - disk.device.read.requests
-        - disk.device.write.bytes
-        - disk.device.write.requests
-        - hardware.cpu.util
-        - hardware.memory.used
-        - hardware.memory.total
-        - hardware.memory.buffer
-        - hardware.memory.cached
-        - hardware.memory.swap.avail
-        - hardware.memory.swap.total
-        - hardware.system_stats.io.outgoing.blocks
-        - hardware.system_stats.io.incoming.blocks
-        - hardware.network.ip.incoming.datagrams
-        - hardware.network.ip.outgoing.datagrams
+        - "*"

2019-04-30 22:45:09,284 [salt.state       :1951][INFO    ][3445] Completed state [/etc/ceilometer/polling.yaml] at time 22:45:09.283943 duration_in_ms=49.71
2019-04-30 22:45:09,595 [salt.state       :1780][INFO    ][3445] Running state [ceilometer-agent-compute] at time 22:45:09.595900
2019-04-30 22:45:09,596 [salt.state       :1813][INFO    ][3445] Executing state service.running for [ceilometer-agent-compute]
2019-04-30 22:45:09,596 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['systemctl', 'status', 'ceilometer-agent-compute.service', '-n', '0'] in directory '/root'
2019-04-30 22:45:09,606 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['systemctl', 'is-active', 'ceilometer-agent-compute.service'] in directory '/root'
2019-04-30 22:45:09,614 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['systemctl', 'is-enabled', 'ceilometer-agent-compute.service'] in directory '/root'
2019-04-30 22:45:09,621 [salt.state       :300 ][INFO    ][3445] The service ceilometer-agent-compute is already running
2019-04-30 22:45:09,621 [salt.state       :1951][INFO    ][3445] Completed state [ceilometer-agent-compute] at time 22:45:09.621539 duration_in_ms=25.639
2019-04-30 22:45:09,621 [salt.state       :1780][INFO    ][3445] Running state [ceilometer-agent-compute] at time 22:45:09.621686
2019-04-30 22:45:09,621 [salt.state       :1813][INFO    ][3445] Executing state service.mod_watch for [ceilometer-agent-compute]
2019-04-30 22:45:09,622 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['systemctl', 'is-active', 'ceilometer-agent-compute.service'] in directory '/root'
2019-04-30 22:45:09,629 [salt.loaded.int.module.cmdmod:395 ][INFO    ][3445] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'ceilometer-agent-compute.service'] in directory '/root'
2019-04-30 22:45:09,711 [salt.state       :300 ][INFO    ][3445] {'ceilometer-agent-compute': True}
2019-04-30 22:45:09,711 [salt.state       :1951][INFO    ][3445] Completed state [ceilometer-agent-compute] at time 22:45:09.711694 duration_in_ms=90.006
2019-04-30 22:45:09,713 [salt.minion      :1711][INFO    ][3445] Returning information for job: 20190430224451101774
2019-04-30 22:50:27,001 [salt.minion      :1308][INFO    ][3337] User sudo_ubuntu Executing command cp.push_dir with jid 20190430225026989463
2019-04-30 22:50:27,012 [salt.minion      :1432][INFO    ][4950] Starting a new job with PID 4950
