Nov 16 16:51:35 cmp001 systemd-modules-load[427]: Inserted module 'iscsi_tcp'
Nov 16 16:51:35 cmp001 systemd-modules-load[427]: Inserted module 'ib_iser'
Nov 16 16:51:35 cmp001 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 16 16:51:35 cmp001 systemd[1]: Started Create Static Device Nodes in /dev.
Nov 16 16:51:35 cmp001 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 16 16:51:35 cmp001 systemd[1]: Starting udev Kernel Device Manager...
Nov 16 16:51:35 cmp001 systemd[1]: Started Set the console keyboard layout.
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Linux version 4.15.0-70-generic (buildd@lgw01-amd64-055) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 (Ubuntu 4.15.0-70.79-generic 4.15.18)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:51:35 cmp001 kernel: [    0.000000] KERNEL supported cpus:
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Local File Systems (Pre).
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   Intel GenuineIntel
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   AMD AuthenticAMD
Nov 16 16:51:35 cmp001 systemd[1]: Started Flush Journal to Persistent Storage.
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   Centaur CentaurHauls
Nov 16 16:51:35 cmp001 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 16 16:51:35 cmp001 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 16 16:51:35 cmp001 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 16 16:51:35 cmp001 systemd[1]: Started udev Kernel Device Manager.
Nov 16 16:51:35 cmp001 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Local Encrypted Volumes.
Nov 16 16:51:35 cmp001 systemd-udevd[460]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:35 cmp001 systemd-udevd[474]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:35 cmp001 systemd-udevd[461]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:35 cmp001 systemd-udevd[472]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:35 cmp001 systemd[1]: Found device /dev/ttyS0.
Nov 16 16:51:35 cmp001 systemd-udevd[467]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:35 cmp001 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 16 16:51:35 cmp001 systemd[1]: Found device /dev/disk/by-label/UEFI.
Nov 16 16:51:35 cmp001 systemd[1]: Mounting /boot/efi...
Nov 16 16:51:35 cmp001 systemd[1]: Mounted /boot/efi.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Local File Systems.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Commit a transient machine-id on disk...
Nov 16 16:51:35 cmp001 systemd[1]: Starting ebtables ruleset management...
Nov 16 16:51:35 cmp001 systemd[1]: Starting Set console font and keymap...
Nov 16 16:51:35 cmp001 systemd[1]: Starting Create Volatile Files and Directories...
Nov 16 16:51:35 cmp001 systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
Nov 16 16:51:35 cmp001 systemd[1]: Starting AppArmor initialization...
Nov 16 16:51:35 cmp001 systemd[1]: Started Set console font and keymap.
Nov 16 16:51:35 cmp001 systemd[1]: Started Create Volatile Files and Directories.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Update UTMP about System Boot/Shutdown...
Nov 16 16:51:35 cmp001 systemd[1]: Starting Network Time Synchronization...
Nov 16 16:51:35 cmp001 systemd[1]: Started Tell Plymouth To Write Out Runtime Data.
Nov 16 16:51:35 cmp001 systemd[1]: Started Update UTMP about System Boot/Shutdown.
Nov 16 16:51:35 cmp001 apparmor[610]:  * Starting AppArmor profiles
Nov 16 16:51:35 cmp001 systemd[1]: Started ebtables ruleset management.
Nov 16 16:51:35 cmp001 systemd[1]: Started Commit a transient machine-id on disk.
Nov 16 16:51:35 cmp001 apparmor[610]: Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
Nov 16 16:51:35 cmp001 systemd[1]: Started Network Time Synchronization.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target System Time Synchronized.
Nov 16 16:51:35 cmp001 apparmor[610]:    ...done.
Nov 16 16:51:35 cmp001 systemd[1]: Started AppArmor initialization.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Initial cloud-init job (pre-networking)...
Nov 16 16:51:35 cmp001 cloud-init[738]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init-local' at Sat, 16 Nov 2019 16:51:25 +0000. Up 7.72 seconds.
Nov 16 16:51:35 cmp001 systemd[1]: Started Initial cloud-init job (pre-networking).
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Network (Pre).
Nov 16 16:51:35 cmp001 systemd[1]: Starting Raise network interfaces...
Nov 16 16:51:35 cmp001 dhclient[822]: Internet Systems Consortium DHCP Client 4.3.5
Nov 16 16:51:35 cmp001 ifup[801]: Internet Systems Consortium DHCP Client 4.3.5
Nov 16 16:51:35 cmp001 ifup[801]: Copyright 2004-2016 Internet Systems Consortium.
Nov 16 16:51:35 cmp001 ifup[801]: All rights reserved.
Nov 16 16:51:35 cmp001 ifup[801]: For info, please visit https://www.isc.org/software/dhcp/
Nov 16 16:51:35 cmp001 dhclient[822]: Copyright 2004-2016 Internet Systems Consortium.
Nov 16 16:51:35 cmp001 dhclient[822]: All rights reserved.
Nov 16 16:51:35 cmp001 dhclient[822]: For info, please visit https://www.isc.org/software/dhcp/
Nov 16 16:51:35 cmp001 dhclient[822]: 
Nov 16 16:51:35 cmp001 dhclient[822]: Listening on LPF/ens3/52:54:00:da:c3:64
Nov 16 16:51:35 cmp001 ifup[801]: Listening on LPF/ens3/52:54:00:da:c3:64
Nov 16 16:51:35 cmp001 ifup[801]: Sending on   LPF/ens3/52:54:00:da:c3:64
Nov 16 16:51:35 cmp001 ifup[801]: Sending on   Socket/fallback
Nov 16 16:51:35 cmp001 ifup[801]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 3 (xid=0x338c1157)
Nov 16 16:51:35 cmp001 dhclient[822]: Sending on   LPF/ens3/52:54:00:da:c3:64
Nov 16 16:51:35 cmp001 dhclient[822]: Sending on   Socket/fallback
Nov 16 16:51:35 cmp001 dhclient[822]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 3 (xid=0x338c1157)
Nov 16 16:51:35 cmp001 dhclient[822]: DHCPREQUEST of 192.168.11.40 on ens3 to 255.255.255.255 port 67 (xid=0x57118c33)
Nov 16 16:51:35 cmp001 ifup[801]: DHCPREQUEST of 192.168.11.40 on ens3 to 255.255.255.255 port 67 (xid=0x57118c33)
Nov 16 16:51:35 cmp001 ifup[801]: DHCPOFFER of 192.168.11.40 from 192.168.11.3
Nov 16 16:51:35 cmp001 ifup[801]: DHCPACK of 192.168.11.40 from 192.168.11.3
Nov 16 16:51:35 cmp001 dhclient[822]: DHCPOFFER of 192.168.11.40 from 192.168.11.3
Nov 16 16:51:35 cmp001 dhclient[822]: DHCPACK of 192.168.11.40 from 192.168.11.3
Nov 16 16:51:35 cmp001 ifup[801]: Failed to try-reload-or-restart systemd-resolved.service: Unit systemd-resolved.service is masked.
Nov 16 16:51:35 cmp001 dhclient[822]: bound to 192.168.11.40 -- renewal in 1419 seconds.
Nov 16 16:51:35 cmp001 ifup[801]: bound to 192.168.11.40 -- renewal in 1419 seconds.
Nov 16 16:51:35 cmp001 systemd[1]: Started Raise network interfaces.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Network.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 16 16:51:35 cmp001 cloud-init[879]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init' at Sat, 16 Nov 2019 16:51:29 +0000. Up 11.73 seconds.
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: | Device |   Up  |          Address           |      Mask     | Scope  |     Hw-Address    |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |  ens3  |  True |       192.168.11.40        | 255.255.255.0 | global | 52:54:00:da:c3:64 |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |  ens3  |  True | fe80::5054:ff:feda:c364/64 |       .       |  link  | 52:54:00:da:c3:64 |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |  ens4  | False |             .              |       .       |   .    | 52:54:00:88:dd:d4 |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |  ens5  | False |             .              |       .       |   .    | 52:54:00:5d:c3:57 |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |  ens6  | False |             .              |       .       |   .    | 52:54:00:48:c5:4b |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |   lo   |  True |         127.0.0.1          |   255.0.0.0   |  host  |         .         |
Nov 16 16:51:35 cmp001 kernel: [    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |   lo   |  True |          ::1/128           |       .       |  host  |         .         |
Nov 16 16:51:35 cmp001 kernel: [    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:35 cmp001 kernel: [    0.000000] e820: BIOS-provided physical RAM map:
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdefff] usable
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x00000000bffdf000-0x00000000bfffffff] reserved
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000033fffffff] usable
Nov 16 16:51:35 cmp001 kernel: [    0.000000] NX (Execute Disable) protection: active
Nov 16 16:51:35 cmp001 kernel: [    0.000000] SMBIOS 2.8 present.
Nov 16 16:51:35 cmp001 kernel: [    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: | Route | Destination  |   Gateway    |    Genmask    | Interface | Flags |
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Hypervisor detected: KVM
Nov 16 16:51:35 cmp001 kernel: [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 16 16:51:35 cmp001 kernel: [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 16 16:51:35 cmp001 kernel: [    0.000000] e820: last_pfn = 0x340000 max_arch_pfn = 0x400000000
Nov 16 16:51:35 cmp001 kernel: [    0.000000] MTRR default type: write-back
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:35 cmp001 kernel: [    0.000000] MTRR fixed ranges enabled:
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   00000-9FFFF write-back
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |   0   |   0.0.0.0    | 192.168.11.3 |    0.0.0.0    |    ens3   |   UG  |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |   1   | 192.168.11.0 |   0.0.0.0    | 255.255.255.0 |    ens3   |   U   |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |   1   |  fe80::/64  |    ::   |    ens3   |   U   |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |   3   |    local    |    ::   |    ens3   |   U   |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: |   4   |   ff00::/8  |    ::   |    ens3   |   U   |
Nov 16 16:51:35 cmp001 cloud-init[879]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:35 cmp001 cloud-init[879]: Generating public/private rsa key pair.
Nov 16 16:51:35 cmp001 cloud-init[879]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Nov 16 16:51:35 cmp001 cloud-init[879]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
Nov 16 16:51:35 cmp001 cloud-init[879]: The key fingerprint is:
Nov 16 16:51:35 cmp001 cloud-init[879]: SHA256:Nft6e43RyKK7wV+nChDGzlG/41vQ3fMbubKxPu+olfg root@cmp001
Nov 16 16:51:35 cmp001 cloud-init[879]: The key's randomart image is:
Nov 16 16:51:35 cmp001 cloud-init[879]: +---[RSA 2048]----+
Nov 16 16:51:35 cmp001 cloud-init[879]: |          .      |
Nov 16 16:51:35 cmp001 cloud-init[879]: |       . . .     |
Nov 16 16:51:35 cmp001 cloud-init[879]: |        = o .    |
Nov 16 16:51:35 cmp001 cloud-init[879]: |       + + o o ..|
Nov 16 16:51:35 cmp001 cloud-init[879]: |        S . +..+o|
Nov 16 16:51:35 cmp001 cloud-init[879]: |         o ooo= =|
Nov 16 16:51:35 cmp001 cloud-init[879]: |          +oo=o*o|
Nov 16 16:51:35 cmp001 cloud-init[879]: |          .=+**o=|
Nov 16 16:51:35 cmp001 cloud-init[879]: |          +=BEB= |
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   A0000-BFFFF uncachable
Nov 16 16:51:35 cmp001 cloud-init[879]: +----[SHA256]-----+
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   C0000-FFFFF write-protect
Nov 16 16:51:35 cmp001 kernel: [    0.000000] MTRR variable ranges enabled:
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   0 base 00C0000000 mask FFC0000000 uncachable
Nov 16 16:51:35 cmp001 cloud-init[879]: Generating public/private dsa key pair.
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   1 disabled
Nov 16 16:51:35 cmp001 cloud-init[879]: Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   2 disabled
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   3 disabled
Nov 16 16:51:35 cmp001 cloud-init[879]: Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   4 disabled
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   5 disabled
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   6 disabled
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   7 disabled
Nov 16 16:51:35 cmp001 cloud-init[879]: The key fingerprint is:
Nov 16 16:51:35 cmp001 kernel: [    0.000000] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 16 16:51:35 cmp001 kernel: [    0.000000] e820: last_pfn = 0xbffdf max_arch_pfn = 0x400000000
Nov 16 16:51:35 cmp001 cloud-init[879]: SHA256:P3yGWKDn3IDcTGMBBWDA4CcYxZBcgD99hnXx03LPnVY root@cmp001
Nov 16 16:51:35 cmp001 kernel: [    0.000000] found SMP MP-table at [mem 0x000f6590-0x000f659f]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Scanning 1 areas for low memory corruption
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Using GB pages for direct mapping
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BRK [0x26ed41000, 0x26ed41fff] PGTABLE
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BRK [0x26ed42000, 0x26ed42fff] PGTABLE
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BRK [0x26ed43000, 0x26ed43fff] PGTABLE
Nov 16 16:51:35 cmp001 cloud-init[879]: The key's randomart image is:
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BRK [0x26ed44000, 0x26ed44fff] PGTABLE
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BRK [0x26ed45000, 0x26ed45fff] PGTABLE
Nov 16 16:51:35 cmp001 cloud-init[879]: +---[DSA 1024]----+
Nov 16 16:51:35 cmp001 kernel: [    0.000000] BRK [0x26ed46000, 0x26ed46fff] PGTABLE
Nov 16 16:51:35 cmp001 kernel: [    0.000000] RAMDISK: [mem 0x35a87000-0x36d3afff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: Early table checksum verification disabled
Nov 16 16:51:35 cmp001 cloud-init[879]: |*O=oo.o++.       |
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: RSDP 0x00000000000F6540 000014 (v00 BOCHS )
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: RSDT 0x00000000BFFE14B2 000030 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: FACP 0x00000000BFFE08D4 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
Nov 16 16:51:35 cmp001 cloud-init[879]: |=oo.  . .o .     |
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: DSDT 0x00000000BFFDFD00 000BD4 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: FACS 0x00000000BFFDFCC0 000040
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: SSDT 0x00000000BFFE0948 000ACA (v01 BOCHS  BXPCSSDT 00000001 BXPC 00000001)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: APIC 0x00000000BFFE1412 0000A0 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:51:35 cmp001 cloud-init[879]: |.+ o o .= + o   E|
Nov 16 16:51:35 cmp001 kernel: [    0.000000] No NUMA configuration found
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Faking a node at [mem 0x0000000000000000-0x000000033fffffff]
Nov 16 16:51:35 cmp001 cloud-init[879]: |  = o.oB o + o .o|
Nov 16 16:51:35 cmp001 kernel: [    0.000000] NODE_DATA(0) allocated [mem 0x33ffd5000-0x33fffffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] kvm-clock: cpu 0, msr 3:3ff54001, primary cpu clock
Nov 16 16:51:35 cmp001 kernel: [    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 16 16:51:35 cmp001 cloud-init[879]: |   . o+ S .   oo.|
Nov 16 16:51:35 cmp001 kernel: [    0.000000] kvm-clock: using sched offset of 11087575933 cycles
Nov 16 16:51:35 cmp001 kernel: [    0.000000] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Zone ranges:
Nov 16 16:51:35 cmp001 cloud-init[879]: |       + B .  .  |
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   Normal   [mem 0x0000000100000000-0x000000033fffffff]
Nov 16 16:51:35 cmp001 cloud-init[879]: |        + * o    |
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   Device   empty
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Movable zone start for each node
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Early memory node ranges
Nov 16 16:51:35 cmp001 cloud-init[879]: |           +     |
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   node   0: [mem 0x0000000000100000-0x00000000bffdefff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   node   0: [mem 0x0000000100000000-0x000000033fffffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Reserved but unavailable: 98 pages
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000033fffffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] On node 0 totalpages: 3145597
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   DMA zone: 64 pages used for memmap
Nov 16 16:51:35 cmp001 cloud-init[879]: |                 |
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   DMA zone: 21 pages reserved
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   DMA zone: 3998 pages, LIFO batch:0
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   DMA32 zone: 12224 pages used for memmap
Nov 16 16:51:35 cmp001 cloud-init[879]: +----[SHA256]-----+
Nov 16 16:51:35 cmp001 cloud-init[879]: Generating public/private ecdsa key pair.
Nov 16 16:51:35 cmp001 cloud-init[879]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key.
Nov 16 16:51:35 cmp001 cloud-init[879]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub.
Nov 16 16:51:35 cmp001 cloud-init[879]: The key fingerprint is:
Nov 16 16:51:35 cmp001 cloud-init[879]: SHA256:l40SYRJZ6crcf+2UnO+4luDN9pMWYswyAN2FYHfn9Dk root@cmp001
Nov 16 16:51:35 cmp001 cloud-init[879]: The key's randomart image is:
Nov 16 16:51:35 cmp001 cloud-init[879]: +---[ECDSA 256]---+
Nov 16 16:51:35 cmp001 cloud-init[879]: |      o+=+o.oo o |
Nov 16 16:51:35 cmp001 cloud-init[879]: |      .+oo.o. + o|
Nov 16 16:51:35 cmp001 cloud-init[879]: |       .o      E.|
Nov 16 16:51:35 cmp001 cloud-init[879]: |        .o +    .|
Nov 16 16:51:35 cmp001 cloud-init[879]: |     o oS = +    |
Nov 16 16:51:35 cmp001 cloud-init[879]: |      + .o o.* + |
Nov 16 16:51:35 cmp001 cloud-init[879]: |         . .+=*.o|
Nov 16 16:51:35 cmp001 cloud-init[879]: |          . o.B* |
Nov 16 16:51:35 cmp001 cloud-init[879]: |           . +=+=|
Nov 16 16:51:35 cmp001 cloud-init[879]: +----[SHA256]-----+
Nov 16 16:51:35 cmp001 cloud-init[879]: Generating public/private ed25519 key pair.
Nov 16 16:51:35 cmp001 cloud-init[879]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key.
Nov 16 16:51:35 cmp001 cloud-init[879]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub.
Nov 16 16:51:35 cmp001 cloud-init[879]: The key fingerprint is:
Nov 16 16:51:35 cmp001 cloud-init[879]: SHA256:DMhZ2ZmMXiBrTFdp53cT/E2y7q/+bqqGPKqNI0cVNf0 root@cmp001
Nov 16 16:51:35 cmp001 cloud-init[879]: The key's randomart image is:
Nov 16 16:51:35 cmp001 cloud-init[879]: +--[ED25519 256]--+
Nov 16 16:51:35 cmp001 cloud-init[879]: |    o +B.=o. .   |
Nov 16 16:51:35 cmp001 cloud-init[879]: |   + Bo X ... + .|
Nov 16 16:51:35 cmp001 cloud-init[879]: |    B..o +   . *.|
Nov 16 16:51:35 cmp001 cloud-init[879]: |   .  .o. . . E o|
Nov 16 16:51:35 cmp001 cloud-init[879]: |       .S  . o . |
Nov 16 16:51:35 cmp001 cloud-init[879]: |      .       .  |
Nov 16 16:51:35 cmp001 cloud-init[879]: |     .   . . .   |
Nov 16 16:51:35 cmp001 cloud-init[879]: |    . oo  + . . .|
Nov 16 16:51:35 cmp001 cloud-init[879]: |     oooo. o.o+B+|
Nov 16 16:51:35 cmp001 cloud-init[879]: +----[SHA256]-----+
Nov 16 16:51:35 cmp001 systemd[1]: Started Initial cloud-init job (metadata service crawler).
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Cloud-config availability.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Network is Online.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Remote File Systems (Pre).
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Remote File Systems.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Availability of block devices...
Nov 16 16:51:35 cmp001 systemd[1]: Reached target System Initialization.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Socket activation for snappy daemon.
Nov 16 16:51:35 cmp001 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 16 16:51:35 cmp001 systemd[1]: Starting LXD - unix socket.
Nov 16 16:51:35 cmp001 systemd[1]: Started ACPI Events Check.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Paths.
Nov 16 16:51:35 cmp001 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 16 16:51:35 cmp001 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 16 16:51:35 cmp001 systemd[1]: Started Discard unused blocks once a week.
Nov 16 16:51:35 cmp001 systemd[1]: Listening on ACPID Listen Socket.
Nov 16 16:51:35 cmp001 systemd[1]: Started Message of the Day.
Nov 16 16:51:35 cmp001 systemd[1]: Listening on UUID daemon activation socket.
Nov 16 16:51:35 cmp001 systemd[1]: Started Daily apt download activities.
Nov 16 16:51:35 cmp001 systemd[1]: Started Daily apt upgrade and clean activities.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Timers.
Nov 16 16:51:35 cmp001 systemd[1]: Started Availability of block devices.
Nov 16 16:51:35 cmp001 systemd[1]: Listening on Socket activation for snappy daemon.
Nov 16 16:51:35 cmp001 systemd[1]: Listening on LXD - unix socket.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Sockets.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Basic System.
Nov 16 16:51:35 cmp001 systemd[1]: Started Regular background program processing daemon.
Nov 16 16:51:35 cmp001 systemd[1]: Started D-Bus System Message Bus.
Nov 16 16:51:35 cmp001 cron[999]: (CRON) INFO (pidfile fd = 3)
Nov 16 16:51:35 cmp001 cron[999]: (CRON) INFO (Running @reboot jobs)
Nov 16 16:51:35 cmp001 dbus-daemon[1005]: [system] AppArmor D-Bus mediation is enabled
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   DMA32 zone: 782303 pages, LIFO batch:31
Nov 16 16:51:35 cmp001 systemd[1]: Starting LSB: automatic crash report generation...
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   Normal zone: 36864 pages used for memmap
Nov 16 16:51:35 cmp001 kernel: [    0.000000]   Normal zone: 2359296 pages, LIFO batch:31
Nov 16 16:51:35 cmp001 systemd[1]: Started Deferred execution scheduler.
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: PM-Timer IO Port: 0x608
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 16 16:51:35 cmp001 systemd[1]: Started irqbalance daemon.
Nov 16 16:51:35 cmp001 kernel: [    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 16 16:51:35 cmp001 systemd[1]: Starting Accounts Service...
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: IRQ0 used by override.
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: IRQ5 used by override.
Nov 16 16:51:35 cmp001 systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: IRQ9 used by override.
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: IRQ10 used by override.
Nov 16 16:51:35 cmp001 systemd[1]: Starting The Salt Minion...
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ACPI: IRQ11 used by override.
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Using ACPI (MADT) for SMP configuration information
Nov 16 16:51:35 cmp001 kernel: [    0.000000] smpboot: Allowing 6 CPUs, 0 hotplug CPUs
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 16 16:51:35 cmp001 systemd[1]: Started FUSE filesystem for LXC.
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xbffdf000-0xbfffffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 16 16:51:35 cmp001 systemd[1]: Starting LSB: Record successful boot for GRUB...
Nov 16 16:51:35 cmp001 kernel: [    0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Booting paravirtualized kernel on KVM
Nov 16 16:51:35 cmp001 systemd[1]: Starting Login Service...
Nov 16 16:51:35 cmp001 kernel: [    0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
Nov 16 16:51:35 cmp001 kernel: [    0.000000] random: get_random_bytes called from start_kernel+0x99/0x4fd with crng_init=0
Nov 16 16:51:35 cmp001 kernel: [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:6 nr_cpu_ids:6 nr_node_ids:1
Nov 16 16:51:35 cmp001 kernel: [    0.000000] percpu: Embedded 45 pages/cpu s147456 r8192 d28672 u262144
Nov 16 16:51:35 cmp001 systemd[1]: Starting LXD - container startup/shutdown...
Nov 16 16:51:35 cmp001 systemd[1]: Starting Snappy daemon...
Nov 16 16:51:35 cmp001 systemd[1]: Starting Permit User Sessions...
Nov 16 16:51:35 cmp001 systemd[1]: Starting System Logging Service...
Nov 16 16:51:35 cmp001 systemd[1]: Starting Pollinate to seed the pseudo random number generator...
Nov 16 16:51:35 cmp001 systemd[1]: Started Permit User Sessions.
Nov 16 16:51:35 cmp001 lxcfs[1055]: mount namespace: 5
Nov 16 16:51:35 cmp001 lxcfs[1055]: hierarchies:
Nov 16 16:51:35 cmp001 lxcfs[1055]:   0: fd:   6: blkio
Nov 16 16:51:35 cmp001 lxcfs[1055]:   1: fd:   7: pids
Nov 16 16:51:35 cmp001 lxcfs[1055]:   2: fd:   8: memory
Nov 16 16:51:35 cmp001 kernel: [    0.000000] pcpu-alloc: s147456 r8192 d28672 u262144 alloc=1*2097152
Nov 16 16:51:35 cmp001 kernel: [    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 - - 
Nov 16 16:51:35 cmp001 lxcfs[1055]:   3: fd:   9: cpuset
Nov 16 16:51:35 cmp001 kernel: [    0.000000] KVM setup async PF for cpu 0
Nov 16 16:51:35 cmp001 kernel: [    0.000000] kvm-stealtime: cpu 0, msr 33fc23040
Nov 16 16:51:35 cmp001 kernel: [    0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
Nov 16 16:51:35 cmp001 lxcfs[1055]:   4: fd:  10: freezer
Nov 16 16:51:35 cmp001 lxcfs[1055]:   5: fd:  11: cpu,cpuacct
Nov 16 16:51:35 cmp001 lxcfs[1055]:   6: fd:  12: rdma
Nov 16 16:51:35 cmp001 lxcfs[1055]:   7: fd:  13: hugetlb
Nov 16 16:51:35 cmp001 lxcfs[1055]:   8: fd:  14: perf_event
Nov 16 16:51:35 cmp001 lxcfs[1055]:   9: fd:  15: devices
Nov 16 16:51:35 cmp001 lxcfs[1055]:  10: fd:  16: net_cls,net_prio
Nov 16 16:51:35 cmp001 lxcfs[1055]:  11: fd:  17: name=systemd
Nov 16 16:51:35 cmp001 lxcfs[1055]:  12: fd:  18: unified
Nov 16 16:51:35 cmp001 systemd[1]: Started Login Service.
Nov 16 16:51:35 cmp001 dnsmasq[1042]: dnsmasq: syntax check OK.
Nov 16 16:51:35 cmp001 apport[1028]:  * Starting automatic crash report generation: apport
Nov 16 16:51:35 cmp001 grub-common[1062]:  * Recording successful boot for GRUB
Nov 16 16:51:35 cmp001 systemd[1]: Started Unattended Upgrades Shutdown.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Hold until boot process finishes up...
Nov 16 16:51:35 cmp001 systemd[1]: Starting Terminate Plymouth Boot Screen...
Nov 16 16:51:35 cmp001 systemd[1]: Started Hold until boot process finishes up.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Set console scheme...
Nov 16 16:51:35 cmp001 systemd[1]: Started Serial Getty on ttyS0.
Nov 16 16:51:35 cmp001 systemd[1]: Started Terminate Plymouth Boot Screen.
Nov 16 16:51:35 cmp001 apport[1028]:    ...done.
Nov 16 16:51:35 cmp001 systemd[1]: Started LSB: automatic crash report generation.
Nov 16 16:51:35 cmp001 systemd[1]: Started Set console scheme.
Nov 16 16:51:35 cmp001 systemd[1]: Created slice system-getty.slice.
Nov 16 16:51:35 cmp001 systemd[1]: Started Getty on tty1.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Login Prompts.
Nov 16 16:51:35 cmp001 dbus-daemon[1005]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.2' (uid=0 pid=1038 comm="/usr/lib/accountsservice/accounts-daemon " label="unconfined")
Nov 16 16:51:35 cmp001 pollinate[1117]: client sent challenge to [https://entropy.ubuntu.com/]
Nov 16 16:51:35 cmp001 grub-common[1062]:    ...done.
Nov 16 16:51:35 cmp001 systemd[1]: Started LSB: Record successful boot for GRUB.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Authorization Manager...
Nov 16 16:51:35 cmp001 dnsmasq[1276]: started, version 2.79 cachesize 150
Nov 16 16:51:35 cmp001 dnsmasq[1276]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
Nov 16 16:51:35 cmp001 dnsmasq[1276]: reading /etc/resolv.conf
Nov 16 16:51:35 cmp001 dnsmasq[1276]: using nameserver 8.8.8.8#53
Nov 16 16:51:35 cmp001 dnsmasq[1276]: read /etc/hosts - 7 addresses
Nov 16 16:51:35 cmp001 systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.
Nov 16 16:51:35 cmp001 systemd[1]: Reached target Host and Network Name Lookups.
Nov 16 16:51:35 cmp001 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:51:35 cmp001 systemd[1]: Started System Logging Service.
Nov 16 16:51:35 cmp001 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:51:35 cmp001 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:51:35 cmp001 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="1109" x-info="http://www.rsyslog.com"] start
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 3096424
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Policy zone: Normal
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Calgary: detecting Calgary via BIOS EBDA area
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Memory: 12270808K/12582388K available (12300K kernel code, 2481K rwdata, 4264K rodata, 2432K init, 2388K bss, 311580K reserved, 0K cma-reserved)
Nov 16 16:51:35 cmp001 kernel: [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
Nov 16 16:51:35 cmp001 kernel: [    0.000000] Kernel/User page tables isolation: enabled
Nov 16 16:51:35 cmp001 kernel: [    0.000000] ftrace: allocating 39315 entries in 154 pages
Nov 16 16:51:35 cmp001 kernel: [    0.004000] Hierarchical RCU implementation.
Nov 16 16:51:35 cmp001 kernel: [    0.004000] 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=6.
Nov 16 16:51:35 cmp001 kernel: [    0.004000] 	Tasks RCU enabled.
Nov 16 16:51:35 cmp001 kernel: [    0.004000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
Nov 16 16:51:35 cmp001 kernel: [    0.004000] NR_IRQS: 524544, nr_irqs: 472, preallocated irqs: 16
Nov 16 16:51:35 cmp001 kernel: [    0.004000] Console: colour VGA+ 80x25
Nov 16 16:51:35 cmp001 kernel: [    0.004000] console [tty1] enabled
Nov 16 16:51:35 cmp001 kernel: [    0.004000] console [ttyS0] enabled
Nov 16 16:51:35 cmp001 kernel: [    0.004000] ACPI: Core revision 20170831
Nov 16 16:51:35 cmp001 kernel: [    0.004000] ACPI: 2 ACPI AML tables successfully acquired and loaded
Nov 16 16:51:35 cmp001 kernel: [    0.004005] APIC: Switch to symmetric I/O mode setup
Nov 16 16:51:35 cmp001 kernel: [    0.005551] x2apic enabled
Nov 16 16:51:35 cmp001 kernel: [    0.006655] Switched APIC routing to physical x2apic.
Nov 16 16:51:35 cmp001 kernel: [    0.008000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 16 16:51:35 cmp001 kernel: [    0.008000] tsc: Detected 2799.994 MHz processor
Nov 16 16:51:35 cmp001 kernel: [    0.008000] Calibrating delay loop (skipped) preset value.. 5599.98 BogoMIPS (lpj=11199976)
Nov 16 16:51:35 cmp001 kernel: [    0.008002] pid_max: default: 32768 minimum: 301
Nov 16 16:51:35 cmp001 kernel: [    0.009183] Security Framework initialized
Nov 16 16:51:35 cmp001 kernel: [    0.012003] Yama: becoming mindful.
Nov 16 16:51:35 cmp001 kernel: [    0.012936] AppArmor: AppArmor initialized
Nov 16 16:51:35 cmp001 kernel: [    0.019027] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.022502] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.024036] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.026140] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.028303] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 16 16:51:35 cmp001 kernel: [    0.029612] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 16 16:51:35 cmp001 kernel: [    0.032004] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 16 16:51:35 cmp001 kernel: [    0.034032] Spectre V2 : Mitigation: Full generic retpoline
Nov 16 16:51:35 cmp001 kernel: [    0.035387] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 16 16:51:35 cmp001 kernel: [    0.036002] Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 16 16:51:35 cmp001 kernel: [    0.037662] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 16 16:51:35 cmp001 kernel: [    0.040002] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 16 16:51:35 cmp001 kernel: [    0.042280] MDS: Mitigation: Clear CPU buffers
Nov 16 16:51:35 cmp001 kernel: [    0.050232] Freeing SMP alternatives memory: 36K
Nov 16 16:51:35 cmp001 kernel: [    0.053496] TSC deadline timer enabled
Nov 16 16:51:35 cmp001 kernel: [    0.053500] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (family: 0x6, model: 0x3e, stepping: 0x4)
Nov 16 16:51:35 cmp001 kernel: [    0.056000] Performance Events: IvyBridge events, Intel PMU driver.
Nov 16 16:51:35 cmp001 kernel: [    0.056009] ... version:                2
Nov 16 16:51:35 cmp001 kernel: [    0.057039] ... bit width:              48
Nov 16 16:51:35 cmp001 kernel: [    0.058071] ... generic registers:      4
Nov 16 16:51:35 cmp001 kernel: [    0.059078] ... value mask:             0000ffffffffffff
Nov 16 16:51:35 cmp001 kernel: [    0.060004] ... max period:             000000007fffffff
Nov 16 16:51:35 cmp001 kernel: [    0.061302] ... fixed-purpose events:   3
Nov 16 16:51:35 cmp001 kernel: [    0.062318] ... event mask:             000000070000000f
Nov 16 16:51:35 cmp001 kernel: [    0.063643] Hierarchical SRCU implementation.
Nov 16 16:51:35 cmp001 kernel: [    0.064711] smp: Bringing up secondary CPUs ...
Nov 16 16:51:35 cmp001 kernel: [    0.065947] x86: Booting SMP configuration:
Nov 16 16:51:35 cmp001 kernel: [    0.067011] .... node  #0, CPUs:      #1
Nov 16 16:51:35 cmp001 kernel: [    0.004000] kvm-clock: cpu 1, msr 3:3ff54041, secondary cpu clock
Nov 16 16:51:35 cmp001 kernel: [    0.072055] KVM setup async PF for cpu 1
Nov 16 16:51:35 cmp001 kernel: [    0.073023] kvm-stealtime: cpu 1, msr 33fc63040
Nov 16 16:51:35 cmp001 kernel: [    0.074143]  #2
Nov 16 16:51:35 cmp001 kernel: [    0.004000] kvm-clock: cpu 2, msr 3:3ff54081, secondary cpu clock
Nov 16 16:51:35 cmp001 kernel: [    0.076032] KVM setup async PF for cpu 2
Nov 16 16:51:35 cmp001 kernel: [    0.076996] kvm-stealtime: cpu 2, msr 33fca3040
Nov 16 16:51:35 cmp001 kernel: [    0.078117]  #3
Nov 16 16:51:35 cmp001 kernel: [    0.004000] kvm-clock: cpu 3, msr 3:3ff540c1, secondary cpu clock
Nov 16 16:51:35 cmp001 kernel: [    0.080020] KVM setup async PF for cpu 3
Nov 16 16:51:35 cmp001 kernel: [    0.080984] kvm-stealtime: cpu 3, msr 33fce3040
Nov 16 16:51:35 cmp001 kernel: [    0.082100]  #4
Nov 16 16:51:35 cmp001 kernel: [    0.004000] kvm-clock: cpu 4, msr 3:3ff54101, secondary cpu clock
Nov 16 16:51:35 cmp001 kernel: [    0.088026] KVM setup async PF for cpu 4
Nov 16 16:51:35 cmp001 kernel: [    0.088997] kvm-stealtime: cpu 4, msr 33fd23040
Nov 16 16:51:35 cmp001 kernel: [    0.090123]  #5
Nov 16 16:51:35 cmp001 kernel: [    0.004000] kvm-clock: cpu 5, msr 3:3ff54141, secondary cpu clock
Nov 16 16:51:35 cmp001 kernel: [    0.096033] KVM setup async PF for cpu 5
Nov 16 16:51:35 cmp001 kernel: [    0.097037] kvm-stealtime: cpu 5, msr 33fd63040
Nov 16 16:51:35 cmp001 kernel: [    0.098155] smp: Brought up 1 node, 6 CPUs
Nov 16 16:51:35 cmp001 kernel: [    0.098155] smpboot: Max logical packages: 6
Nov 16 16:51:35 cmp001 kernel: [    0.100006] smpboot: Total of 6 processors activated (33599.92 BogoMIPS)
Nov 16 16:51:35 cmp001 kernel: [    0.102288] devtmpfs: initialized
Nov 16 16:51:35 cmp001 kernel: [    0.102288] x86/mm: Memory block size: 128MB
Nov 16 16:51:35 cmp001 kernel: [    0.105276] evm: security.selinux
Nov 16 16:51:35 cmp001 kernel: [    0.106156] evm: security.SMACK64
Nov 16 16:51:35 cmp001 kernel: [    0.107026] evm: security.SMACK64EXEC
Nov 16 16:51:35 cmp001 kernel: [    0.107966] evm: security.SMACK64TRANSMUTE
Nov 16 16:51:35 cmp001 kernel: [    0.108006] evm: security.SMACK64MMAP
Nov 16 16:51:35 cmp001 kernel: [    0.108955] evm: security.apparmor
Nov 16 16:51:35 cmp001 kernel: [    0.109842] evm: security.ima
Nov 16 16:51:35 cmp001 kernel: [    0.110634] evm: security.capability
Nov 16 16:51:35 cmp001 kernel: [    0.111595] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
Nov 16 16:51:35 cmp001 kernel: [    0.112015] futex hash table entries: 2048 (order: 5, 131072 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.113601] pinctrl core: initialized pinctrl subsystem
Nov 16 16:51:35 cmp001 kernel: [    0.115068] RTC time: 16:51:17, date: 11/16/19
Nov 16 16:51:35 cmp001 kernel: [    0.116832] NET: Registered protocol family 16
Nov 16 16:51:35 cmp001 kernel: [    0.118039] audit: initializing netlink subsys (disabled)
Nov 16 16:51:35 cmp001 kernel: [    0.119388] audit: type=2000 audit(1573923077.634:1): state=initialized audit_enabled=0 res=1
Nov 16 16:51:35 cmp001 kernel: [    0.120009] cpuidle: using governor ladder
Nov 16 16:51:35 cmp001 kernel: [    0.121099] cpuidle: using governor menu
Nov 16 16:51:35 cmp001 kernel: [    0.124189] ACPI: bus type PCI registered
Nov 16 16:51:35 cmp001 kernel: [    0.125235] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 16 16:51:35 cmp001 kernel: [    0.126905] PCI: Using configuration type 1 for base access
Nov 16 16:51:35 cmp001 kernel: [    0.128038] core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off
Nov 16 16:51:35 cmp001 kernel: [    0.130868] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 16 16:51:35 cmp001 kernel: [    0.132008] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 16 16:51:35 cmp001 kernel: [    0.133661] ACPI: Added _OSI(Module Device)
Nov 16 16:51:35 cmp001 kernel: [    0.133661] ACPI: Added _OSI(Processor Device)
Nov 16 16:51:35 cmp001 kernel: [    0.136006] ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 16 16:51:35 cmp001 kernel: [    0.137181] ACPI: Added _OSI(Processor Aggregator Device)
Nov 16 16:51:35 cmp001 kernel: [    0.138492] ACPI: Added _OSI(Linux-Dell-Video)
Nov 16 16:51:35 cmp001 kernel: [    0.139608] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 16 16:51:35 cmp001 kernel: [    0.140005] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 16 16:51:35 cmp001 kernel: [    0.142826] ACPI: Interpreter enabled
Nov 16 16:51:35 cmp001 kernel: [    0.143794] ACPI: (supports S0 S5)
Nov 16 16:51:35 cmp001 kernel: [    0.144005] ACPI: Using IOAPIC for interrupt routing
Nov 16 16:51:35 cmp001 kernel: [    0.145230] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 16 16:51:35 cmp001 kernel: [    0.147972] ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 16 16:51:35 cmp001 kernel: [    0.151655] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 16 16:51:35 cmp001 kernel: [    0.152011] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
Nov 16 16:51:35 cmp001 kernel: [    0.153646] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Nov 16 16:51:35 cmp001 kernel: [    0.155213] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 16 16:51:35 cmp001 kernel: [    0.156370] acpiphp: Slot [3] registered
Nov 16 16:51:35 cmp001 kernel: [    0.157426] acpiphp: Slot [4] registered
Nov 16 16:51:35 cmp001 kernel: [    0.158487] acpiphp: Slot [5] registered
Nov 16 16:51:35 cmp001 kernel: [    0.159532] acpiphp: Slot [6] registered
Nov 16 16:51:35 cmp001 kernel: [    0.160053] acpiphp: Slot [7] registered
Nov 16 16:51:35 cmp001 kernel: [    0.161132] acpiphp: Slot [9] registered
Nov 16 16:51:35 cmp001 kernel: [    0.162181] acpiphp: Slot [10] registered
Nov 16 16:51:35 cmp001 kernel: [    0.163260] acpiphp: Slot [11] registered
Nov 16 16:51:35 cmp001 kernel: [    0.164053] acpiphp: Slot [12] registered
Nov 16 16:51:35 cmp001 kernel: [    0.165134] acpiphp: Slot [13] registered
Nov 16 16:51:35 cmp001 kernel: [    0.166195] acpiphp: Slot [14] registered
Nov 16 16:51:35 cmp001 kernel: [    0.167275] acpiphp: Slot [15] registered
Nov 16 16:51:35 cmp001 kernel: [    0.168052] acpiphp: Slot [16] registered
Nov 16 16:51:35 cmp001 kernel: [    0.169128] acpiphp: Slot [17] registered
Nov 16 16:51:35 cmp001 kernel: [    0.170187] acpiphp: Slot [18] registered
Nov 16 16:51:35 cmp001 kernel: [    0.171273] acpiphp: Slot [19] registered
Nov 16 16:51:35 cmp001 kernel: [    0.172052] acpiphp: Slot [20] registered
Nov 16 16:51:35 cmp001 kernel: [    0.173122] acpiphp: Slot [21] registered
Nov 16 16:51:35 cmp001 kernel: [    0.174185] acpiphp: Slot [22] registered
Nov 16 16:51:35 cmp001 kernel: [    0.175272] acpiphp: Slot [23] registered
Nov 16 16:51:35 cmp001 kernel: [    0.176052] acpiphp: Slot [24] registered
Nov 16 16:51:35 cmp001 kernel: [    0.177115] acpiphp: Slot [25] registered
Nov 16 16:51:35 cmp001 kernel: [    0.178191] acpiphp: Slot [26] registered
Nov 16 16:51:35 cmp001 kernel: [    0.179260] acpiphp: Slot [27] registered
Nov 16 16:51:35 cmp001 kernel: [    0.180053] acpiphp: Slot [28] registered
Nov 16 16:51:35 cmp001 kernel: [    0.181115] acpiphp: Slot [29] registered
Nov 16 16:51:35 cmp001 kernel: [    0.182201] acpiphp: Slot [30] registered
Nov 16 16:51:35 cmp001 kernel: [    0.183271] acpiphp: Slot [31] registered
Nov 16 16:51:35 cmp001 kernel: [    0.184022] PCI host bridge to bus 0000:00
Nov 16 16:51:35 cmp001 kernel: [    0.185054] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 16 16:51:35 cmp001 kernel: [    0.186674] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 16 16:51:35 cmp001 kernel: [    0.188005] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 16 16:51:35 cmp001 kernel: [    0.189822] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 16 16:51:35 cmp001 kernel: [    0.191980] pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 16 16:51:35 cmp001 kernel: [    0.192057] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 16 16:51:35 cmp001 kernel: [    0.192657] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 16 16:51:35 cmp001 kernel: [    0.193448] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 16 16:51:35 cmp001 kernel: [    0.200990] pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Nov 16 16:51:35 cmp001 kernel: [    0.204651] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Nov 16 16:51:35 cmp001 kernel: [    0.206360] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Nov 16 16:51:35 cmp001 kernel: [    0.207919] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Nov 16 16:51:35 cmp001 kernel: [    0.208005] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Nov 16 16:51:35 cmp001 kernel: [    0.209787] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 16 16:51:35 cmp001 kernel: [    0.210411] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 16 16:51:35 cmp001 kernel: [    0.212017] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 16 16:51:35 cmp001 kernel: [    0.214183] pci 0000:00:02.0: [1013:00b8] type 00 class 0x030000
Nov 16 16:51:35 cmp001 kernel: [    0.216011] pci 0000:00:02.0: reg 0x10: [mem 0xfc000000-0xfdffffff pref]
Nov 16 16:51:35 cmp001 kernel: [    0.218879] pci 0000:00:02.0: reg 0x14: [mem 0xfebd0000-0xfebd0fff]
Nov 16 16:51:35 cmp001 kernel: [    0.230459] pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 16 16:51:35 cmp001 kernel: [    0.230747] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:35 cmp001 kernel: [    0.234161] pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc05f]
Nov 16 16:51:35 cmp001 kernel: [    0.236006] pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 16 16:51:35 cmp001 kernel: [    0.248006] pci 0000:00:03.0: reg 0x30: [mem 0xfeac0000-0xfeafffff pref]
Nov 16 16:51:35 cmp001 kernel: [    0.248461] pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:35 cmp001 kernel: [    0.250748] pci 0000:00:04.0: reg 0x10: [io  0xc060-0xc07f]
Nov 16 16:51:35 cmp001 kernel: [    0.252015] pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 16 16:51:35 cmp001 kernel: [    0.263168] pci 0000:00:04.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
Nov 16 16:51:35 cmp001 kernel: [    0.263616] pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:35 cmp001 kernel: [    0.265122] pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Nov 16 16:51:35 cmp001 kernel: [    0.269020] pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 16 16:51:35 cmp001 kernel: [    0.280008] pci 0000:00:05.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
Nov 16 16:51:35 cmp001 kernel: [    0.280480] pci 0000:00:06.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:35 cmp001 kernel: [    0.282590] pci 0000:00:06.0: reg 0x10: [io  0xc0a0-0xc0bf]
Nov 16 16:51:35 cmp001 kernel: [    0.284006] pci 0000:00:06.0: reg 0x14: [mem 0xfebd4000-0xfebd4fff]
Nov 16 16:51:35 cmp001 kernel: [    0.295080] pci 0000:00:06.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 16 16:51:35 cmp001 kernel: [    0.295542] pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 16 16:51:35 cmp001 kernel: [    0.297096] pci 0000:00:07.0: reg 0x10: [io  0xc000-0xc03f]
Nov 16 16:51:35 cmp001 kernel: [    0.299593] pci 0000:00:07.0: reg 0x14: [mem 0xfebd5000-0xfebd5fff]
Nov 16 16:51:35 cmp001 kernel: [    0.310702] pci 0000:00:08.0: [8086:2934] type 00 class 0x0c0300
Nov 16 16:51:35 cmp001 kernel: [    0.315908] pci 0000:00:08.0: reg 0x20: [io  0xc0c0-0xc0df]
Nov 16 16:51:35 cmp001 kernel: [    0.317943] pci 0000:00:08.1: [8086:2935] type 00 class 0x0c0300
Nov 16 16:51:35 cmp001 kernel: [    0.323900] pci 0000:00:08.1: reg 0x20: [io  0xc0e0-0xc0ff]
Nov 16 16:51:35 cmp001 kernel: [    0.325957] pci 0000:00:08.2: [8086:2936] type 00 class 0x0c0300
Nov 16 16:51:35 cmp001 kernel: [    0.332572] pci 0000:00:08.2: reg 0x20: [io  0xc100-0xc11f]
Nov 16 16:51:35 cmp001 kernel: [    0.335086] pci 0000:00:08.7: [8086:293a] type 00 class 0x0c0320
Nov 16 16:51:35 cmp001 kernel: [    0.336006] pci 0000:00:08.7: reg 0x10: [mem 0xfebd6000-0xfebd6fff]
Nov 16 16:51:35 cmp001 kernel: [    0.344068] pci 0000:00:09.0: [1af4:1002] type 00 class 0x00ff00
Nov 16 16:51:35 cmp001 kernel: [    0.345289] pci 0000:00:09.0: reg 0x10: [io  0xc120-0xc13f]
Nov 16 16:51:35 cmp001 kernel: [    0.354677] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
Nov 16 16:51:35 cmp001 kernel: [    0.356167] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
Nov 16 16:51:35 cmp001 kernel: [    0.357717] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
Nov 16 16:51:35 cmp001 kernel: [    0.359285] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
Nov 16 16:51:35 cmp001 kernel: [    0.360079] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
Nov 16 16:51:35 cmp001 kernel: [    0.362231] SCSI subsystem initialized
Nov 16 16:51:35 cmp001 kernel: [    0.363332] libata version 3.00 loaded.
Nov 16 16:51:35 cmp001 kernel: [    0.363332] pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 16 16:51:35 cmp001 kernel: [    0.363332] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 16 16:51:35 cmp001 kernel: [    0.364015] pci 0000:00:02.0: vgaarb: bridge control possible
Nov 16 16:51:35 cmp001 kernel: [    0.365451] vgaarb: loaded
Nov 16 16:51:35 cmp001 kernel: [    0.366244] ACPI: bus type USB registered
Nov 16 16:51:35 cmp001 kernel: [    0.367310] usbcore: registered new interface driver usbfs
Nov 16 16:51:35 cmp001 kernel: [    0.368016] usbcore: registered new interface driver hub
Nov 16 16:51:35 cmp001 kernel: [    0.369404] usbcore: registered new device driver usb
Nov 16 16:51:35 cmp001 kernel: [    0.370790] EDAC MC: Ver: 3.0.0
Nov 16 16:51:35 cmp001 kernel: [    0.372164] PCI: Using ACPI for IRQ routing
Nov 16 16:51:35 cmp001 kernel: [    0.373165] PCI: pci_cache_line_size set to 64 bytes
Nov 16 16:51:35 cmp001 kernel: [    0.373511] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 16 16:51:35 cmp001 kernel: [    0.373513] e820: reserve RAM buffer [mem 0xbffdf000-0xbfffffff]
Nov 16 16:51:35 cmp001 kernel: [    0.373612] NetLabel: Initializing
Nov 16 16:51:35 cmp001 kernel: [    0.374573] NetLabel:  domain hash size = 128
Nov 16 16:51:35 cmp001 kernel: [    0.375711] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 16 16:51:35 cmp001 kernel: [    0.376024] NetLabel:  unlabeled traffic allowed by default
Nov 16 16:51:35 cmp001 kernel: [    0.377486] clocksource: Switched to clocksource kvm-clock
Nov 16 16:51:35 cmp001 kernel: [    0.389584] VFS: Disk quotas dquot_6.6.0
Nov 16 16:51:35 cmp001 kernel: [    0.390672] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.392563] AppArmor: AppArmor Filesystem Enabled
Nov 16 16:51:35 cmp001 kernel: [    0.393811] pnp: PnP ACPI init
Nov 16 16:51:35 cmp001 kernel: [    0.394719] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Nov 16 16:51:35 cmp001 kernel: [    0.394758] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Nov 16 16:51:35 cmp001 kernel: [    0.394786] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Nov 16 16:51:35 cmp001 kernel: [    0.394820] pnp 00:03: [dma 2]
Nov 16 16:51:35 cmp001 kernel: [    0.394836] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active)
Nov 16 16:51:35 cmp001 kernel: [    0.394936] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Nov 16 16:51:35 cmp001 kernel: [    0.395231] pnp: PnP ACPI: found 5 devices
Nov 16 16:51:35 cmp001 kernel: [    0.403291] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 16 16:51:35 cmp001 kernel: [    0.405496] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 16 16:51:35 cmp001 kernel: [    0.405497] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 16 16:51:35 cmp001 kernel: [    0.405498] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 16 16:51:35 cmp001 kernel: [    0.405499] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 16 16:51:35 cmp001 kernel: [    0.405564] NET: Registered protocol family 2
Nov 16 16:51:35 cmp001 kernel: [    0.406920] TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.409791] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.411520] TCP: Hash tables configured (established 131072 bind 65536)
Nov 16 16:51:35 cmp001 kernel: [    0.413195] UDP hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.414759] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:51:35 cmp001 kernel: [    0.416468] NET: Registered protocol family 1
Nov 16 16:51:35 cmp001 kernel: [    0.417622] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 16 16:51:35 cmp001 kernel: [    0.419112] pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 16 16:51:35 cmp001 kernel: [    0.420591] pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 16 16:51:35 cmp001 kernel: [    0.422179] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 16 16:51:35 cmp001 kernel: [    0.454543] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
Nov 16 16:51:35 cmp001 kernel: [    0.515803] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
Nov 16 16:51:35 cmp001 kernel: [    0.575772] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
Nov 16 16:51:35 cmp001 kernel: [    0.634642] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
Nov 16 16:51:35 cmp001 kernel: [    0.665249] PCI: CLS 0 bytes, default 64
Nov 16 16:51:35 cmp001 kernel: [    0.665292] Unpacking initramfs...
Nov 16 16:51:35 cmp001 kernel: [    0.941247] Freeing initrd memory: 19152K
Nov 16 16:51:35 cmp001 kernel: [    0.942351] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 16 16:51:35 cmp001 kernel: [    0.943885] software IO TLB: mapped [mem 0xbbfdf000-0xbffdf000] (64MB)
Nov 16 16:51:35 cmp001 kernel: [    0.945470] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3aeaff3, max_idle_ns: 440795255742 ns
Nov 16 16:51:35 cmp001 kernel: [    0.947870] Scanning for low memory corruption every 60 seconds
Nov 16 16:51:35 cmp001 kernel: [    0.950049] Initialise system trusted keyrings
Nov 16 16:51:35 cmp001 kernel: [    0.951186] Key type blacklist registered
Nov 16 16:51:35 cmp001 kernel: [    0.952288] workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 16 16:51:35 cmp001 kernel: [    0.955257] zbud: loaded
Nov 16 16:51:35 cmp001 kernel: [    0.956537] squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 16 16:51:35 cmp001 kernel: [    0.958181] fuse init (API version 7.26)
Nov 16 16:51:35 cmp001 kernel: [    0.960944] Key type asymmetric registered
Nov 16 16:51:35 cmp001 kernel: [    0.961992] Asymmetric key parser 'x509' registered
Nov 16 16:51:35 cmp001 kernel: [    0.963219] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 16 16:51:35 cmp001 kernel: [    0.965148] io scheduler noop registered
Nov 16 16:51:35 cmp001 kernel: [    0.966162] io scheduler deadline registered
Nov 16 16:51:35 cmp001 kernel: [    0.967260] io scheduler cfq registered (default)
Nov 16 16:51:35 cmp001 kernel: [    0.968781] intel_idle: Please enable MWAIT in BIOS SETUP
Nov 16 16:51:35 cmp001 kernel: [    0.968852] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 16 16:51:35 cmp001 kernel: [    0.970723] ACPI: Power Button [PWRF]
Nov 16 16:51:35 cmp001 kernel: [    1.000690] virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:35 cmp001 kernel: [    1.031475] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:35 cmp001 kernel: [    1.062564] virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:35 cmp001 kernel: [    1.093357] virtio-pci 0000:00:06.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:35 cmp001 kernel: [    1.124209] virtio-pci 0000:00:07.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:35 cmp001 kernel: [    1.154606] virtio-pci 0000:00:09.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:35 cmp001 kernel: [    1.157341] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Nov 16 16:51:35 cmp001 kernel: [    1.184651] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 16 16:51:35 cmp001 kernel: [    1.188240] Linux agpgart interface v0.103
Nov 16 16:51:35 cmp001 kernel: [    1.191732] loop: module loaded
Nov 16 16:51:35 cmp001 kernel: [    1.192694] ata_piix 0000:00:01.1: version 2.13
Nov 16 16:51:35 cmp001 kernel: [    1.193962] scsi host0: ata_piix
Nov 16 16:51:35 cmp001 kernel: [    1.195116] scsi host1: ata_piix
Nov 16 16:51:35 cmp001 kernel: [    1.196034] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 16 16:51:35 cmp001 kernel: [    1.197631] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 16 16:51:35 cmp001 kernel: [    1.199301] libphy: Fixed MDIO Bus: probed
Nov 16 16:51:35 cmp001 kernel: [    1.200454] tun: Universal TUN/TAP device driver, 1.6
Nov 16 16:51:35 cmp001 kernel: [    1.201746] PPP generic driver version 2.4.2
Nov 16 16:51:35 cmp001 kernel: [    1.202870] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 16 16:51:35 cmp001 kernel: [    1.204531] ehci-pci: EHCI PCI platform driver
Nov 16 16:51:35 cmp001 kernel: [    1.234931] ehci-pci 0000:00:08.7: EHCI Host Controller
Nov 16 16:51:35 cmp001 kernel: [    1.236233] ehci-pci 0000:00:08.7: new USB bus registered, assigned bus number 1
Nov 16 16:51:35 cmp001 kernel: [    1.238259] ehci-pci 0000:00:08.7: irq 11, io mem 0xfebd6000
Nov 16 16:51:35 cmp001 kernel: [    1.252084] ehci-pci 0000:00:08.7: USB 2.0 started, EHCI 1.00
Nov 16 16:51:35 cmp001 kernel: [    1.265715] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Nov 16 16:51:35 cmp001 kernel: [    1.267330] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:35 cmp001 kernel: [    1.269082] usb usb1: Product: EHCI Host Controller
Nov 16 16:51:35 cmp001 kernel: [    1.270286] usb usb1: Manufacturer: Linux 4.15.0-70-generic ehci_hcd
Nov 16 16:51:35 cmp001 kernel: [    1.271789] usb usb1: SerialNumber: 0000:00:08.7
Nov 16 16:51:35 cmp001 kernel: [    1.273096] hub 1-0:1.0: USB hub found
Nov 16 16:51:35 cmp001 kernel: [    1.274079] hub 1-0:1.0: 6 ports detected
Nov 16 16:51:35 cmp001 kernel: [    1.275302] ehci-platform: EHCI generic platform driver
Nov 16 16:51:35 cmp001 kernel: [    1.276617] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 16 16:51:35 cmp001 kernel: [    1.278097] ohci-pci: OHCI PCI platform driver
Nov 16 16:51:35 cmp001 kernel: [    1.279214] ohci-platform: OHCI generic platform driver
Nov 16 16:51:35 cmp001 kernel: [    1.280506] uhci_hcd: USB Universal Host Controller Interface driver
Nov 16 16:51:35 cmp001 kernel: [    1.313886] uhci_hcd 0000:00:08.0: UHCI Host Controller
Nov 16 16:51:35 cmp001 kernel: [    1.315274] uhci_hcd 0000:00:08.0: new USB bus registered, assigned bus number 2
Nov 16 16:51:35 cmp001 kernel: [    1.317224] uhci_hcd 0000:00:08.0: detected 2 ports
Nov 16 16:51:35 cmp001 kernel: [    1.318596] uhci_hcd 0000:00:08.0: irq 11, io base 0x0000c0c0
Nov 16 16:51:35 cmp001 kernel: [    1.320189] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:35 cmp001 kernel: [    1.321970] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:35 cmp001 kernel: [    1.324082] usb usb2: Product: UHCI Host Controller
Nov 16 16:51:35 cmp001 kernel: [    1.325404] usb usb2: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:35 cmp001 kernel: [    1.327042] usb usb2: SerialNumber: 0000:00:08.0
Nov 16 16:51:35 cmp001 kernel: [    1.328468] hub 2-0:1.0: USB hub found
Nov 16 16:51:35 cmp001 kernel: [    1.329520] hub 2-0:1.0: 2 ports detected
Nov 16 16:51:35 cmp001 kernel: [    1.356735] ata1.01: NODEV after polling detection
Nov 16 16:51:35 cmp001 kernel: [    1.357105] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 16 16:51:35 cmp001 kernel: [    1.359191] ata1.00: configured for MWDMA2
Nov 16 16:51:35 cmp001 kernel: [    1.361191] scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 16 16:51:35 cmp001 kernel: [    1.362516] uhci_hcd 0000:00:08.1: UHCI Host Controller
Nov 16 16:51:35 cmp001 kernel: [    1.365873] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 16 16:51:35 cmp001 kernel: [    1.365897] uhci_hcd 0000:00:08.1: new USB bus registered, assigned bus number 3
Nov 16 16:51:35 cmp001 kernel: [    1.367603] cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 16 16:51:35 cmp001 kernel: [    1.369609] uhci_hcd 0000:00:08.1: detected 2 ports
Nov 16 16:51:35 cmp001 kernel: [    1.371076] sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 16 16:51:35 cmp001 kernel: [    1.372520] uhci_hcd 0000:00:08.1: irq 10, io base 0x0000c0e0
Nov 16 16:51:35 cmp001 kernel: [    1.372773] sr 0:0:0:0: Attached scsi generic sg0 type 5
Nov 16 16:51:35 cmp001 kernel: [    1.374092] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:35 cmp001 kernel: [    1.376942] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:35 cmp001 kernel: [    1.378790] usb usb3: Product: UHCI Host Controller
Nov 16 16:51:35 cmp001 kernel: [    1.380021] usb usb3: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:35 cmp001 kernel: [    1.381707] usb usb3: SerialNumber: 0000:00:08.1
Nov 16 16:51:35 cmp001 kernel: [    1.383122] hub 3-0:1.0: USB hub found
Nov 16 16:51:35 cmp001 kernel: [    1.384235] hub 3-0:1.0: 2 ports detected
Nov 16 16:51:35 cmp001 kernel: [    1.417057] uhci_hcd 0000:00:08.2: UHCI Host Controller
Nov 16 16:51:35 cmp001 kernel: [    1.418357] uhci_hcd 0000:00:08.2: new USB bus registered, assigned bus number 4
Nov 16 16:51:35 cmp001 kernel: [    1.420226] uhci_hcd 0000:00:08.2: detected 2 ports
Nov 16 16:51:35 cmp001 kernel: [    1.421507] uhci_hcd 0000:00:08.2: irq 10, io base 0x0000c100
Nov 16 16:51:35 cmp001 kernel: [    1.422983] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:35 cmp001 kernel: [    1.424661] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:35 cmp001 kernel: [    1.426542] usb usb4: Product: UHCI Host Controller
Nov 16 16:51:35 cmp001 kernel: [    1.427779] usb usb4: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:35 cmp001 kernel: [    1.429338] usb usb4: SerialNumber: 0000:00:08.2
Nov 16 16:51:35 cmp001 kernel: [    1.430637] hub 4-0:1.0: USB hub found
Nov 16 16:51:35 cmp001 kernel: [    1.431642] hub 4-0:1.0: 2 ports detected
Nov 16 16:51:35 cmp001 kernel: [    1.432872] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 16 16:51:35 cmp001 kernel: [    1.435728] serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 16 16:51:35 cmp001 kernel: [    1.437026] serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 16 16:51:35 cmp001 kernel: [    1.438431] mousedev: PS/2 mouse device common for all mice
Nov 16 16:51:35 cmp001 kernel: [    1.440147] rtc_cmos 00:00: RTC can wake from S4
Nov 16 16:51:35 cmp001 kernel: [    1.441776] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 16 16:51:35 cmp001 kernel: [    1.444164] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
Nov 16 16:51:35 cmp001 kernel: [    1.445767] rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 16 16:51:35 cmp001 kernel: [    1.447252] i2c /dev entries driver
Nov 16 16:51:35 cmp001 kernel: [    1.448235] device-mapper: uevent: version 1.0.3
Nov 16 16:51:35 cmp001 kernel: [    1.449460] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: dm-devel@redhat.com
Nov 16 16:51:35 cmp001 kernel: [    1.451734] ledtrig-cpu: registered to indicate activity on CPUs
Nov 16 16:51:35 cmp001 kernel: [    1.453648] NET: Registered protocol family 10
Nov 16 16:51:35 cmp001 kernel: [    1.459795] Segment Routing with IPv6
Nov 16 16:51:35 cmp001 kernel: [    1.460846] NET: Registered protocol family 17
Nov 16 16:51:35 cmp001 kernel: [    1.462163] Key type dns_resolver registered
Nov 16 16:51:35 cmp001 kernel: [    1.463920] mce: Using 10 MCE banks
Nov 16 16:51:35 cmp001 kernel: [    1.464913] RAS: Correctable Errors collector initialized.
Nov 16 16:51:35 cmp001 kernel: [    1.466323] sched_clock: Marking stable (1464888953, 0)->(1976776439, -511887486)
Nov 16 16:51:35 cmp001 kernel: [    1.468669] registered taskstats version 1
Nov 16 16:51:35 cmp001 kernel: [    1.469760] Loading compiled-in X.509 certificates
Nov 16 16:51:35 cmp001 kernel: [    1.473801] Loaded X.509 cert 'Build time autogenerated kernel key: 1859b0531897959199376c446a0bd70df75fd1fc'
Nov 16 16:51:35 cmp001 kernel: [    1.476222] zswap: loaded using pool lzo/zbud
Nov 16 16:51:35 cmp001 kernel: [    1.482852] Key type big_key registered
Nov 16 16:51:35 cmp001 kernel: [    1.483935] Key type trusted registered
Nov 16 16:51:35 cmp001 kernel: [    1.488079] Key type encrypted registered
Nov 16 16:51:35 cmp001 kernel: [    1.489314] AppArmor: AppArmor sha1 policy hashing enabled
Nov 16 16:51:35 cmp001 kernel: [    1.490893] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
Nov 16 16:51:35 cmp001 kernel: [    1.492738] ima: Allocated hash algorithm: sha1
Nov 16 16:51:35 cmp001 kernel: [    1.494108] evm: HMAC attrs: 0x1
Nov 16 16:51:35 cmp001 kernel: [    1.495431]   Magic number: 11:270:894
Nov 16 16:51:35 cmp001 kernel: [    1.496648] rtc_cmos 00:00: setting system clock to 2019-11-16 16:51:19 UTC (1573923079)
Nov 16 16:51:35 cmp001 kernel: [    1.498828] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
Nov 16 16:51:35 cmp001 kernel: [    1.500330] EDD information not available.
Nov 16 16:51:35 cmp001 kernel: [    1.875650] Freeing unused kernel image memory: 2432K
Nov 16 16:51:35 cmp001 kernel: [    1.900080] Write protecting the kernel read-only data: 20480k
Nov 16 16:51:35 cmp001 kernel: [    1.904207] Freeing unused kernel image memory: 2008K
Nov 16 16:51:35 cmp001 kernel: [    1.906749] Freeing unused kernel image memory: 1880K
Nov 16 16:51:35 cmp001 kernel: [    1.916768] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:51:35 cmp001 kernel: [    1.918311] x86/mm: Checking user space page tables
Nov 16 16:51:35 cmp001 kernel: [    1.926968] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:51:35 cmp001 kernel: [    2.008874] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 16 16:51:35 cmp001 kernel: [    2.011577] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 16 16:51:35 cmp001 kernel: [    2.019453] AVX version of gcm_enc/dec engaged.
Nov 16 16:51:35 cmp001 kernel: [    2.020672] AES CTR mode by8 optimization enabled
Nov 16 16:51:35 cmp001 kernel: [    2.021554] FDC 0 is a S82078B
Nov 16 16:51:35 cmp001 kernel: [    2.036060] virtio_net virtio2 ens5: renamed from eth2
Nov 16 16:51:35 cmp001 kernel: [    2.060402] virtio_net virtio3 ens6: renamed from eth3
Nov 16 16:51:35 cmp001 kernel: [    2.080119] virtio_net virtio0 ens3: renamed from eth0
Nov 16 16:51:35 cmp001 kernel: [    2.092151] GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 16 16:51:35 cmp001 kernel: [    2.094665] GPT:4612095 != 209715199
Nov 16 16:51:35 cmp001 kernel: [    2.095971] GPT:Alternate GPT header not at the end of the disk.
Nov 16 16:51:35 cmp001 kernel: [    2.097407] GPT:4612095 != 209715199
Nov 16 16:51:35 cmp001 kernel: [    2.098330] GPT: Use GNU Parted to correct GPT errors.
Nov 16 16:51:35 cmp001 kernel: [    2.099586]  vda: vda1 vda14 vda15
Nov 16 16:51:35 cmp001 kernel: [    2.104134] virtio_net virtio1 ens4: renamed from eth1
Nov 16 16:51:35 cmp001 kernel: [    3.904023] raid6: sse2x1   gen()  7565 MB/s
Nov 16 16:51:35 cmp001 kernel: [    3.952054] raid6: sse2x1   xor()  3846 MB/s
Nov 16 16:51:35 cmp001 kernel: [    4.000031] raid6: sse2x2   gen()  9467 MB/s
Nov 16 16:51:35 cmp001 kernel: [    4.048030] raid6: sse2x2   xor()  6221 MB/s
Nov 16 16:51:35 cmp001 kernel: [    4.096027] raid6: sse2x4   gen() 10574 MB/s
Nov 16 16:51:35 cmp001 kernel: [    4.144045] raid6: sse2x4   xor()  6998 MB/s
Nov 16 16:51:35 cmp001 kernel: [    4.145410] raid6: using algorithm sse2x4 gen() 10574 MB/s
Nov 16 16:51:35 cmp001 kernel: [    4.146725] raid6: .... xor() 6998 MB/s, rmw enabled
Nov 16 16:51:35 cmp001 kernel: [    4.148171] raid6: using ssse3x2 recovery algorithm
Nov 16 16:51:35 cmp001 kernel: [    4.150900] xor: automatically using best checksumming function   avx       
Nov 16 16:51:35 cmp001 kernel: [    4.153858] async_tx: api initialized (async)
Nov 16 16:51:35 cmp001 kernel: [    4.203204] Btrfs loaded, crc32c=crc32c-intel
Nov 16 16:51:35 cmp001 kernel: [    4.277544] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
Nov 16 16:51:35 cmp001 kernel: [    4.527820] random: fast init done
Nov 16 16:51:35 cmp001 kernel: [    4.954553] ip_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:51:35 cmp001 kernel: [    4.964722] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:35 cmp001 kernel: [    4.970682] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Nov 16 16:51:35 cmp001 kernel: [    4.975566] systemd[1]: Detected virtualization kvm.
Nov 16 16:51:35 cmp001 kernel: [    4.976820] systemd[1]: Detected architecture x86-64.
Nov 16 16:51:35 cmp001 kernel: [    4.978118] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:35 cmp001 kernel: [    4.979691] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:35 cmp001 kernel: [    4.997377] systemd[1]: Set hostname to <ubuntu>.
Nov 16 16:51:35 cmp001 kernel: [    5.000081] systemd[1]: Initializing machine ID from KVM UUID.
Nov 16 16:51:35 cmp001 kernel: [    5.002230] systemd[1]: Installed transient /etc/machine-id file.
Nov 16 16:51:35 cmp001 kernel: [    5.295490] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 16 16:51:35 cmp001 kernel: [    5.300972] systemd[1]: Created slice System Slice.
Nov 16 16:51:35 cmp001 kernel: [    5.303059] systemd[1]: Listening on udev Control Socket.
Nov 16 16:51:35 cmp001 kernel: [    5.305591] systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 16 16:51:35 cmp001 kernel: [    5.322166] EXT4-fs (vda1): re-mounted. Opts: (null)
Nov 16 16:51:35 cmp001 kernel: [    5.343318] Loading iSCSI transport class v2.0-870.
Nov 16 16:51:35 cmp001 kernel: [    5.352545] iscsi: registered transport (tcp)
Nov 16 16:51:35 cmp001 kernel: [    5.399013] iscsi: registered transport (iser)
Nov 16 16:51:35 cmp001 kernel: [    5.460755] systemd-journald[440]: Received request to flush runtime journal from PID 1
Nov 16 16:51:35 cmp001 kernel: [    6.176498] audit: type=1400 audit(1573923084.176:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=694 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.245208] audit: type=1400 audit(1573923084.244:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=696 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.245594] audit: type=1400 audit(1573923084.244:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=696 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.245984] audit: type=1400 audit(1573923084.244:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=696 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.272388] audit: type=1400 audit(1573923084.272:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/tcpdump" pid=705 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.347086] audit: type=1400 audit(1573923084.344:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/sbin/dhclient" pid=722 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.347540] audit: type=1400 audit(1573923084.344:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=722 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.347985] audit: type=1400 audit(1573923084.344:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=722 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.348423] audit: type=1400 audit(1573923084.348:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=722 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    6.410411] audit: type=1400 audit(1573923084.408:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=697 comm="apparmor_parser"
Nov 16 16:51:35 cmp001 kernel: [    7.886967] ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 16 16:51:35 cmp001 kernel: [    7.890185] ISO 9660 Extensions: RRIP_1991A
Nov 16 16:51:35 cmp001 kernel: [   12.609241] EXT4-fs (vda1): resizing filesystem from 548091 to 26185979 blocks
Nov 16 16:51:35 cmp001 kernel: [   12.835716] EXT4-fs (vda1): resized filesystem to 26185979
Nov 16 16:51:35 cmp001 kernel: [   16.522290] new mount options do not match the existing superblock, will be ignored
Nov 16 16:51:35 cmp001 polkitd[1264]: started daemon version 0.105 using authority implementation `local' version `0.105'
Nov 16 16:51:35 cmp001 dbus-daemon[1005]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 16 16:51:35 cmp001 systemd[1]: Started Authorization Manager.
Nov 16 16:51:35 cmp001 accounts-daemon[1038]: started daemon version 0.6.45
Nov 16 16:51:35 cmp001 systemd[1]: Started Accounts Service.
Nov 16 16:51:35 cmp001 snapd[1095]: AppArmor status: apparmor is enabled and all features are available
Nov 16 16:51:35 cmp001 snapd[1095]: helpers.go:145: error trying to compare the snap system key: system-key missing on disk
Nov 16 16:51:35 cmp001 systemd[1]: Started LXD - container startup/shutdown.
Nov 16 16:51:35 cmp001 snapd[1095]: daemon.go:338: started snapd/2.40+18.04 (series 16; classic) ubuntu/18.04 (amd64) linux/4.15.0-70-generic.
Nov 16 16:51:35 cmp001 systemd[1]: Started Snappy daemon.
Nov 16 16:51:35 cmp001 systemd[1]: Starting Wait until snapd is fully seeded...
Nov 16 16:51:35 cmp001 pollinate[1117]: client verified challenge/response with [https://entropy.ubuntu.com/]
Nov 16 16:51:35 cmp001 pollinate[1117]: client hashed response from [https://entropy.ubuntu.com/]
Nov 16 16:51:35 cmp001 pollinate[1117]: client successfully seeded [/dev/urandom]
Nov 16 16:51:35 cmp001 systemd[1]: Started Pollinate to seed the pseudo random number generator.
Nov 16 16:51:35 cmp001 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 16 16:51:36 cmp001 systemd[1]: Started OpenBSD Secure Shell server.
Nov 16 16:51:36 cmp001 kernel: [   18.071562] random: crng init done
Nov 16 16:51:36 cmp001 kernel: [   18.071565] random: 7 urandom warning(s) missed due to ratelimiting
Nov 16 16:51:36 cmp001 systemd[1]: Started The Salt Minion.
Nov 16 16:51:36 cmp001 systemd[1]: Started Wait until snapd is fully seeded.
Nov 16 16:51:36 cmp001 systemd[1]: Starting Apply the settings specified in cloud-config...
Nov 16 16:51:36 cmp001 systemd[1]: Reached target Multi-User System.
Nov 16 16:51:36 cmp001 systemd[1]: Reached target Graphical Interface.
Nov 16 16:51:36 cmp001 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Nov 16 16:51:36 cmp001 systemd[1]: Started Update UTMP about System Runlevel Changes.
Nov 16 16:51:37 cmp001 salt-minion[1051]: [ERROR   ] DNS lookup or connection check of 'salt' failed.
Nov 16 16:51:37 cmp001 salt-minion[1051]: [ERROR   ] Master hostname: 'salt' not found or not responsive. Retrying in 30 seconds
Nov 16 16:51:38 cmp001 cloud-init[1478]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:config' at Sat, 16 Nov 2019 16:51:37 +0000. Up 19.52 seconds.
Nov 16 16:51:38 cmp001 systemd[1]: Started Apply the settings specified in cloud-config.
Nov 16 16:51:38 cmp001 systemd[1]: Starting Execute cloud user/final scripts...
Nov 16 16:51:39 cmp001 systemd[1]: Reloading.
Nov 16 16:51:39 cmp001 cloud-init[1564]: Synchronizing state of networking.service with SysV service script with /lib/systemd/systemd-sysv-install.
Nov 16 16:51:39 cmp001 cloud-init[1564]: Executing: /lib/systemd/systemd-sysv-install enable networking
Nov 16 16:51:39 cmp001 systemd[1]: Reloading.
Nov 16 16:51:39 cmp001 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:51:39 cmp001 cloud-init[1564]: Synchronizing state of salt-minion.service with SysV service script with /lib/systemd/systemd-sysv-install.
Nov 16 16:51:39 cmp001 cloud-init[1564]: Executing: /lib/systemd/systemd-sysv-install enable salt-minion
Nov 16 16:51:39 cmp001 systemd[1]: Reloading.
Nov 16 16:51:40 cmp001 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:51:40 cmp001 systemd[1]: Stopping The Salt Minion...
Nov 16 16:51:40 cmp001 salt-minion[1051]: [WARNING ] Minion received a SIGTERM. Exiting.
Nov 16 16:51:40 cmp001 salt-minion[1051]: The Salt Minion is shutdown. Minion received a SIGTERM. Exited.
Nov 16 16:51:40 cmp001 systemd[1]: Stopped The Salt Minion.
Nov 16 16:51:40 cmp001 systemd[1]: Starting The Salt Minion...
Nov 16 16:51:40 cmp001 snapd[1095]: daemon.go:576: gracefully waiting for running hooks
Nov 16 16:51:40 cmp001 snapd[1095]: daemon.go:578: done waiting for running hooks
Nov 16 16:51:40 cmp001 snapd[1095]: daemon stop requested to wait for socket activation
Nov 16 16:51:40 cmp001 systemd[1]: Started The Salt Minion.
Nov 16 16:51:40 cmp001 ec2: 
Nov 16 16:51:40 cmp001 ec2: #############################################################
Nov 16 16:51:40 cmp001 ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 16 16:51:40 cmp001 ec2: 1024 SHA256:P3yGWKDn3IDcTGMBBWDA4CcYxZBcgD99hnXx03LPnVY root@cmp001 (DSA)
Nov 16 16:51:40 cmp001 ec2: 256 SHA256:l40SYRJZ6crcf+2UnO+4luDN9pMWYswyAN2FYHfn9Dk root@cmp001 (ECDSA)
Nov 16 16:51:40 cmp001 ec2: 256 SHA256:DMhZ2ZmMXiBrTFdp53cT/E2y7q/+bqqGPKqNI0cVNf0 root@cmp001 (ED25519)
Nov 16 16:51:40 cmp001 ec2: 2048 SHA256:Nft6e43RyKK7wV+nChDGzlG/41vQ3fMbubKxPu+olfg root@cmp001 (RSA)
Nov 16 16:51:40 cmp001 ec2: -----END SSH HOST KEY FINGERPRINTS-----
Nov 16 16:51:40 cmp001 ec2: #############################################################
Nov 16 16:51:40 cmp001 cloud-init[1564]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:final' at Sat, 16 Nov 2019 16:51:38 +0000. Up 20.65 seconds.
Nov 16 16:51:40 cmp001 cloud-init[1564]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 finished at Sat, 16 Nov 2019 16:51:40 +0000. Datasource DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 22.97 seconds
Nov 16 16:51:41 cmp001 systemd[1]: Started Execute cloud user/final scripts.
Nov 16 16:51:41 cmp001 systemd[1]: Reached target Cloud-init target.
Nov 16 16:51:41 cmp001 systemd[1]: Startup finished in 4.843s (kernel) + 18.184s (userspace) = 23.027s.
Nov 16 16:51:54 cmp001 systemd-timesyncd[640]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Nov 16 16:53:57 cmp001 salt-minion[1815]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:54:03 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y remove telnet.
Nov 16 16:54:09 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install smartmontools.
Nov 16 16:54:20 cmp001 systemd[1]: Reloading.
Nov 16 16:54:22 cmp001 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:54:23 cmp001 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:54:23 cmp001 systemd[1]: Reloading.
Nov 16 16:54:23 cmp001 smartd[4241]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:54:23 cmp001 smartd[4241]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:54:23 cmp001 smartd[4241]: Opened configuration file /etc/smartd.conf
Nov 16 16:54:23 cmp001 smartd[4241]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:54:23 cmp001 smartd[4241]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:54:23 cmp001 smartd[4241]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:54:23 cmp001 smartd[4241]: In the system's table of devices NO devices found to scan
Nov 16 16:54:23 cmp001 smartd[4241]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:54:24 cmp001 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:54:24 cmp001 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:54:24 cmp001 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:54:24 cmp001 smartd[4286]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:54:24 cmp001 smartd[4286]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:54:24 cmp001 smartd[4286]: Opened configuration file /etc/smartd.conf
Nov 16 16:54:24 cmp001 smartd[4286]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:54:24 cmp001 smartd[4286]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:54:24 cmp001 smartd[4286]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:54:24 cmp001 smartd[4286]: In the system's table of devices NO devices found to scan
Nov 16 16:54:24 cmp001 smartd[4286]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:54:24 cmp001 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:54:24 cmp001 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:54:25 cmp001 systemd[1]: Reloading.
Nov 16 16:54:28 cmp001 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:54:29 cmp001 systemd[1]: Created slice system-postfix.slice.
Nov 16 16:54:29 cmp001 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:54:29 cmp001 configure-instance.sh[4422]: postconf: fatal: open /etc/postfix/main.cf: No such file or directory
Nov 16 16:54:30 cmp001 configure-instance.sh[4422]: postconf: fatal: open /etc/postfix/main.cf: No such file or directory
Nov 16 16:54:31 cmp001 systemd[1]: postfix@-.service: Control process exited, code=exited status=1
Nov 16 16:54:31 cmp001 systemd[1]: postfix@-.service: Failed with result 'exit-code'.
Nov 16 16:54:31 cmp001 systemd[1]: Failed to start Postfix Mail Transport Agent (instance -).
Nov 16 16:54:37 cmp001 systemd[1]: Reloading.
Nov 16 16:54:37 cmp001 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:54:38 cmp001 postfix/postfix-script[4784]: starting the Postfix mail system
Nov 16 16:54:38 cmp001 postfix/master[4786]: daemon started -- version 3.3.0, configuration /etc/postfix
Nov 16 16:54:38 cmp001 systemd[1]: Started Postfix Mail Transport Agent (instance -).
Nov 16 16:54:38 cmp001 systemd[1]: Starting Postfix Mail Transport Agent...
Nov 16 16:54:38 cmp001 systemd[1]: Started Postfix Mail Transport Agent.
Nov 16 16:54:38 cmp001 systemd[1]: Reloading.
Nov 16 16:54:39 cmp001 systemd[1]: Reloading.
Nov 16 16:54:40 cmp001 systemd[1]: Stopping System Logging Service...
Nov 16 16:54:40 cmp001 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="1109" x-info="http://www.rsyslog.com"] exiting on signal 15.
Nov 16 16:54:40 cmp001 systemd[1]: Stopped System Logging Service.
Nov 16 16:54:40 cmp001 systemd[1]: Starting System Logging Service...
Nov 16 16:54:40 cmp001 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:54:40 cmp001 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:54:40 cmp001 systemd[1]: Started System Logging Service.
Nov 16 16:54:40 cmp001 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:54:40 cmp001 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="5275" x-info="http://www.rsyslog.com"] start
Nov 16 16:54:42 cmp001 dbus-daemon[1005]: [system] Activating via systemd: service name='org.freedesktop.timedate1' unit='dbus-org.freedesktop.timedate1.service' requested by ':1.22' (uid=0 pid=5354 comm="timedatectl " label="unconfined")
Nov 16 16:54:42 cmp001 systemd[1]: Starting Time & Date Service...
Nov 16 16:54:42 cmp001 dbus-daemon[1005]: [system] Successfully activated service 'org.freedesktop.timedate1'
Nov 16 16:54:42 cmp001 systemd[1]: Started Time & Date Service.
Nov 16 16:54:43 cmp001 salt-minion[1815]: [WARNING ] State for file: /boot/grub/grub.cfg - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:43 cmp001 kernel: [  205.569793] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Nov 16 16:54:48 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install sysfsutils.
Nov 16 16:54:50 cmp001 systemd[1]: Reloading.
Nov 16 16:54:50 cmp001 systemd[1]: Reloading.
Nov 16 16:54:50 cmp001 systemd[1]: Starting LSB: Set sysfs variables from /etc/sysfs.conf...
Nov 16 16:54:50 cmp001 sysfsutils[6173]:  * Setting sysfs variables...
Nov 16 16:54:50 cmp001 sysfsutils[6173]:    ...done.
Nov 16 16:54:50 cmp001 systemd[1]: Started LSB: Set sysfs variables from /etc/sysfs.conf.
Nov 16 16:54:50 cmp001 systemd[1]: Reloading.
Nov 16 16:54:53 cmp001 systemd[1]: Started /bin/systemctl disable ondemand.service.
Nov 16 16:54:53 cmp001 systemd[1]: Reloading.
Nov 16 16:54:53 cmp001 dbus-daemon[1005]: [system] Activating via systemd: service name='org.freedesktop.locale1' unit='dbus-org.freedesktop.locale1.service' requested by ':1.25' (uid=0 pid=6438 comm="localectl " label="unconfined")
Nov 16 16:54:53 cmp001 systemd[1]: Starting Locale Service...
Nov 16 16:54:54 cmp001 dbus-daemon[1005]: [system] Successfully activated service 'org.freedesktop.locale1'
Nov 16 16:54:54 cmp001 systemd[1]: Started Locale Service.
Nov 16 16:54:54 cmp001 systemd-localed[6445]: Changed locale to LANG=en_US.UTF-8.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:54:54 cmp001 systemd[1]: Reloading.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] State for file: /etc/shadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] State for file: /etc/gshadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] State for file: /etc/group- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] State for file: /etc/group - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] State for file: /etc/passwd- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] State for file: /etc/passwd - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] State for file: /etc/gshadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp001 salt-minion[1815]: [WARNING ] State for file: /etc/shadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:55 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install openvswitch-switch bridge-utils vlan.
Nov 16 16:54:59 cmp001 systemd[1]: Reloading.
Nov 16 16:54:59 cmp001 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:54:59 cmp001 systemd[1]: Starting Open vSwitch Database Unit...
Nov 16 16:54:59 cmp001 ovs-ctl[6749]:  * /etc/openvswitch/conf.db does not exist
Nov 16 16:54:59 cmp001 ovs-ctl[6749]:  * Creating empty database /etc/openvswitch/conf.db
Nov 16 16:54:59 cmp001 ovs-ctl[6749]:  * Starting ovsdb-server
Nov 16 16:54:59 cmp001 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.16.1
Nov 16 16:54:59 cmp001 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.11.1 "external-ids:system-id=\"eacde882-4a7f-4bc7-a674-16ffc95b89eb\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"ubuntu\"" "system-version=\"18.04\""
Nov 16 16:54:59 cmp001 ovs-ctl[6749]:  * Configuring Open vSwitch system IDs
Nov 16 16:54:59 cmp001 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=cmp001
Nov 16 16:54:59 cmp001 ovs-ctl[6749]:  * Enabling remote OVSDB managers
Nov 16 16:54:59 cmp001 systemd[1]: Started Open vSwitch Database Unit.
Nov 16 16:54:59 cmp001 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 16 16:54:59 cmp001 ovs-ctl[6809]:  * Inserting openvswitch module
Nov 16 16:54:59 cmp001 kernel: [  222.170369] openvswitch: Open vSwitch switching datapath
Nov 16 16:55:00 cmp001 ovs-ctl[6809]:  * Starting ovs-vswitchd
Nov 16 16:55:00 cmp001 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=cmp001
Nov 16 16:55:00 cmp001 ovs-ctl[6809]:  * Enabling remote OVSDB managers
Nov 16 16:55:00 cmp001 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 16 16:55:00 cmp001 systemd[1]: Starting Open vSwitch...
Nov 16 16:55:00 cmp001 systemd[1]: Started Open vSwitch.
Nov 16 16:55:00 cmp001 systemd[1]: Reloading.
Nov 16 16:55:05 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install bridge-utils.
Nov 16 16:55:06 cmp001 systemd[1]: Reloading.
Nov 16 16:55:06 cmp001 systemd[1]: Started /bin/systemctl enable networking.service.
Nov 16 16:55:06 cmp001 systemd[1]: Reloading.
Nov 16 16:55:07 cmp001 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:55:07 cmp001 dnsmasq[1276]: reading /etc/resolv.conf
Nov 16 16:55:07 cmp001 dnsmasq[1276]: using nameserver 8.8.8.8#53
Nov 16 16:55:07 cmp001 salt-minion[1815]: [WARNING ] The network state sls is requiring a reboot of the system to properly apply network configuration.
Nov 16 16:55:07 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install vlan.
Nov 16 16:55:08 cmp001 kernel: [  230.473886] 8021q: 802.1Q VLAN Support v1.8
Nov 16 16:55:08 cmp001 kernel: [  230.473896] 8021q: adding VLAN 0 to HW filter on device ens3
Nov 16 16:55:08 cmp001 kernel: [  230.473954] 8021q: adding VLAN 0 to HW filter on device ens5
Nov 16 16:55:08 cmp001 systemd-udevd[7632]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:08 cmp001 systemd[1]: Reloading OpenBSD Secure Shell server.
Nov 16 16:55:08 cmp001 systemd[1]: Reloaded OpenBSD Secure Shell server.
Nov 16 16:55:08 cmp001 systemd[1]: Found device /sys/subsystem/net/devices/ens5.1000.
Nov 16 16:55:08 cmp001 systemd[1]: Started ifup for ens5.1000.
Nov 16 16:55:08 cmp001 sh[7740]: ifup: waiting for lock on /run/network/ifstate.ens5
Nov 16 16:55:08 cmp001 systemd[1]: Reloading Postfix Mail Transport Agent (instance -).
Nov 16 16:55:08 cmp001 postfix/postfix-script[7757]: refreshing the Postfix mail system
Nov 16 16:55:08 cmp001 postfix/master[4786]: reload -- version 3.3.0, configuration /etc/postfix
Nov 16 16:55:08 cmp001 systemd[1]: Reloaded Postfix Mail Transport Agent (instance -).
Nov 16 16:55:08 cmp001 systemd[1]: Reloading Postfix Mail Transport Agent.
Nov 16 16:55:08 cmp001 systemd[1]: Reloaded Postfix Mail Transport Agent.
Nov 16 16:55:08 cmp001 sh[7740]: ifup: interface ens5.1000 already configured
Nov 16 16:55:09 cmp001 salt-minion[1815]: [ERROR   ] Command '['umount', '/dev/shm']' failed with return code: 32
Nov 16 16:55:09 cmp001 salt-minion[1815]: [ERROR   ] stderr: umount: /dev/shm: target is busy.
Nov 16 16:55:09 cmp001 salt-minion[1815]: [ERROR   ] retcode: 32
Nov 16 16:55:13 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef dist-upgrade.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped target Cloud-init target.
Nov 16 16:55:26 cmp001 systemd[1]: Closed Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped target Timers.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped Discard unused blocks once a week.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped Daily apt upgrade and clean activities.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped Daily apt download activities.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped Daily Cleanup of Temporary Directories.
Nov 16 16:55:26 cmp001 systemd[1]: Stopping Availability of block devices...
Nov 16 16:55:26 cmp001 systemd[1]: Stopped Message of the Day.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped target System Time Synchronized.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped target Graphical Interface.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped Execute cloud user/final scripts.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped Apply the settings specified in cloud-config.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped target Cloud-config availability.
Nov 16 16:55:26 cmp001 systemd[1]: Stopped target Multi-User System.
Nov 16 16:55:26 cmp001 systemd[1]: Stopping Regular background program processing daemon...
Nov 16 16:55:54 cmp001 systemd-modules-load[431]: Inserted module 'iscsi_tcp'
Nov 16 16:55:54 cmp001 systemd-modules-load[431]: Inserted module 'ib_iser'
Nov 16 16:55:54 cmp001 systemd-modules-load[431]: Inserted module 'nf_conntrack'
Nov 16 16:55:54 cmp001 systemd[1]: Mounted Kernel Configuration File System.
Nov 16 16:55:54 cmp001 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 16 16:55:54 cmp001 systemd[1]: Started Create Static Device Nodes in /dev.
Nov 16 16:55:54 cmp001 systemd[1]: Starting udev Kernel Device Manager...
Nov 16 16:55:54 cmp001 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 16 16:55:54 cmp001 systemd[1]: Started Apply Kernel Variables.
Nov 16 16:55:54 cmp001 systemd[1]: Started udev Coldplug all Devices.
Nov 16 16:55:54 cmp001 systemd[1]: Started Set the console keyboard layout.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Local File Systems (Pre).
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Linux version 4.15.0-70-generic (buildd@lgw01-amd64-055) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 (Ubuntu 4.15.0-70.79-generic 4.15.18)
Nov 16 16:55:54 cmp001 systemd[1]: Started Flush Journal to Persistent Storage.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:55:54 cmp001 kernel: [    0.000000] KERNEL supported cpus:
Nov 16 16:55:54 cmp001 systemd[1]: Started udev Kernel Device Manager.
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   Intel GenuineIntel
Nov 16 16:55:54 cmp001 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   AMD AuthenticAMD
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Local Encrypted Volumes.
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   Centaur CentaurHauls
Nov 16 16:55:54 cmp001 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 16 16:55:54 cmp001 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 16 16:55:54 cmp001 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 16 16:55:54 cmp001 systemd-udevd[474]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 16 16:55:54 cmp001 kernel: [    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] e820: BIOS-provided physical RAM map:
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdefff] usable
Nov 16 16:55:54 cmp001 systemd-udevd[475]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x00000000bffdf000-0x00000000bfffffff] reserved
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 16 16:55:54 cmp001 systemd-udevd[471]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000033fffffff] usable
Nov 16 16:55:54 cmp001 kernel: [    0.000000] NX (Execute Disable) protection: active
Nov 16 16:55:54 cmp001 kernel: [    0.000000] SMBIOS 2.8 present.
Nov 16 16:55:54 cmp001 systemd-udevd[466]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Hypervisor detected: KVM
Nov 16 16:55:54 cmp001 systemd[1]: Found device /dev/ttyS0.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 16 16:55:54 cmp001 kernel: [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 16 16:55:54 cmp001 kernel: [    0.000000] e820: last_pfn = 0x340000 max_arch_pfn = 0x400000000
Nov 16 16:55:54 cmp001 kernel: [    0.000000] MTRR default type: write-back
Nov 16 16:55:54 cmp001 kernel: [    0.000000] MTRR fixed ranges enabled:
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   00000-9FFFF write-back
Nov 16 16:55:54 cmp001 systemd-udevd[467]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   A0000-BFFFF uncachable
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   C0000-FFFFF write-protect
Nov 16 16:55:54 cmp001 kernel: [    0.000000] MTRR variable ranges enabled:
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   0 base 00C0000000 mask FFC0000000 uncachable
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   1 disabled
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   2 disabled
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   3 disabled
Nov 16 16:55:54 cmp001 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   4 disabled
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   5 disabled
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   6 disabled
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   7 disabled
Nov 16 16:55:54 cmp001 kernel: [    0.000000] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 16 16:55:54 cmp001 systemd-udevd[472]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] e820: last_pfn = 0xbffdf max_arch_pfn = 0x400000000
Nov 16 16:55:54 cmp001 systemd-udevd[472]: Could not generate persistent MAC address for br-mgmt: No such file or directory
Nov 16 16:55:54 cmp001 kernel: [    0.000000] found SMP MP-table at [mem 0x000f6590-0x000f659f]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Scanning 1 areas for low memory corruption
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Using GB pages for direct mapping
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BRK [0x302941000, 0x302941fff] PGTABLE
Nov 16 16:55:54 cmp001 systemd[1]: Found device Virtio network device.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BRK [0x302942000, 0x302942fff] PGTABLE
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BRK [0x302943000, 0x302943fff] PGTABLE
Nov 16 16:55:54 cmp001 systemd[1]: Found device /sys/subsystem/net/devices/br-mgmt.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BRK [0x302944000, 0x302944fff] PGTABLE
Nov 16 16:55:54 cmp001 systemd[1]: Found device Virtio network device.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BRK [0x302945000, 0x302945fff] PGTABLE
Nov 16 16:55:54 cmp001 kernel: [    0.000000] BRK [0x302946000, 0x302946fff] PGTABLE
Nov 16 16:55:54 cmp001 kernel: [    0.000000] RAMDISK: [mem 0x35a87000-0x36d3afff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: Early table checksum verification disabled
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: RSDP 0x00000000000F6540 000014 (v00 BOCHS )
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: RSDT 0x00000000BFFE14B2 000030 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: FACP 0x00000000BFFE08D4 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: DSDT 0x00000000BFFDFD00 000BD4 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
Nov 16 16:55:54 cmp001 systemd[1]: Found device Virtio network device.
Nov 16 16:55:54 cmp001 systemd[1]: Found device /dev/disk/by-label/UEFI.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: FACS 0x00000000BFFDFCC0 000040
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: SSDT 0x00000000BFFE0948 000ACA (v01 BOCHS  BXPCSSDT 00000001 BXPC 00000001)
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: APIC 0x00000000BFFE1412 0000A0 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:55:54 cmp001 systemd[1]: Mounting /boot/efi...
Nov 16 16:55:54 cmp001 kernel: [    0.000000] No NUMA configuration found
Nov 16 16:55:54 cmp001 systemd[1]: Mounted /boot/efi.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Faking a node at [mem 0x0000000000000000-0x000000033fffffff]
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Local File Systems.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] NODE_DATA(0) allocated [mem 0x33ffd5000-0x33fffffff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] kvm-clock: cpu 0, msr 3:3ff54001, primary cpu clock
Nov 16 16:55:54 cmp001 systemd[1]: Starting Set console font and keymap...
Nov 16 16:55:54 cmp001 kernel: [    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 16 16:55:54 cmp001 kernel: [    0.000000] kvm-clock: using sched offset of 273912655882 cycles
Nov 16 16:55:54 cmp001 kernel: [    0.000000] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 16 16:55:54 cmp001 systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Zone ranges:
Nov 16 16:55:54 cmp001 systemd[1]: Starting AppArmor initialization...
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 16 16:55:54 cmp001 systemd[1]: Starting ebtables ruleset management...
Nov 16 16:55:54 cmp001 systemd[1]: Starting Create Volatile Files and Directories...
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   Normal   [mem 0x0000000100000000-0x000000033fffffff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   Device   empty
Nov 16 16:55:54 cmp001 systemd[1]: Started Set console font and keymap.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Movable zone start for each node
Nov 16 16:55:54 cmp001 systemd[1]: Started Tell Plymouth To Write Out Runtime Data.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Early memory node ranges
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 16 16:55:54 cmp001 apparmor[904]:  * Starting AppArmor profiles
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   node   0: [mem 0x0000000000100000-0x00000000bffdefff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   node   0: [mem 0x0000000100000000-0x000000033fffffff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Reserved but unavailable: 98 pages
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000033fffffff]
Nov 16 16:55:54 cmp001 systemd[1]: Started Create Volatile Files and Directories.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] On node 0 totalpages: 3145597
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   DMA zone: 64 pages used for memmap
Nov 16 16:55:54 cmp001 systemd[1]: Starting Update UTMP about System Boot/Shutdown...
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   DMA zone: 21 pages reserved
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   DMA zone: 3998 pages, LIFO batch:0
Nov 16 16:55:54 cmp001 systemd[1]: Starting Network Time Synchronization...
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   DMA32 zone: 12224 pages used for memmap
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   DMA32 zone: 782303 pages, LIFO batch:31
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   Normal zone: 36864 pages used for memmap
Nov 16 16:55:54 cmp001 systemd[1]: Started Update UTMP about System Boot/Shutdown.
Nov 16 16:55:54 cmp001 kernel: [    0.000000]   Normal zone: 2359296 pages, LIFO batch:31
Nov 16 16:55:54 cmp001 systemd[1]: Started ebtables ruleset management.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: PM-Timer IO Port: 0x608
Nov 16 16:55:54 cmp001 apparmor[904]: Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 16 16:55:54 cmp001 apparmor[904]:    ...done.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 16 16:55:54 cmp001 systemd[1]: Started AppArmor initialization.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 16 16:55:54 cmp001 systemd[1]: Starting Initial cloud-init job (pre-networking)...
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 16 16:55:54 cmp001 systemd[1]: Started Network Time Synchronization.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: IRQ0 used by override.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: IRQ5 used by override.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: IRQ9 used by override.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target System Time Synchronized.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: IRQ10 used by override.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ACPI: IRQ11 used by override.
Nov 16 16:55:54 cmp001 cloud-init[1034]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init-local' at Sat, 16 Nov 2019 16:55:50 +0000. Up 9.13 seconds.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Using ACPI (MADT) for SMP configuration information
Nov 16 16:55:54 cmp001 kernel: [    0.000000] smpboot: Allowing 6 CPUs, 0 hotplug CPUs
Nov 16 16:55:54 cmp001 systemd[1]: Started Initial cloud-init job (pre-networking).
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Network (Pre).
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xbffdf000-0xbfffffff]
Nov 16 16:55:54 cmp001 systemd[1]: Starting Open vSwitch Database Unit...
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 16 16:55:54 cmp001 systemd[1]: Started ifup for br-mgmt.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 16 16:55:54 cmp001 systemd[1]: Started ifup for ens3.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 16 16:55:54 cmp001 systemd[1]: Started ifup for ens4.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Booting paravirtualized kernel on KVM
Nov 16 16:55:54 cmp001 kernel: [    0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
Nov 16 16:55:54 cmp001 kernel: [    0.000000] random: get_random_bytes called from start_kernel+0x99/0x4fd with crng_init=0
Nov 16 16:55:54 cmp001 systemd[1]: Started ifup for ens5.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:6 nr_cpu_ids:6 nr_node_ids:1
Nov 16 16:55:54 cmp001 kernel: [    0.000000] percpu: Embedded 45 pages/cpu s147456 r8192 d28672 u262144
Nov 16 16:55:54 cmp001 sh[1105]: Waiting for br-mgmt to get ready (MAXWAIT is 32 seconds).
Nov 16 16:55:54 cmp001 sh[1121]: WARNING:  Could not open /proc/net/vlan/config.  Maybe you need to load the 8021q module, or maybe you are not using PROCFS??
Nov 16 16:55:54 cmp001 sh[1121]: Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Nov 16 16:55:54 cmp001 sh[1121]: Added VLAN with VID == 1000 to IF -:ens5:-
Nov 16 16:55:54 cmp001 systemd-udevd[1389]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:54 cmp001 systemd[1]: Found device /sys/subsystem/net/devices/ens5.1000.
Nov 16 16:55:54 cmp001 systemd[1]: Started ifup for ens5.1000.
Nov 16 16:55:54 cmp001 sh[1473]: Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Nov 16 16:55:54 cmp001 ovs-ctl[1101]:  * Starting ovsdb-server
Nov 16 16:55:54 cmp001 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.16.1
Nov 16 16:55:54 cmp001 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.11.1 "external-ids:system-id=\"eacde882-4a7f-4bc7-a674-16ffc95b89eb\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"ubuntu\"" "system-version=\"18.04\""
Nov 16 16:55:54 cmp001 ovs-ctl[1101]:  * Configuring Open vSwitch system IDs
Nov 16 16:55:54 cmp001 ovs-ctl[1101]:  * Enabling remote OVSDB managers
Nov 16 16:55:54 cmp001 systemd[1]: Started Open vSwitch Database Unit.
Nov 16 16:55:54 cmp001 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 16 16:55:54 cmp001 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=cmp001
Nov 16 16:55:54 cmp001 ovs-ctl[1622]:  * Inserting openvswitch module
Nov 16 16:55:54 cmp001 ovs-ctl[1622]:  * Starting ovs-vswitchd
Nov 16 16:55:54 cmp001 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=cmp001
Nov 16 16:55:54 cmp001 ovs-ctl[1622]:  * Enabling remote OVSDB managers
Nov 16 16:55:54 cmp001 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 16 16:55:54 cmp001 systemd[1]: Starting Raise network interfaces...
Nov 16 16:55:54 cmp001 systemd[1]: Started Raise network interfaces.
Nov 16 16:55:54 cmp001 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 16 16:55:54 cmp001 kernel: [    0.000000] pcpu-alloc: s147456 r8192 d28672 u262144 alloc=1*2097152
Nov 16 16:55:54 cmp001 kernel: [    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 - - 
Nov 16 16:55:54 cmp001 cloud-init[1855]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init' at Sat, 16 Nov 2019 16:55:53 +0000. Up 12.39 seconds.
Nov 16 16:55:54 cmp001 kernel: [    0.000000] KVM setup async PF for cpu 0
Nov 16 16:55:54 cmp001 kernel: [    0.000000] kvm-stealtime: cpu 0, msr 33fc23040
Nov 16 16:55:54 cmp001 kernel: [    0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: ++++++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++++++
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 3096424
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Policy zone: Normal
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Calgary: detecting Calgary via BIOS EBDA area
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   Device  |   Up  |          Address           |      Mask     | Scope  |     Hw-Address    |
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Memory: 12270808K/12582388K available (12300K kernel code, 2481K rwdata, 4264K rodata, 2432K init, 2388K bss, 311580K reserved, 0K cma-reserved)
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:54 cmp001 kernel: [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
Nov 16 16:55:54 cmp001 kernel: [    0.000000] Kernel/User page tables isolation: enabled
Nov 16 16:55:54 cmp001 kernel: [    0.000000] ftrace: allocating 39315 entries in 154 pages
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |  br-mgmt  |  True |        172.16.10.55        | 255.255.255.0 | global | 52:54:00:88:dd:d4 |
Nov 16 16:55:54 cmp001 kernel: [    0.004000] Hierarchical RCU implementation.
Nov 16 16:55:54 cmp001 kernel: [    0.004000] 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=6.
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |  br-mgmt  |  True | fe80::5054:ff:fe88:ddd4/64 |       .       |  link  | 52:54:00:88:dd:d4 |
Nov 16 16:55:54 cmp001 kernel: [    0.004000] 	Tasks RCU enabled.
Nov 16 16:55:54 cmp001 kernel: [    0.004000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
Nov 16 16:55:54 cmp001 kernel: [    0.004000] NR_IRQS: 524544, nr_irqs: 472, preallocated irqs: 16
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |    ens3   |  True |       192.168.11.36        | 255.255.255.0 | global | 52:54:00:da:c3:64 |
Nov 16 16:55:54 cmp001 kernel: [    0.004000] Console: colour VGA+ 80x25
Nov 16 16:55:54 cmp001 kernel: [    0.004000] console [tty1] enabled
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |    ens3   |  True | fe80::5054:ff:feda:c364/64 |       .       |  link  | 52:54:00:da:c3:64 |
Nov 16 16:55:54 cmp001 kernel: [    0.004000] console [ttyS0] enabled
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |    ens4   |  True | fe80::5054:ff:fe88:ddd4/64 |       .       |  link  | 52:54:00:88:dd:d4 |
Nov 16 16:55:54 cmp001 kernel: [    0.004000] ACPI: Core revision 20170831
Nov 16 16:55:54 cmp001 kernel: [    0.004000] ACPI: 2 ACPI AML tables successfully acquired and loaded
Nov 16 16:55:54 cmp001 kernel: [    0.004005] APIC: Switch to symmetric I/O mode setup
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |    ens5   |  True | fe80::5054:ff:fe5d:c357/64 |       .       |  link  | 52:54:00:5d:c3:57 |
Nov 16 16:55:54 cmp001 kernel: [    0.005263] x2apic enabled
Nov 16 16:55:54 cmp001 kernel: [    0.006165] Switched APIC routing to physical x2apic.
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: | ens5.1000 |  True | fe80::5054:ff:fe5d:c357/64 |       .       |  link  | 52:54:00:5d:c3:57 |
Nov 16 16:55:54 cmp001 kernel: [    0.008000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |    ens6   | False |             .              |       .       |   .    | 52:54:00:48:c5:4b |
Nov 16 16:55:54 cmp001 kernel: [    0.008000] tsc: Detected 2799.994 MHz processor
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |     lo    |  True |         127.0.0.1          |   255.0.0.0   |  host  |         .         |
Nov 16 16:55:54 cmp001 kernel: [    0.008000] Calibrating delay loop (skipped) preset value.. 5599.98 BogoMIPS (lpj=11199976)
Nov 16 16:55:54 cmp001 kernel: [    0.008002] pid_max: default: 32768 minimum: 301
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |     lo    |  True |          ::1/128           |       .       |  host  |         .         |
Nov 16 16:55:54 cmp001 kernel: [    0.008967] Security Framework initialized
Nov 16 16:55:54 cmp001 kernel: [    0.009845] Yama: becoming mindful.
Nov 16 16:55:54 cmp001 kernel: [    0.010587] AppArmor: AppArmor initialized
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:54 cmp001 kernel: [    0.013981] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
Nov 16 16:55:54 cmp001 kernel: [    0.016699] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.018204] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.019539] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.020296] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 16 16:55:54 cmp001 kernel: [    0.021359] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 16 16:55:54 cmp001 kernel: [    0.022495] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:54 cmp001 kernel: [    0.024002] Spectre V2 : Mitigation: Full generic retpoline
Nov 16 16:55:54 cmp001 kernel: [    0.025106] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 16 16:55:54 cmp001 kernel: [    0.026686] Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 16 16:55:54 cmp001 kernel: [    0.028008] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: | Route | Destination  |   Gateway    |    Genmask    | Interface | Flags |
Nov 16 16:55:54 cmp001 kernel: [    0.029641] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 16 16:55:54 cmp001 kernel: [    0.032029] MDS: Mitigation: Clear CPU buffers
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:54 cmp001 kernel: [    0.038201] Freeing SMP alternatives memory: 36K
Nov 16 16:55:54 cmp001 kernel: [    0.040838] TSC deadline timer enabled
Nov 16 16:55:54 cmp001 kernel: [    0.040841] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (family: 0x6, model: 0x3e, stepping: 0x4)
Nov 16 16:55:54 cmp001 kernel: [    0.042839] Performance Events: IvyBridge events, Intel PMU driver.
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   0   |   0.0.0.0    | 192.168.11.3 |    0.0.0.0    |    ens3   |   UG  |
Nov 16 16:55:54 cmp001 kernel: [    0.044000] ... version:                2
Nov 16 16:55:54 cmp001 kernel: [    0.044004] ... bit width:              48
Nov 16 16:55:54 cmp001 kernel: [    0.044916] ... generic registers:      4
Nov 16 16:55:54 cmp001 kernel: [    0.045729] ... value mask:             0000ffffffffffff
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   1   |  10.254.0.0  | 172.16.10.55 |  255.255.0.0  |  br-mgmt  |   UG  |
Nov 16 16:55:54 cmp001 kernel: [    0.046753] ... max period:             000000007fffffff
Nov 16 16:55:54 cmp001 kernel: [    0.047779] ... fixed-purpose events:   3
Nov 16 16:55:54 cmp001 kernel: [    0.048003] ... event mask:             000000070000000f
Nov 16 16:55:54 cmp001 kernel: [    0.049075] Hierarchical SRCU implementation.
Nov 16 16:55:54 cmp001 kernel: [    0.050577] smp: Bringing up secondary CPUs ...
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   2   | 172.16.10.0  |   0.0.0.0    | 255.255.255.0 |  br-mgmt  |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.051612] x86: Booting SMP configuration:
Nov 16 16:55:54 cmp001 kernel: [    0.052006] .... node  #0, CPUs:      #1
Nov 16 16:55:54 cmp001 kernel: [    0.004000] kvm-clock: cpu 1, msr 3:3ff54041, secondary cpu clock
Nov 16 16:55:54 cmp001 kernel: [    0.056039] KVM setup async PF for cpu 1
Nov 16 16:55:54 cmp001 kernel: [    0.056888] kvm-stealtime: cpu 1, msr 33fc63040
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   3   | 192.168.11.0 |   0.0.0.0    | 255.255.255.0 |    ens3   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.057876]  #2
Nov 16 16:55:54 cmp001 kernel: [    0.004000] kvm-clock: cpu 2, msr 3:3ff54081, secondary cpu clock
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:54 cmp001 kernel: [    0.060019] KVM setup async PF for cpu 2
Nov 16 16:55:54 cmp001 kernel: [    0.060848] kvm-stealtime: cpu 2, msr 33fca3040
Nov 16 16:55:54 cmp001 kernel: [    0.061803]  #3
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 16 16:55:54 cmp001 kernel: [    0.004000] kvm-clock: cpu 3, msr 3:3ff540c1, secondary cpu clock
Nov 16 16:55:54 cmp001 kernel: [    0.064017] KVM setup async PF for cpu 3
Nov 16 16:55:54 cmp001 kernel: [    0.064848] kvm-stealtime: cpu 3, msr 33fce3040
Nov 16 16:55:54 cmp001 kernel: [    0.065790]  #4
Nov 16 16:55:54 cmp001 kernel: [    0.004000] kvm-clock: cpu 4, msr 3:3ff54101, secondary cpu clock
Nov 16 16:55:54 cmp001 kernel: [    0.068017] KVM setup async PF for cpu 4
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:54 cmp001 kernel: [    0.068831] kvm-stealtime: cpu 4, msr 33fd23040
Nov 16 16:55:54 cmp001 kernel: [    0.069755]  #5
Nov 16 16:55:54 cmp001 kernel: [    0.004000] kvm-clock: cpu 5, msr 3:3ff54141, secondary cpu clock
Nov 16 16:55:54 cmp001 kernel: [    0.072024] KVM setup async PF for cpu 5
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 16 16:55:54 cmp001 kernel: [    0.072888] kvm-stealtime: cpu 5, msr 33fd63040
Nov 16 16:55:54 cmp001 kernel: [    0.073809] smp: Brought up 1 node, 6 CPUs
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:54 cmp001 kernel: [    0.073809] smpboot: Max logical packages: 6
Nov 16 16:55:54 cmp001 kernel: [    0.073809] smpboot: Total of 6 processors activated (33599.92 BogoMIPS)
Nov 16 16:55:54 cmp001 kernel: [    0.076735] devtmpfs: initialized
Nov 16 16:55:54 cmp001 kernel: [    0.076791] x86/mm: Memory block size: 128MB
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   1   |  fe80::/64  |    ::   |    ens4   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.080868] evm: security.selinux
Nov 16 16:55:54 cmp001 kernel: [    0.081593] evm: security.SMACK64
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   2   |  fe80::/64  |    ::   |  br-mgmt  |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.082355] evm: security.SMACK64EXEC
Nov 16 16:55:54 cmp001 kernel: [    0.083164] evm: security.SMACK64TRANSMUTE
Nov 16 16:55:54 cmp001 kernel: [    0.084005] evm: security.SMACK64MMAP
Nov 16 16:55:54 cmp001 kernel: [    0.084795] evm: security.apparmor
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   3   |  fe80::/64  |    ::   |    ens3   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.085523] evm: security.ima
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   4   |  fe80::/64  |    ::   |    ens5   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.086230] evm: security.capability
Nov 16 16:55:54 cmp001 kernel: [    0.087043] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
Nov 16 16:55:54 cmp001 kernel: [    0.088020] futex hash table entries: 2048 (order: 5, 131072 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.089491] pinctrl core: initialized pinctrl subsystem
Nov 16 16:55:54 cmp001 kernel: [    0.090734] RTC time: 16:55:41, date: 11/16/19
Nov 16 16:55:54 cmp001 kernel: [    0.092425] NET: Registered protocol family 16
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   5   |  fe80::/64  |    ::   | ens5.1000 |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.093520] audit: initializing netlink subsys (disabled)
Nov 16 16:55:54 cmp001 kernel: [    0.094661] audit: type=2000 audit(1573923340.365:1): state=initialized audit_enabled=0 res=1
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   7   |    local    |    ::   |    ens5   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.096009] cpuidle: using governor ladder
Nov 16 16:55:54 cmp001 kernel: [    0.096974] cpuidle: using governor menu
Nov 16 16:55:54 cmp001 kernel: [    0.098039] ACPI: bus type PCI registered
Nov 16 16:55:54 cmp001 kernel: [    0.098962] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 16 16:55:54 cmp001 kernel: [    0.100158] PCI: Using configuration type 1 for base access
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   8   |    local    |    ::   |    ens4   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.101321] core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off
Nov 16 16:55:54 cmp001 kernel: [    0.104574] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   9   |   ff00::/8  |    ::   |    ens4   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.105440] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 16 16:55:54 cmp001 kernel: [    0.106811] ACPI: Added _OSI(Module Device)
Nov 16 16:55:54 cmp001 kernel: [    0.108005] ACPI: Added _OSI(Processor Device)
Nov 16 16:55:54 cmp001 kernel: [    0.108946] ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   10  |   ff00::/8  |    ::   |  br-mgmt  |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.109928] ACPI: Added _OSI(Processor Aggregator Device)
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   11  |   ff00::/8  |    ::   |    ens3   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.111021] ACPI: Added _OSI(Linux-Dell-Video)
Nov 16 16:55:54 cmp001 kernel: [    0.111944] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 16 16:55:54 cmp001 kernel: [    0.112004] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   12  |   ff00::/8  |    ::   |    ens5   |   U   |
Nov 16 16:55:54 cmp001 kernel: [    0.114384] ACPI: Interpreter enabled
Nov 16 16:55:54 cmp001 kernel: [    0.115194] ACPI: (supports S0 S5)
Nov 16 16:55:54 cmp001 kernel: [    0.115909] ACPI: Using IOAPIC for interrupt routing
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: |   13  |   ff00::/8  |    ::   | ens5.1000 |   U   |
Nov 16 16:55:54 cmp001 cloud-init[1855]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:54 cmp001 kernel: [    0.116024] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 16 16:55:54 cmp001 systemd[1]: Started Initial cloud-init job (metadata service crawler).
Nov 16 16:55:54 cmp001 kernel: [    0.118356] ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 16 16:55:54 cmp001 kernel: [    0.122603] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Cloud-config availability.
Nov 16 16:55:54 cmp001 kernel: [    0.123880] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
Nov 16 16:55:54 cmp001 kernel: [    0.124010] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Nov 16 16:55:54 cmp001 systemd[1]: Reached target System Initialization.
Nov 16 16:55:54 cmp001 kernel: [    0.125350] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 16 16:55:54 cmp001 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 16 16:55:54 cmp001 kernel: [    0.127906] acpiphp: Slot [3] registered
Nov 16 16:55:54 cmp001 kernel: [    0.128043] acpiphp: Slot [4] registered
Nov 16 16:55:54 cmp001 systemd[1]: Listening on ACPID Listen Socket.
Nov 16 16:55:54 cmp001 kernel: [    0.128924] acpiphp: Slot [5] registered
Nov 16 16:55:54 cmp001 kernel: [    0.129826] acpiphp: Slot [6] registered
Nov 16 16:55:54 cmp001 kernel: [    0.130705] acpiphp: Slot [7] registered
Nov 16 16:55:54 cmp001 systemd[1]: Listening on UUID daemon activation socket.
Nov 16 16:55:54 cmp001 systemd[1]: Started Message of the Day.
Nov 16 16:55:54 cmp001 systemd[1]: Started ACPI Events Check.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Paths.
Nov 16 16:55:54 cmp001 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 16 16:55:54 cmp001 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 16 16:55:54 cmp001 systemd[1]: Started Daily apt download activities.
Nov 16 16:55:54 cmp001 systemd[1]: Started Daily apt upgrade and clean activities.
Nov 16 16:55:54 cmp001 systemd[1]: Starting Socket activation for snappy daemon.
Nov 16 16:55:54 cmp001 systemd[1]: Starting LXD - unix socket.
Nov 16 16:55:54 cmp001 systemd[1]: Started Discard unused blocks once a week.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Timers.
Nov 16 16:55:54 cmp001 systemd[1]: Listening on Socket activation for snappy daemon.
Nov 16 16:55:54 cmp001 systemd[1]: Listening on LXD - unix socket.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Sockets.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Basic System.
Nov 16 16:55:54 cmp001 systemd[1]: Starting LXD - container startup/shutdown...
Nov 16 16:55:54 cmp001 systemd[1]: Started Regular background program processing daemon.
Nov 16 16:55:54 cmp001 cron[1934]: (CRON) INFO (pidfile fd = 3)
Nov 16 16:55:54 cmp001 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:55:54 cmp001 cron[1934]: (CRON) INFO (Running @reboot jobs)
Nov 16 16:55:54 cmp001 systemd[1]: Starting Snappy daemon...
Nov 16 16:55:54 cmp001 systemd[1]: Starting LSB: Record successful boot for GRUB...
Nov 16 16:55:54 cmp001 systemd[1]: Starting Login Service...
Nov 16 16:55:54 cmp001 kernel: [    0.131631] acpiphp: Slot [9] registered
Nov 16 16:55:54 cmp001 systemd[1]: Starting Open vSwitch...
Nov 16 16:55:54 cmp001 kernel: [    0.132043] acpiphp: Slot [10] registered
Nov 16 16:55:54 cmp001 systemd[1]: Starting Accounts Service...
Nov 16 16:55:54 cmp001 kernel: [    0.132955] acpiphp: Slot [11] registered
Nov 16 16:55:54 cmp001 kernel: [    0.133844] acpiphp: Slot [12] registered
Nov 16 16:55:54 cmp001 kernel: [    0.134769] acpiphp: Slot [13] registered
Nov 16 16:55:54 cmp001 systemd[1]: Started irqbalance daemon.
Nov 16 16:55:54 cmp001 kernel: [    0.135682] acpiphp: Slot [14] registered
Nov 16 16:55:54 cmp001 kernel: [    0.136045] acpiphp: Slot [15] registered
Nov 16 16:55:54 cmp001 kernel: [    0.136951] acpiphp: Slot [16] registered
Nov 16 16:55:54 cmp001 systemd[1]: Started Deferred execution scheduler.
Nov 16 16:55:54 cmp001 kernel: [    0.137843] acpiphp: Slot [17] registered
Nov 16 16:55:54 cmp001 systemd[1]: Starting System Logging Service...
Nov 16 16:55:54 cmp001 kernel: [    0.138773] acpiphp: Slot [18] registered
Nov 16 16:55:54 cmp001 kernel: [    0.139670] acpiphp: Slot [19] registered
Nov 16 16:55:54 cmp001 kernel: [    0.140042] acpiphp: Slot [20] registered
Nov 16 16:55:54 cmp001 systemd[1]: Starting LSB: Set sysfs variables from /etc/sysfs.conf...
Nov 16 16:55:54 cmp001 kernel: [    0.140936] acpiphp: Slot [21] registered
Nov 16 16:55:54 cmp001 kernel: [    0.141859] acpiphp: Slot [22] registered
Nov 16 16:55:54 cmp001 kernel: [    0.142762] acpiphp: Slot [23] registered
Nov 16 16:55:54 cmp001 systemd[1]: Started D-Bus System Message Bus.
Nov 16 16:55:54 cmp001 kernel: [    0.143655] acpiphp: Slot [24] registered
Nov 16 16:55:54 cmp001 kernel: [    0.144047] acpiphp: Slot [25] registered
Nov 16 16:55:54 cmp001 kernel: [    0.144941] acpiphp: Slot [26] registered
Nov 16 16:55:54 cmp001 kernel: [    0.145869] acpiphp: Slot [27] registered
Nov 16 16:55:54 cmp001 kernel: [    0.146767] acpiphp: Slot [28] registered
Nov 16 16:55:54 cmp001 kernel: [    0.147676] acpiphp: Slot [29] registered
Nov 16 16:55:54 cmp001 kernel: [    0.148042] acpiphp: Slot [30] registered
Nov 16 16:55:54 cmp001 kernel: [    0.148941] acpiphp: Slot [31] registered
Nov 16 16:55:54 cmp001 smartd[1941]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:55:54 cmp001 kernel: [    0.149828] PCI host bridge to bus 0000:00
Nov 16 16:55:54 cmp001 smartd[1941]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:55:54 cmp001 kernel: [    0.150707] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 16 16:55:54 cmp001 kernel: [    0.152004] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 16 16:55:54 cmp001 kernel: [    0.153393] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 16 16:55:54 cmp001 smartd[1941]: Opened configuration file /etc/smartd.conf
Nov 16 16:55:54 cmp001 kernel: [    0.154903] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 16 16:55:54 cmp001 kernel: [    0.156005] pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 16 16:55:54 cmp001 smartd[1941]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:55:54 cmp001 kernel: [    0.157266] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 16 16:55:54 cmp001 kernel: [    0.157825] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 16 16:55:54 cmp001 kernel: [    0.158549] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 16 16:55:54 cmp001 smartd[1941]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:55:54 cmp001 kernel: [    0.164010] pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Nov 16 16:55:54 cmp001 kernel: [    0.166483] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Nov 16 16:55:54 cmp001 smartd[1941]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:55:54 cmp001 kernel: [    0.168006] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Nov 16 16:55:54 cmp001 smartd[1941]: In the system's table of devices NO devices found to scan
Nov 16 16:55:54 cmp001 kernel: [    0.170651] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Nov 16 16:55:54 cmp001 kernel: [    0.172005] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Nov 16 16:55:54 cmp001 smartd[1941]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:55:54 cmp001 kernel: [    0.173627] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 16 16:55:54 cmp001 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:55:54 cmp001 kernel: [    0.174158] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 16 16:55:54 cmp001 kernel: [    0.175665] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 16 16:55:54 cmp001 kernel: [    0.176295] pci 0000:00:02.0: [1013:00b8] type 00 class 0x030000
Nov 16 16:55:54 cmp001 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:55:54 cmp001 kernel: [    0.179057] pci 0000:00:02.0: reg 0x10: [mem 0xfc000000-0xfdffffff pref]
Nov 16 16:55:54 cmp001 kernel: [    0.181127] pci 0000:00:02.0: reg 0x14: [mem 0xfebd0000-0xfebd0fff]
Nov 16 16:55:54 cmp001 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:55:54 cmp001 kernel: [    0.193240] pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 16 16:55:54 cmp001 kernel: [    0.193525] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:54 cmp001 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="1984" x-info="http://www.rsyslog.com"] start
Nov 16 16:55:54 cmp001 kernel: [    0.195743] pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc05f]
Nov 16 16:55:54 cmp001 kernel: [    0.197110] pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 16 16:55:54 cmp001 kernel: [    0.207874] pci 0000:00:03.0: reg 0x30: [mem 0xfeac0000-0xfeafffff pref]
Nov 16 16:55:54 cmp001 kernel: [    0.208295] pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:54 cmp001 kernel: [    0.210816] pci 0000:00:04.0: reg 0x10: [io  0xc060-0xc07f]
Nov 16 16:55:54 cmp001 kernel: [    0.214049] pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 16 16:55:54 cmp001 kernel: [    0.222578] pci 0000:00:04.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
Nov 16 16:55:54 cmp001 kernel: [    0.224301] pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:54 cmp001 kernel: [    0.226253] pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Nov 16 16:55:54 cmp001 kernel: [    0.228011] pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 16 16:55:54 cmp001 kernel: [    0.237764] pci 0000:00:05.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
Nov 16 16:55:54 cmp001 kernel: [    0.238145] pci 0000:00:06.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:54 cmp001 kernel: [    0.240005] pci 0000:00:06.0: reg 0x10: [io  0xc0a0-0xc0bf]
Nov 16 16:55:54 cmp001 kernel: [    0.241914] pci 0000:00:06.0: reg 0x14: [mem 0xfebd4000-0xfebd4fff]
Nov 16 16:55:54 cmp001 kernel: [    0.251615] pci 0000:00:06.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 16 16:55:54 cmp001 kernel: [    0.252017] pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 16 16:55:54 cmp001 kernel: [    0.253989] pci 0000:00:07.0: reg 0x10: [io  0xc000-0xc03f]
Nov 16 16:55:54 cmp001 kernel: [    0.256006] pci 0000:00:07.0: reg 0x14: [mem 0xfebd5000-0xfebd5fff]
Nov 16 16:55:54 cmp001 kernel: [    0.269387] pci 0000:00:08.0: [8086:2934] type 00 class 0x0c0300
Nov 16 16:55:54 cmp001 kernel: [    0.273907] pci 0000:00:08.0: reg 0x20: [io  0xc0c0-0xc0df]
Nov 16 16:55:54 cmp001 kernel: [    0.276107] pci 0000:00:08.1: [8086:2935] type 00 class 0x0c0300
Nov 16 16:55:54 cmp001 kernel: [    0.281874] pci 0000:00:08.1: reg 0x20: [io  0xc0e0-0xc0ff]
Nov 16 16:55:54 cmp001 kernel: [    0.283934] pci 0000:00:08.2: [8086:2936] type 00 class 0x0c0300
Nov 16 16:55:54 cmp001 kernel: [    0.288535] pci 0000:00:08.2: reg 0x20: [io  0xc100-0xc11f]
Nov 16 16:55:54 cmp001 kernel: [    0.291952] pci 0000:00:08.7: [8086:293a] type 00 class 0x0c0320
Nov 16 16:55:54 cmp001 kernel: [    0.292944] pci 0000:00:08.7: reg 0x10: [mem 0xfebd6000-0xfebd6fff]
Nov 16 16:55:54 cmp001 kernel: [    0.298498] pci 0000:00:09.0: [1af4:1002] type 00 class 0x00ff00
Nov 16 16:55:54 cmp001 kernel: [    0.299527] pci 0000:00:09.0: reg 0x10: [io  0xc120-0xc13f]
Nov 16 16:55:54 cmp001 kernel: [    0.307101] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
Nov 16 16:55:54 cmp001 kernel: [    0.308126] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
Nov 16 16:55:54 cmp001 kernel: [    0.309440] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
Nov 16 16:55:54 cmp001 kernel: [    0.310703] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
Nov 16 16:55:54 cmp001 kernel: [    0.312074] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
Nov 16 16:55:54 cmp001 kernel: [    0.313953] SCSI subsystem initialized
Nov 16 16:55:54 cmp001 kernel: [    0.314908] libata version 3.00 loaded.
Nov 16 16:55:54 cmp001 kernel: [    0.314908] pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 16 16:55:54 cmp001 kernel: [    0.314908] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 16 16:55:54 cmp001 kernel: [    0.316025] pci 0000:00:02.0: vgaarb: bridge control possible
Nov 16 16:55:54 cmp001 kernel: [    0.317223] vgaarb: loaded
Nov 16 16:55:54 cmp001 kernel: [    0.317877] ACPI: bus type USB registered
Nov 16 16:55:54 cmp001 kernel: [    0.318825] usbcore: registered new interface driver usbfs
Nov 16 16:55:54 cmp001 kernel: [    0.319943] usbcore: registered new interface driver hub
Nov 16 16:55:54 cmp001 kernel: [    0.320050] usbcore: registered new device driver usb
Nov 16 16:55:54 cmp001 kernel: [    0.321248] EDAC MC: Ver: 3.0.0
Nov 16 16:55:54 cmp001 kernel: [    0.321248] PCI: Using ACPI for IRQ routing
Nov 16 16:55:54 cmp001 kernel: [    0.321248] PCI: pci_cache_line_size set to 64 bytes
Nov 16 16:55:54 cmp001 kernel: [    0.321248] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 16 16:55:54 cmp001 kernel: [    0.321248] e820: reserve RAM buffer [mem 0xbffdf000-0xbfffffff]
Nov 16 16:55:54 cmp001 kernel: [    0.321304] NetLabel: Initializing
Nov 16 16:55:54 cmp001 kernel: [    0.324004] NetLabel:  domain hash size = 128
Nov 16 16:55:54 cmp001 kernel: [    0.324980] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 16 16:55:54 cmp001 kernel: [    0.326502] NetLabel:  unlabeled traffic allowed by default
Nov 16 16:55:54 cmp001 kernel: [    0.327650] clocksource: Switched to clocksource kvm-clock
Nov 16 16:55:54 cmp001 kernel: [    0.335125] VFS: Disk quotas dquot_6.6.0
Nov 16 16:55:54 cmp001 kernel: [    0.336012] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.337458] AppArmor: AppArmor Filesystem Enabled
Nov 16 16:55:54 cmp001 kernel: [    0.338418] pnp: PnP ACPI init
Nov 16 16:55:54 cmp001 kernel: [    0.339121] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Nov 16 16:55:54 cmp001 kernel: [    0.339154] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Nov 16 16:55:54 cmp001 kernel: [    0.339176] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Nov 16 16:55:54 cmp001 kernel: [    0.339202] pnp 00:03: [dma 2]
Nov 16 16:55:54 cmp001 kernel: [    0.339216] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active)
Nov 16 16:55:54 cmp001 kernel: [    0.339293] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Nov 16 16:55:54 cmp001 kernel: [    0.339521] pnp: PnP ACPI: found 5 devices
Nov 16 16:55:54 cmp001 kernel: [    0.346671] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 16 16:55:54 cmp001 kernel: [    0.348385] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 16 16:55:54 cmp001 kernel: [    0.348386] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 16 16:55:54 cmp001 kernel: [    0.348386] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 16 16:55:54 cmp001 kernel: [    0.348387] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 16 16:55:54 cmp001 kernel: [    0.348438] NET: Registered protocol family 2
Nov 16 16:55:54 cmp001 kernel: [    0.349550] TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.351177] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.352572] TCP: Hash tables configured (established 131072 bind 65536)
Nov 16 16:55:54 cmp001 kernel: [    0.353959] UDP hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.355161] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:55:54 cmp001 kernel: [    0.356477] NET: Registered protocol family 1
Nov 16 16:55:54 cmp001 kernel: [    0.357407] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 16 16:55:54 cmp001 kernel: [    0.358555] pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 16 16:55:54 cmp001 kernel: [    0.359685] pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 16 16:55:54 cmp001 kernel: [    0.360937] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 16 16:55:54 cmp001 kernel: [    0.385812] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
Nov 16 16:55:54 cmp001 kernel: [    0.434586] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
Nov 16 16:55:54 cmp001 kernel: [    0.482846] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
Nov 16 16:55:54 cmp001 kernel: [    0.530823] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
Nov 16 16:55:54 cmp001 kernel: [    0.555660] PCI: CLS 0 bytes, default 64
Nov 16 16:55:54 cmp001 kernel: [    0.555706] Unpacking initramfs...
Nov 16 16:55:54 cmp001 kernel: [    0.786084] Freeing initrd memory: 19152K
Nov 16 16:55:54 cmp001 kernel: [    0.787046] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 16 16:55:54 cmp001 kernel: [    0.788368] software IO TLB: mapped [mem 0xbbfdf000-0xbffdf000] (64MB)
Nov 16 16:55:54 cmp001 kernel: [    0.789743] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3aeaff3, max_idle_ns: 440795255742 ns
Nov 16 16:55:54 cmp001 kernel: [    0.791658] Scanning for low memory corruption every 60 seconds
Nov 16 16:55:54 cmp001 kernel: [    0.793658] Initialise system trusted keyrings
Nov 16 16:55:54 cmp001 kernel: [    0.794570] Key type blacklist registered
Nov 16 16:55:54 cmp001 kernel: [    0.795508] workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 16 16:55:54 cmp001 kernel: [    0.797909] zbud: loaded
Nov 16 16:55:54 cmp001 kernel: [    0.798994] squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 16 16:55:54 cmp001 kernel: [    0.800345] fuse init (API version 7.26)
Nov 16 16:55:54 cmp001 kernel: [    0.802747] Key type asymmetric registered
Nov 16 16:55:54 cmp001 kernel: [    0.803657] Asymmetric key parser 'x509' registered
Nov 16 16:55:54 cmp001 kernel: [    0.804717] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 16 16:55:54 cmp001 kernel: [    0.806388] io scheduler noop registered
Nov 16 16:55:54 cmp001 kernel: [    0.807252] io scheduler deadline registered
Nov 16 16:55:54 cmp001 kernel: [    0.808192] io scheduler cfq registered (default)
Nov 16 16:55:54 cmp001 kernel: [    0.809560] intel_idle: Please enable MWAIT in BIOS SETUP
Nov 16 16:55:54 cmp001 kernel: [    0.809651] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 16 16:55:54 cmp001 kernel: [    0.811195] ACPI: Power Button [PWRF]
Nov 16 16:55:54 cmp001 kernel: [    0.836768] virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:54 cmp001 kernel: [    0.863262] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:54 cmp001 kernel: [    0.888703] virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:54 cmp001 kernel: [    0.914136] virtio-pci 0000:00:06.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:54 cmp001 kernel: [    0.939511] virtio-pci 0000:00:07.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:54 cmp001 kernel: [    0.964924] virtio-pci 0000:00:09.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:54 cmp001 kernel: [    0.967276] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Nov 16 16:55:54 cmp001 kernel: [    0.991893] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 16 16:55:54 cmp001 kernel: [    0.995094] Linux agpgart interface v0.103
Nov 16 16:55:54 cmp001 kernel: [    0.998233] loop: module loaded
Nov 16 16:55:54 cmp001 kernel: [    0.999038] ata_piix 0000:00:01.1: version 2.13
Nov 16 16:55:54 cmp001 kernel: [    1.000242] scsi host0: ata_piix
Nov 16 16:55:54 cmp001 kernel: [    1.001253] scsi host1: ata_piix
Nov 16 16:55:54 cmp001 kernel: [    1.002035] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 16 16:55:54 cmp001 kernel: [    1.003399] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 16 16:55:54 cmp001 kernel: [    1.004920] libphy: Fixed MDIO Bus: probed
Nov 16 16:55:54 cmp001 kernel: [    1.005941] tun: Universal TUN/TAP device driver, 1.6
Nov 16 16:55:54 cmp001 kernel: [    1.007071] PPP generic driver version 2.4.2
Nov 16 16:55:54 cmp001 kernel: [    1.008086] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 16 16:55:54 cmp001 kernel: [    1.009542] ehci-pci: EHCI PCI platform driver
Nov 16 16:55:54 cmp001 kernel: [    1.035999] ehci-pci 0000:00:08.7: EHCI Host Controller
Nov 16 16:55:54 cmp001 kernel: [    1.037171] ehci-pci 0000:00:08.7: new USB bus registered, assigned bus number 1
Nov 16 16:55:54 cmp001 kernel: [    1.038900] ehci-pci 0000:00:08.7: irq 11, io mem 0xfebd6000
Nov 16 16:55:54 cmp001 kernel: [    1.056030] ehci-pci 0000:00:08.7: USB 2.0 started, EHCI 1.00
Nov 16 16:55:54 cmp001 kernel: [    1.064298] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Nov 16 16:55:54 cmp001 kernel: [    1.065705] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:54 cmp001 kernel: [    1.067190] usb usb1: Product: EHCI Host Controller
Nov 16 16:55:54 cmp001 kernel: [    1.068209] usb usb1: Manufacturer: Linux 4.15.0-70-generic ehci_hcd
Nov 16 16:55:54 cmp001 kernel: [    1.069530] usb usb1: SerialNumber: 0000:00:08.7
Nov 16 16:55:54 cmp001 kernel: [    1.070646] hub 1-0:1.0: USB hub found
Nov 16 16:55:54 cmp001 kernel: [    1.071496] hub 1-0:1.0: 6 ports detected
Nov 16 16:55:54 cmp001 kernel: [    1.072970] ehci-platform: EHCI generic platform driver
Nov 16 16:55:54 cmp001 kernel: [    1.074053] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 16 16:55:54 cmp001 kernel: [    1.075240] ohci-pci: OHCI PCI platform driver
Nov 16 16:55:54 cmp001 kernel: [    1.076158] ohci-platform: OHCI generic platform driver
Nov 16 16:55:54 cmp001 kernel: [    1.077233] uhci_hcd: USB Universal Host Controller Interface driver
Nov 16 16:55:54 cmp001 kernel: [    1.102501] uhci_hcd 0000:00:08.0: UHCI Host Controller
Nov 16 16:55:54 cmp001 kernel: [    1.103543] uhci_hcd 0000:00:08.0: new USB bus registered, assigned bus number 2
Nov 16 16:55:54 cmp001 kernel: [    1.105058] uhci_hcd 0000:00:08.0: detected 2 ports
Nov 16 16:55:54 cmp001 kernel: [    1.106093] uhci_hcd 0000:00:08.0: irq 11, io base 0x0000c0c0
Nov 16 16:55:54 cmp001 kernel: [    1.107291] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:54 cmp001 kernel: [    1.108657] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:54 cmp001 kernel: [    1.110171] usb usb2: Product: UHCI Host Controller
Nov 16 16:55:54 cmp001 kernel: [    1.111172] usb usb2: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:54 cmp001 kernel: [    1.112616] usb usb2: SerialNumber: 0000:00:08.0
Nov 16 16:55:54 cmp001 kernel: [    1.113838] hub 2-0:1.0: USB hub found
Nov 16 16:55:54 cmp001 kernel: [    1.114686] hub 2-0:1.0: 2 ports detected
Nov 16 16:55:54 cmp001 kernel: [    1.140708] uhci_hcd 0000:00:08.1: UHCI Host Controller
Nov 16 16:55:54 cmp001 kernel: [    1.141832] uhci_hcd 0000:00:08.1: new USB bus registered, assigned bus number 3
Nov 16 16:55:54 cmp001 kernel: [    1.143342] uhci_hcd 0000:00:08.1: detected 2 ports
Nov 16 16:55:54 cmp001 kernel: [    1.144472] uhci_hcd 0000:00:08.1: irq 10, io base 0x0000c0e0
Nov 16 16:55:54 cmp001 kernel: [    1.145753] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:54 cmp001 kernel: [    1.147128] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:54 cmp001 kernel: [    1.148615] usb usb3: Product: UHCI Host Controller
Nov 16 16:55:54 cmp001 kernel: [    1.149712] usb usb3: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:54 cmp001 kernel: [    1.151152] usb usb3: SerialNumber: 0000:00:08.1
Nov 16 16:55:54 cmp001 kernel: [    1.152230] hub 3-0:1.0: USB hub found
Nov 16 16:55:54 cmp001 kernel: [    1.153109] hub 3-0:1.0: 2 ports detected
Nov 16 16:55:54 cmp001 kernel: [    1.164546] ata1.01: NODEV after polling detection
Nov 16 16:55:54 cmp001 kernel: [    1.164871] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 16 16:55:54 cmp001 kernel: [    1.166585] ata1.00: configured for MWDMA2
Nov 16 16:55:54 cmp001 kernel: [    1.168309] scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 16 16:55:54 cmp001 kernel: [    1.171021] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 16 16:55:54 cmp001 kernel: [    1.172410] cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 16 16:55:54 cmp001 kernel: [    1.173646] sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 16 16:55:54 cmp001 kernel: [    1.173682] sr 0:0:0:0: Attached scsi generic sg0 type 5
Nov 16 16:55:54 cmp001 kernel: [    1.183847] uhci_hcd 0000:00:08.2: UHCI Host Controller
Nov 16 16:55:54 cmp001 kernel: [    1.184998] uhci_hcd 0000:00:08.2: new USB bus registered, assigned bus number 4
Nov 16 16:55:54 cmp001 kernel: [    1.186561] uhci_hcd 0000:00:08.2: detected 2 ports
Nov 16 16:55:54 cmp001 kernel: [    1.187629] uhci_hcd 0000:00:08.2: irq 10, io base 0x0000c100
Nov 16 16:55:54 cmp001 kernel: [    1.188950] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:54 cmp001 kernel: [    1.190335] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:54 cmp001 kernel: [    1.191785] usb usb4: Product: UHCI Host Controller
Nov 16 16:55:54 cmp001 kernel: [    1.192868] usb usb4: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:54 cmp001 kernel: [    1.194169] usb usb4: SerialNumber: 0000:00:08.2
Nov 16 16:55:54 cmp001 kernel: [    1.195267] hub 4-0:1.0: USB hub found
Nov 16 16:55:54 cmp001 kernel: [    1.196136] hub 4-0:1.0: 2 ports detected
Nov 16 16:55:54 cmp001 kernel: [    1.197617] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 16 16:55:54 cmp001 kernel: [    1.200097] serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 16 16:55:54 cmp001 kernel: [    1.201249] serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 16 16:55:54 cmp001 kernel: [    1.202454] mousedev: PS/2 mouse device common for all mice
Nov 16 16:55:54 cmp001 kernel: [    1.203975] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 16 16:55:54 cmp001 kernel: [    1.206075] rtc_cmos 00:00: RTC can wake from S4
Nov 16 16:55:54 cmp001 kernel: [    1.207385] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
Nov 16 16:55:54 cmp001 kernel: [    1.208798] rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 16 16:55:54 cmp001 kernel: [    1.210065] i2c /dev entries driver
Nov 16 16:55:54 cmp001 kernel: [    1.210890] device-mapper: uevent: version 1.0.3
Nov 16 16:55:54 cmp001 kernel: [    1.211944] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: dm-devel@redhat.com
Nov 16 16:55:54 cmp001 kernel: [    1.213995] ledtrig-cpu: registered to indicate activity on CPUs
Nov 16 16:55:54 cmp001 kernel: [    1.215748] NET: Registered protocol family 10
Nov 16 16:55:54 cmp001 kernel: [    1.220151] Segment Routing with IPv6
Nov 16 16:55:54 cmp001 kernel: [    1.221061] NET: Registered protocol family 17
Nov 16 16:55:54 cmp001 kernel: [    1.222085] Key type dns_resolver registered
Nov 16 16:55:54 cmp001 kernel: [    1.223755] mce: Using 10 MCE banks
Nov 16 16:55:54 cmp001 kernel: [    1.224666] RAS: Correctable Errors collector initialized.
Nov 16 16:55:54 cmp001 kernel: [    1.225843] sched_clock: Marking stable (1224649762, 0)->(1626552805, -401903043)
Nov 16 16:55:54 cmp001 kernel: [    1.227774] registered taskstats version 1
Nov 16 16:55:54 cmp001 kernel: [    1.228746] Loading compiled-in X.509 certificates
Nov 16 16:55:54 cmp001 kernel: [    1.232033] Loaded X.509 cert 'Build time autogenerated kernel key: 1859b0531897959199376c446a0bd70df75fd1fc'
Nov 16 16:55:54 cmp001 kernel: [    1.234124] zswap: loaded using pool lzo/zbud
Nov 16 16:55:54 cmp001 kernel: [    1.238629] Key type big_key registered
Nov 16 16:55:54 cmp001 kernel: [    1.239506] Key type trusted registered
Nov 16 16:55:54 cmp001 kernel: [    1.242138] Key type encrypted registered
Nov 16 16:55:54 cmp001 kernel: [    1.243084] AppArmor: AppArmor sha1 policy hashing enabled
Nov 16 16:55:54 cmp001 kernel: [    1.244238] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
Nov 16 16:55:54 cmp001 kernel: [    1.245570] ima: Allocated hash algorithm: sha1
Nov 16 16:55:54 cmp001 kernel: [    1.246525] evm: HMAC attrs: 0x1
Nov 16 16:55:54 cmp001 kernel: [    1.247609]   Magic number: 11:820:944
Nov 16 16:55:54 cmp001 kernel: [    1.248496] tty tty1: hash matches
Nov 16 16:55:54 cmp001 kernel: [    1.249536] rtc_cmos 00:00: setting system clock to 2019-11-16 16:55:42 UTC (1573923342)
Nov 16 16:55:54 cmp001 kernel: [    1.251400] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
Nov 16 16:55:54 cmp001 kernel: [    1.252742] EDD information not available.
Nov 16 16:55:54 cmp001 kernel: [    1.256064] Freeing unused kernel image memory: 2432K
Nov 16 16:55:54 cmp001 kernel: [    1.276042] Write protecting the kernel read-only data: 20480k
Nov 16 16:55:54 cmp001 kernel: [    1.278250] Freeing unused kernel image memory: 2008K
Nov 16 16:55:54 cmp001 kernel: [    1.279934] Freeing unused kernel image memory: 1880K
Nov 16 16:55:54 cmp001 kernel: [    1.288129] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:55:54 cmp001 kernel: [    1.289603] x86/mm: Checking user space page tables
Nov 16 16:55:54 cmp001 kernel: [    1.297430] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:55:54 cmp001 kernel: [    1.365031]  vda: vda1 vda14 vda15
Nov 16 16:55:54 cmp001 kernel: [    1.368344] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 16 16:55:54 cmp001 kernel: [    1.370672] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 16 16:55:54 cmp001 kernel: [    1.378734] AVX version of gcm_enc/dec engaged.
Nov 16 16:55:54 cmp001 kernel: [    1.379790] AES CTR mode by8 optimization enabled
Nov 16 16:55:54 cmp001 kernel: [    1.391119] virtio_net virtio0 ens3: renamed from eth0
Nov 16 16:55:54 cmp001 kernel: [    1.392822] FDC 0 is a S82078B
Nov 16 16:55:54 cmp001 kernel: [    1.424219] virtio_net virtio1 ens4: renamed from eth1
Nov 16 16:55:54 cmp001 kernel: [    1.444117] virtio_net virtio3 ens6: renamed from eth3
Nov 16 16:55:54 cmp001 kernel: [    1.464226] virtio_net virtio2 ens5: renamed from eth2
Nov 16 16:55:54 cmp001 kernel: [    3.100028] raid6: sse2x1   gen()  7128 MB/s
Nov 16 16:55:54 cmp001 kernel: [    3.148030] raid6: sse2x1   xor()  5684 MB/s
Nov 16 16:55:54 cmp001 kernel: [    3.196027] raid6: sse2x2   gen()  9232 MB/s
Nov 16 16:55:54 cmp001 kernel: [    3.244026] raid6: sse2x2   xor()  6548 MB/s
Nov 16 16:55:54 cmp001 kernel: [    3.292025] raid6: sse2x4   gen() 11496 MB/s
Nov 16 16:55:54 cmp001 kernel: [    3.340023] raid6: sse2x4   xor()  7698 MB/s
Nov 16 16:55:54 cmp001 kernel: [    3.341119] raid6: using algorithm sse2x4 gen() 11496 MB/s
Nov 16 16:55:54 cmp001 kernel: [    3.342460] raid6: .... xor() 7698 MB/s, rmw enabled
Nov 16 16:55:54 cmp001 kernel: [    3.343692] raid6: using ssse3x2 recovery algorithm
Nov 16 16:55:54 cmp001 kernel: [    3.346058] xor: automatically using best checksumming function   avx       
Nov 16 16:55:54 cmp001 kernel: [    3.348888] async_tx: api initialized (async)
Nov 16 16:55:54 cmp001 kernel: [    3.404771] Btrfs loaded, crc32c=crc32c-intel
Nov 16 16:55:54 cmp001 kernel: [    3.464965] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
Nov 16 16:55:54 cmp001 kernel: [    3.477270] random: fast init done
Nov 16 16:55:54 cmp001 kernel: [    3.772346] ip_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:55:54 cmp001 kernel: [    3.781435] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:55:54 cmp001 kernel: [    3.787565] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Nov 16 16:55:54 cmp001 kernel: [    3.793370] systemd[1]: Detected virtualization kvm.
Nov 16 16:55:54 cmp001 kernel: [    3.794894] systemd[1]: Detected architecture x86-64.
Nov 16 16:55:54 cmp001 kernel: [    3.796482] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:55:54 cmp001 kernel: [    3.798441] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:55:54 cmp001 kernel: [    3.805681] systemd[1]: Set hostname to <cmp001>.
Nov 16 16:55:54 cmp001 kernel: [    4.093513] systemd[1]: Created slice System Slice.
Nov 16 16:55:54 cmp001 kernel: [    4.095616] systemd[1]: Listening on udev Control Socket.
Nov 16 16:55:54 cmp001 kernel: [    4.097793] systemd[1]: Listening on Syslog Socket.
Nov 16 16:55:54 cmp001 kernel: [    4.099763] systemd[1]: Listening on udev Kernel Socket.
Nov 16 16:55:54 cmp001 kernel: [    4.101934] systemd[1]: Listening on Journal Audit Socket.
Nov 16 16:55:54 cmp001 kernel: [    4.104358] systemd[1]: Created slice system-postfix.slice.
Nov 16 16:55:54 cmp001 kernel: [    4.125178] EXT4-fs (vda1): re-mounted. Opts: (null)
Nov 16 16:55:54 cmp001 kernel: [    4.136146] Loading iSCSI transport class v2.0-870.
Nov 16 16:55:54 cmp001 kernel: [    4.155137] iscsi: registered transport (tcp)
Nov 16 16:55:54 cmp001 kernel: [    4.204721] iscsi: registered transport (iser)
Nov 16 16:55:54 cmp001 kernel: [    4.214307] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Nov 16 16:55:54 cmp001 kernel: [    4.254901] systemd-journald[441]: Received request to flush runtime journal from PID 1
Nov 16 16:55:54 cmp001 kernel: [    4.597287] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 16 16:55:54 cmp001 kernel: [    4.598457] br-mgmt: port 1(ens4) entered blocking state
Nov 16 16:55:54 cmp001 kernel: [    4.598459] br-mgmt: port 1(ens4) entered disabled state
Nov 16 16:55:54 cmp001 kernel: [    4.598522] device ens4 entered promiscuous mode
Nov 16 16:55:54 cmp001 kernel: [    5.693508] audit: type=1400 audit(1573923346.940:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=1028 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.696117] audit: type=1400 audit(1573923346.944:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=1029 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.696120] audit: type=1400 audit(1573923346.944:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=1029 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.696122] audit: type=1400 audit(1573923346.944:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=1029 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.701117] audit: type=1400 audit(1573923346.948:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=1030 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.701127] audit: type=1400 audit(1573923346.948:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=1030 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.702396] audit: type=1400 audit(1573923346.948:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/tcpdump" pid=1032 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.703618] audit: type=1400 audit(1573923346.948:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/sbin/dhclient" pid=1027 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.703622] audit: type=1400 audit(1573923346.948:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1027 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [    5.703624] audit: type=1400 audit(1573923346.948:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=1027 comm="apparmor_parser"
Nov 16 16:55:54 cmp001 kernel: [   10.052504] ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 16 16:55:54 cmp001 kernel: [   10.056504] ISO 9660 Extensions: RRIP_1991A
Nov 16 16:55:54 cmp001 kernel: [   11.064935] br-mgmt: port 1(ens4) entered blocking state
Nov 16 16:55:54 cmp001 kernel: [   11.064939] br-mgmt: port 1(ens4) entered forwarding state
Nov 16 16:55:54 cmp001 kernel: [   11.065049] IPv6: ADDRCONF(NETDEV_UP): br-mgmt: link is not ready
Nov 16 16:55:54 cmp001 kernel: [   11.065069] IPv6: ADDRCONF(NETDEV_CHANGE): br-mgmt: link becomes ready
Nov 16 16:55:54 cmp001 kernel: [   11.133504] 8021q: 802.1Q VLAN Support v1.8
Nov 16 16:55:54 cmp001 kernel: [   11.133512] 8021q: adding VLAN 0 to HW filter on device ens3
Nov 16 16:55:54 cmp001 kernel: [   11.133566] 8021q: adding VLAN 0 to HW filter on device ens4
Nov 16 16:55:54 cmp001 kernel: [   11.133589] 8021q: adding VLAN 0 to HW filter on device ens5
Nov 16 16:55:54 cmp001 kernel: [   11.557915] openvswitch: Open vSwitch switching datapath
Nov 16 16:55:54 cmp001 dbus-daemon[1991]: [system] AppArmor D-Bus mediation is enabled
Nov 16 16:55:54 cmp001 systemd[1]: Started FUSE filesystem for LXC.
Nov 16 16:55:54 cmp001 systemd[1]: Started System Logging Service.
Nov 16 16:55:54 cmp001 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:55:54 cmp001 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:55:54 cmp001 systemd[1]: Started Open vSwitch.
Nov 16 16:55:54 cmp001 systemd[1]: Started Login Service.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Network.
Nov 16 16:55:54 cmp001 sysfsutils[1987]:  * Setting sysfs variables...
Nov 16 16:55:54 cmp001 systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
Nov 16 16:55:54 cmp001 grub-common[1950]:  * Recording successful boot for GRUB
Nov 16 16:55:54 cmp001 systemd[1]: Started Unattended Upgrades Shutdown.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Network is Online.
Nov 16 16:55:54 cmp001 sysfsutils[1987]:    ...done.
Nov 16 16:55:54 cmp001 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 16 16:55:54 cmp001 dbus-daemon[1991]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.2' (uid=0 pid=1967 comm="/usr/lib/accountsservice/accounts-daemon " label="unconfined")
Nov 16 16:55:54 cmp001 kernel: [   13.365271] new mount options do not match the existing superblock, will be ignored
Nov 16 16:55:54 cmp001 systemd[1]: Starting Availability of block devices...
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Remote File Systems (Pre).
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Remote File Systems.
Nov 16 16:55:54 cmp001 dnsmasq[2074]: dnsmasq: syntax check OK.
Nov 16 16:55:54 cmp001 lxcfs[2066]: mount namespace: 5
Nov 16 16:55:54 cmp001 lxcfs[2066]: hierarchies:
Nov 16 16:55:54 cmp001 lxcfs[2066]:   0: fd:   6: cpuset
Nov 16 16:55:54 cmp001 lxcfs[2066]:   1: fd:   7: blkio
Nov 16 16:55:54 cmp001 lxcfs[2066]:   2: fd:   8: pids
Nov 16 16:55:54 cmp001 lxcfs[2066]:   3: fd:   9: hugetlb
Nov 16 16:55:54 cmp001 lxcfs[2066]:   4: fd:  10: freezer
Nov 16 16:55:54 cmp001 lxcfs[2066]:   5: fd:  11: cpu,cpuacct
Nov 16 16:55:54 cmp001 lxcfs[2066]:   6: fd:  12: memory
Nov 16 16:55:54 cmp001 lxcfs[2066]:   7: fd:  13: devices
Nov 16 16:55:54 cmp001 lxcfs[2066]:   8: fd:  14: rdma
Nov 16 16:55:54 cmp001 lxcfs[2066]:   9: fd:  15: net_cls,net_prio
Nov 16 16:55:54 cmp001 lxcfs[2066]:  10: fd:  16: perf_event
Nov 16 16:55:54 cmp001 lxcfs[2066]:  11: fd:  17: name=systemd
Nov 16 16:55:54 cmp001 lxcfs[2066]:  12: fd:  18: unified
Nov 16 16:55:54 cmp001 systemd[1]: Starting LSB: automatic crash report generation...
Nov 16 16:55:54 cmp001 systemd[1]: Starting Permit User Sessions...
Nov 16 16:55:54 cmp001 systemd[1]: Starting The Salt Minion...
Nov 16 16:55:54 cmp001 systemd[1]: Started LSB: Set sysfs variables from /etc/sysfs.conf.
Nov 16 16:55:54 cmp001 systemd[1]: Started Availability of block devices.
Nov 16 16:55:54 cmp001 systemd[1]: Started Permit User Sessions.
Nov 16 16:55:54 cmp001 grub-common[1950]:    ...done.
Nov 16 16:55:54 cmp001 systemd[1]: Started LSB: Record successful boot for GRUB.
Nov 16 16:55:54 cmp001 systemd[1]: Starting Authorization Manager...
Nov 16 16:55:54 cmp001 systemd[1]: Starting Hold until boot process finishes up...
Nov 16 16:55:54 cmp001 systemd[1]: Starting Terminate Plymouth Boot Screen...
Nov 16 16:55:54 cmp001 systemd[1]: Started Hold until boot process finishes up.
Nov 16 16:55:54 cmp001 systemd[1]: Started Terminate Plymouth Boot Screen.
Nov 16 16:55:54 cmp001 systemd[1]: Started Serial Getty on ttyS0.
Nov 16 16:55:54 cmp001 snapd[1947]: AppArmor status: apparmor is enabled and all features are available
Nov 16 16:55:54 cmp001 systemd[1]: Starting Set console scheme...
Nov 16 16:55:54 cmp001 apport[2128]:  * Starting automatic crash report generation: apport
Nov 16 16:55:54 cmp001 systemd[1]: Started Set console scheme.
Nov 16 16:55:54 cmp001 systemd[1]: Created slice system-getty.slice.
Nov 16 16:55:54 cmp001 systemd[1]: Started Getty on tty1.
Nov 16 16:55:54 cmp001 systemd[1]: Reached target Login Prompts.
Nov 16 16:55:54 cmp001 apport[2128]:    ...done.
Nov 16 16:55:54 cmp001 systemd[1]: Started LSB: automatic crash report generation.
Nov 16 16:55:54 cmp001 polkitd[2156]: started daemon version 0.105 using authority implementation `local' version `0.105'
Nov 16 16:55:54 cmp001 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 16 16:55:54 cmp001 systemd[1]: Started Authorization Manager.
Nov 16 16:55:54 cmp001 accounts-daemon[1967]: started daemon version 0.6.45
Nov 16 16:55:54 cmp001 systemd[1]: Started Accounts Service.
Nov 16 16:55:54 cmp001 snapd[1947]: patch.go:64: Patching system state level 6 to sublevel 1...
Nov 16 16:55:54 cmp001 systemd[1]: Started OpenBSD Secure Shell server.
Nov 16 16:55:55 cmp001 dnsmasq[2252]: started, version 2.79 cachesize 150
Nov 16 16:55:55 cmp001 dnsmasq[2252]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
Nov 16 16:55:55 cmp001 dnsmasq[2252]: reading /etc/resolv.conf
Nov 16 16:55:55 cmp001 dnsmasq[2252]: using nameserver 8.8.8.8#53
Nov 16 16:55:55 cmp001 dnsmasq[2252]: read /etc/hosts - 11 addresses
Nov 16 16:55:55 cmp001 systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.
Nov 16 16:55:55 cmp001 systemd[1]: Reached target Host and Network Name Lookups.
Nov 16 16:55:55 cmp001 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:55:55 cmp001 systemd[1]: Started LXD - container startup/shutdown.
Nov 16 16:55:55 cmp001 snapd[1947]: daemon.go:338: started snapd/2.40+18.04 (series 16; classic) ubuntu/18.04 (amd64) linux/4.15.0-70-generic.
Nov 16 16:55:55 cmp001 systemd[1]: Started Snappy daemon.
Nov 16 16:55:55 cmp001 systemd[1]: Starting Wait until snapd is fully seeded...
Nov 16 16:55:55 cmp001 systemd[1]: Started Wait until snapd is fully seeded.
Nov 16 16:55:55 cmp001 systemd[1]: Starting Apply the settings specified in cloud-config...
Nov 16 16:55:55 cmp001 postfix/postfix-script[2510]: starting the Postfix mail system
Nov 16 16:55:55 cmp001 postfix/master[2514]: daemon started -- version 3.3.0, configuration /etc/postfix
Nov 16 16:55:55 cmp001 systemd[1]: Started Postfix Mail Transport Agent (instance -).
Nov 16 16:55:55 cmp001 systemd[1]: Starting Postfix Mail Transport Agent...
Nov 16 16:55:55 cmp001 systemd[1]: Started Postfix Mail Transport Agent.
Nov 16 16:55:55 cmp001 systemd[1]: Started The Salt Minion.
Nov 16 16:55:55 cmp001 systemd[1]: Reached target Multi-User System.
Nov 16 16:55:55 cmp001 systemd[1]: Reached target Graphical Interface.
Nov 16 16:55:55 cmp001 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Nov 16 16:55:55 cmp001 systemd[1]: Started Update UTMP about System Runlevel Changes.
Nov 16 16:55:55 cmp001 cloud-init[2383]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:config' at Sat, 16 Nov 2019 16:55:55 +0000. Up 14.38 seconds.
Nov 16 16:55:55 cmp001 systemd[1]: Started Apply the settings specified in cloud-config.
Nov 16 16:55:55 cmp001 systemd[1]: Starting Execute cloud user/final scripts...
Nov 16 16:55:56 cmp001 cloud-init[2567]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:final' at Sat, 16 Nov 2019 16:55:56 +0000. Up 15.05 seconds.
Nov 16 16:55:56 cmp001 cloud-init[2567]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 finished at Sat, 16 Nov 2019 16:55:56 +0000. Datasource DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 15.15 seconds
Nov 16 16:55:56 cmp001 systemd[1]: Started Execute cloud user/final scripts.
Nov 16 16:55:56 cmp001 systemd[1]: Reached target Cloud-init target.
Nov 16 16:55:56 cmp001 systemd[1]: Startup finished in 3.720s (kernel) + 11.490s (userspace) = 15.211s.
Nov 16 16:56:00 cmp001 snapd[1947]: daemon.go:576: gracefully waiting for running hooks
Nov 16 16:56:00 cmp001 snapd[1947]: daemon.go:578: done waiting for running hooks
Nov 16 16:56:00 cmp001 snapd[1947]: daemon stop requested to wait for socket activation
Nov 16 16:56:09 cmp001 kernel: [   28.657490] random: crng init done
Nov 16 16:56:09 cmp001 kernel: [   28.657495] random: 7 urandom warning(s) missed due to ratelimiting
Nov 16 16:56:16 cmp001 systemd-timesyncd[947]: Synchronized to time server 91.189.89.198:123 (ntp.ubuntu.com).
Nov 16 16:56:30 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install python-oauth python-m2crypto.
Nov 16 16:56:38 cmp001 systemd[1]: Reloading.
Nov 16 16:56:39 cmp001 salt-minion[2139]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:56:41 cmp001 salt-minion[2139]: .......................................................................................................................................................................................................................................................................................................................++++
Nov 16 16:56:41 cmp001 salt-minion[2139]: ..................................++++
Nov 16 16:56:43 cmp001 salt-minion[2139]: [WARNING ] State for file: /etc/kubernetes/ssl/ca-kubernetes.crt - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:56:43 cmp001 salt-minion[2139]: ......................................................................................................++++
Nov 16 16:56:44 cmp001 salt-minion[2139]: ......................................................................++++
Nov 16 16:56:45 cmp001 salt-minion[2139]: ....................++++
Nov 16 16:56:45 cmp001 salt-minion[2139]: ...........................................................................................................++++
Nov 16 16:56:46 cmp001 salt-minion[2139]: [WARNING ] State for file: /var/lib/etcd/ca.pem - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:56:47 cmp001 salt-minion[2139]: ..................................................................................................................................................................................................................++++
Nov 16 16:56:47 cmp001 salt-minion[2139]: ........................++++
Nov 16 16:56:48 cmp001 salt-minion[2139]: ...............................................++++
Nov 16 16:56:49 cmp001 salt-minion[2139]: ...............................................................................................++++
Nov 16 16:56:50 cmp001 salt-minion[2139]: ..................................++++
Nov 16 16:56:50 cmp001 salt-minion[2139]: ........................................................++++
Nov 16 16:56:52 cmp001 salt-minion[2139]: .................................................................................................................++++
Nov 16 16:56:53 cmp001 salt-minion[2139]: ..................................................................................................................................................................................................................++++
Nov 16 16:56:54 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install ntp.
Nov 16 16:56:56 cmp001 systemd[1]: Reloading.
Nov 16 16:56:57 cmp001 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:56:57 cmp001 systemd[1]: Started ntp-systemd-netif.path.
Nov 16 16:56:57 cmp001 kernel: [   77.078374] kauditd_printk_skb: 5 callbacks suppressed
Nov 16 16:56:57 cmp001 kernel: [   77.078376] audit: type=1400 audit(1573923417.615:17): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/ntpd" pid=3894 comm="apparmor_parser"
Nov 16 16:56:57 cmp001 systemd[1]: Reloading.
Nov 16 16:56:58 cmp001 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:56:58 cmp001 systemd[1]: Stopping Network Time Synchronization...
Nov 16 16:56:58 cmp001 systemd[1]: Starting Network Time Service...
Nov 16 16:56:58 cmp001 ntpd[4010]: ntpd 4.2.8p10@1.3728-o (1): Starting
Nov 16 16:56:58 cmp001 ntpd[4010]: Command line: /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 112:118
Nov 16 16:56:58 cmp001 systemd[1]: Started Network Time Service.
Nov 16 16:56:58 cmp001 systemd[1]: Stopped Network Time Synchronization.
Nov 16 16:56:58 cmp001 systemd[1]: Reloading.
Nov 16 16:56:58 cmp001 ntpd[4016]: proto: precision = 0.063 usec (-24)
Nov 16 16:56:58 cmp001 ntpd[4016]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
Nov 16 16:56:58 cmp001 ntpd[4016]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2020-06-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen and drop on 0 v6wildcard [::]:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 2 lo 127.0.0.1:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 3 ens3 192.168.11.36:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 4 br-mgmt 172.16.10.55:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 5 lo [::1]:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 6 ens3 [fe80::5054:ff:feda:c364%2]:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 7 ens4 [fe80::5054:ff:fe88:ddd4%3]:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 8 ens5 [fe80::5054:ff:fe5d:c357%4]:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 9 br-mgmt [fe80::5054:ff:fe88:ddd4%6]:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listen normally on 10 ens5.1000 [fe80::5054:ff:fe5d:c357%7]:123
Nov 16 16:56:58 cmp001 ntpd[4016]: Listening on routing socket on fd #27 for interface updates
Nov 16 16:56:59 cmp001 ntpd[4016]: Soliciting pool server 162.159.200.123
Nov 16 16:57:00 cmp001 ntpd[4016]: Soliciting pool server 192.36.143.130
Nov 16 16:57:00 cmp001 ntpd[4016]: Soliciting pool server 193.182.111.143
Nov 16 16:57:01 cmp001 ntpd[4016]: Soliciting pool server 83.168.200.198
Nov 16 16:57:01 cmp001 ntpd[4016]: Soliciting pool server 5.186.65.2
Nov 16 16:57:01 cmp001 ntpd[4016]: Soliciting pool server 162.159.200.1
Nov 16 16:57:01 cmp001 systemd[1]: Started /bin/systemctl restart ntp.service.
Nov 16 16:57:01 cmp001 ntpd[4016]: ntpd exiting on signal 15 (Terminated)
Nov 16 16:57:01 cmp001 systemd[1]: Stopping Network Time Service...
Nov 16 16:57:01 cmp001 ntpd[4016]: 162.159.200.123 local addr 192.168.11.36 -> <null>
Nov 16 16:57:01 cmp001 ntpd[4016]: 192.36.143.130 local addr 192.168.11.36 -> <null>
Nov 16 16:57:01 cmp001 ntpd[4016]: 193.182.111.143 local addr 192.168.11.36 -> <null>
Nov 16 16:57:01 cmp001 ntpd[4016]: 83.168.200.198 local addr 192.168.11.36 -> <null>
Nov 16 16:57:01 cmp001 ntpd[4016]: 5.186.65.2 local addr 192.168.11.36 -> <null>
Nov 16 16:57:01 cmp001 ntpd[4016]: 162.159.200.1 local addr 192.168.11.36 -> <null>
Nov 16 16:57:01 cmp001 systemd[1]: Stopped Network Time Service.
Nov 16 16:57:01 cmp001 systemd[1]: Starting Network Time Service...
Nov 16 16:57:01 cmp001 ntpd[4336]: ntpd 4.2.8p10@1.3728-o (1): Starting
Nov 16 16:57:01 cmp001 ntpd[4336]: Command line: /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 112:118
Nov 16 16:57:01 cmp001 systemd[1]: Started Network Time Service.
Nov 16 16:57:02 cmp001 ntpd[4339]: proto: precision = 0.110 usec (-23)
Nov 16 16:57:02 cmp001 ntpd[4339]: restrict 0.0.0.0: KOD does nothing without LIMITED.
Nov 16 16:57:02 cmp001 ntpd[4339]: restrict ::: KOD does nothing without LIMITED.
Nov 16 16:57:02 cmp001 ntpd[4339]: switching logging to file /var/log/ntp.log
Nov 16 16:57:05 cmp001 salt-minion[2139]: [INFO    ] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
Nov 16 16:57:05 cmp001 salt-minion[2139]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'salt-minion.service'] in directory '/root'
Nov 16 16:57:05 cmp001 systemd[1]: Started /bin/systemctl restart salt-minion.service.
Nov 16 16:57:05 cmp001 systemd[1]: Stopping The Salt Minion...
Nov 16 16:57:05 cmp001 salt-minion[2139]: [WARNING ] Minion received a SIGTERM. Exiting.
Nov 16 16:57:06 cmp001 salt-minion[2139]: The Salt Minion is shutdown. Minion received a SIGTERM. Exited.
Nov 16 16:57:06 cmp001 systemd[1]: Stopped The Salt Minion.
Nov 16 16:57:06 cmp001 systemd[1]: salt-minion.service: Found left-over process 4342 (bash) in control group while starting unit. Ignoring.
Nov 16 16:57:06 cmp001 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Nov 16 16:57:06 cmp001 systemd[1]: salt-minion.service: Found left-over process 4408 (salt-call) in control group while starting unit. Ignoring.
Nov 16 16:57:06 cmp001 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Nov 16 16:57:06 cmp001 systemd[1]: Starting The Salt Minion...
Nov 16 16:57:06 cmp001 systemd[1]: Started The Salt Minion.
Nov 16 16:57:06 cmp001 salt-minion[2139]: local:
Nov 16 16:57:06 cmp001 salt-minion[2139]:     True
Nov 16 16:57:06 cmp001 salt-minion[4458]: [INFO    ] Setting up the Salt Minion "cmp001.mcp-k8s-calico-noha.local"
Nov 16 16:57:06 cmp001 salt-minion[4458]: [INFO    ] Starting up the Salt Minion
Nov 16 16:57:06 cmp001 salt-minion[4458]: [INFO    ] Starting pull socket on /var/run/salt/minion/minion_event_23b3bf79fe_pull.ipc
Nov 16 16:57:07 cmp001 salt-minion[4458]: [INFO    ] Creating minion process manager
Nov 16 16:57:09 cmp001 salt-minion[4458]: [INFO    ] Executing command ['date', '+%z'] in directory '/root'
Nov 16 16:57:09 cmp001 salt-minion[4458]: [INFO    ] Updating job settings for scheduled job: __mine_interval
Nov 16 16:57:09 cmp001 salt-minion[4458]: [INFO    ] Added mine.update to scheduler
Nov 16 16:57:09 cmp001 salt-minion[4458]: [INFO    ] Minion is starting as user 'root'
Nov 16 16:57:09 cmp001 salt-minion[4458]: [INFO    ] Minion is ready to receive requests!
Nov 16 16:57:53 cmp001 salt-minion[4458]: [INFO    ] User sudo_ubuntu Executing command state.sls with jid 20191116165753766584
Nov 16 16:57:53 cmp001 salt-minion[4458]: [INFO    ] Starting a new job with PID 4549
Nov 16 16:57:54 cmp001 salt-minion[4458]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:57:54 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/init.sls'
Nov 16 16:57:54 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:54 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/calico.sls'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/service.sls'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/_common.sls'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/kube-proxy.sls'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Running state [/usr/bin/calicoctl] at time 16:57:55.604315
Nov 16 16:57:55 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/usr/bin/calicoctl]
Nov 16 16:57:57 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:57:57 cmp001 salt-minion[4458]: New file
Nov 16 16:57:57 cmp001 salt-minion[4458]: [INFO    ] Completed state [/usr/bin/calicoctl] at time 16:57:57.163008 duration_in_ms=1558.692
Nov 16 16:57:57 cmp001 salt-minion[4458]: [INFO    ] Running state [/usr/bin/birdcl] at time 16:57:57.163297
Nov 16 16:57:57 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/usr/bin/birdcl]
Nov 16 16:57:57 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:57:57 cmp001 salt-minion[4458]: New file
Nov 16 16:57:57 cmp001 salt-minion[4458]: [INFO    ] Completed state [/usr/bin/birdcl] at time 16:57:57.622958 duration_in_ms=459.66
Nov 16 16:57:57 cmp001 salt-minion[4458]: [INFO    ] Running state [/opt/cni/bin/calico] at time 16:57:57.623226
Nov 16 16:57:57 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/opt/cni/bin/calico]
Nov 16 16:57:59 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:57:59 cmp001 salt-minion[4458]: New file
Nov 16 16:57:59 cmp001 salt-minion[4458]: [INFO    ] Completed state [/opt/cni/bin/calico] at time 16:57:59.067623 duration_in_ms=1444.395
Nov 16 16:57:59 cmp001 salt-minion[4458]: [INFO    ] Running state [/opt/cni/bin/calico-ipam] at time 16:57:59.068045
Nov 16 16:57:59 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/opt/cni/bin/calico-ipam]
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:58:00 cmp001 salt-minion[4458]: New file
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Completed state [/opt/cni/bin/calico-ipam] at time 16:58:00.222616 duration_in_ms=1154.57
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/cni/net.d/10-calico.conf] at time 16:58:00.222876
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/cni/net.d/10-calico.conf]
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calico.conf'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:58:00 cmp001 salt-minion[4458]: New file
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/cni/net.d/10-calico.conf] at time 16:58:00.354635 duration_in_ms=131.757
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/calico/network-environment] at time 16:58:00.355008
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/calico/network-environment]
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/network-environment.pool'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:58:00 cmp001 salt-minion[4458]: New file
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/calico/network-environment] at time 16:58:00.452607 duration_in_ms=97.598
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/calico/calicoctl.cfg] at time 16:58:00.452902
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/calico/calicoctl.cfg]
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calicoctl.cfg.pool'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:58:00 cmp001 salt-minion[4458]: New file
Nov 16 16:58:00 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/calico/calicoctl.cfg] at time 16:58:00.542844 duration_in_ms=89.942
Nov 16 16:58:01 cmp001 salt-minion[4458]: [INFO    ] Running state [containerd] at time 16:58:01.173426
Nov 16 16:58:01 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [containerd]
Nov 16 16:58:01 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:01 cmp001 salt-minion[4458]: [INFO    ] Executing command ['apt-cache', '-q', 'policy', 'containerd'] in directory '/root'
Nov 16 16:58:02 cmp001 salt-minion[4458]: [INFO    ] Executing command ['apt-get', '-q', 'update'] in directory '/root'
Nov 16 16:58:04 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:05 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'containerd'] in directory '/root'
Nov 16 16:58:05 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install containerd.
Nov 16 16:58:08 cmp001 salt-minion[4458]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165808820551
Nov 16 16:58:08 cmp001 salt-minion[4458]: [INFO    ] Starting a new job with PID 5298
Nov 16 16:58:08 cmp001 salt-minion[4458]: [INFO    ] Returning information for job: 20191116165808820551
Nov 16 16:58:09 cmp001 systemd[1]: Reloading.
Nov 16 16:58:09 cmp001 systemd[1]: Reloading.
Nov 16 16:58:10 cmp001 systemd[1]: Starting containerd container runtime...
Nov 16 16:58:10 cmp001 systemd[1]: Started containerd container runtime.
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.126406839Z" level=info msg="starting containerd" revision= version="1.2.6-0ubuntu1~18.04.2"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.127565281Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.127969016Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.128385907Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.128594118Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.140561633Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.140831938Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 cmp001 kernel: [  149.599609] aufs 4.15-20180219
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.141153926Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.141603389Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.141749908Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.141940951Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.142083953Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.146236221Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.146579935Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.146845387Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.147065586Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.147267785Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.147461887Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.147667237Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.147854041Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.148029249Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.148234401Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.148544310Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.148823258Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.149559536Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.149789760Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.150009930Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.150190322Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.150367286Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.150542128Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.150712621Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.150889870Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.151049525Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.151229241Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.151421170Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.151794008Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.151964762Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.152172988Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.152384033Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.152618609Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.152899841Z" level=info msg="Connect containerd service"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.153180774Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.154175706Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.154317214Z" level=info msg="Start subscribing containerd event"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.154390496Z" level=info msg="Start recovering state"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.154413719Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.154433105Z" level=info msg="containerd successfully booted in 0.028707s"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.154536875Z" level=info msg="Start event monitor"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.154565374Z" level=info msg="Start snapshots syncer"
Nov 16 16:58:10 cmp001 containerd[5379]: time="2019-11-16T16:58:10.154577087Z" level=info msg="Start streaming server"
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Made the following changes:
Nov 16 16:58:12 cmp001 salt-minion[4458]: 'containerd' changed from 'absent' to '1.2.6-0ubuntu1~18.04.2'
Nov 16 16:58:12 cmp001 salt-minion[4458]: 'runc' changed from 'absent' to '1.0.0~rc7+git20190403.029124da-0ubuntu1~18.04.2'
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Completed state [containerd] at time 16:58:12.827616 duration_in_ms=11654.19
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/containerd/config.toml] at time 16:58:12.831404
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/containerd/config.toml]
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/containerd/config.toml'
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:58:12 cmp001 salt-minion[4458]: New file
Nov 16 16:58:12 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/containerd/config.toml] at time 16:58:12.931405 duration_in_ms=100.001
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Running state [containerd] at time 16:58:13.437847
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing state service.running for [containerd]
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'status', 'containerd.service', '-n', '0'] in directory '/root'
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-active', 'containerd.service'] in directory '/root'
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'containerd.service'] in directory '/root'
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] The service containerd is already running
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Completed state [containerd] at time 16:58:13.490859 duration_in_ms=53.011
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Running state [containerd] at time 16:58:13.492028
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing state service.mod_watch for [containerd]
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-active', 'containerd.service'] in directory '/root'
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'containerd.service'] in directory '/root'
Nov 16 16:58:13 cmp001 systemd[1]: Started /bin/systemctl restart containerd.service.
Nov 16 16:58:13 cmp001 systemd[1]: Stopping containerd container runtime...
Nov 16 16:58:13 cmp001 containerd[5379]: time="2019-11-16T16:58:13.527761187Z" level=info msg="Stop CRI service"
Nov 16 16:58:13 cmp001 systemd[1]: Stopped containerd container runtime.
Nov 16 16:58:13 cmp001 systemd[1]: Starting containerd container runtime...
Nov 16 16:58:13 cmp001 systemd[1]: Started containerd container runtime.
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] {'containerd': True}
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Completed state [containerd] at time 16:58:13.537129 duration_in_ms=45.1
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/systemd/system/calico-node.service] at time 16:58:13.538243
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/systemd/system/calico-node.service]
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calico-node.service.ctr'
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.564264135Z" level=info msg="starting containerd" revision= version="1.2.6-0ubuntu1~18.04.2"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.565392566Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.565464502Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.565717484Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.565821991Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.569377123Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.569522632Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.569693099Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.569992851Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.570110833Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.570225352Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.570325595Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.570512652Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.570621481Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.570747953Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.570848049Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.570944989Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.571046819Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.571149175Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.571246255Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.571348805Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.571451271Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.571580898Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.571718012Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.572166794Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.572281691Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.572409121Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.572512437Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.572617280Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.572716695Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.572813362Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.572912245Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.573008591Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.573110214Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.573211564Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.573394663Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.573507239Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.573603435Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.573701375Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.573866043Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.574037240Z" level=info msg="Connect containerd service"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.574243640Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.574754553Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.575069845Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.575177623Z" level=info msg="containerd successfully booted in 0.011377s"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.575722617Z" level=info msg="Start subscribing containerd event"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.575804326Z" level=info msg="Start recovering state"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.575939286Z" level=info msg="Start event monitor"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.575969588Z" level=info msg="Start snapshots syncer"
Nov 16 16:58:13 cmp001 containerd[5688]: time="2019-11-16T16:58:13.575989068Z" level=info msg="Start streaming server"
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:58:13 cmp001 salt-minion[4458]: New file
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/systemd/system/calico-node.service] at time 16:58:13.581575 duration_in_ms=43.331
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Running state [/var/lib/calico] at time 16:58:13.581843
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing state file.directory for [/var/lib/calico]
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] {'/var/lib/calico': 'New Dir'}
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Completed state [/var/lib/calico] at time 16:58:13.583442 duration_in_ms=1.599
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Running state [/var/log/calico] at time 16:58:13.583672
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing state file.directory for [/var/log/calico]
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] {'/var/log/calico': 'New Dir'}
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Completed state [/var/log/calico] at time 16:58:13.585205 duration_in_ms=1.533
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Running state [calico-node] at time 16:58:13.587429
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing state service.running for [calico-node]
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'status', 'calico-node.service', '-n', '0'] in directory '/root'
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-active', 'calico-node.service'] in directory '/root'
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:13 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'calico-node.service'] in directory '/root'
Nov 16 16:58:13 cmp001 systemd[1]: Started /bin/systemctl start calico-node.service.
Nov 16 16:58:13 cmp001 systemd[1]: Starting calico-node...
Nov 16 16:58:13 cmp001 ctr[5731]: ctr: container "calico-node" in namespace "default": not found
Nov 16 16:58:13 cmp001 ctr[5751]: time="2019-11-16T16:58:13Z" level=error msg="failed to delete container "calico-node"" error="container "calico-node" in namespace "default": not found"
Nov 16 16:58:13 cmp001 ctr[5751]: ctr: container "calico-node" in namespace "default": not found
Nov 16 16:58:13 cmp001 ctr[5759]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:14 cmp001 ctr[5759]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 cmp001 ctr[5759]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 cmp001 ctr[5759]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 cmp001 ctr[5759]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 cmp001 ctr[5759]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 cmp001 ctr[5759]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 cmp001 ctr[5759]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 cmp001 ctr[5759]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:16 cmp001 ctr[5759]: elapsed: 2.3 s                                                                        total:  19.6 M (8.5 MiB/s)
Nov 16 16:58:16 cmp001 ctr[5759]: unpacking linux/amd64 sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18...
Nov 16 16:58:18 cmp001 ctr[5759]: done
Nov 16 16:58:18 cmp001 systemd[1]: Started calico-node.
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-active', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp001 containerd[5688]: time="2019-11-16T16:58:18.419467614Z" level=info msg="shim containerd-shim started" address="/containerd-shim/default/calico-node/shim.sock" debug=false pid=5815
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp001 systemd[1]: Started /bin/systemctl enable calico-node.service.
Nov 16 16:58:18 cmp001 systemd[1]: Reloading.
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.565 [INFO][9] startup.go 264: Early log level set to info
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.565 [INFO][9] startup.go 280: Using NODENAME environment for node name
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.565 [INFO][9] startup.go 292: Determined node name: cmp001
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.615 [INFO][9] startup.go 105: Skipping datastore connection test
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.616 [INFO][9] startup.go 365: Building new node resource Name="cmp001"
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.616 [INFO][9] startup.go 380: Initialize BGP data
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.617 [INFO][9] startup.go 474: Using IPv4 address from environment: IP=172.16.10.55
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.618 [INFO][9] startup.go 507: IPv4 address 172.16.10.55 discovered on interface br-mgmt
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.618 [INFO][9] startup.go 450: Node IPv4 changed, will check for conflicts
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.619 [INFO][9] startup.go 640: Using AS number specified in environment (AS=64512)
Nov 16 16:58:18 cmp001 ctr[5792]: 2019-11-16 16:58:18.626 [INFO][9] startup.go 189: Using node name: cmp001
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] {'calico-node': True}
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Completed state [calico-node] at time 16:58:18.658412 duration_in_ms=5070.983
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Running state [curl] at time 16:58:18.660260
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [curl]
Nov 16 16:58:18 cmp001 ctr[5792]: Calico node started successfully
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] All specified packages are already installed
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Completed state [curl] at time 16:58:18.872551 duration_in_ms=212.29
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Running state [git] at time 16:58:18.873010
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [git]
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] All specified packages are already installed
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Completed state [git] at time 16:58:18.884034 duration_in_ms=11.025
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Running state [apt-transport-https] at time 16:58:18.884455
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [apt-transport-https]
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] All specified packages are already installed
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Completed state [apt-transport-https] at time 16:58:18.895118 duration_in_ms=10.663
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Running state [python-apt] at time 16:58:18.895508
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [python-apt]
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] All specified packages are already installed
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Completed state [python-apt] at time 16:58:18.906213 duration_in_ms=10.705
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Running state [socat] at time 16:58:18.906603
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [socat]
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:18 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'socat'] in directory '/root'
Nov 16 16:58:18 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install socat.
Nov 16 16:58:19 cmp001 kernel: [  159.433621] Netfilter messages via NETLINK v0.30.
Nov 16 16:58:19 cmp001 kernel: [  159.437559] ip_set: protocol 6
Nov 16 16:58:20 cmp001 kernel: [  159.595403] ip6_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:58:22 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:22 cmp001 salt-minion[4458]: [INFO    ] Made the following changes:
Nov 16 16:58:22 cmp001 salt-minion[4458]: 'socat' changed from 'absent' to '1.7.3.2-2ubuntu2'
Nov 16 16:58:22 cmp001 salt-minion[4458]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] Completed state [socat] at time 16:58:23.043776 duration_in_ms=4137.172
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] Running state [openssl] at time 16:58:23.050331
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [openssl]
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] All specified packages are already installed
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] Completed state [openssl] at time 16:58:23.760486 duration_in_ms=710.155
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] Running state [conntrack] at time 16:58:23.760873
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [conntrack]
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:23 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'conntrack'] in directory '/root'
Nov 16 16:58:23 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install conntrack.
Nov 16 16:58:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:27 cmp001 salt-minion[4458]: [INFO    ] Made the following changes:
Nov 16 16:58:27 cmp001 salt-minion[4458]: 'conntrack' changed from 'absent' to '1:1.4.4+snapshot20161117-6ubuntu2'
Nov 16 16:58:27 cmp001 salt-minion[4458]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:27 cmp001 salt-minion[4458]: [INFO    ] Completed state [conntrack] at time 16:58:27.970151 duration_in_ms=4209.277
Nov 16 16:58:27 cmp001 salt-minion[4458]: [INFO    ] Running state [nfs-common] at time 16:58:27.976359
Nov 16 16:58:27 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [nfs-common]
Nov 16 16:58:28 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:28 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nfs-common'] in directory '/root'
Nov 16 16:58:28 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install nfs-common.
Nov 16 16:58:31 cmp001 systemd[1]: Reloading.
Nov 16 16:58:32 cmp001 systemd[1]: message repeated 4 times: [ Reloading.]
Nov 16 16:58:32 cmp001 systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 16 16:58:32 cmp001 systemd[1]: Starting RPC bind portmap service...
Nov 16 16:58:32 cmp001 systemd[1]: Started RPC bind portmap service.
Nov 16 16:58:32 cmp001 systemd[1]: Reached target RPC Port Mapper.
Nov 16 16:58:33 cmp001 systemd[1]: Reloading.
Nov 16 16:58:34 cmp001 systemd[1]: message repeated 4 times: [ Reloading.]
Nov 16 16:58:37 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Made the following changes:
Nov 16 16:58:38 cmp001 salt-minion[4458]: 'keyutils' changed from 'absent' to '1.5.9-9.2ubuntu2'
Nov 16 16:58:38 cmp001 salt-minion[4458]: 'nfs-common' changed from 'absent' to '1:1.3.4-2.1ubuntu5.2'
Nov 16 16:58:38 cmp001 salt-minion[4458]: 'rpcbind' changed from 'absent' to '0.2.3-0.6'
Nov 16 16:58:38 cmp001 salt-minion[4458]: 'libtirpc1' changed from 'absent' to '0.2.5-1.2ubuntu0.1'
Nov 16 16:58:38 cmp001 salt-minion[4458]: 'nfs-client' changed from 'absent' to '1'
Nov 16 16:58:38 cmp001 salt-minion[4458]: 'libnfsidmap2' changed from 'absent' to '0.25-5.1'
Nov 16 16:58:38 cmp001 salt-minion[4458]: 'portmap' changed from 'absent' to '1'
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Completed state [nfs-common] at time 16:58:38.083846 duration_in_ms=10107.486
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Running state [cifs-utils] at time 16:58:38.090315
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Executing state pkg.installed for [cifs-utils]
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cifs-utils'] in directory '/root'
Nov 16 16:58:38 cmp001 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install cifs-utils.
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165838849070
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Starting a new job with PID 7702
Nov 16 16:58:38 cmp001 salt-minion[4458]: [INFO    ] Returning information for job: 20191116165838849070
Nov 16 16:58:51 cmp001 salt-minion[4458]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:51 cmp001 salt-minion[4458]: [INFO    ] Made the following changes:
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'python2.7-ldb' changed from 'absent' to '1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'python-ldb' changed from 'absent' to '2:1.2.3-1ubuntu0.1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libtdb1' changed from 'absent' to '1.3.15-2'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libavahi-common3' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'python2.7-talloc' changed from 'absent' to '1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libavahi-client3' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libwbclient0' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libavahi-common-data' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libcups2' changed from 'absent' to '2.2.7-1ubuntu2.7'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'cifs-utils' changed from 'absent' to '2:6.8-1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'samba-common' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'python2.7-tdb' changed from 'absent' to '1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'samba-libs' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libldb1' changed from 'absent' to '2:1.2.3-1ubuntu0.1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libtevent0' changed from 'absent' to '0.9.34-1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'python-talloc' changed from 'absent' to '2.1.10-2ubuntu1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'samba-common-bin' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'python-samba' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libtalloc2' changed from 'absent' to '2.1.10-2ubuntu1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'python2.7-samba' changed from 'absent' to '1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'libjansson4' changed from 'absent' to '2.11-1'
Nov 16 16:58:51 cmp001 salt-minion[4458]: 'python-tdb' changed from 'absent' to '1.3.15-2'
Nov 16 16:58:51 cmp001 salt-minion[4458]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:51 cmp001 salt-minion[4458]: [INFO    ] Completed state [cifs-utils] at time 16:58:51.649298 duration_in_ms=13558.982
Nov 16 16:58:51 cmp001 salt-minion[4458]: [INFO    ] Running state [/usr/bin/hyperkube] at time 16:58:51.654856
Nov 16 16:58:51 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/usr/bin/hyperkube]
Nov 16 16:59:08 cmp001 salt-minion[4458]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165908909449
Nov 16 16:59:08 cmp001 salt-minion[4458]: [INFO    ] Starting a new job with PID 8377
Nov 16 16:59:08 cmp001 salt-minion[4458]: [INFO    ] Returning information for job: 20191116165908909449
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:22 cmp001 salt-minion[4458]: New file
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] Completed state [/usr/bin/hyperkube] at time 16:59:22.098084 duration_in_ms=30443.227
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] Running state [/usr/bin/kubectl] at time 16:59:22.099235
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] Executing state file.symlink for [/usr/bin/kubectl]
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] {'new': '/usr/bin/kubectl'}
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] Completed state [/usr/bin/kubectl] at time 16:59:22.204815 duration_in_ms=105.579
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] Running state [/tmp/crictl] at time 16:59:22.212108
Nov 16 16:59:22 cmp001 salt-minion[4458]: [INFO    ] Executing state archive.extracted for [/tmp/crictl]
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Executing command ['tar', 'xz', '-f', '/var/cache/salt/minion/extrn_files/base/github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz'] in directory '/tmp/crictl/'
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Executing command ['tar', '--version'] in directory '/root'
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] {'extracted_files': 'no tar output so far', 'directories_created': ['/tmp/crictl/']}
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Completed state [/tmp/crictl] at time 16:59:24.617694 duration_in_ms=2405.581
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Running state [/usr/local/bin/crictl] at time 16:59:24.619193
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/usr/local/bin/crictl]
Nov 16 16:59:24 cmp001 salt-minion[4458]: [WARNING ] Use of argument owner found, "owner" is invalid, please use "user"
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:24 cmp001 salt-minion[4458]: New file
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Completed state [/usr/local/bin/crictl] at time 16:59:24.897559 duration_in_ms=278.368
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/crictl.yaml] at time 16:59:24.897831
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/crictl.yaml]
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:24 cmp001 salt-minion[4458]: New file
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/crictl.yaml] at time 16:59:24.977787 duration_in_ms=79.956
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/criproxy] at time 16:59:24.978122
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Executing state file.absent for [/etc/criproxy]
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] File /etc/criproxy is not present
Nov 16 16:59:24 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/criproxy] at time 16:59:24.979241 duration_in_ms=1.119
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Running state [criproxy] at time 16:59:25.529186
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Executing state service.dead for [criproxy]
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'status', 'criproxy.service', '-n', '0'] in directory '/root'
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] The named service criproxy is not available
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Completed state [criproxy] at time 16:59:25.553831 duration_in_ms=24.645
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/systemd/system/kubelet.service] at time 16:59:25.554224
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kubelet.service]
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kubelet.service'
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:25 cmp001 salt-minion[4458]: New file
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/systemd/system/kubelet.service] at time 16:59:25.914110 duration_in_ms=359.884
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/kubernetes/config] at time 16:59:25.914433
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Executing state file.absent for [/etc/kubernetes/config]
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] File /etc/kubernetes/config is not present
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/kubernetes/config] at time 16:59:25.915331 duration_in_ms=0.897
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/default/kubelet] at time 16:59:25.915572
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/default/kubelet]
Nov 16 16:59:25 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kubelet/default.pool'
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:26 cmp001 salt-minion[4458]: New file
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/default/kubelet] at time 16:59:26.841889 duration_in_ms=926.315
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/kubernetes/kubelet.kubeconfig] at time 16:59:26.842226
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/kubernetes/kubelet.kubeconfig]
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kubelet/kubelet.kubeconfig.pool'
Nov 16 16:59:26 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:27 cmp001 salt-minion[4458]: New file
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/kubernetes/kubelet.kubeconfig] at time 16:59:27.548566 duration_in_ms=706.337
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/kubernetes/manifests] at time 16:59:27.549068
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing state file.directory for [/etc/kubernetes/manifests]
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] {'/etc/kubernetes/manifests': 'New Dir'}
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/kubernetes/manifests] at time 16:59:27.552078 duration_in_ms=3.01
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Running state [kubelet] at time 16:59:27.555248
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing state service.running for [kubelet]
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'status', 'kubelet.service', '-n', '0'] in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-active', 'kubelet.service'] in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kubelet.service'] in directory '/root'
Nov 16 16:59:27 cmp001 systemd[1]: Started /bin/systemctl start kubelet.service.
Nov 16 16:59:27 cmp001 systemd[1]: Started Kubernetes Kubelet Server.
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-active', 'kubelet.service'] in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kubelet.service'] in directory '/root'
Nov 16 16:59:27 cmp001 systemd[1]: Started /bin/systemctl enable kubelet.service.
Nov 16 16:59:27 cmp001 systemd[1]: Reloading.
Nov 16 16:59:27 cmp001 kubelet[8649]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:27 cmp001 kubelet[8649]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:27 cmp001 kubelet[8649]: Flag --allow-privileged has been deprecated, will be removed in a future version
Nov 16 16:59:27 cmp001 kubelet[8649]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:27 cmp001 kubelet[8649]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:27 cmp001 kubelet[8649]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:27 cmp001 kubelet[8649]: Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799888    8649 flags.go:33] FLAG: --address="172.16.10.55"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799940    8649 flags.go:33] FLAG: --allow-privileged="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799949    8649 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799962    8649 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799969    8649 flags.go:33] FLAG: --anonymous-auth="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799975    8649 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799981    8649 flags.go:33] FLAG: --authentication-token-webhook="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799987    8649 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.799994    8649 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800001    8649 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800007    8649 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800013    8649 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800018    8649 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800025    8649 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800031    8649 flags.go:33] FLAG: --bootstrap-kubeconfig=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800039    8649 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800045    8649 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800054    8649 flags.go:33] FLAG: --cgroup-root=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800060    8649 flags.go:33] FLAG: --cgroups-per-qos="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800065    8649 flags.go:33] FLAG: --chaos-chance="0"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800074    8649 flags.go:33] FLAG: --client-ca-file=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800079    8649 flags.go:33] FLAG: --cloud-config=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800084    8649 flags.go:33] FLAG: --cloud-provider=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800090    8649 flags.go:33] FLAG: --cluster-dns="[10.254.0.10]"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800097    8649 flags.go:33] FLAG: --cluster-domain="cluster.local"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800103    8649 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800108    8649 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800114    8649 flags.go:33] FLAG: --config=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800119    8649 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800125    8649 flags.go:33] FLAG: --container-log-max-files="5"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800133    8649 flags.go:33] FLAG: --container-log-max-size="10Mi"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800142    8649 flags.go:33] FLAG: --container-runtime="remote"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800148    8649 flags.go:33] FLAG: --container-runtime-endpoint="unix:///run/containerd/containerd.sock"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800157    8649 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800164    8649 flags.go:33] FLAG: --containerized="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800169    8649 flags.go:33] FLAG: --contention-profiling="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800174    8649 flags.go:33] FLAG: --cpu-cfs-quota="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800180    8649 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800186    8649 flags.go:33] FLAG: --cpu-manager-policy="none"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800191    8649 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800197    8649 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800203    8649 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800208    8649 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800214    8649 flags.go:33] FLAG: --docker-only="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800219    8649 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800224    8649 flags.go:33] FLAG: --docker-tls="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800230    8649 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800237    8649 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800243    8649 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800252    8649 flags.go:33] FLAG: --dynamic-config-dir=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800260    8649 flags.go:33] FLAG: --enable-controller-attach-detach="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800266    8649 flags.go:33] FLAG: --enable-debugging-handlers="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800272    8649 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800277    8649 flags.go:33] FLAG: --enable-server="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800283    8649 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800293    8649 flags.go:33] FLAG: --event-burst="10"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800299    8649 flags.go:33] FLAG: --event-qps="5"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800304    8649 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800310    8649 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800316    8649 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800332    8649 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800338    8649 flags.go:33] FLAG: --eviction-minimum-reclaim=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800347    8649 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800355    8649 flags.go:33] FLAG: --eviction-soft=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800362    8649 flags.go:33] FLAG: --eviction-soft-grace-period=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800371    8649 flags.go:33] FLAG: --exit-on-lock-contention="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800377    8649 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800382    8649 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800387    8649 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800393    8649 flags.go:33] FLAG: --experimental-dockershim="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800399    8649 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800404    8649 flags.go:33] FLAG: --experimental-fail-swap-on="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800410    8649 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800415    8649 flags.go:33] FLAG: --experimental-mounter-path=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800420    8649 flags.go:33] FLAG: --fail-swap-on="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800426    8649 flags.go:33] FLAG: --feature-gates=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800434    8649 flags.go:33] FLAG: --file-check-frequency="5s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800439    8649 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800445    8649 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800454    8649 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800459    8649 flags.go:33] FLAG: --healthz-port="10248"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800469    8649 flags.go:33] FLAG: --help="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800474    8649 flags.go:33] FLAG: --host-ipc-sources="[*]"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800481    8649 flags.go:33] FLAG: --host-network-sources="[*]"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800490    8649 flags.go:33] FLAG: --host-pid-sources="[*]"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800497    8649 flags.go:33] FLAG: --hostname-override="cmp001"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800503    8649 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800510    8649 flags.go:33] FLAG: --http-check-frequency="20s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800516    8649 flags.go:33] FLAG: --image-gc-high-threshold="85"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800522    8649 flags.go:33] FLAG: --image-gc-low-threshold="80"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800527    8649 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800533    8649 flags.go:33] FLAG: --image-service-endpoint=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800539    8649 flags.go:33] FLAG: --iptables-drop-bit="15"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800544    8649 flags.go:33] FLAG: --iptables-masquerade-bit="14"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800550    8649 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800557    8649 flags.go:33] FLAG: --kube-api-burst="10"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800563    8649 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800572    8649 flags.go:33] FLAG: --kube-api-qps="5"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800578    8649 flags.go:33] FLAG: --kube-reserved=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800584    8649 flags.go:33] FLAG: --kube-reserved-cgroup=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800589    8649 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.kubeconfig"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800596    8649 flags.go:33] FLAG: --kubelet-cgroups=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800601    8649 flags.go:33] FLAG: --lock-file=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800606    8649 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800613    8649 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800618    8649 flags.go:33] FLAG: --log-dir=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800624    8649 flags.go:33] FLAG: --log-file=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800629    8649 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800635    8649 flags.go:33] FLAG: --logtostderr="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800640    8649 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800646    8649 flags.go:33] FLAG: --make-iptables-util-chains="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800654    8649 flags.go:33] FLAG: --manifest-url=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800660    8649 flags.go:33] FLAG: --manifest-url-header=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800673    8649 flags.go:33] FLAG: --master-service-namespace="default"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800679    8649 flags.go:33] FLAG: --max-open-files="1000000"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800687    8649 flags.go:33] FLAG: --max-pods="110"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800693    8649 flags.go:33] FLAG: --maximum-dead-containers="-1"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800699    8649 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800705    8649 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800710    8649 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800716    8649 flags.go:33] FLAG: --network-plugin="cni"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800721    8649 flags.go:33] FLAG: --network-plugin-mtu="0"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800726    8649 flags.go:33] FLAG: --node-ip="172.16.10.55"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800732    8649 flags.go:33] FLAG: --node-labels="extraRuntime=virtlet,node-role.kubernetes.io/node=true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800744    8649 flags.go:33] FLAG: --node-status-max-images="50"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800750    8649 flags.go:33] FLAG: --node-status-update-frequency="10s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800756    8649 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800764    8649 flags.go:33] FLAG: --oom-score-adj="-999"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800770    8649 flags.go:33] FLAG: --pod-cidr=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800779    8649 flags.go:33] FLAG: --pod-infra-container-image="docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800787    8649 flags.go:33] FLAG: --pod-manifest-path="/etc/kubernetes/manifests"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800793    8649 flags.go:33] FLAG: --pod-max-pids="-1"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800799    8649 flags.go:33] FLAG: --pods-per-core="0"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800804    8649 flags.go:33] FLAG: --port="10250"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800810    8649 flags.go:33] FLAG: --protect-kernel-defaults="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800816    8649 flags.go:33] FLAG: --provider-id=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800821    8649 flags.go:33] FLAG: --qos-reserved=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800827    8649 flags.go:33] FLAG: --read-only-port="10255"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800833    8649 flags.go:33] FLAG: --really-crash-for-testing="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800838    8649 flags.go:33] FLAG: --redirect-container-streaming="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800844    8649 flags.go:33] FLAG: --register-node="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800849    8649 flags.go:33] FLAG: --register-schedulable="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800855    8649 flags.go:33] FLAG: --register-with-taints=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800864    8649 flags.go:33] FLAG: --registry-burst="10"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800869    8649 flags.go:33] FLAG: --registry-qps="5"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800878    8649 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800884    8649 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800890    8649 flags.go:33] FLAG: --rotate-certificates="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800895    8649 flags.go:33] FLAG: --rotate-server-certificates="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800900    8649 flags.go:33] FLAG: --runonce="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800906    8649 flags.go:33] FLAG: --runtime-cgroups=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800911    8649 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800916    8649 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800922    8649 flags.go:33] FLAG: --serialize-image-pulls="true"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800928    8649 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800933    8649 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800939    8649 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800945    8649 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800950    8649 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800957    8649 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800963    8649 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800972    8649 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800978    8649 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800984    8649 flags.go:33] FLAG: --sync-frequency="1m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800989    8649 flags.go:33] FLAG: --system-cgroups=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.800994    8649 flags.go:33] FLAG: --system-reserved=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801000    8649 flags.go:33] FLAG: --system-reserved-cgroup=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801006    8649 flags.go:33] FLAG: --tls-cert-file=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801011    8649 flags.go:33] FLAG: --tls-cipher-suites="[]"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801020    8649 flags.go:33] FLAG: --tls-min-version=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801025    8649 flags.go:33] FLAG: --tls-private-key-file=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801030    8649 flags.go:33] FLAG: --v="2"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801036    8649 flags.go:33] FLAG: --version="false"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801044    8649 flags.go:33] FLAG: --vmodule=""
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801050    8649 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801059    8649 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801096    8649 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:27 cmp001 kubelet[8649]: W1116 16:59:27.801119    8649 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/node]
Nov 16 16:59:27 cmp001 kubelet[8649]: W1116 16:59:27.801131    8649 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Nov 16 16:59:27 cmp001 kubelet[8649]: W1116 16:59:27.801143    8649 server.go:182] Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead
Nov 16 16:59:27 cmp001 kubelet[8649]: I1116 16:59:27.801190    8649 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:27 cmp001 systemd[1]: kubelet.service: Dependency Conflicts=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:27 cmp001 systemd[1]: kubelet.service: Dependency ConflictedBy=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] {'kubelet': True}
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Completed state [kubelet] at time 16:59:27.949194 duration_in_ms=393.946
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/logrotate.d/kubernetes] at time 16:59:27.949977
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/logrotate.d/kubernetes]
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/logrotate'
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:27 cmp001 salt-minion[4458]: New file
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/logrotate.d/kubernetes] at time 16:59:27.982562 duration_in_ms=32.585
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Running state [/opt/cni/bin] at time 16:59:27.982804
Nov 16 16:59:27 cmp001 salt-minion[4458]: [INFO    ] Executing state archive.extracted for [/opt/cni/bin]
Nov 16 16:59:28 cmp001 systemd[1]: Started Kubernetes systemd probe.
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.023231    8649 mount_linux.go:180] Detected OS with systemd
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.023348    8649 server.go:407] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.023444    8649 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.023579    8649 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:28 cmp001 kubelet[8649]: W1116 16:59:28.023621    8649 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/node]
Nov 16 16:59:28 cmp001 kubelet[8649]: W1116 16:59:28.023651    8649 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.023826    8649 plugins.go:103] No cloud provider specified.
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.023857    8649 server.go:523] No cloud provider specified: "" from the config file: ""
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.030769    8649 manager.go:155] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.032336    8649 fs.go:142] Filesystem UUIDs: map[2019-11-16-17-50-43-00:/dev/sr0 2ff13334-aecb-43c4-82f2-b8bb5fa56dda:/dev/vda1 9E29-9F5A:/dev/vda15]
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.032386    8649 fs.go:143] Filesystem partitions: map[tmpfs:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /dev/vda1:{mountpoint:/ major:252 minor:1 fsType:ext4 blockSize:0}]
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.039759    8649 manager.go:229] Machine: {NumCores:6 CpuFrequency:2799994 MemoryCapacity:12591427584 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:6a383b59001245f0b0d4b46da28e2be5 SystemUUID:6A383B59-0012-45F0-B0D4-B46DA28E2BE5 BootID:ecea669e-0132-41b5-b1b7-cadb58547ffe Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:24 Capacity:1259143168 Type:vfs Inodes:1537039 HasInodes:true} {Device:/dev/vda1 DeviceMajor:252 DeviceMinor:1 Capacity:103880232960 Type:vfs Inodes:12902400 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:107374182400 Scheduler:none}] NetworkDevices:[{Name:br-mgmt MacAddress:52:54:00:88:dd:d4 Speed:0 Mtu:9000} {Name:ens3 MacAddress:52:54:00:da:c3:64 Speed:-1 Mtu:9000} {Name:ens4 MacAddress:52:54:00:88:dd:d4 Speed:-1 Mtu:9000} {Name:ens5 MacAddress:52:54:00:5d:c3:57 Speed:-1 Mtu:9000} {Name:ens5.1000 MacAddress:52:54:00:5d:c3:57 Speed:-1 Mtu:9000} {Name:ens6 MacAddress:52:54:00:48:c5:4b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:12591427584 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:2 Memory:0 Cores:[{Id:0 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:3 Memory:0 Cores:[{Id:0 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:4 Memory:0 Cores:[{Id:0 Threads:[4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:5 Memory:0 Cores:[{Id:0 Threads:[5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.040228    8649 manager.go:235] Version: {KernelVersion:4.15.0-70-generic ContainerOsVersion:Ubuntu 18.04.3 LTS DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.040371    8649 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.040761    8649 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.040790    8649 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.040965    8649 container_manager_linux.go:272] Creating device plugin manager: true
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.040988    8649 manager.go:109] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.041083    8649 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.053728    8649 server.go:941] Using root directory: /var/lib/kubelet
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.053769    8649 kubelet.go:281] Adding pod path: /etc/kubernetes/manifests
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.053815    8649 file.go:68] Watching path "/etc/kubernetes/manifests"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.054001    8649 kubelet.go:306] Watching apiserver
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.055245    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.055369    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.055512    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.072491    8649 kuberuntime_manager.go:198] Container runtime containerd initialized, version: 1.2.6-0ubuntu1~18.04.2, apiVersion: v1alpha2
Nov 16 16:59:28 cmp001 kubelet[8649]: W1116 16:59:28.072968    8649 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073458    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/aws-ebs"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073529    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/empty-dir"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073554    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/gce-pd"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073584    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/git-repo"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073600    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/host-path"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073616    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/nfs"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073632    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/secret"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073649    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/iscsi"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073678    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/glusterfs"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073705    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/rbd"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073724    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/cinder"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073740    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/quobyte"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073755    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/cephfs"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073783    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/downward-api"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073800    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/fc"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073815    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/flocker"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073831    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-file"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073855    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/configmap"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073872    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/vsphere-volume"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073888    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-disk"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073904    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/photon-pd"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073921    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/projected"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073960    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/portworx-volume"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073979    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/scaleio"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.073997    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/local-volume"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.074013    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/storageos"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.074049    8649 plugins.go:547] Loaded volume plugin "kubernetes.io/csi"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.074564    8649 server.go:999] Started kubelet
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.076397    8649 server.go:137] Starting to listen on 172.16.10.55:10250
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.076880    8649 server.go:157] Starting to listen read-only on 172.16.10.55:10255
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.077844    8649 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.077906    8649 status_manager.go:152] Starting to sync pod status with apiserver
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.077931    8649 kubelet.go:1829] Starting kubelet main sync loop.
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.078160    8649 volume_manager.go:246] The desired_state_of_world populator starts
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.078176    8649 volume_manager.go:248] Starting Kubelet Volume Manager
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.078166    8649 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.078508    8649 desired_state_of_world_populator.go:130] Desired state populator starts to run
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.078898    8649 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.080690    8649 cri_stats_provider.go:320] Failed to get the info of the filesystem with mountpoint "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.081854    8649 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.081641    8649 server.go:333] Adding debug handlers to kubelet server.
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.084112    8649 factory.go:136] Registering containerd factory
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.085906    8649 factory.go:54] Registering systemd factory
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.086189    8649 factory.go:97] Registering Raw factory
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.086431    8649 manager.go:1222] Started watching for new ooms in manager
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.087995    8649 manager.go:365] Starting recovery of all containers
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.151076    8649 manager.go:370] Recovery completed
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.178500    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.178554    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.178706    8649 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.179218    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.180435    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.180627    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.180842    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.180987    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.183981    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.208227    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.208657    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.209745    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.209933    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.210081    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.210242    8649 cpu_manager.go:155] [cpumanager] starting with none policy
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.210365    8649 cpu_manager.go:156] [cpumanager] reconciling every 10s
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.210500    8649 policy_none.go:42] [cpumanager] none policy: Start
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.210974    8649 container_manager_linux.go:376] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.211190    8649 container_manager_linux.go:376] Updating kernel flag: kernel/panic, expected value: 10, actual value: 60
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.211382    8649 container_manager_linux.go:376] Updating kernel flag: kernel/panic_on_oops, expected value: 1, actual value: 0
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.225171    8649 manager.go:196] Starting Device Plugin manager
Nov 16 16:59:28 cmp001 kubelet[8649]: W1116 16:59:28.225365    8649 manager.go:537] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.225638    8649 manager.go:231] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.225827    8649 plugin_watcher.go:90] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.226315    8649 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.278919    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.379119    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.379120    8649 kubelet.go:1908] SyncLoop (ADD, "file"): ""
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.384358    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.385069    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.388279    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.388432    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.388469    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.388536    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.389774    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.479337    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.579484    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.679778    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.779951    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.790091    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.790514    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.791743    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.791872    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.791914    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: I1116 16:59:28.792043    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.793187    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.880194    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:28 cmp001 kubelet[8649]: E1116 16:59:28.980458    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.056522    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.057598    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.058753    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.080721    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.181044    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.281180    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.381350    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.481676    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.581881    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: I1116 16:59:29.593601    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:29 cmp001 kubelet[8649]: I1116 16:59:29.594029    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:29 cmp001 kubelet[8649]: I1116 16:59:29.594848    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:29 cmp001 kubelet[8649]: I1116 16:59:29.594886    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:29 cmp001 kubelet[8649]: I1116 16:59:29.594899    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:29 cmp001 kubelet[8649]: I1116 16:59:29.594924    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.595413    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.682080    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.782573    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.882914    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:29 cmp001 kubelet[8649]: E1116 16:59:29.983549    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.057818    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.058635    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.059627    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.083707    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.183980    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.284174    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.384416    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.484583    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.584784    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.684967    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.785140    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.885332    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:30 cmp001 kubelet[8649]: E1116 16:59:30.985497    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.058778    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.059544    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.060712    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.085686    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.185865    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 kubelet[8649]: I1116 16:59:31.195668    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:31 cmp001 kubelet[8649]: I1116 16:59:31.196009    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:31 cmp001 kubelet[8649]: I1116 16:59:31.196804    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:31 cmp001 kubelet[8649]: I1116 16:59:31.196855    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:31 cmp001 kubelet[8649]: I1116 16:59:31.196869    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:31 cmp001 kubelet[8649]: I1116 16:59:31.196888    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.197538    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.286028    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 salt-minion[4458]: [INFO    ] Executing command ['tar', 'xz', '-f', '/var/cache/salt/minion/extrn_files/base/docker-prod-local.artifactory.mirantis.com/artifactory/binary-prod-local/mirantis/kubernetes/containernetworking-plugins/containernetworking-plugins_v0.7.2-173-g8db2808.tar.gz'] in directory '/opt/cni/bin/'
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.386189    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.486366    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.586582    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.686788    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.786965    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.887143    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:31 cmp001 salt-minion[4458]: [INFO    ] Executing command ['tar', '--version'] in directory '/root'
Nov 16 16:59:31 cmp001 salt-minion[4458]: [INFO    ] {'extracted_files': 'no tar output so far'}
Nov 16 16:59:31 cmp001 salt-minion[4458]: [INFO    ] Completed state [/opt/cni/bin] at time 16:59:31.983344 duration_in_ms=4000.533
Nov 16 16:59:31 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/kubernetes/proxy.kubeconfig] at time 16:59:31.983919
Nov 16 16:59:31 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/kubernetes/proxy.kubeconfig]
Nov 16 16:59:31 cmp001 kubelet[8649]: E1116 16:59:31.987370    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-proxy/proxy.kubeconfig'
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.059827    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.060524    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.061663    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.087504    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.187824    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.288031    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.388194    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.488354    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.588704    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:32 cmp001 salt-minion[4458]: New file
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/kubernetes/proxy.kubeconfig] at time 16:59:32.596376 duration_in_ms=612.454
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/systemd/system/kube-proxy.service] at time 16:59:32.596947
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kube-proxy.service]
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kube-proxy.service'
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:32 cmp001 salt-minion[4458]: New file
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/systemd/system/kube-proxy.service] at time 16:59:32.635502 duration_in_ms=38.554
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Running state [/etc/default/kube-proxy] at time 16:59:32.635859
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing state file.managed for [/etc/default/kube-proxy]
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] File changed:
Nov 16 16:59:32 cmp001 salt-minion[4458]: New file
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Completed state [/etc/default/kube-proxy] at time 16:59:32.638169 duration_in_ms=2.309
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Running state [kube-proxy] at time 16:59:32.639934
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing state service.running for [kube-proxy]
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'status', 'kube-proxy.service', '-n', '0'] in directory '/root'
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.688884    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:32 cmp001 systemd[1]: Started /bin/systemctl start kube-proxy.service.
Nov 16 16:59:32 cmp001 systemd[1]: Started Kubernetes Kube-Proxy Server.
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.789044    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:32 cmp001 systemd[1]: Started /bin/systemctl enable kube-proxy.service.
Nov 16 16:59:32 cmp001 systemd[1]: Reloading.
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.889322    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.890862    8852 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892135    8852 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892177    8852 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892192    8852 flags.go:33] FLAG: --bind-address="0.0.0.0"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892204    8852 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892215    8852 flags.go:33] FLAG: --cleanup="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892227    8852 flags.go:33] FLAG: --cleanup-iptables="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892237    8852 flags.go:33] FLAG: --cleanup-ipvs="true"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892245    8852 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892261    8852 flags.go:33] FLAG: --cluster-cidr="192.168.0.0/16"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892323    8852 flags.go:33] FLAG: --config=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892546    8852 flags.go:33] FLAG: --config-sync-period="15m0s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892566    8852 flags.go:33] FLAG: --conntrack-max="0"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892737    8852 flags.go:33] FLAG: --conntrack-max-per-core="32768"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892974    8852 flags.go:33] FLAG: --conntrack-min="131072"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.892989    8852 flags.go:33] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893000    8852 flags.go:33] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893009    8852 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893112    8852 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893466    8852 flags.go:33] FLAG: --default-not-ready-toleration-seconds="300"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893494    8852 flags.go:33] FLAG: --default-unreachable-toleration-seconds="300"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893505    8852 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893515    8852 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893525    8852 flags.go:33] FLAG: --docker-only="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893534    8852 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893661    8852 flags.go:33] FLAG: --docker-tls="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893720    8852 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893777    8852 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893832    8852 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893923    8852 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.893939    8852 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894070    8852 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894129    8852 flags.go:33] FLAG: --feature-gates=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894191    8852 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894248    8852 flags.go:33] FLAG: --healthz-bind-address="0.0.0.0:10256"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894341    8852 flags.go:33] FLAG: --healthz-port="10256"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894360    8852 flags.go:33] FLAG: --help="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894412    8852 flags.go:33] FLAG: --hostname-override=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894504    8852 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894523    8852 flags.go:33] FLAG: --iptables-masquerade-bit="14"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894583    8852 flags.go:33] FLAG: --iptables-min-sync-period="0s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894638    8852 flags.go:33] FLAG: --iptables-sync-period="30s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894696    8852 flags.go:33] FLAG: --ipvs-exclude-cidrs="[]"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894824    8852 flags.go:33] FLAG: --ipvs-min-sync-period="0s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894837    8852 flags.go:33] FLAG: --ipvs-scheduler=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894846    8852 flags.go:33] FLAG: --ipvs-sync-period="30s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.894907    8852 flags.go:33] FLAG: --kube-api-burst="10"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895001    8852 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895016    8852 flags.go:33] FLAG: --kube-api-qps="5"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895123    8852 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/proxy.kubeconfig"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895182    8852 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895284    8852 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895343    8852 flags.go:33] FLAG: --log-dir=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895400    8852 flags.go:33] FLAG: --log-file=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895416    8852 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895471    8852 flags.go:33] FLAG: --logtostderr="true"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895484    8852 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895579    8852 flags.go:33] FLAG: --masquerade-all="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895635    8852 flags.go:33] FLAG: --master=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895691    8852 flags.go:33] FLAG: --mesos-agent="127.0.0.1:5051"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895795    8852 flags.go:33] FLAG: --mesos-agent-timeout="10s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895853    8852 flags.go:33] FLAG: --metrics-bind-address="127.0.0.1:10249"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895867    8852 flags.go:33] FLAG: --metrics-port="10249"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895925    8852 flags.go:33] FLAG: --nodeport-addresses="[]"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.895987    8852 flags.go:33] FLAG: --oom-score-adj="-999"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896086    8852 flags.go:33] FLAG: --profiling="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896143    8852 flags.go:33] FLAG: --proxy-mode="iptables"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896205    8852 flags.go:33] FLAG: --proxy-port-range=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896264    8852 flags.go:33] FLAG: --resource-container="/kube-proxy"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896279    8852 flags.go:33] FLAG: --skip-headers="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896336    8852 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896395    8852 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896492    8852 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896550    8852 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896607    8852 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896666    8852 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896725    8852 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896739    8852 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896796    8852 flags.go:33] FLAG: --udp-timeout="250ms"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896893    8852 flags.go:33] FLAG: --v="2"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.896913    8852 flags.go:33] FLAG: --version="false"
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.897015    8852 flags.go:33] FLAG: --vmodule=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.897073    8852 flags.go:33] FLAG: --write-config-to=""
Nov 16 16:59:32 cmp001 kube-proxy[8852]: W1116 16:59:32.897133    8852 server.go:198] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Nov 16 16:59:32 cmp001 kube-proxy[8852]: I1116 16:59:32.897340    8852 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:32 cmp001 kernel: [  232.387191] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Nov 16 16:59:32 cmp001 kernel: [  232.387669] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Nov 16 16:59:32 cmp001 systemd[1]: kubelet.service: Dependency Conflicts=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:32 cmp001 systemd[1]: kubelet.service: Dependency ConflictedBy=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:32 cmp001 kubelet[8649]: E1116 16:59:32.989472    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 salt-minion[4458]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:33 cmp001 salt-minion[4458]: [INFO    ] {'kube-proxy': True}
Nov 16 16:59:33 cmp001 salt-minion[4458]: [INFO    ] Completed state [kube-proxy] at time 16:59:33.032181 duration_in_ms=392.246
Nov 16 16:59:33 cmp001 salt-minion[4458]: [INFO    ] Returning information for job: 20191116165753766584
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.060879    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.061693    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.062678    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.089797    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 kernel: [  232.636758] IPVS: ipvs loaded.
Nov 16 16:59:33 cmp001 kernel: [  232.644128] IPVS: [rr] scheduler registered.
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.190037    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 kernel: [  232.650424] IPVS: [wrr] scheduler registered.
Nov 16 16:59:33 cmp001 kernel: [  232.655362] IPVS: [sh] scheduler registered.
Nov 16 16:59:33 cmp001 kube-proxy[8852]: W1116 16:59:33.212514    8852 node.go:103] Failed to retrieve node info: Get https://172.16.10.36:443/api/v1/nodes/cmp001: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.212664    8852 server_others.go:148] Using iptables Proxier.
Nov 16 16:59:33 cmp001 kube-proxy[8852]: W1116 16:59:33.212837    8852 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.212971    8852 server_others.go:178] Tearing down inactive rules.
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.235957    8852 server.go:483] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.241725    8852 server.go:509] Running in resource-only container "/kube-proxy"
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.242454    8852 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 196608
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.242501    8852 conntrack.go:52] Setting nf_conntrack_max to 196608
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.242628    8852 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.242694    8852 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.242850    8852 config.go:102] Starting endpoints config controller
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.242871    8852 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.242892    8852 config.go:202] Starting service config controller
Nov 16 16:59:33 cmp001 kube-proxy[8852]: I1116 16:59:33.242900    8852 controller_utils.go:1027] Waiting for caches to sync for service config controller
Nov 16 16:59:33 cmp001 kube-proxy[8852]: E1116 16:59:33.243408    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp001 kube-proxy[8852]: E1116 16:59:33.243437    8852 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:33 cmp001 kube-proxy[8852]: E1116 16:59:33.243483    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.290304    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.390555    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.490884    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.591094    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.691349    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 salt-minion[4458]: [INFO    ] User sudo_ubuntu Executing command cmd.run with jid 20191116165933687300
Nov 16 16:59:33 cmp001 salt-minion[4458]: [INFO    ] Starting a new job with PID 8969
Nov 16 16:59:33 cmp001 salt-minion[4458]: [INFO    ] Executing command 'calicoctl node status' in directory '/root'
Nov 16 16:59:33 cmp001 salt-minion[4458]: [INFO    ] Returning information for job: 20191116165933687300
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.791580    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.891998    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:33 cmp001 kubelet[8649]: E1116 16:59:33.992978    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.062178    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.062767    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.063745    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.093455    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.193759    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kube-proxy[8852]: E1116 16:59:34.245094    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp001 kube-proxy[8852]: E1116 16:59:34.245284    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.294043    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.394230    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kubelet[8649]: I1116 16:59:34.397750    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:34 cmp001 kubelet[8649]: I1116 16:59:34.398011    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:34 cmp001 kubelet[8649]: I1116 16:59:34.398977    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:34 cmp001 kubelet[8649]: I1116 16:59:34.399016    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:34 cmp001 kubelet[8649]: I1116 16:59:34.399031    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:34 cmp001 kubelet[8649]: I1116 16:59:34.399053    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.399820    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp001 salt-minion[4458]: [INFO    ] User sudo_ubuntu Executing command cmd.run with jid 20191116165934415277
Nov 16 16:59:34 cmp001 salt-minion[4458]: [INFO    ] Starting a new job with PID 8986
Nov 16 16:59:34 cmp001 salt-minion[4458]: [INFO    ] Executing command 'calicoctl get ippool' in directory '/root'
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.494519    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 salt-minion[4458]: [INFO    ] Returning information for job: 20191116165934415277
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.594829    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.695196    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.763097    8649 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.795540    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.895870    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:34 cmp001 kubelet[8649]: E1116 16:59:34.996221    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.064076    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.064290    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.065852    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.096412    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.196738    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kube-proxy[8852]: E1116 16:59:35.246644    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp001 kube-proxy[8852]: E1116 16:59:35.247271    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.297069    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.397367    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.497738    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.598088    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.698441    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.798693    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.898971    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:35 cmp001 kubelet[8649]: E1116 16:59:35.999282    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.065419    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.066306    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.067685    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp001 kube-proxy[8852]: E1116 16:59:36.071216    8852 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.099578    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.199922    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kube-proxy[8852]: E1116 16:59:36.247926    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp001 kube-proxy[8852]: E1116 16:59:36.248819    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.300241    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.400424    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.500695    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.600885    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.701027    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.801212    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:36 cmp001 kubelet[8649]: E1116 16:59:36.901379    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.001568    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.066771    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.067240    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.068575    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.101923    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.202180    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kube-proxy[8852]: E1116 16:59:37.249416    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp001 kube-proxy[8852]: E1116 16:59:37.250372    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.302519    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.402871    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.503217    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.603552    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.703765    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.804087    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:37 cmp001 kubelet[8649]: E1116 16:59:37.904418    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.004777    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.068278    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.069550    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.070521    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.105151    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.205414    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.226652    8649 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp001" not found
Nov 16 16:59:38 cmp001 kube-proxy[8852]: E1116 16:59:38.250693    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp001 kube-proxy[8852]: E1116 16:59:38.251714    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.305777    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.406074    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.506420    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.606804    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.707148    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.807582    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:38 cmp001 kubelet[8649]: E1116 16:59:38.907945    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.008260    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.069505    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.070622    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.071784    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.108632    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.208844    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kube-proxy[8852]: E1116 16:59:39.251383    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp001 kube-proxy[8852]: E1116 16:59:39.252300    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.309025    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.409204    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.509381    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.609577    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.709768    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.809906    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:39 cmp001 kubelet[8649]: E1116 16:59:39.910098    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.010285    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.070391    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.071286    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.072476    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.110448    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.210639    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kube-proxy[8852]: E1116 16:59:40.252042    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp001 kube-proxy[8852]: E1116 16:59:40.253064    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.310776    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.410916    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.511098    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.611245    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.711444    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: I1116 16:59:40.800150    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:40 cmp001 kubelet[8649]: I1116 16:59:40.800569    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:40 cmp001 kubelet[8649]: I1116 16:59:40.802080    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:40 cmp001 kubelet[8649]: I1116 16:59:40.802148    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:40 cmp001 kubelet[8649]: I1116 16:59:40.802172    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:40 cmp001 kubelet[8649]: I1116 16:59:40.802207    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.803306    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.811898    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:40 cmp001 kubelet[8649]: E1116 16:59:40.912202    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.012476    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.071845    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.072865    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.073894    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.112741    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.213091    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kube-proxy[8852]: E1116 16:59:41.253352    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp001 kube-proxy[8852]: E1116 16:59:41.254147    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.313374    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.413650    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.514031    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.614253    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.714438    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.814674    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:41 cmp001 kubelet[8649]: E1116 16:59:41.914967    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.015287    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.072855    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.073820    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.074918    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.115504    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.215879    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kube-proxy[8852]: E1116 16:59:42.254392    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp001 kube-proxy[8852]: E1116 16:59:42.255037    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.316136    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.416405    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.516564    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.616716    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.716945    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.817108    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:42 cmp001 kubelet[8649]: E1116 16:59:42.917273    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.017414    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.073846    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.074552    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.075893    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.117589    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.217918    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kube-proxy[8852]: E1116 16:59:43.255206    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp001 kube-proxy[8852]: E1116 16:59:43.256152    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.318076    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.418308    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.518566    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.618730    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.718989    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.819215    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:43 cmp001 kubelet[8649]: E1116 16:59:43.919467    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.019686    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.075472    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.076108    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.077016    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.120007    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.220205    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kube-proxy[8852]: E1116 16:59:44.256428    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp001 kube-proxy[8852]: E1116 16:59:44.257128    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.320456    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.420705    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.521013    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.621274    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.721516    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.764870    8649 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.821790    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:44 cmp001 kubelet[8649]: E1116 16:59:44.922101    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.022348    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.076445    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.077330    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.078514    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.122626    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.222787    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kube-proxy[8852]: E1116 16:59:45.257410    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp001 kube-proxy[8852]: E1116 16:59:45.258282    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.323085    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.423458    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.523699    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.623961    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.724256    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.824483    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:45 cmp001 kubelet[8649]: E1116 16:59:45.924867    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.025135    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kube-proxy[8852]: E1116 16:59:46.072250    8852 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.077134    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.078122    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.079182    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.125375    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.225584    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kube-proxy[8852]: E1116 16:59:46.258239    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp001 kube-proxy[8852]: E1116 16:59:46.259242    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.325793    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.426135    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.526488    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.626849    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.727220    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.827466    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:46 cmp001 kubelet[8649]: E1116 16:59:46.927749    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.028074    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.078122    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.078717    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.079795    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.128258    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.228486    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kube-proxy[8852]: E1116 16:59:47.259488    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp001 kube-proxy[8852]: E1116 16:59:47.260032    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.328655    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.429014    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.529383    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.629615    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.729815    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: I1116 16:59:47.803610    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:47 cmp001 kubelet[8649]: I1116 16:59:47.804004    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:47 cmp001 kubelet[8649]: I1116 16:59:47.805207    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:47 cmp001 kubelet[8649]: I1116 16:59:47.805326    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:47 cmp001 kubelet[8649]: I1116 16:59:47.805347    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:47 cmp001 kubelet[8649]: I1116 16:59:47.805377    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.806323    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.829993    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:47 cmp001 kubelet[8649]: E1116 16:59:47.930170    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.030365    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.079031    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.079919    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.081055    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.130708    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.226918    8649 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.230954    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kube-proxy[8852]: E1116 16:59:48.261049    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp001 kube-proxy[8852]: E1116 16:59:48.261441    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.331171    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.431403    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.531639    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.631851    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.732062    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.832268    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:48 cmp001 kubelet[8649]: E1116 16:59:48.932490    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.032792    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.080151    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.080889    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.081951    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.133137    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.233388    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kube-proxy[8852]: E1116 16:59:49.262382    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp001 kube-proxy[8852]: E1116 16:59:49.263203    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.333644    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.433904    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.534121    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.634352    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.734625    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.834947    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:49 cmp001 kubelet[8649]: E1116 16:59:49.935186    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.035398    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.081013    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.081760    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.082947    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.135745    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.235965    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kube-proxy[8852]: E1116 16:59:50.263279    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp001 kube-proxy[8852]: E1116 16:59:50.264497    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.336185    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.436506    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.536837    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.637085    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.737307    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.837494    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:50 cmp001 kubelet[8649]: E1116 16:59:50.937741    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.037984    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.081881    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.082625    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.083723    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.138276    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.238468    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kube-proxy[8852]: E1116 16:59:51.264546    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp001 kube-proxy[8852]: E1116 16:59:51.265278    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.338668    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.438846    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.539196    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.639448    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.739648    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.839963    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:51 cmp001 kubelet[8649]: E1116 16:59:51.940177    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.040461    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.082916    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.083739    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.084922    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.140636    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.240809    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kube-proxy[8852]: E1116 16:59:52.265803    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp001 kube-proxy[8852]: E1116 16:59:52.266313    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.340940    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.441191    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.541442    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.641794    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.742013    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.842255    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:52 cmp001 kubelet[8649]: E1116 16:59:52.942452    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.042630    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.084385    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.084861    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.086046    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.142812    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.243016    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kube-proxy[8852]: E1116 16:59:53.266950    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp001 kube-proxy[8852]: E1116 16:59:53.267571    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.343218    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.443435    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.543677    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.643904    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.744380    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.844592    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:53 cmp001 kubelet[8649]: E1116 16:59:53.944819    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.045043    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.085355    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.086114    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.087259    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.146296    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.246680    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kube-proxy[8852]: E1116 16:59:54.267902    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp001 kube-proxy[8852]: E1116 16:59:54.268805    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.346855    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.447303    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.547555    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.648186    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.748450    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.766402    8649 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:54 cmp001 kubelet[8649]: I1116 16:59:54.806597    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:54 cmp001 kubelet[8649]: I1116 16:59:54.807198    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 16:59:54 cmp001 kubelet[8649]: I1116 16:59:54.808822    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 16:59:54 cmp001 kubelet[8649]: I1116 16:59:54.808891    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 16:59:54 cmp001 kubelet[8649]: I1116 16:59:54.808917    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 16:59:54 cmp001 kubelet[8649]: I1116 16:59:54.808955    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.810503    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.848901    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:54 cmp001 kubelet[8649]: E1116 16:59:54.949269    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.049836    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.086122    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.087091    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.088078    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.150121    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.250450    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kube-proxy[8852]: E1116 16:59:55.268800    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp001 kube-proxy[8852]: E1116 16:59:55.269538    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.350628    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.450985    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.551345    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.651528    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.751806    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.851970    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:55 cmp001 kubelet[8649]: E1116 16:59:55.952159    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.052379    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kube-proxy[8852]: E1116 16:59:56.073294    8852 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.086895    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.087878    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.088944    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.152922    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.253181    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kube-proxy[8852]: E1116 16:59:56.270404    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp001 kube-proxy[8852]: E1116 16:59:56.270864    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.353504    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.453716    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.553932    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.654145    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.754329    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.854505    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:56 cmp001 kubelet[8649]: E1116 16:59:56.954688    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.054877    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.087816    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.088454    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.089455    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.155120    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.255360    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kube-proxy[8852]: E1116 16:59:57.271818    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp001 kube-proxy[8852]: E1116 16:59:57.272988    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.355611    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.455842    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.556258    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.656652    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.756831    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.857025    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:57 cmp001 kubelet[8649]: E1116 16:59:57.957299    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.057426    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.088657    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.089441    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.090540    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.157610    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.227088    8649 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.257755    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kube-proxy[8852]: E1116 16:59:58.272444    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp001 kube-proxy[8852]: E1116 16:59:58.273527    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.358004    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.458275    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.558475    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.658640    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.758807    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.858969    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:58 cmp001 kubelet[8649]: E1116 16:59:58.959326    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.059497    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.089683    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.090399    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.091458    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.159686    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.260004    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kube-proxy[8852]: E1116 16:59:59.273823    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp001 kube-proxy[8852]: E1116 16:59:59.274496    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.360448    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.460880    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.561466    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.661745    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.762024    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.862558    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 16:59:59 cmp001 kubelet[8649]: E1116 16:59:59.962995    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.063348    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.091871    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.092139    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.093342    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.163616    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.264217    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kube-proxy[8852]: E1116 17:00:00.275169    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp001 kube-proxy[8852]: E1116 17:00:00.275757    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.364455    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.464687    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.564973    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.665354    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.765644    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.865990    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:00 cmp001 kubelet[8649]: E1116 17:00:00.966318    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.066642    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.093535    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.094245    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.095490    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.166891    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.267442    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kube-proxy[8852]: E1116 17:00:01.276344    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp001 kube-proxy[8852]: E1116 17:00:01.276948    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.368050    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.468521    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.568959    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.669454    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.769872    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: I1116 17:00:01.810898    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 17:00:01 cmp001 kubelet[8649]: I1116 17:00:01.811981    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:00:01 cmp001 kubelet[8649]: I1116 17:00:01.814091    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 17:00:01 cmp001 kubelet[8649]: I1116 17:00:01.814230    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 17:00:01 cmp001 kubelet[8649]: I1116 17:00:01.814260    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 17:00:01 cmp001 kubelet[8649]: I1116 17:00:01.814306    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.815465    8649 kubelet_node_status.go:94] Unable to register node "cmp001" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.870547    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:01 cmp001 kubelet[8649]: E1116 17:00:01.970867    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.071217    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.094959    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.096268    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.097022    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.171558    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.271886    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kube-proxy[8852]: E1116 17:00:02.278357    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp001 kube-proxy[8852]: E1116 17:00:02.278441    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.372202    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.472744    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.573414    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.673947    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.774244    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.874729    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:02 cmp001 kubelet[8649]: E1116 17:00:02.975298    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.076162    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.096734    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.097450    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.098570    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.176467    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kube-proxy[8852]: I1116 17:00:03.243458    8852 proxier.go:645] Not syncing iptables until Services and Endpoints have been received from master
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.276766    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kube-proxy[8852]: E1116 17:00:03.279793    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp001 kube-proxy[8852]: E1116 17:00:03.280587    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.377420    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.477675    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.577936    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.678348    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.778596    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.878844    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:03 cmp001 kubelet[8649]: E1116 17:00:03.979128    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.079416    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.098659    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.099379    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.100305    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.179693    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.280045    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kube-proxy[8852]: E1116 17:00:04.281168    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp001 kube-proxy[8852]: E1116 17:00:04.282019    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.380378    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.480774    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.581021    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.681295    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.768039    8649 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.781522    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.881887    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:04 cmp001 kubelet[8649]: E1116 17:00:04.982133    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.082477    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.100411    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.101001    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.102084    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.182826    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kube-proxy[8852]: E1116 17:00:05.282570    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp001 kube-proxy[8852]: E1116 17:00:05.283245    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.283025    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.383349    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.483671    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.583975    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.684266    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.784574    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.884874    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:05 cmp001 kubelet[8649]: E1116 17:00:05.985187    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kube-proxy[8852]: E1116 17:00:06.074839    8852 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.085402    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.101369    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.102323    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.103460    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.185720    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.285980    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.386249    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.486582    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.586875    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.687083    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.787331    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.887583    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:06 cmp001 kubelet[8649]: E1116 17:00:06.987809    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.088095    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.188332    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.288589    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.388829    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.489051    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.589272    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.689541    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.789900    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.890175    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:07 cmp001 kubelet[8649]: E1116 17:00:07.990480    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.090787    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.191082    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.227349    8649 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.291267    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.391553    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.491816    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.591973    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.692099    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.792276    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: I1116 17:00:08.815668    8649 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 17:00:08 cmp001 kubelet[8649]: I1116 17:00:08.816066    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:00:08 cmp001 kubelet[8649]: I1116 17:00:08.817039    8649 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp001
Nov 16 17:00:08 cmp001 kubelet[8649]: I1116 17:00:08.817084    8649 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp001
Nov 16 17:00:08 cmp001 kubelet[8649]: I1116 17:00:08.817100    8649 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp001
Nov 16 17:00:08 cmp001 kubelet[8649]: I1116 17:00:08.817128    8649 kubelet_node_status.go:72] Attempting to register node cmp001
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.892475    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:08 cmp001 kubelet[8649]: E1116 17:00:08.992743    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.093009    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.193347    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.293533    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.393816    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.494076    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.594432    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.694627    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.794828    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.895125    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:09 cmp001 kubelet[8649]: E1116 17:00:09.995567    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.095885    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.196253    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.296443    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.396663    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.496940    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.597324    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.697599    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.797784    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.898081    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:10 cmp001 kubelet[8649]: E1116 17:00:10.998511    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.098774    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.199001    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.299338    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.399544    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.499966    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.600424    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.700650    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.801321    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:11 cmp001 kubelet[8649]: E1116 17:00:11.901878    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.002439    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.103327    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.203951    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.304455    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.404960    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.505449    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.606175    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.706825    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.807121    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:12 cmp001 kubelet[8649]: E1116 17:00:12.907532    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.008030    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.109004    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.209700    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.310031    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.410412    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.510637    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.610841    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.711081    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.811435    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:13 cmp001 kubelet[8649]: E1116 17:00:13.911670    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.011854    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.112063    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.212280    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.312509    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.412774    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.513094    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.613332    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.713563    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.813834    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:14 cmp001 kubelet[8649]: E1116 17:00:14.914476    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.014724    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.115597    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.215818    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.315999    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.416187    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.516434    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.616860    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.717107    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.817640    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:15 cmp001 kubelet[8649]: E1116 17:00:15.917817    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.018078    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.118325    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.218783    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kube-proxy[8852]: E1116 17:00:16.283765    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 cmp001 kube-proxy[8852]: E1116 17:00:16.284875    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.319048    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.419352    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.519609    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.619822    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.720037    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.820286    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:16 cmp001 kubelet[8649]: E1116 17:00:16.920474    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.020703    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.103151    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.104019    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp001&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.104411    8649 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp001&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.120907    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.221141    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.321382    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.421667    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.522176    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.622688    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.723097    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.823509    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:17 cmp001 kubelet[8649]: E1116 17:00:17.924159    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:18 cmp001 kubelet[8649]: E1116 17:00:18.024687    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:18 cmp001 kubelet[8649]: E1116 17:00:18.124866    8649 kubelet.go:2266] node "cmp001" not found
Nov 16 17:00:18 cmp001 kubelet[8649]: I1116 17:00:18.130399    8649 kubelet.go:1908] SyncLoop (ADD, "api"): ""
Nov 16 17:00:18 cmp001 kube-proxy[8852]: E1116 17:00:18.131241    8852 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cmp001.15d7b3269ec00206", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cmp001", UID:"cmp001", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kube-proxy.", Source:v1.EventSource{Component:"kube-proxy", Host:"cmp001"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6c289d4e78b006, ext:499325702, loc:(*time.Location)(0xaf74780)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6c289d4e78b006, ext:499325702, loc:(*time.Location)(0xaf74780)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:kube-proxy" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Nov 16 17:00:18 cmp001 kube-proxy[8852]: E1116 17:00:18.157556    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
Nov 16 17:00:18 cmp001 kube-proxy[8852]: E1116 17:00:18.159389    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
Nov 16 17:00:18 cmp001 kubelet[8649]: I1116 17:00:18.188928    8649 reconciler.go:154] Reconciler: start to sync state
Nov 16 17:00:18 cmp001 kubelet[8649]: I1116 17:00:18.209983    8649 kubelet_node_status.go:75] Successfully registered node cmp001
Nov 16 17:00:18 cmp001 kubelet[8649]: I1116 17:00:18.213586    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:00:19 cmp001 kube-proxy[8852]: E1116 17:00:19.159872    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
Nov 16 17:00:19 cmp001 kube-proxy[8852]: E1116 17:00:19.161216    8852 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
Nov 16 17:00:20 cmp001 kube-proxy[8852]: I1116 17:00:20.243062    8852 controller_utils.go:1034] Caches are synced for service config controller
Nov 16 17:00:20 cmp001 kube-proxy[8852]: I1116 17:00:20.243194    8852 proxier.go:645] Not syncing iptables until Services and Endpoints have been received from master
Nov 16 17:00:20 cmp001 kube-proxy[8852]: I1116 17:00:20.243230    8852 controller_utils.go:1034] Caches are synced for endpoints config controller
Nov 16 17:00:20 cmp001 kube-proxy[8852]: I1116 17:00:20.243327    8852 service.go:309] Adding new service port "default/kubernetes:https" at 10.254.0.1:443/TCP
Nov 16 17:00:28 cmp001 kubelet[8649]: I1116 17:00:28.232249    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:00:33 cmp001 salt-minion[4458]: [INFO    ] User sudo_ubuntu Executing command cp.push_dir with jid 20191116170033971440
Nov 16 17:00:34 cmp001 salt-minion[4458]: [INFO    ] Starting a new job with PID 9101
Nov 16 17:00:34 cmp001 salt-minion[4458]: [INFO    ] Returning information for job: 20191116170033971440
Nov 16 17:00:38 cmp001 kubelet[8649]: I1116 17:00:38.240121    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:00:40 cmp001 kube-proxy[8852]: I1116 17:00:40.289423    8852 service.go:309] Adding new service port "kube-system/coredns:dns" at 10.254.0.10:53/UDP
Nov 16 17:00:40 cmp001 kube-proxy[8852]: I1116 17:00:40.289474    8852 service.go:309] Adding new service port "kube-system/coredns:dns-tcp" at 10.254.0.10:53/TCP
Nov 16 17:00:40 cmp001 kubelet[8649]: I1116 17:00:40.332811    8649 kubelet.go:1908] SyncLoop (ADD, "api"): "netchecker-agent-vs5pv_netchecker(9e5a52b0-0892-11ea-a35a-5254009caaa4)"
Nov 16 17:00:40 cmp001 kube-proxy[8852]: I1116 17:00:40.406013    8852 service.go:309] Adding new service port "netchecker/netchecker:" at 10.254.54.193:80/TCP
Nov 16 17:00:40 cmp001 kube-proxy[8852]: I1116 17:00:40.425208    8852 proxier.go:1427] Opened local port "nodePort for netchecker/netchecker:" (:30276/tcp)
Nov 16 17:00:40 cmp001 kubelet[8649]: I1116 17:00:40.429268    8649 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5a52b0-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-vs5pv" (UID: "9e5a52b0-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 cmp001 kubelet[8649]: I1116 17:00:40.529700    8649 reconciler.go:252] operationExecutor.MountVolume started for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5a52b0-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-vs5pv" (UID: "9e5a52b0-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 cmp001 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/9e5a52b0-0892-11ea-a35a-5254009caaa4/volumes/kubernetes.io~secret/default-token-czmvh.
Nov 16 17:00:40 cmp001 kubelet[8649]: I1116 17:00:40.547112    8649 operation_generator.go:571] MountVolume.SetUp succeeded for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5a52b0-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-vs5pv" (UID: "9e5a52b0-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 cmp001 kubelet[8649]: I1116 17:00:40.725785    8649 kuberuntime_manager.go:397] No sandbox for pod "netchecker-agent-vs5pv_netchecker(9e5a52b0-0892-11ea-a35a-5254009caaa4)" can be found. Need to start a new one
Nov 16 17:00:40 cmp001 containerd[5688]: time="2019-11-16T17:00:40.727170247Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:netchecker-agent-vs5pv,Uid:9e5a52b0-0892-11ea-a35a-5254009caaa4,Namespace:netchecker,Attempt:0,}"
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.236921    8649 kubelet.go:1908] SyncLoop (ADD, "api"): "calico-kube-controllers-996f9b774-xqc8g_kube-system(9ee5517e-0892-11ea-a35a-5254009caaa4)"
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.291953    8649 kubelet.go:1908] SyncLoop (ADD, "api"): "coredns-7f8f94c97b-x7l26_kube-system(9eed852c-0892-11ea-a35a-5254009caaa4)"
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.331782    8649 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9ee5517e-0892-11ea-a35a-5254009caaa4-etcd-certs") pod "calico-kube-controllers-996f9b774-xqc8g" (UID: "9ee5517e-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.331830    8649 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-kube-controllers-token-xrt7b" (UniqueName: "kubernetes.io/secret/9ee5517e-0892-11ea-a35a-5254009caaa4-calico-kube-controllers-token-xrt7b") pod "calico-kube-controllers-996f9b774-xqc8g" (UID: "9ee5517e-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.331862    8649 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9eed852c-0892-11ea-a35a-5254009caaa4-config-volume") pod "coredns-7f8f94c97b-x7l26" (UID: "9eed852c-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.331889    8649 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-lkbkg" (UniqueName: "kubernetes.io/secret/9eed852c-0892-11ea-a35a-5254009caaa4-coredns-token-lkbkg") pod "coredns-7f8f94c97b-x7l26" (UID: "9eed852c-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.375705    8649 kubelet.go:1908] SyncLoop (ADD, "api"): "netchecker-server-7876fb46d4-l6hmd_netchecker(9efa4e39-0892-11ea-a35a-5254009caaa4)"
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.432139    8649 reconciler.go:252] operationExecutor.MountVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9ee5517e-0892-11ea-a35a-5254009caaa4-etcd-certs") pod "calico-kube-controllers-996f9b774-xqc8g" (UID: "9ee5517e-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.432221    8649 reconciler.go:252] operationExecutor.MountVolume started for volume "calico-kube-controllers-token-xrt7b" (UniqueName: "kubernetes.io/secret/9ee5517e-0892-11ea-a35a-5254009caaa4-calico-kube-controllers-token-xrt7b") pod "calico-kube-controllers-996f9b774-xqc8g" (UID: "9ee5517e-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.432266    8649 reconciler.go:252] operationExecutor.MountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9eed852c-0892-11ea-a35a-5254009caaa4-config-volume") pod "coredns-7f8f94c97b-x7l26" (UID: "9eed852c-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.432309    8649 reconciler.go:252] operationExecutor.MountVolume started for volume "coredns-token-lkbkg" (UniqueName: "kubernetes.io/secret/9eed852c-0892-11ea-a35a-5254009caaa4-coredns-token-lkbkg") pod "coredns-7f8f94c97b-x7l26" (UID: "9eed852c-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.432356    8649 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9efa4e39-0892-11ea-a35a-5254009caaa4-etcd-certs") pod "netchecker-server-7876fb46d4-l6hmd" (UID: "9efa4e39-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.432393    8649 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "netchecker-token-z4xbf" (UniqueName: "kubernetes.io/secret/9efa4e39-0892-11ea-a35a-5254009caaa4-netchecker-token-z4xbf") pod "netchecker-server-7876fb46d4-l6hmd" (UID: "9efa4e39-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.432506    8649 operation_generator.go:571] MountVolume.SetUp succeeded for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9ee5517e-0892-11ea-a35a-5254009caaa4-etcd-certs") pod "calico-kube-controllers-996f9b774-xqc8g" (UID: "9ee5517e-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.446808    8649 operation_generator.go:571] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9eed852c-0892-11ea-a35a-5254009caaa4-config-volume") pod "coredns-7f8f94c97b-x7l26" (UID: "9eed852c-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/9eed852c-0892-11ea-a35a-5254009caaa4/volumes/kubernetes.io~secret/coredns-token-lkbkg.
Nov 16 17:00:41 cmp001 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/9ee5517e-0892-11ea-a35a-5254009caaa4/volumes/kubernetes.io~secret/calico-kube-controllers-token-xrt7b.
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.464828    8649 operation_generator.go:571] MountVolume.SetUp succeeded for volume "coredns-token-lkbkg" (UniqueName: "kubernetes.io/secret/9eed852c-0892-11ea-a35a-5254009caaa4-coredns-token-lkbkg") pod "coredns-7f8f94c97b-x7l26" (UID: "9eed852c-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.466721    8649 operation_generator.go:571] MountVolume.SetUp succeeded for volume "calico-kube-controllers-token-xrt7b" (UniqueName: "kubernetes.io/secret/9ee5517e-0892-11ea-a35a-5254009caaa4-calico-kube-controllers-token-xrt7b") pod "calico-kube-controllers-996f9b774-xqc8g" (UID: "9ee5517e-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.532803    8649 reconciler.go:252] operationExecutor.MountVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9efa4e39-0892-11ea-a35a-5254009caaa4-etcd-certs") pod "netchecker-server-7876fb46d4-l6hmd" (UID: "9efa4e39-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.532905    8649 reconciler.go:252] operationExecutor.MountVolume started for volume "netchecker-token-z4xbf" (UniqueName: "kubernetes.io/secret/9efa4e39-0892-11ea-a35a-5254009caaa4-netchecker-token-z4xbf") pod "netchecker-server-7876fb46d4-l6hmd" (UID: "9efa4e39-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.533531    8649 operation_generator.go:571] MountVolume.SetUp succeeded for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9efa4e39-0892-11ea-a35a-5254009caaa4-etcd-certs") pod "netchecker-server-7876fb46d4-l6hmd" (UID: "9efa4e39-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/9efa4e39-0892-11ea-a35a-5254009caaa4/volumes/kubernetes.io~secret/netchecker-token-z4xbf.
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.553379    8649 operation_generator.go:571] MountVolume.SetUp succeeded for volume "netchecker-token-z4xbf" (UniqueName: "kubernetes.io/secret/9efa4e39-0892-11ea-a35a-5254009caaa4-netchecker-token-z4xbf") pod "netchecker-server-7876fb46d4-l6hmd" (UID: "9efa4e39-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.567593    8649 kuberuntime_manager.go:397] No sandbox for pod "calico-kube-controllers-996f9b774-xqc8g_kube-system(9ee5517e-0892-11ea-a35a-5254009caaa4)" can be found. Need to start a new one
Nov 16 17:00:41 cmp001 containerd[5688]: time="2019-11-16T17:00:41.568243391Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:calico-kube-controllers-996f9b774-xqc8g,Uid:9ee5517e-0892-11ea-a35a-5254009caaa4,Namespace:kube-system,Attempt:0,}"
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.628743    8649 kuberuntime_manager.go:397] No sandbox for pod "coredns-7f8f94c97b-x7l26_kube-system(9eed852c-0892-11ea-a35a-5254009caaa4)" can be found. Need to start a new one
Nov 16 17:00:41 cmp001 containerd[5688]: time="2019-11-16T17:00:41.629478818Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-7f8f94c97b-x7l26,Uid:9eed852c-0892-11ea-a35a-5254009caaa4,Namespace:kube-system,Attempt:0,}"
Nov 16 17:00:41 cmp001 kubelet[8649]: I1116 17:00:41.699977    8649 kuberuntime_manager.go:397] No sandbox for pod "netchecker-server-7876fb46d4-l6hmd_netchecker(9efa4e39-0892-11ea-a35a-5254009caaa4)" can be found. Need to start a new one
Nov 16 17:00:41 cmp001 containerd[5688]: time="2019-11-16T17:00:41.700992488Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:netchecker-server-7876fb46d4-l6hmd,Uid:9efa4e39-0892-11ea-a35a-5254009caaa4,Namespace:netchecker,Attempt:0,}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.318715376Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.342355275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.343334388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.348208097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.348763857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.349301695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.360129442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.369594375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.370015706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.438446575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.439549066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.441837488Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.445531398Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.448471095Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64@sha256:9e2b9d9d64bab9f4d80790ec6c6fe09cdb5714d43bf23357e0ed0d0ab512fffd,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.451714525Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.453433496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.454406611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64@sha256:9e2b9d9d64bab9f4d80790ec6c6fe09cdb5714d43bf23357e0ed0d0ab512fffd,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.455266881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.459348744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64@sha256:9e2b9d9d64bab9f4d80790ec6c6fe09cdb5714d43bf23357e0ed0d0ab512fffd,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.459881081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.461974094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.464455933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.464839824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.467273122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64@sha256:9e2b9d9d64bab9f4d80790ec6c6fe09cdb5714d43bf23357e0ed0d0ab512fffd,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.469102904Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/62553b080232bba1a1ab147e665861ee818b8bf4f09f0ad60eed1c016d003afb/shim.sock" debug=false pid=9247
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.665966619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-996f9b774-xqc8g,Uid:9ee5517e-0892-11ea-a35a-5254009caaa4,Namespace:kube-system,Attempt:0,} returns sandbox id "62553b080232bba1a1ab147e665861ee818b8bf4f09f0ad60eed1c016d003afb""
Nov 16 17:00:42 cmp001 kubelet[8649]: I1116 17:00:42.667460    8649 provider.go:116] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.668152408Z" level=info msg="PullImage "docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/kube-controllers:v3.3.2""
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.703 [INFO][9306] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"netchecker", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"cmp001", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"netchecker-agent-vs5pv", ContainerID:"76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8"}}
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.772 [INFO][9306] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" Namespace="netchecker" Pod="netchecker-agent-vs5pv" WorkloadEndpoint="cmp001-k8s-netchecker--agent--vs5pv-eth0"
Nov 16 17:00:42 cmp001 containerd[5688]: Calico CNI IPAM request count IPv4=1 IPv6=0
Nov 16 17:00:42 cmp001 containerd[5688]: Calico CNI IPAM handle=calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.858 [INFO][9319] calico-ipam.go 186: Auto assigning IP ContainerID="76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" HandleID="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" Workload="cmp001-k8s-netchecker--agent--vs5pv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc4204d2780), Attrs:map[string]string(nil), Hostname:"cmp001", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}}
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.859 [INFO][9319] ipam.go 70: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'cmp001'
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.861 [INFO][9319] ipam.go 254: Looking up existing affinities for host handle="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.862 [INFO][9319] ipam.go 265: Ran out of existing affine blocks for host handle="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.864 [INFO][9319] ipam.go 324: No more affine blocks, but need to allocate 1 more addresses - allocate another block handle="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.864 [INFO][9319] ipam.go 328: Looking for an unclaimed block handle="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.865 [INFO][9319] ipam_block_reader_writer.go 106: Found free block: 192.168.128.192/26
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.865 [INFO][9319] ipam.go 340: Found unclaimed block host="cmp001" subnet=192.168.128.192/26
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.865 [INFO][9319] ipam_block_reader_writer.go 122: Trying to create affinity in pending state host="cmp001" subnet=192.168.128.192/26
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.866 [INFO][9319] ipam_block_reader_writer.go 152: Successfully created pending affinity for block host="cmp001" subnet=192.168.128.192/26
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.866 [INFO][9319] ipam.go 118: Attempting to load block cidr=192.168.128.192/26 host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.867 [INFO][9319] ipam.go 123: The referenced block doesn't exist, trying to create it cidr=192.168.128.192/26 host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.868 [INFO][9319] ipam.go 130: Wrote affinity as pending cidr=192.168.128.192/26 host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.869 [INFO][9319] ipam.go 139: Attempting to claim the block cidr=192.168.128.192/26 host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.869 [INFO][9319] ipam_block_reader_writer.go 175: Attempting to create a new block host="cmp001" subnet=192.168.128.192/26
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.874 [INFO][9319] ipam_block_reader_writer.go 217: Successfully created block
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.874 [INFO][9319] ipam_block_reader_writer.go 228: Confirming affinity host="cmp001" subnet=192.168.128.192/26
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.875 [INFO][9319] ipam_block_reader_writer.go 243: Successfully confirmed affinity host="cmp001" subnet=192.168.128.192/26
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.875 [INFO][9319] ipam.go 372: Claimed new block &{BlockKey(cidr=192.168.128.192/26) 0xc420122480 496 0s} - assigning 1 addresses host="cmp001" subnet=192.168.128.192/26
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.875 [INFO][9319] ipam.go 677: Attempting to assign 1 addresses from block block=192.168.128.192/26 handle="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.876 [INFO][9319] ipam.go 1110: Creating new handle: calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.877 [INFO][9319] ipam.go 700: Writing block in order to claim IPs block=192.168.128.192/26 handle="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.879 [INFO][9319] ipam.go 710: Successfully claimed IPs: [192.168.128.192] block=192.168.128.192/26 handle="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.879 [INFO][9319] ipam.go 456: Auto-assigned 1 out of 1 IPv4s: [192.168.128.192] handle="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" host="cmp001"
Nov 16 17:00:42 cmp001 containerd[5688]: Calico CNI IPAM assigned addresses IPv4=[192.168.128.192] IPv6=[]
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.879 [INFO][9319] calico-ipam.go 214: IPAM Result ContainerID="76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" HandleID="calico-k8s-network.76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" Workload="cmp001-k8s-netchecker--agent--vs5pv-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc4206e82a0)}
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.880 [INFO][9306] k8s.go 365: Populated endpoint ContainerID="76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" Namespace="netchecker" Pod="netchecker-agent-vs5pv" WorkloadEndpoint="cmp001-k8s-netchecker--agent--vs5pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp001-k8s-netchecker--agent--vs5pv-eth0", GenerateName:"", Namespace:"netchecker", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp001", ContainerID:"", Pod:"netchecker-agent-vs5pv", Endpoint:"eth0", IPNetworks:[]string{"192.168.128.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"", MAC:"", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:42 cmp001 containerd[5688]: Calico CNI using IPs: [192.168.128.192/32]
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.880 [INFO][9306] network.go 75: Setting the host side veth name to cali872d39137cf ContainerID="76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" Namespace="netchecker" Pod="netchecker-agent-vs5pv" WorkloadEndpoint="cmp001-k8s-netchecker--agent--vs5pv-eth0"
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.895 [INFO][9306] network.go 380: Disabling IPv4 forwarding ContainerID="76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" Namespace="netchecker" Pod="netchecker-agent-vs5pv" WorkloadEndpoint="cmp001-k8s-netchecker--agent--vs5pv-eth0"
Nov 16 17:00:42 cmp001 kernel: [  302.353997] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Nov 16 17:00:42 cmp001 kernel: [  302.354609] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 16 17:00:42 cmp001 systemd-udevd[9342]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.927 [INFO][9306] k8s.go 392: Added Mac, interface name, and active container ID to endpoint ContainerID="76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" Namespace="netchecker" Pod="netchecker-agent-vs5pv" WorkloadEndpoint="cmp001-k8s-netchecker--agent--vs5pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp001-k8s-netchecker--agent--vs5pv-eth0", GenerateName:"", Namespace:"netchecker", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp001", ContainerID:"76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8", Pod:"netchecker-agent-vs5pv", Endpoint:"eth0", IPNetworks:[]string{"192.168.128.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"cali872d39137cf", MAC:"06:29:11:ad:89:21", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.937 [INFO][9306] k8s.go 424: Wrote updated endpoint to datastore ContainerID="76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" Namespace="netchecker" Pod="netchecker-agent-vs5pv" WorkloadEndpoint="cmp001-k8s-netchecker--agent--vs5pv-eth0"
Nov 16 17:00:42 cmp001 containerd[5688]: time="2019-11-16T17:00:42.952694782Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8/shim.sock" debug=false pid=9392
Nov 16 17:00:42 cmp001 containerd[5688]: 2019-11-16 17:00:42.963 [INFO][9357] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"cmp001", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"coredns-7f8f94c97b-x7l26", ContainerID:"2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e"}}
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.025 [INFO][9357] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" Namespace="kube-system" Pod="coredns-7f8f94c97b-x7l26" WorkloadEndpoint="cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0"
Nov 16 17:00:43 cmp001 containerd[5688]: time="2019-11-16T17:00:43.099515657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:netchecker-agent-vs5pv,Uid:9e5a52b0-0892-11ea-a35a-5254009caaa4,Namespace:netchecker,Attempt:0,} returns sandbox id "76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8""
Nov 16 17:00:43 cmp001 containerd[5688]: Calico CNI IPAM request count IPv4=1 IPv6=0
Nov 16 17:00:43 cmp001 containerd[5688]: Calico CNI IPAM handle=calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.110 [INFO][9459] calico-ipam.go 186: Auto assigning IP ContainerID="2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" HandleID="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" Workload="cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc420221fe0), Attrs:map[string]string(nil), Hostname:"cmp001", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}}
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.110 [INFO][9459] ipam.go 70: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'cmp001'
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.111 [INFO][9459] ipam.go 254: Looking up existing affinities for host handle="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.112 [INFO][9459] ipam.go 275: Trying affinity for 192.168.128.192/26 handle="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.112 [INFO][9459] ipam.go 118: Attempting to load block cidr=192.168.128.192/26 host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.113 [INFO][9459] ipam.go 195: Affinity is confirmed and block has been loaded cidr=192.168.128.192/26 host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.113 [INFO][9459] ipam.go 677: Attempting to assign 1 addresses from block block=192.168.128.192/26 handle="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.114 [INFO][9459] ipam.go 1110: Creating new handle: calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.115 [INFO][9459] ipam.go 700: Writing block in order to claim IPs block=192.168.128.192/26 handle="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.125 [INFO][9459] ipam.go 710: Successfully claimed IPs: [192.168.128.193] block=192.168.128.192/26 handle="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.125 [INFO][9459] ipam.go 307: Block '192.168.128.192/26' provided addresses: [192.168.128.193] handle="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.126 [INFO][9459] ipam.go 456: Auto-assigned 1 out of 1 IPv4s: [192.168.128.193] handle="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: Calico CNI IPAM assigned addresses IPv4=[192.168.128.193] IPv6=[]
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.126 [INFO][9459] calico-ipam.go 214: IPAM Result ContainerID="2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" HandleID="calico-k8s-network.2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" Workload="cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc42010e7e0)}
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.128 [INFO][9357] k8s.go 365: Populated endpoint ContainerID="2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" Namespace="kube-system" Pod="coredns-7f8f94c97b-x7l26" WorkloadEndpoint="cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp001", ContainerID:"", Pod:"coredns-7f8f94c97b-x7l26", Endpoint:"eth0", IPNetworks:[]string{"192.168.128.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"", MAC:"", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:43 cmp001 containerd[5688]: Calico CNI using IPs: [192.168.128.193/32]
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.128 [INFO][9357] network.go 75: Setting the host side veth name to calif79d416650b ContainerID="2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" Namespace="kube-system" Pod="coredns-7f8f94c97b-x7l26" WorkloadEndpoint="cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0"
Nov 16 17:00:43 cmp001 kernel: [  302.588607] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.129 [INFO][9357] network.go 380: Disabling IPv4 forwarding ContainerID="2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" Namespace="kube-system" Pod="coredns-7f8f94c97b-x7l26" WorkloadEndpoint="cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0"
Nov 16 17:00:43 cmp001 kernel: [  302.588936] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.157 [INFO][9357] k8s.go 392: Added Mac, interface name, and active container ID to endpoint ContainerID="2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" Namespace="kube-system" Pod="coredns-7f8f94c97b-x7l26" WorkloadEndpoint="cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp001", ContainerID:"2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e", Pod:"coredns-7f8f94c97b-x7l26", Endpoint:"eth0", IPNetworks:[]string{"192.168.128.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"calif79d416650b", MAC:"7a:46:b3:ba:20:17", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:43 cmp001 systemd-udevd[9490]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.171 [INFO][9357] k8s.go 424: Wrote updated endpoint to datastore ContainerID="2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" Namespace="kube-system" Pod="coredns-7f8f94c97b-x7l26" WorkloadEndpoint="cmp001-k8s-coredns--7f8f94c97b--x7l26-eth0"
Nov 16 17:00:43 cmp001 kubelet[8649]: I1116 17:00:43.188475    8649 kubelet.go:1953] SyncLoop (PLEG): "netchecker-agent-vs5pv_netchecker(9e5a52b0-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9e5a52b0-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8"}
Nov 16 17:00:43 cmp001 kubelet[8649]: I1116 17:00:43.189450    8649 kubelet.go:1953] SyncLoop (PLEG): "calico-kube-controllers-996f9b774-xqc8g_kube-system(9ee5517e-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9ee5517e-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"62553b080232bba1a1ab147e665861ee818b8bf4f09f0ad60eed1c016d003afb"}
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.200 [INFO][9528] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"netchecker", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"cmp001", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"netchecker-server-7876fb46d4-l6hmd", ContainerID:"98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b"}}
Nov 16 17:00:43 cmp001 containerd[5688]: time="2019-11-16T17:00:43.205437422Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e/shim.sock" debug=false pid=9572
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.283 [INFO][9528] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" Namespace="netchecker" Pod="netchecker-server-7876fb46d4-l6hmd" WorkloadEndpoint="cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0"
Nov 16 17:00:43 cmp001 containerd[5688]: Calico CNI IPAM request count IPv4=1 IPv6=0
Nov 16 17:00:43 cmp001 containerd[5688]: time="2019-11-16T17:00:43.381783311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7f8f94c97b-x7l26,Uid:9eed852c-0892-11ea-a35a-5254009caaa4,Namespace:kube-system,Attempt:0,} returns sandbox id "2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e""
Nov 16 17:00:43 cmp001 containerd[5688]: Calico CNI IPAM handle=calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.382 [INFO][9612] calico-ipam.go 186: Auto assigning IP ContainerID="98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" HandleID="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" Workload="cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc4201f8df0), Attrs:map[string]string(nil), Hostname:"cmp001", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}}
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.382 [INFO][9612] ipam.go 70: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'cmp001'
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.383 [INFO][9612] ipam.go 254: Looking up existing affinities for host handle="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.384 [INFO][9612] ipam.go 275: Trying affinity for 192.168.128.192/26 handle="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.385 [INFO][9612] ipam.go 118: Attempting to load block cidr=192.168.128.192/26 host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.386 [INFO][9612] ipam.go 195: Affinity is confirmed and block has been loaded cidr=192.168.128.192/26 host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.386 [INFO][9612] ipam.go 677: Attempting to assign 1 addresses from block block=192.168.128.192/26 handle="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.387 [INFO][9612] ipam.go 1110: Creating new handle: calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.388 [INFO][9612] ipam.go 700: Writing block in order to claim IPs block=192.168.128.192/26 handle="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.389 [INFO][9612] ipam.go 710: Successfully claimed IPs: [192.168.128.194] block=192.168.128.192/26 handle="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.389 [INFO][9612] ipam.go 307: Block '192.168.128.192/26' provided addresses: [192.168.128.194] handle="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.390 [INFO][9612] ipam.go 456: Auto-assigned 1 out of 1 IPv4s: [192.168.128.194] handle="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" host="cmp001"
Nov 16 17:00:43 cmp001 containerd[5688]: Calico CNI IPAM assigned addresses IPv4=[192.168.128.194] IPv6=[]
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.390 [INFO][9612] calico-ipam.go 214: IPAM Result ContainerID="98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" HandleID="calico-k8s-network.98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" Workload="cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc4204d2300)}
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.393 [INFO][9528] k8s.go 365: Populated endpoint ContainerID="98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" Namespace="netchecker" Pod="netchecker-server-7876fb46d4-l6hmd" WorkloadEndpoint="cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0", GenerateName:"", Namespace:"netchecker", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp001", ContainerID:"", Pod:"netchecker-server-7876fb46d4-l6hmd", Endpoint:"eth0", IPNetworks:[]string{"192.168.128.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"", MAC:"", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:43 cmp001 containerd[5688]: Calico CNI using IPs: [192.168.128.194/32]
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.393 [INFO][9528] network.go 75: Setting the host side veth name to cali6bdbdd14fcd ContainerID="98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" Namespace="netchecker" Pod="netchecker-server-7876fb46d4-l6hmd" WorkloadEndpoint="cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0"
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.395 [INFO][9528] network.go 380: Disabling IPv4 forwarding ContainerID="98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" Namespace="netchecker" Pod="netchecker-server-7876fb46d4-l6hmd" WorkloadEndpoint="cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0"
Nov 16 17:00:43 cmp001 kernel: [  302.853980] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Nov 16 17:00:43 cmp001 kernel: [  302.854232] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 16 17:00:43 cmp001 systemd-udevd[9650]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.427 [INFO][9528] k8s.go 392: Added Mac, interface name, and active container ID to endpoint ContainerID="98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" Namespace="netchecker" Pod="netchecker-server-7876fb46d4-l6hmd" WorkloadEndpoint="cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0", GenerateName:"", Namespace:"netchecker", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp001", ContainerID:"98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b", Pod:"netchecker-server-7876fb46d4-l6hmd", Endpoint:"eth0", IPNetworks:[]string{"192.168.128.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"cali6bdbdd14fcd", MAC:"b2:13:be:94:19:51", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:43 cmp001 containerd[5688]: 2019-11-16 17:00:43.431 [INFO][9528] k8s.go 424: Wrote updated endpoint to datastore ContainerID="98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" Namespace="netchecker" Pod="netchecker-server-7876fb46d4-l6hmd" WorkloadEndpoint="cmp001-k8s-netchecker--server--7876fb46d4--l6hmd-eth0"
Nov 16 17:00:43 cmp001 containerd[5688]: time="2019-11-16T17:00:43.447956701Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b/shim.sock" debug=false pid=9692
Nov 16 17:00:43 cmp001 containerd[5688]: time="2019-11-16T17:00:43.620559248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:netchecker-server-7876fb46d4-l6hmd,Uid:9efa4e39-0892-11ea-a35a-5254009caaa4,Namespace:netchecker,Attempt:0,} returns sandbox id "98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b""
Nov 16 17:00:44 cmp001 kubelet[8649]: I1116 17:00:44.194023    8649 kubelet.go:1953] SyncLoop (PLEG): "coredns-7f8f94c97b-x7l26_kube-system(9eed852c-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9eed852c-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e"}
Nov 16 17:00:44 cmp001 kubelet[8649]: I1116 17:00:44.195752    8649 kubelet.go:1953] SyncLoop (PLEG): "netchecker-server-7876fb46d4-l6hmd_netchecker(9efa4e39-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9efa4e39-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b"}
Nov 16 17:00:44 cmp001 containerd[5688]: time="2019-11-16T17:00:44.560008102Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/kube-controllers:v3.3.2,Labels:map[string]string{},}"
Nov 16 17:00:44 cmp001 containerd[5688]: time="2019-11-16T17:00:44.584166291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:22c16a9aecce95ea9631683dd66ee0329e399e07bb86950239d4a6db1c6d201b,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:44 cmp001 containerd[5688]: time="2019-11-16T17:00:44.585321466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/kube-controllers:v3.3.2,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:45 cmp001 containerd[5688]: time="2019-11-16T17:00:45.919061931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:22c16a9aecce95ea9631683dd66ee0329e399e07bb86950239d4a6db1c6d201b,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:45 cmp001 containerd[5688]: time="2019-11-16T17:00:45.921308636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/kube-controllers:v3.3.2,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:45 cmp001 containerd[5688]: time="2019-11-16T17:00:45.923292805Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/kube-controllers@sha256:59296255e44bcb3ab00dae468a795dd669c41d6d1f16b8b5a3f2a1271e88e810,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:45 cmp001 containerd[5688]: time="2019-11-16T17:00:45.923889288Z" level=info msg="PullImage "docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/kube-controllers:v3.3.2" returns image reference "sha256:22c16a9aecce95ea9631683dd66ee0329e399e07bb86950239d4a6db1c6d201b""
Nov 16 17:00:45 cmp001 containerd[5688]: time="2019-11-16T17:00:45.924741871Z" level=info msg="PullImage "mirantis/k8s-netchecker-agent:stable""
Nov 16 17:00:45 cmp001 containerd[5688]: time="2019-11-16T17:00:45.926974350Z" level=info msg="CreateContainer within sandbox "62553b080232bba1a1ab147e665861ee818b8bf4f09f0ad60eed1c016d003afb" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Nov 16 17:00:46 cmp001 kernel: [  305.488732] audit: type=1400 audit(1573923646.022:18): apparmor="STATUS" operation="profile_load" profile="unconfined" name="cri-containerd.apparmor.d" pid=9788 comm="apparmor_parser"
Nov 16 17:00:46 cmp001 containerd[5688]: time="2019-11-16T17:00:46.036482384Z" level=info msg="CreateContainer within sandbox "62553b080232bba1a1ab147e665861ee818b8bf4f09f0ad60eed1c016d003afb" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id "3f1a9ce6dae5e5b0e191337d49d5d4f0412d8ea56b8aae2580d8d883be3da20c""
Nov 16 17:00:46 cmp001 containerd[5688]: time="2019-11-16T17:00:46.037602971Z" level=info msg="StartContainer for "3f1a9ce6dae5e5b0e191337d49d5d4f0412d8ea56b8aae2580d8d883be3da20c""
Nov 16 17:00:46 cmp001 containerd[5688]: time="2019-11-16T17:00:46.038530445Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/3f1a9ce6dae5e5b0e191337d49d5d4f0412d8ea56b8aae2580d8d883be3da20c/shim.sock" debug=false pid=9792
Nov 16 17:00:46 cmp001 containerd[5688]: time="2019-11-16T17:00:46.300551514Z" level=info msg="StartContainer for "3f1a9ce6dae5e5b0e191337d49d5d4f0412d8ea56b8aae2580d8d883be3da20c" returns successfully"
Nov 16 17:00:46 cmp001 kubelet[8649]: I1116 17:00:46.303280    8649 kubelet.go:1953] SyncLoop (PLEG): "calico-kube-controllers-996f9b774-xqc8g_kube-system(9ee5517e-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9ee5517e-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"3f1a9ce6dae5e5b0e191337d49d5d4f0412d8ea56b8aae2580d8d883be3da20c"}
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.683743646Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{},}"
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.700033596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.700580966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.938273145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.939991682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.941305907Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/mirantis/k8s-netchecker-agent@sha256:4ac49ebef7eaeaa5a8a19f56faa73740bf4861979aa067d0125867a72846720a,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.941651901Z" level=info msg="PullImage "mirantis/k8s-netchecker-agent:stable" returns image reference "sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0""
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.945194839Z" level=info msg="PullImage "docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96""
Nov 16 17:00:47 cmp001 containerd[5688]: time="2019-11-16T17:00:47.946598947Z" level=info msg="CreateContainer within sandbox "76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" for container &ContainerMetadata{Name:netchecker-agent,Attempt:0,}"
Nov 16 17:00:48 cmp001 containerd[5688]: time="2019-11-16T17:00:48.018655261Z" level=info msg="CreateContainer within sandbox "76127db7389c802f6ef4bc7c0818f33891f8a2f814e967fe9e10d7f27021ebe8" for &ContainerMetadata{Name:netchecker-agent,Attempt:0,} returns container id "b2d1b5527ace0409f5af7e33231ad4eba6e122bd87071d4b80733aa1092da5f6""
Nov 16 17:00:48 cmp001 containerd[5688]: time="2019-11-16T17:00:48.019753557Z" level=info msg="StartContainer for "b2d1b5527ace0409f5af7e33231ad4eba6e122bd87071d4b80733aa1092da5f6""
Nov 16 17:00:48 cmp001 containerd[5688]: time="2019-11-16T17:00:48.021155926Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/b2d1b5527ace0409f5af7e33231ad4eba6e122bd87071d4b80733aa1092da5f6/shim.sock" debug=false pid=9872
Nov 16 17:00:48 cmp001 containerd[5688]: time="2019-11-16T17:00:48.240959583Z" level=info msg="StartContainer for "b2d1b5527ace0409f5af7e33231ad4eba6e122bd87071d4b80733aa1092da5f6" returns successfully"
Nov 16 17:00:48 cmp001 kubelet[8649]: I1116 17:00:48.249894    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:00:48 cmp001 kubelet[8649]: I1116 17:00:48.308085    8649 kubelet.go:1953] SyncLoop (PLEG): "netchecker-agent-vs5pv_netchecker(9e5a52b0-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9e5a52b0-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"b2d1b5527ace0409f5af7e33231ad4eba6e122bd87071d4b80733aa1092da5f6"}
Nov 16 17:00:48 cmp001 kube-proxy[8852]: I1116 17:00:48.482909    8852 proxier.go:659] Stale udp service kube-system/coredns:dns -> 10.254.0.10
Nov 16 17:00:48 cmp001 kernel: [  307.976942] ctnetlink v0.93: registering with nfnetlink.
Nov 16 17:00:50 cmp001 containerd[5688]: time="2019-11-16T17:00:50.387186754Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96,Labels:map[string]string{},}"
Nov 16 17:00:50 cmp001 containerd[5688]: time="2019-11-16T17:00:50.398379856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31b943183ba2eec561c44c1fdda2a39ab03c0cb72f90a24c1bea4adf0a44bfbc,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:50 cmp001 containerd[5688]: time="2019-11-16T17:00:50.398944713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:51 cmp001 containerd[5688]: time="2019-11-16T17:00:51.908644477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:31b943183ba2eec561c44c1fdda2a39ab03c0cb72f90a24c1bea4adf0a44bfbc,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:51 cmp001 containerd[5688]: time="2019-11-16T17:00:51.910577665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:51 cmp001 containerd[5688]: time="2019-11-16T17:00:51.912188935Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns@sha256:30f28dcd8a8c9a97c206bba187bd9b1e8fbc1cf52ce38f7e937c85a191709376,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:51 cmp001 containerd[5688]: time="2019-11-16T17:00:51.912429603Z" level=info msg="PullImage "docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96" returns image reference "sha256:31b943183ba2eec561c44c1fdda2a39ab03c0cb72f90a24c1bea4adf0a44bfbc""
Nov 16 17:00:51 cmp001 containerd[5688]: time="2019-11-16T17:00:51.913052284Z" level=info msg="PullImage "mirantis/k8s-netchecker-server:stable""
Nov 16 17:00:51 cmp001 containerd[5688]: time="2019-11-16T17:00:51.916318290Z" level=info msg="CreateContainer within sandbox "2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 16 17:00:52 cmp001 containerd[5688]: time="2019-11-16T17:00:52.232179838Z" level=info msg="CreateContainer within sandbox "2ddbac09df333ebf08d044e5e95ff3b8855701a77e490c8bc72a2e48d26bfa5e" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id "591c1527dd80462950a54efdf75ab323ab6f4a7dad92a057d50a5ad417ae2f07""
Nov 16 17:00:52 cmp001 containerd[5688]: time="2019-11-16T17:00:52.233124420Z" level=info msg="StartContainer for "591c1527dd80462950a54efdf75ab323ab6f4a7dad92a057d50a5ad417ae2f07""
Nov 16 17:00:52 cmp001 containerd[5688]: time="2019-11-16T17:00:52.234424021Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/591c1527dd80462950a54efdf75ab323ab6f4a7dad92a057d50a5ad417ae2f07/shim.sock" debug=false pid=9983
Nov 16 17:00:52 cmp001 containerd[5688]: time="2019-11-16T17:00:52.429970120Z" level=info msg="StartContainer for "591c1527dd80462950a54efdf75ab323ab6f4a7dad92a057d50a5ad417ae2f07" returns successfully"
Nov 16 17:00:52 cmp001 kubelet[8649]: I1116 17:00:52.432624    8649 kubelet.go:1953] SyncLoop (PLEG): "coredns-7f8f94c97b-x7l26_kube-system(9eed852c-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9eed852c-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"591c1527dd80462950a54efdf75ab323ab6f4a7dad92a057d50a5ad417ae2f07"}
Nov 16 17:00:53 cmp001 containerd[5688]: time="2019-11-16T17:00:53.332746375Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/mirantis/k8s-netchecker-server:stable,Labels:map[string]string{},}"
Nov 16 17:00:53 cmp001 containerd[5688]: time="2019-11-16T17:00:53.338179191Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:22d27d4fe076561ad51f5e7a0228ab7b1035773a492c95fd4a246b40ba4f9b58,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:53 cmp001 containerd[5688]: time="2019-11-16T17:00:53.339384735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/mirantis/k8s-netchecker-server:stable,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.624785256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:22d27d4fe076561ad51f5e7a0228ab7b1035773a492c95fd4a246b40ba4f9b58,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.626323444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/mirantis/k8s-netchecker-server:stable,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.627821201Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/mirantis/k8s-netchecker-server@sha256:87064b3cb72a8fb419eb714ae892ebd5929649e84499c08111e46e466acc8407,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.628280125Z" level=info msg="PullImage "mirantis/k8s-netchecker-server:stable" returns image reference "sha256:22d27d4fe076561ad51f5e7a0228ab7b1035773a492c95fd4a246b40ba4f9b58""
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.629937412Z" level=info msg="CreateContainer within sandbox "98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" for container &ContainerMetadata{Name:netchecker-server,Attempt:0,}"
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.709707578Z" level=info msg="CreateContainer within sandbox "98ffb62030f33f3ddf9bf356ef4501e2f9788ee263c6453b726efd4e91f61c3b" for &ContainerMetadata{Name:netchecker-server,Attempt:0,} returns container id "c2395a0f31a23d19e2487b9f5ef38e330ba3da9bcc4231aac777d58006820423""
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.710125528Z" level=info msg="StartContainer for "c2395a0f31a23d19e2487b9f5ef38e330ba3da9bcc4231aac777d58006820423""
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.710984827Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/c2395a0f31a23d19e2487b9f5ef38e330ba3da9bcc4231aac777d58006820423/shim.sock" debug=false pid=10081
Nov 16 17:00:56 cmp001 containerd[5688]: time="2019-11-16T17:00:56.881210402Z" level=info msg="StartContainer for "c2395a0f31a23d19e2487b9f5ef38e330ba3da9bcc4231aac777d58006820423" returns successfully"
Nov 16 17:00:57 cmp001 kubelet[8649]: I1116 17:00:57.443734    8649 kubelet.go:1953] SyncLoop (PLEG): "netchecker-server-7876fb46d4-l6hmd_netchecker(9efa4e39-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9efa4e39-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"c2395a0f31a23d19e2487b9f5ef38e330ba3da9bcc4231aac777d58006820423"}
Nov 16 17:00:58 cmp001 kubelet[8649]: I1116 17:00:58.258578    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:01:08 cmp001 kubelet[8649]: I1116 17:01:08.268736    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:01:18 cmp001 kubelet[8649]: I1116 17:01:18.279717    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:01:28 cmp001 kubelet[8649]: I1116 17:01:28.294787    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:01:38 cmp001 kubelet[8649]: I1116 17:01:38.309573    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:01:48 cmp001 kubelet[8649]: I1116 17:01:48.324637    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:01:58 cmp001 kubelet[8649]: I1116 17:01:58.338743    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:02:08 cmp001 kubelet[8649]: I1116 17:02:08.360143    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:02:18 cmp001 kubelet[8649]: I1116 17:02:18.378250    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:02:28 cmp001 kubelet[8649]: I1116 17:02:28.392344    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:02:38 cmp001 kubelet[8649]: I1116 17:02:38.403676    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:02:48 cmp001 kubelet[8649]: I1116 17:02:48.413523    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:02:58 cmp001 kubelet[8649]: I1116 17:02:58.425765    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:03:08 cmp001 kubelet[8649]: I1116 17:03:08.440773    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:03:18 cmp001 kubelet[8649]: I1116 17:03:18.450985    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:03:28 cmp001 kubelet[8649]: I1116 17:03:28.461822    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:03:38 cmp001 kubelet[8649]: I1116 17:03:38.472236    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:03:48 cmp001 kubelet[8649]: I1116 17:03:48.486468    8649 setters.go:72] Using node IP: "172.16.10.55"
Nov 16 17:03:48 cmp001 salt-minion[4458]: [INFO    ] User sudo_ubuntu Executing command cp.push_dir with jid 20191116170348759627
Nov 16 17:03:48 cmp001 salt-minion[4458]: [INFO    ] Starting a new job with PID 10509
