Nov 16 16:51:42 cmp002 systemd-modules-load[424]: Inserted module 'iscsi_tcp'
Nov 16 16:51:42 cmp002 systemd-modules-load[424]: Inserted module 'ib_iser'
Nov 16 16:51:42 cmp002 systemd[1]: Started Set the console keyboard layout.
Nov 16 16:51:42 cmp002 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 16 16:51:42 cmp002 systemd[1]: Started udev Coldplug all Devices.
Nov 16 16:51:42 cmp002 systemd[1]: Started Create Static Device Nodes in /dev.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Linux version 4.15.0-70-generic (buildd@lgw01-amd64-055) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 (Ubuntu 4.15.0-70.79-generic 4.15.18)
Nov 16 16:51:42 cmp002 systemd[1]: Starting udev Kernel Device Manager...
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:51:42 cmp002 kernel: [    0.000000] KERNEL supported cpus:
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   Intel GenuineIntel
Nov 16 16:51:42 cmp002 systemd[1]: Started Apply Kernel Variables.
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   AMD AuthenticAMD
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   Centaur CentaurHauls
Nov 16 16:51:42 cmp002 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 16 16:51:42 cmp002 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 16 16:51:42 cmp002 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 16 16:51:42 cmp002 kernel: [    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 16 16:51:42 cmp002 kernel: [    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Local File Systems (Pre).
Nov 16 16:51:42 cmp002 kernel: [    0.000000] e820: BIOS-provided physical RAM map:
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 16 16:51:42 cmp002 systemd[1]: Started Flush Journal to Persistent Storage.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdefff] usable
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x00000000bffdf000-0x00000000bfffffff] reserved
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 16 16:51:42 cmp002 systemd[1]: Started udev Kernel Device Manager.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000033fffffff] usable
Nov 16 16:51:42 cmp002 kernel: [    0.000000] NX (Execute Disable) protection: active
Nov 16 16:51:42 cmp002 kernel: [    0.000000] SMBIOS 2.8 present.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Hypervisor detected: KVM
Nov 16 16:51:42 cmp002 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 16 16:51:42 cmp002 kernel: [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 16 16:51:42 cmp002 kernel: [    0.000000] e820: last_pfn = 0x340000 max_arch_pfn = 0x400000000
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Local Encrypted Volumes.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] MTRR default type: write-back
Nov 16 16:51:42 cmp002 kernel: [    0.000000] MTRR fixed ranges enabled:
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   00000-9FFFF write-back
Nov 16 16:51:42 cmp002 systemd-udevd[463]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   A0000-BFFFF uncachable
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   C0000-FFFFF write-protect
Nov 16 16:51:42 cmp002 kernel: [    0.000000] MTRR variable ranges enabled:
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   0 base 00C0000000 mask FFC0000000 uncachable
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   1 disabled
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   2 disabled
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   3 disabled
Nov 16 16:51:42 cmp002 systemd-udevd[459]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   4 disabled
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   5 disabled
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   6 disabled
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   7 disabled
Nov 16 16:51:42 cmp002 kernel: [    0.000000] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 16 16:51:42 cmp002 kernel: [    0.000000] e820: last_pfn = 0xbffdf max_arch_pfn = 0x400000000
Nov 16 16:51:42 cmp002 systemd-udevd[470]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] found SMP MP-table at [mem 0x000f6590-0x000f659f]
Nov 16 16:51:42 cmp002 systemd-udevd[472]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:42 cmp002 systemd[1]: Found device /dev/ttyS0.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Scanning 1 areas for low memory corruption
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Using GB pages for direct mapping
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BRK [0x20541000, 0x20541fff] PGTABLE
Nov 16 16:51:42 cmp002 systemd-udevd[474]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BRK [0x20542000, 0x20542fff] PGTABLE
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BRK [0x20543000, 0x20543fff] PGTABLE
Nov 16 16:51:42 cmp002 kernel: [    0.000000] BRK [0x20544000, 0x20544fff] PGTABLE
Nov 16 16:51:42 cmp002 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] RAMDISK: [mem 0x35a87000-0x36d3afff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: Early table checksum verification disabled
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: RSDP 0x00000000000F6540 000014 (v00 BOCHS )
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: RSDT 0x00000000BFFE14B2 000030 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
Nov 16 16:51:42 cmp002 systemd[1]: Found device /dev/disk/by-label/UEFI.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: FACP 0x00000000BFFE08D4 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: DSDT 0x00000000BFFDFD00 000BD4 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
Nov 16 16:51:42 cmp002 systemd[1]: Mounting /boot/efi...
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: FACS 0x00000000BFFDFCC0 000040
Nov 16 16:51:42 cmp002 systemd[1]: Mounted /boot/efi.
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Local File Systems.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: SSDT 0x00000000BFFE0948 000ACA (v01 BOCHS  BXPCSSDT 00000001 BXPC 00000001)
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: APIC 0x00000000BFFE1412 0000A0 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:51:42 cmp002 systemd[1]: Starting Set console font and keymap...
Nov 16 16:51:42 cmp002 systemd[1]: Starting Commit a transient machine-id on disk...
Nov 16 16:51:42 cmp002 kernel: [    0.000000] No NUMA configuration found
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Faking a node at [mem 0x0000000000000000-0x000000033fffffff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] NODE_DATA(0) allocated [mem 0x33ffd3000-0x33fffdfff]
Nov 16 16:51:42 cmp002 systemd[1]: Starting Create Volatile Files and Directories...
Nov 16 16:51:42 cmp002 kernel: [    0.000000] kvm-clock: cpu 0, msr 3:3ff52001, primary cpu clock
Nov 16 16:51:42 cmp002 kernel: [    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 16 16:51:42 cmp002 kernel: [    0.000000] kvm-clock: using sched offset of 11103848385 cycles
Nov 16 16:51:42 cmp002 kernel: [    0.000000] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Zone ranges:
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 16 16:51:42 cmp002 systemd[1]: Starting AppArmor initialization...
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 16 16:51:42 cmp002 systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
Nov 16 16:51:42 cmp002 systemd[1]: Starting ebtables ruleset management...
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   Normal   [mem 0x0000000100000000-0x000000033fffffff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   Device   empty
Nov 16 16:51:42 cmp002 systemd[1]: Started Set console font and keymap.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Movable zone start for each node
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Early memory node ranges
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   node   0: [mem 0x0000000000100000-0x00000000bffdefff]
Nov 16 16:51:42 cmp002 systemd[1]: Started Create Volatile Files and Directories.
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   node   0: [mem 0x0000000100000000-0x000000033fffffff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Reserved but unavailable: 98 pages
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000033fffffff]
Nov 16 16:51:42 cmp002 systemd[1]: Starting Network Time Synchronization...
Nov 16 16:51:42 cmp002 kernel: [    0.000000] On node 0 totalpages: 3145597
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   DMA zone: 64 pages used for memmap
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   DMA zone: 21 pages reserved
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   DMA zone: 3998 pages, LIFO batch:0
Nov 16 16:51:42 cmp002 systemd[1]: Starting Update UTMP about System Boot/Shutdown...
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   DMA32 zone: 12224 pages used for memmap
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   DMA32 zone: 782303 pages, LIFO batch:31
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   Normal zone: 36864 pages used for memmap
Nov 16 16:51:42 cmp002 kernel: [    0.000000]   Normal zone: 2359296 pages, LIFO batch:31
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: PM-Timer IO Port: 0x608
Nov 16 16:51:42 cmp002 systemd[1]: Started Tell Plymouth To Write Out Runtime Data.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 16 16:51:42 cmp002 kernel: [    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 16 16:51:42 cmp002 systemd[1]: Started Update UTMP about System Boot/Shutdown.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 16 16:51:42 cmp002 systemd[1]: Started ebtables ruleset management.
Nov 16 16:51:42 cmp002 apparmor[595]:  * Starting AppArmor profiles
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 16 16:51:42 cmp002 apparmor[595]: Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: IRQ0 used by override.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: IRQ5 used by override.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: IRQ9 used by override.
Nov 16 16:51:42 cmp002 systemd[1]: Started Commit a transient machine-id on disk.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: IRQ10 used by override.
Nov 16 16:51:42 cmp002 systemd[1]: Started Network Time Synchronization.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ACPI: IRQ11 used by override.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Using ACPI (MADT) for SMP configuration information
Nov 16 16:51:42 cmp002 systemd[1]: Reached target System Time Synchronized.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] smpboot: Allowing 6 CPUs, 0 hotplug CPUs
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 16 16:51:42 cmp002 apparmor[595]:    ...done.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xbffdf000-0xbfffffff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 16 16:51:42 cmp002 systemd[1]: Started AppArmor initialization.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 16 16:51:42 cmp002 kernel: [    0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Booting paravirtualized kernel on KVM
Nov 16 16:51:42 cmp002 systemd[1]: Starting Initial cloud-init job (pre-networking)...
Nov 16 16:51:42 cmp002 kernel: [    0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
Nov 16 16:51:42 cmp002 kernel: [    0.000000] random: get_random_bytes called from start_kernel+0x99/0x4fd with crng_init=0
Nov 16 16:51:42 cmp002 kernel: [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:6 nr_cpu_ids:6 nr_node_ids:1
Nov 16 16:51:42 cmp002 cloud-init[727]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init-local' at Sat, 16 Nov 2019 16:51:34 +0000. Up 8.01 seconds.
Nov 16 16:51:42 cmp002 kernel: [    0.000000] percpu: Embedded 45 pages/cpu s147456 r8192 d28672 u262144
Nov 16 16:51:42 cmp002 kernel: [    0.000000] pcpu-alloc: s147456 r8192 d28672 u262144 alloc=1*2097152
Nov 16 16:51:42 cmp002 kernel: [    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 - - 
Nov 16 16:51:42 cmp002 kernel: [    0.000000] KVM setup async PF for cpu 0
Nov 16 16:51:42 cmp002 systemd[1]: Started Initial cloud-init job (pre-networking).
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Network (Pre).
Nov 16 16:51:42 cmp002 systemd[1]: Starting Raise network interfaces...
Nov 16 16:51:42 cmp002 dhclient[819]: Internet Systems Consortium DHCP Client 4.3.5
Nov 16 16:51:42 cmp002 ifup[792]: Internet Systems Consortium DHCP Client 4.3.5
Nov 16 16:51:42 cmp002 dhclient[819]: Copyright 2004-2016 Internet Systems Consortium.
Nov 16 16:51:42 cmp002 ifup[792]: Copyright 2004-2016 Internet Systems Consortium.
Nov 16 16:51:42 cmp002 dhclient[819]: All rights reserved.
Nov 16 16:51:42 cmp002 ifup[792]: All rights reserved.
Nov 16 16:51:42 cmp002 dhclient[819]: For info, please visit https://www.isc.org/software/dhcp/
Nov 16 16:51:42 cmp002 ifup[792]: For info, please visit https://www.isc.org/software/dhcp/
Nov 16 16:51:42 cmp002 dhclient[819]: 
Nov 16 16:51:42 cmp002 kernel: [    0.000000] kvm-stealtime: cpu 0, msr 33fc23040
Nov 16 16:51:42 cmp002 dhclient[819]: Listening on LPF/ens3/52:54:00:c7:26:24
Nov 16 16:51:42 cmp002 kernel: [    0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 3096424
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Policy zone: Normal
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:51:42 cmp002 ifup[792]: Listening on LPF/ens3/52:54:00:c7:26:24
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Calgary: detecting Calgary via BIOS EBDA area
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
Nov 16 16:51:42 cmp002 dhclient[819]: Sending on   LPF/ens3/52:54:00:c7:26:24
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Memory: 12270800K/12582388K available (12300K kernel code, 2481K rwdata, 4264K rodata, 2432K init, 2388K bss, 311588K reserved, 0K cma-reserved)
Nov 16 16:51:42 cmp002 kernel: [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
Nov 16 16:51:42 cmp002 kernel: [    0.000000] Kernel/User page tables isolation: enabled
Nov 16 16:51:42 cmp002 kernel: [    0.000000] ftrace: allocating 39315 entries in 154 pages
Nov 16 16:51:42 cmp002 kernel: [    0.004000] Hierarchical RCU implementation.
Nov 16 16:51:42 cmp002 ifup[792]: Sending on   LPF/ens3/52:54:00:c7:26:24
Nov 16 16:51:42 cmp002 kernel: [    0.004000] 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=6.
Nov 16 16:51:42 cmp002 dhclient[819]: Sending on   Socket/fallback
Nov 16 16:51:42 cmp002 kernel: [    0.004000] 	Tasks RCU enabled.
Nov 16 16:51:42 cmp002 kernel: [    0.004000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
Nov 16 16:51:42 cmp002 kernel: [    0.004000] NR_IRQS: 524544, nr_irqs: 472, preallocated irqs: 16
Nov 16 16:51:42 cmp002 kernel: [    0.004000] Console: colour VGA+ 80x25
Nov 16 16:51:42 cmp002 kernel: [    0.004000] console [tty1] enabled
Nov 16 16:51:42 cmp002 kernel: [    0.004000] console [ttyS0] enabled
Nov 16 16:51:42 cmp002 kernel: [    0.004000] ACPI: Core revision 20170831
Nov 16 16:51:42 cmp002 ifup[792]: Sending on   Socket/fallback
Nov 16 16:51:42 cmp002 kernel: [    0.004000] ACPI: 2 ACPI AML tables successfully acquired and loaded
Nov 16 16:51:42 cmp002 kernel: [    0.004006] APIC: Switch to symmetric I/O mode setup
Nov 16 16:51:42 cmp002 dhclient[819]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 3 (xid=0xf0c38302)
Nov 16 16:51:42 cmp002 kernel: [    0.005541] x2apic enabled
Nov 16 16:51:42 cmp002 kernel: [    0.006631] Switched APIC routing to physical x2apic.
Nov 16 16:51:42 cmp002 kernel: [    0.008000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 16 16:51:42 cmp002 kernel: [    0.008000] tsc: Detected 2799.994 MHz processor
Nov 16 16:51:42 cmp002 ifup[792]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 3 (xid=0xf0c38302)
Nov 16 16:51:42 cmp002 dhclient[819]: DHCPREQUEST of 192.168.11.236 on ens3 to 255.255.255.255 port 67 (xid=0x283c3f0)
Nov 16 16:51:42 cmp002 kernel: [    0.008000] Calibrating delay loop (skipped) preset value.. 5599.98 BogoMIPS (lpj=11199976)
Nov 16 16:51:42 cmp002 kernel: [    0.008002] pid_max: default: 32768 minimum: 301
Nov 16 16:51:42 cmp002 ifup[792]: DHCPREQUEST of 192.168.11.236 on ens3 to 255.255.255.255 port 67 (xid=0x283c3f0)
Nov 16 16:51:42 cmp002 kernel: [    0.009162] Security Framework initialized
Nov 16 16:51:42 cmp002 kernel: [    0.012002] Yama: becoming mindful.
Nov 16 16:51:42 cmp002 kernel: [    0.012929] AppArmor: AppArmor initialized
Nov 16 16:51:42 cmp002 kernel: [    0.020170] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Nov 16 16:51:42 cmp002 dhclient[819]: DHCPOFFER of 192.168.11.236 from 192.168.11.3
Nov 16 16:51:42 cmp002 ifup[792]: DHCPOFFER of 192.168.11.236 from 192.168.11.3
Nov 16 16:51:42 cmp002 kernel: [    0.025776] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Nov 16 16:51:42 cmp002 dhclient[819]: DHCPACK of 192.168.11.236 from 192.168.11.3
Nov 16 16:51:42 cmp002 kernel: [    0.027546] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:51:42 cmp002 kernel: [    0.028653] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:51:42 cmp002 kernel: [    0.030740] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 16 16:51:42 cmp002 ifup[792]: DHCPACK of 192.168.11.236 from 192.168.11.3
Nov 16 16:51:42 cmp002 kernel: [    0.032002] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 16 16:51:42 cmp002 ifup[792]: Failed to try-reload-or-restart systemd-resolved.service: Unit systemd-resolved.service is masked.
Nov 16 16:51:42 cmp002 kernel: [    0.033403] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 16 16:51:42 cmp002 kernel: [    0.036003] Spectre V2 : Mitigation: Full generic retpoline
Nov 16 16:51:42 cmp002 kernel: [    0.037333] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 16 16:51:42 cmp002 dhclient[819]: bound to 192.168.11.236 -- renewal in 1530 seconds.
Nov 16 16:51:42 cmp002 kernel: [    0.039266] Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 16 16:51:42 cmp002 kernel: [    0.040011] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 16 16:51:42 cmp002 kernel: [    0.041994] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 16 16:51:42 cmp002 kernel: [    0.044037] MDS: Mitigation: Clear CPU buffers
Nov 16 16:51:42 cmp002 kernel: [    0.045331] Freeing SMP alternatives memory: 36K
Nov 16 16:51:42 cmp002 kernel: [    0.050094] TSC deadline timer enabled
Nov 16 16:51:42 cmp002 ifup[792]: bound to 192.168.11.236 -- renewal in 1530 seconds.
Nov 16 16:51:42 cmp002 kernel: [    0.050097] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (family: 0x6, model: 0x3e, stepping: 0x4)
Nov 16 16:51:42 cmp002 systemd[1]: Started Raise network interfaces.
Nov 16 16:51:42 cmp002 kernel: [    0.052000] Performance Events: IvyBridge events, Intel PMU driver.
Nov 16 16:51:42 cmp002 kernel: [    0.052000] ... version:                2
Nov 16 16:51:42 cmp002 kernel: [    0.052005] ... bit width:              48
Nov 16 16:51:42 cmp002 kernel: [    0.053037] ... generic registers:      4
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Network.
Nov 16 16:51:42 cmp002 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 16 16:51:42 cmp002 kernel: [    0.054033] ... value mask:             0000ffffffffffff
Nov 16 16:51:42 cmp002 kernel: [    0.055301] ... max period:             000000007fffffff
Nov 16 16:51:42 cmp002 kernel: [    0.056004] ... fixed-purpose events:   3
Nov 16 16:51:42 cmp002 cloud-init[868]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init' at Sat, 16 Nov 2019 16:51:39 +0000. Up 12.39 seconds.
Nov 16 16:51:42 cmp002 kernel: [    0.057010] ... event mask:             000000070000000f
Nov 16 16:51:42 cmp002 kernel: [    0.058319] Hierarchical SRCU implementation.
Nov 16 16:51:42 cmp002 kernel: [    0.060126] smp: Bringing up secondary CPUs ...
Nov 16 16:51:42 cmp002 kernel: [    0.061354] x86: Booting SMP configuration:
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:42 cmp002 kernel: [    0.062402] .... node  #0, CPUs:      #1
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: | Device |   Up  |          Address           |      Mask     | Scope  |     Hw-Address    |
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:42 cmp002 kernel: [    0.004000] kvm-clock: cpu 1, msr 3:3ff52041, secondary cpu clock
Nov 16 16:51:42 cmp002 kernel: [    0.068053] KVM setup async PF for cpu 1
Nov 16 16:51:42 cmp002 kernel: [    0.069034] kvm-stealtime: cpu 1, msr 33fc63040
Nov 16 16:51:42 cmp002 kernel: [    0.070162]  #2
Nov 16 16:51:42 cmp002 kernel: [    0.004000] kvm-clock: cpu 2, msr 3:3ff52081, secondary cpu clock
Nov 16 16:51:42 cmp002 kernel: [    0.076030] KVM setup async PF for cpu 2
Nov 16 16:51:42 cmp002 kernel: [    0.077366] kvm-stealtime: cpu 2, msr 33fca3040
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |  ens3  |  True |       192.168.11.236       | 255.255.255.0 | global | 52:54:00:c7:26:24 |
Nov 16 16:51:42 cmp002 kernel: [    0.078957]  #3
Nov 16 16:51:42 cmp002 kernel: [    0.004000] kvm-clock: cpu 3, msr 3:3ff520c1, secondary cpu clock
Nov 16 16:51:42 cmp002 kernel: [    0.084025] KVM setup async PF for cpu 3
Nov 16 16:51:42 cmp002 kernel: [    0.085014] kvm-stealtime: cpu 3, msr 33fce3040
Nov 16 16:51:42 cmp002 kernel: [    0.086152]  #4
Nov 16 16:51:42 cmp002 kernel: [    0.004000] kvm-clock: cpu 4, msr 3:3ff52101, secondary cpu clock
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |  ens3  |  True | fe80::5054:ff:fec7:2624/64 |       .       |  link  | 52:54:00:c7:26:24 |
Nov 16 16:51:42 cmp002 kernel: [    0.088022] KVM setup async PF for cpu 4
Nov 16 16:51:42 cmp002 kernel: [    0.088996] kvm-stealtime: cpu 4, msr 33fd23040
Nov 16 16:51:42 cmp002 kernel: [    0.090121]  #5
Nov 16 16:51:42 cmp002 kernel: [    0.004000] kvm-clock: cpu 5, msr 3:3ff52141, secondary cpu clock
Nov 16 16:51:42 cmp002 kernel: [    0.092021] KVM setup async PF for cpu 5
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |  ens4  | False |             .              |       .       |   .    | 52:54:00:3d:c4:b4 |
Nov 16 16:51:42 cmp002 kernel: [    0.092998] kvm-stealtime: cpu 5, msr 33fd63040
Nov 16 16:51:42 cmp002 kernel: [    0.094120] smp: Brought up 1 node, 6 CPUs
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |  ens5  | False |             .              |       .       |   .    | 52:54:00:b4:da:df |
Nov 16 16:51:42 cmp002 kernel: [    0.094120] smpboot: Max logical packages: 6
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |  ens6  | False |             .              |       .       |   .    | 52:54:00:13:34:fc |
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |   lo   |  True |         127.0.0.1          |   255.0.0.0   |  host  |         .         |
Nov 16 16:51:42 cmp002 kernel: [    0.096007] smpboot: Total of 6 processors activated (33599.92 BogoMIPS)
Nov 16 16:51:42 cmp002 kernel: [    0.098396] devtmpfs: initialized
Nov 16 16:51:42 cmp002 kernel: [    0.098396] x86/mm: Memory block size: 128MB
Nov 16 16:51:42 cmp002 kernel: [    0.101376] evm: security.selinux
Nov 16 16:51:42 cmp002 kernel: [    0.102253] evm: security.SMACK64
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |   lo   |  True |          ::1/128           |       .       |  host  |         .         |
Nov 16 16:51:42 cmp002 kernel: [    0.103117] evm: security.SMACK64EXEC
Nov 16 16:51:42 cmp002 kernel: [    0.104005] evm: security.SMACK64TRANSMUTE
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +--------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:51:42 cmp002 kernel: [    0.105027] evm: security.SMACK64MMAP
Nov 16 16:51:42 cmp002 kernel: [    0.105960] evm: security.apparmor
Nov 16 16:51:42 cmp002 kernel: [    0.106837] evm: security.ima
Nov 16 16:51:42 cmp002 kernel: [    0.107618] evm: security.capability
Nov 16 16:51:42 cmp002 kernel: [    0.108125] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
Nov 16 16:51:42 cmp002 kernel: [    0.110322] futex hash table entries: 2048 (order: 5, 131072 bytes)
Nov 16 16:51:42 cmp002 kernel: [    0.112087] pinctrl core: initialized pinctrl subsystem
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:42 cmp002 kernel: [    0.113546] RTC time: 16:51:26, date: 11/16/19
Nov 16 16:51:42 cmp002 kernel: [    0.115496] NET: Registered protocol family 16
Nov 16 16:51:42 cmp002 kernel: [    0.116082] audit: initializing netlink subsys (disabled)
Nov 16 16:51:42 cmp002 kernel: [    0.117842] audit: type=2000 audit(1573923086.970:1): state=initialized audit_enabled=0 res=1
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: | Route | Destination  |   Gateway    |    Genmask    | Interface | Flags |
Nov 16 16:51:42 cmp002 kernel: [    0.120026] cpuidle: using governor ladder
Nov 16 16:51:42 cmp002 kernel: [    0.121171] cpuidle: using governor menu
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:42 cmp002 kernel: [    0.124233] ACPI: bus type PCI registered
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |   0   |   0.0.0.0    | 192.168.11.3 |    0.0.0.0    |    ens3   |   UG  |
Nov 16 16:51:42 cmp002 kernel: [    0.125329] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 16 16:51:42 cmp002 kernel: [    0.127113] PCI: Using configuration type 1 for base access
Nov 16 16:51:42 cmp002 kernel: [    0.128043] core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off
Nov 16 16:51:42 cmp002 kernel: [    0.131062] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |   1   | 192.168.11.0 |   0.0.0.0    | 255.255.255.0 |    ens3   |   U   |
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 16 16:51:42 cmp002 kernel: [    0.132008] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 16 16:51:42 cmp002 kernel: [    0.133641] ACPI: Added _OSI(Module Device)
Nov 16 16:51:42 cmp002 kernel: [    0.133641] ACPI: Added _OSI(Processor Device)
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:42 cmp002 kernel: [    0.136006] ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 16 16:51:42 cmp002 kernel: [    0.137178] ACPI: Added _OSI(Processor Aggregator Device)
Nov 16 16:51:42 cmp002 kernel: [    0.138483] ACPI: Added _OSI(Linux-Dell-Video)
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 16 16:51:42 cmp002 kernel: [    0.139578] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 16 16:51:42 cmp002 kernel: [    0.140011] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 16 16:51:42 cmp002 kernel: [    0.142826] ACPI: Interpreter enabled
Nov 16 16:51:42 cmp002 kernel: [    0.143785] ACPI: (supports S0 S5)
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:42 cmp002 kernel: [    0.144011] ACPI: Using IOAPIC for interrupt routing
Nov 16 16:51:42 cmp002 kernel: [    0.145230] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 16 16:51:42 cmp002 kernel: [    0.147950] ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |   1   |  fe80::/64  |    ::   |    ens3   |   U   |
Nov 16 16:51:42 cmp002 kernel: [    0.151612] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 16 16:51:42 cmp002 kernel: [    0.152011] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |   3   |    local    |    ::   |    ens3   |   U   |
Nov 16 16:51:42 cmp002 kernel: [    0.153634] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Nov 16 16:51:42 cmp002 kernel: [    0.155185] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 16 16:51:42 cmp002 kernel: [    0.156364] acpiphp: Slot [3] registered
Nov 16 16:51:42 cmp002 kernel: [    0.157410] acpiphp: Slot [4] registered
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: |   4   |   ff00::/8  |    ::   |    ens3   |   U   |
Nov 16 16:51:42 cmp002 kernel: [    0.158458] acpiphp: Slot [5] registered
Nov 16 16:51:42 cmp002 cloud-init[868]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:51:42 cmp002 kernel: [    0.159505] acpiphp: Slot [6] registered
Nov 16 16:51:42 cmp002 kernel: [    0.160053] acpiphp: Slot [7] registered
Nov 16 16:51:42 cmp002 kernel: [    0.161910] acpiphp: Slot [9] registered
Nov 16 16:51:42 cmp002 kernel: [    0.162958] acpiphp: Slot [10] registered
Nov 16 16:51:42 cmp002 kernel: [    0.164009] acpiphp: Slot [11] registered
Nov 16 16:51:42 cmp002 kernel: [    0.165070] acpiphp: Slot [12] registered
Nov 16 16:51:42 cmp002 kernel: [    0.166163] acpiphp: Slot [13] registered
Nov 16 16:51:42 cmp002 kernel: [    0.167219] acpiphp: Slot [14] registered
Nov 16 16:51:42 cmp002 kernel: [    0.168054] acpiphp: Slot [15] registered
Nov 16 16:51:42 cmp002 kernel: [    0.169113] acpiphp: Slot [16] registered
Nov 16 16:51:42 cmp002 kernel: [    0.170166] acpiphp: Slot [17] registered
Nov 16 16:51:42 cmp002 kernel: [    0.171228] acpiphp: Slot [18] registered
Nov 16 16:51:42 cmp002 kernel: [    0.172053] acpiphp: Slot [19] registered
Nov 16 16:51:42 cmp002 kernel: [    0.173109] acpiphp: Slot [20] registered
Nov 16 16:51:42 cmp002 kernel: [    0.174158] acpiphp: Slot [21] registered
Nov 16 16:51:42 cmp002 kernel: [    0.175214] acpiphp: Slot [22] registered
Nov 16 16:51:42 cmp002 kernel: [    0.176051] acpiphp: Slot [23] registered
Nov 16 16:51:42 cmp002 kernel: [    0.177107] acpiphp: Slot [24] registered
Nov 16 16:51:42 cmp002 kernel: [    0.178154] acpiphp: Slot [25] registered
Nov 16 16:51:42 cmp002 cloud-init[868]: Generating public/private rsa key pair.
Nov 16 16:51:42 cmp002 cloud-init[868]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Nov 16 16:51:42 cmp002 kernel: [    0.179210] acpiphp: Slot [26] registered
Nov 16 16:51:42 cmp002 cloud-init[868]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
Nov 16 16:51:42 cmp002 kernel: [    0.180051] acpiphp: Slot [27] registered
Nov 16 16:51:42 cmp002 kernel: [    0.181107] acpiphp: Slot [28] registered
Nov 16 16:51:42 cmp002 kernel: [    0.182162] acpiphp: Slot [29] registered
Nov 16 16:51:42 cmp002 kernel: [    0.183209] acpiphp: Slot [30] registered
Nov 16 16:51:42 cmp002 cloud-init[868]: The key fingerprint is:
Nov 16 16:51:42 cmp002 cloud-init[868]: SHA256:Wv7FhxqtjDyLFsbXBUVuv6iT9nvxvd7tZ3ExAvfIupg root@cmp002
Nov 16 16:51:42 cmp002 kernel: [    0.184053] acpiphp: Slot [31] registered
Nov 16 16:51:42 cmp002 cloud-init[868]: The key's randomart image is:
Nov 16 16:51:42 cmp002 kernel: [    0.185077] PCI host bridge to bus 0000:00
Nov 16 16:51:42 cmp002 kernel: [    0.186108] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 16 16:51:42 cmp002 cloud-init[868]: +---[RSA 2048]----+
Nov 16 16:51:42 cmp002 kernel: [    0.187690] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 16 16:51:42 cmp002 kernel: [    0.188005] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 16 16:51:42 cmp002 kernel: [    0.189802] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 16 16:51:42 cmp002 kernel: [    0.191580] pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 16 16:51:42 cmp002 kernel: [    0.192054] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 16 16:51:42 cmp002 cloud-init[868]: |           oo    |
Nov 16 16:51:42 cmp002 kernel: [    0.192655] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 16 16:51:42 cmp002 kernel: [    0.193447] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 16 16:51:42 cmp002 kernel: [    0.200990] pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Nov 16 16:51:42 cmp002 kernel: [    0.204650] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Nov 16 16:51:42 cmp002 cloud-init[868]: |          .o .   |
Nov 16 16:51:42 cmp002 kernel: [    0.206328] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Nov 16 16:51:42 cmp002 kernel: [    0.207852] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Nov 16 16:51:42 cmp002 cloud-init[868]: |           .* o  |
Nov 16 16:51:42 cmp002 kernel: [    0.208005] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Nov 16 16:51:42 cmp002 kernel: [    0.209770] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 16 16:51:42 cmp002 kernel: [    0.210379] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 16 16:51:42 cmp002 kernel: [    0.212018] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 16 16:51:42 cmp002 cloud-init[868]: |           ..= + |
Nov 16 16:51:42 cmp002 kernel: [    0.214113] pci 0000:00:02.0: [1013:00b8] type 00 class 0x030000
Nov 16 16:51:42 cmp002 kernel: [    0.216012] pci 0000:00:02.0: reg 0x10: [mem 0xfc000000-0xfdffffff pref]
Nov 16 16:51:42 cmp002 kernel: [    0.218935] pci 0000:00:02.0: reg 0x14: [mem 0xfebd0000-0xfebd0fff]
Nov 16 16:51:42 cmp002 cloud-init[868]: |     .  S. .. o o|
Nov 16 16:51:42 cmp002 kernel: [    0.233790] pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 16 16:51:42 cmp002 kernel: [    0.235842] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:42 cmp002 cloud-init[868]: |      ++. .+ o.o.|
Nov 16 16:51:42 cmp002 cloud-init[868]: |     ..o. +.B oo+|
Nov 16 16:51:42 cmp002 kernel: [    0.237139] pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc05f]
Nov 16 16:51:42 cmp002 kernel: [    0.240008] pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 16 16:51:42 cmp002 kernel: [    0.252006] pci 0000:00:03.0: reg 0x30: [mem 0xfeac0000-0xfeafffff pref]
Nov 16 16:51:42 cmp002 kernel: [    0.252484] pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:42 cmp002 kernel: [    0.256006] pci 0000:00:04.0: reg 0x10: [io  0xc060-0xc07f]
Nov 16 16:51:42 cmp002 cloud-init[868]: |      .o.E+B .. B|
Nov 16 16:51:42 cmp002 cloud-init[868]: |     .. ++*ooo.=*|
Nov 16 16:51:42 cmp002 cloud-init[868]: +----[SHA256]-----+
Nov 16 16:51:42 cmp002 kernel: [    0.258141] pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 16 16:51:42 cmp002 kernel: [    0.270169] pci 0000:00:04.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
Nov 16 16:51:42 cmp002 cloud-init[868]: Generating public/private dsa key pair.
Nov 16 16:51:42 cmp002 kernel: [    0.270630] pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:42 cmp002 kernel: [    0.272007] pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Nov 16 16:51:42 cmp002 kernel: [    0.274209] pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 16 16:51:42 cmp002 kernel: [    0.285121] pci 0000:00:05.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
Nov 16 16:51:42 cmp002 cloud-init[868]: Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Nov 16 16:51:42 cmp002 kernel: [    0.287945] pci 0000:00:06.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:51:42 cmp002 cloud-init[868]: Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
Nov 16 16:51:42 cmp002 kernel: [    0.290277] pci 0000:00:06.0: reg 0x10: [io  0xc0a0-0xc0bf]
Nov 16 16:51:42 cmp002 kernel: [    0.292007] pci 0000:00:06.0: reg 0x14: [mem 0xfebd4000-0xfebd4fff]
Nov 16 16:51:42 cmp002 kernel: [    0.303105] pci 0000:00:06.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 16 16:51:42 cmp002 kernel: [    0.303576] pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 16 16:51:42 cmp002 cloud-init[868]: The key fingerprint is:
Nov 16 16:51:42 cmp002 kernel: [    0.305120] pci 0000:00:07.0: reg 0x10: [io  0xc000-0xc03f]
Nov 16 16:51:42 cmp002 cloud-init[868]: SHA256:AGm9VEUVwkHljAFiA3m6I/kYs/4g4MmEdPQFluYCdj8 root@cmp002
Nov 16 16:51:42 cmp002 kernel: [    0.307362] pci 0000:00:07.0: reg 0x14: [mem 0xfebd5000-0xfebd5fff]
Nov 16 16:51:42 cmp002 cloud-init[868]: The key's randomart image is:
Nov 16 16:51:42 cmp002 kernel: [    0.320584] pci 0000:00:08.0: [8086:2934] type 00 class 0x0c0300
Nov 16 16:51:42 cmp002 kernel: [    0.326226] pci 0000:00:08.0: reg 0x20: [io  0xc0c0-0xc0df]
Nov 16 16:51:42 cmp002 kernel: [    0.328258] pci 0000:00:08.1: [8086:2935] type 00 class 0x0c0300
Nov 16 16:51:42 cmp002 cloud-init[868]: +---[DSA 1024]----+
Nov 16 16:51:42 cmp002 cloud-init[868]: |   .oB*oo*B++.   |
Nov 16 16:51:42 cmp002 kernel: [    0.335848] pci 0000:00:08.1: reg 0x20: [io  0xc0e0-0xc0ff]
Nov 16 16:51:42 cmp002 kernel: [    0.338175] pci 0000:00:08.2: [8086:2936] type 00 class 0x0c0300
Nov 16 16:51:42 cmp002 cloud-init[868]: | o..*==o  .*     |
Nov 16 16:51:42 cmp002 cloud-init[868]: |..oo+=..  . o    |
Nov 16 16:51:42 cmp002 kernel: [    0.346561] pci 0000:00:08.2: reg 0x20: [io  0xc100-0xc11f]
Nov 16 16:51:42 cmp002 cloud-init[868]: |o ...E..         |
Nov 16 16:51:42 cmp002 kernel: [    0.348979] pci 0000:00:08.7: [8086:293a] type 00 class 0x0c0320
Nov 16 16:51:42 cmp002 kernel: [    0.350290] pci 0000:00:08.7: reg 0x10: [mem 0xfebd6000-0xfebd6fff]
Nov 16 16:51:42 cmp002 kernel: [    0.360143] pci 0000:00:09.0: [1af4:1002] type 00 class 0x00ff00
Nov 16 16:51:42 cmp002 cloud-init[868]: |o. .... S        |
Nov 16 16:51:42 cmp002 kernel: [    0.361370] pci 0000:00:09.0: reg 0x10: [io  0xc120-0xc13f]
Nov 16 16:51:42 cmp002 kernel: [    0.370116] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
Nov 16 16:51:42 cmp002 kernel: [    0.371662] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
Nov 16 16:51:42 cmp002 kernel: [    0.372144] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
Nov 16 16:51:42 cmp002 cloud-init[868]: |= * o            |
Nov 16 16:51:42 cmp002 kernel: [    0.373614] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
Nov 16 16:51:42 cmp002 cloud-init[868]: |.+.B .           |
Nov 16 16:51:42 cmp002 cloud-init[868]: | .o..            |
Nov 16 16:51:42 cmp002 kernel: [    0.375035] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
Nov 16 16:51:42 cmp002 kernel: [    0.376918] SCSI subsystem initialized
Nov 16 16:51:42 cmp002 cloud-init[868]: | ....            |
Nov 16 16:51:42 cmp002 kernel: [    0.377963] libata version 3.00 loaded.
Nov 16 16:51:42 cmp002 kernel: [    0.377963] pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 16 16:51:42 cmp002 kernel: [    0.377963] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 16 16:51:42 cmp002 cloud-init[868]: +----[SHA256]-----+
Nov 16 16:51:42 cmp002 kernel: [    0.380008] pci 0000:00:02.0: vgaarb: bridge control possible
Nov 16 16:51:42 cmp002 kernel: [    0.381385] vgaarb: loaded
Nov 16 16:51:42 cmp002 kernel: [    0.382142] ACPI: bus type USB registered
Nov 16 16:51:42 cmp002 cloud-init[868]: Generating public/private ecdsa key pair.
Nov 16 16:51:42 cmp002 cloud-init[868]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key.
Nov 16 16:51:42 cmp002 kernel: [    0.383154] usbcore: registered new interface driver usbfs
Nov 16 16:51:42 cmp002 kernel: [    0.384019] usbcore: registered new interface driver hub
Nov 16 16:51:42 cmp002 cloud-init[868]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub.
Nov 16 16:51:42 cmp002 kernel: [    0.385341] usbcore: registered new device driver usb
Nov 16 16:51:42 cmp002 kernel: [    0.386642] EDAC MC: Ver: 3.0.0
Nov 16 16:51:42 cmp002 cloud-init[868]: The key fingerprint is:
Nov 16 16:51:42 cmp002 kernel: [    0.388080] PCI: Using ACPI for IRQ routing
Nov 16 16:51:42 cmp002 kernel: [    0.389086] PCI: pci_cache_line_size set to 64 bytes
Nov 16 16:51:42 cmp002 cloud-init[868]: SHA256:kZbqlpJeCr2phfByllUg1MXx9dADuUN9vMHqz/p143Y root@cmp002
Nov 16 16:51:42 cmp002 kernel: [    0.389420] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 16 16:51:42 cmp002 kernel: [    0.389422] e820: reserve RAM buffer [mem 0xbffdf000-0xbfffffff]
Nov 16 16:51:42 cmp002 kernel: [    0.389543] NetLabel: Initializing
Nov 16 16:51:42 cmp002 kernel: [    0.390432] NetLabel:  domain hash size = 128
Nov 16 16:51:42 cmp002 cloud-init[868]: The key's randomart image is:
Nov 16 16:51:42 cmp002 cloud-init[868]: +---[ECDSA 256]---+
Nov 16 16:51:42 cmp002 kernel: [    0.391505] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 16 16:51:42 cmp002 kernel: [    0.392026] NetLabel:  unlabeled traffic allowed by default
Nov 16 16:51:42 cmp002 kernel: [    0.393384] clocksource: Switched to clocksource kvm-clock
Nov 16 16:51:42 cmp002 kernel: [    0.407550] VFS: Disk quotas dquot_6.6.0
Nov 16 16:51:42 cmp002 cloud-init[868]: | .o..oo.  += o   |
Nov 16 16:51:42 cmp002 kernel: [    0.408589] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 16 16:51:42 cmp002 kernel: [    0.410323] AppArmor: AppArmor Filesystem Enabled
Nov 16 16:51:42 cmp002 kernel: [    0.411513] pnp: PnP ACPI init
Nov 16 16:51:42 cmp002 cloud-init[868]: |   ..... +oo+ =  |
Nov 16 16:51:42 cmp002 kernel: [    0.412389] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Nov 16 16:51:42 cmp002 kernel: [    0.412430] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Nov 16 16:51:42 cmp002 kernel: [    0.412458] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Nov 16 16:51:42 cmp002 kernel: [    0.412492] pnp 00:03: [dma 2]
Nov 16 16:51:42 cmp002 cloud-init[868]: |      . *. ..+ o |
Nov 16 16:51:42 cmp002 kernel: [    0.412512] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active)
Nov 16 16:51:42 cmp002 kernel: [    0.412611] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Nov 16 16:51:42 cmp002 kernel: [    0.412897] pnp: PnP ACPI: found 5 devices
Nov 16 16:51:42 cmp002 cloud-init[868]: |     . o .o . .  |
Nov 16 16:51:42 cmp002 kernel: [    0.421478] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 16 16:51:42 cmp002 kernel: [    0.423555] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 16 16:51:42 cmp002 cloud-init[868]: |.   . . S  o     |
Nov 16 16:51:42 cmp002 kernel: [    0.423556] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 16 16:51:42 cmp002 kernel: [    0.423557] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 16 16:51:42 cmp002 kernel: [    0.423558] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 16 16:51:42 cmp002 kernel: [    0.423621] NET: Registered protocol family 2
Nov 16 16:51:42 cmp002 cloud-init[868]: | o = o .    .    |
Nov 16 16:51:42 cmp002 cloud-init[868]: |. B = =      o .o|
Nov 16 16:51:42 cmp002 kernel: [    0.424921] TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
Nov 16 16:51:42 cmp002 kernel: [    0.427673] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Nov 16 16:51:42 cmp002 kernel: [    0.429333] TCP: Hash tables configured (established 131072 bind 65536)
Nov 16 16:51:42 cmp002 kernel: [    0.430930] UDP hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:51:42 cmp002 cloud-init[868]: | + + B        +oE|
Nov 16 16:51:42 cmp002 kernel: [    0.432413] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:51:42 cmp002 kernel: [    0.434036] NET: Registered protocol family 1
Nov 16 16:51:42 cmp002 kernel: [    0.435126] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 16 16:51:42 cmp002 cloud-init[868]: |  ..=       .oo..|
Nov 16 16:51:42 cmp002 kernel: [    0.436545] pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 16 16:51:42 cmp002 kernel: [    0.437941] pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 16 16:51:42 cmp002 cloud-init[868]: +----[SHA256]-----+
Nov 16 16:51:42 cmp002 kernel: [    0.439446] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 16 16:51:42 cmp002 cloud-init[868]: Generating public/private ed25519 key pair.
Nov 16 16:51:42 cmp002 cloud-init[868]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key.
Nov 16 16:51:42 cmp002 cloud-init[868]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub.
Nov 16 16:51:42 cmp002 cloud-init[868]: The key fingerprint is:
Nov 16 16:51:42 cmp002 cloud-init[868]: SHA256:r1B5RfawA3+Mke6lLTV3jAFzIxIQos63OuMSR8hGHL4 root@cmp002
Nov 16 16:51:42 cmp002 cloud-init[868]: The key's randomart image is:
Nov 16 16:51:42 cmp002 cloud-init[868]: +--[ED25519 256]--+
Nov 16 16:51:42 cmp002 cloud-init[868]: | ...  . oo+.Ooo  |
Nov 16 16:51:42 cmp002 cloud-init[868]: | .o  . .   *.@.. |
Nov 16 16:51:42 cmp002 cloud-init[868]: | o...      .* ++ |
Nov 16 16:51:42 cmp002 cloud-init[868]: |  ++.    . ..o= +|
Nov 16 16:51:42 cmp002 kernel: [    0.473163] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
Nov 16 16:51:42 cmp002 kernel: [    0.535508] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
Nov 16 16:51:42 cmp002 cloud-init[868]: | .E.o . S .. = o.|
Nov 16 16:51:42 cmp002 kernel: [    0.596147] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
Nov 16 16:51:42 cmp002 cloud-init[868]: |  . .. o o  + .  |
Nov 16 16:51:42 cmp002 kernel: [    0.656098] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
Nov 16 16:51:42 cmp002 kernel: [    0.686886] PCI: CLS 0 bytes, default 64
Nov 16 16:51:42 cmp002 kernel: [    0.686939] Unpacking initramfs...
Nov 16 16:51:42 cmp002 kernel: [    0.974909] Freeing initrd memory: 19152K
Nov 16 16:51:42 cmp002 cloud-init[868]: |   o  o   .  .   |
Nov 16 16:51:42 cmp002 kernel: [    0.976040] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 16 16:51:42 cmp002 kernel: [    0.977584] software IO TLB: mapped [mem 0xbbfdf000-0xbffdf000] (64MB)
Nov 16 16:51:42 cmp002 kernel: [    0.979170] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3aeaff3, max_idle_ns: 440795255742 ns
Nov 16 16:51:42 cmp002 kernel: [    0.981550] Scanning for low memory corruption every 60 seconds
Nov 16 16:51:42 cmp002 kernel: [    0.983836] Initialise system trusted keyrings
Nov 16 16:51:42 cmp002 kernel: [    0.984965] Key type blacklist registered
Nov 16 16:51:42 cmp002 kernel: [    0.986076] workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 16 16:51:42 cmp002 kernel: [    0.989351] zbud: loaded
Nov 16 16:51:42 cmp002 cloud-init[868]: |  . o. . .       |
Nov 16 16:51:42 cmp002 kernel: [    0.990638] squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 16 16:51:42 cmp002 cloud-init[868]: |   ooo  .        |
Nov 16 16:51:42 cmp002 kernel: [    0.992250] fuse init (API version 7.26)
Nov 16 16:51:42 cmp002 kernel: [    0.994887] Key type asymmetric registered
Nov 16 16:51:42 cmp002 kernel: [    0.995928] Asymmetric key parser 'x509' registered
Nov 16 16:51:42 cmp002 kernel: [    0.997158] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 16 16:51:42 cmp002 kernel: [    0.999046] io scheduler noop registered
Nov 16 16:51:42 cmp002 cloud-init[868]: +----[SHA256]-----+
Nov 16 16:51:42 cmp002 kernel: [    1.000052] io scheduler deadline registered
Nov 16 16:51:42 cmp002 kernel: [    1.001149] io scheduler cfq registered (default)
Nov 16 16:51:42 cmp002 systemd[1]: Started Initial cloud-init job (metadata service crawler).
Nov 16 16:51:42 cmp002 kernel: [    1.002772] intel_idle: Please enable MWAIT in BIOS SETUP
Nov 16 16:51:42 cmp002 kernel: [    1.002844] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 16 16:51:42 cmp002 kernel: [    1.004756] ACPI: Power Button [PWRF]
Nov 16 16:51:42 cmp002 kernel: [    1.034772] virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Network is Online.
Nov 16 16:51:42 cmp002 kernel: [    1.065765] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:42 cmp002 kernel: [    1.097109] virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Remote File Systems (Pre).
Nov 16 16:51:42 cmp002 kernel: [    1.128035] virtio-pci 0000:00:06.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:42 cmp002 kernel: [    1.158955] virtio-pci 0000:00:07.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Remote File Systems.
Nov 16 16:51:42 cmp002 kernel: [    1.189421] virtio-pci 0000:00:09.0: virtio_pci: leaving for legacy driver
Nov 16 16:51:42 cmp002 systemd[1]: Starting Availability of block devices...
Nov 16 16:51:42 cmp002 kernel: [    1.192400] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Nov 16 16:51:42 cmp002 kernel: [    1.217837] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 16 16:51:42 cmp002 kernel: [    1.222124] Linux agpgart interface v0.103
Nov 16 16:51:42 cmp002 kernel: [    1.226681] loop: module loaded
Nov 16 16:51:42 cmp002 systemd[1]: Reached target System Initialization.
Nov 16 16:51:42 cmp002 kernel: [    1.227615] ata_piix 0000:00:01.1: version 2.13
Nov 16 16:51:42 cmp002 kernel: [    1.228987] scsi host0: ata_piix
Nov 16 16:51:42 cmp002 systemd[1]: Starting LXD - unix socket.
Nov 16 16:51:42 cmp002 kernel: [    1.230181] scsi host1: ata_piix
Nov 16 16:51:42 cmp002 kernel: [    1.231051] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 16 16:51:42 cmp002 kernel: [    1.232678] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 16 16:51:42 cmp002 kernel: [    1.234355] libphy: Fixed MDIO Bus: probed
Nov 16 16:51:42 cmp002 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 16 16:51:42 cmp002 kernel: [    1.235483] tun: Universal TUN/TAP device driver, 1.6
Nov 16 16:51:42 cmp002 kernel: [    1.236779] PPP generic driver version 2.4.2
Nov 16 16:51:42 cmp002 kernel: [    1.237919] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 16 16:51:42 cmp002 kernel: [    1.239659] ehci-pci: EHCI PCI platform driver
Nov 16 16:51:42 cmp002 kernel: [    1.270207] ehci-pci 0000:00:08.7: EHCI Host Controller
Nov 16 16:51:42 cmp002 systemd[1]: Starting Socket activation for snappy daemon.
Nov 16 16:51:42 cmp002 kernel: [    1.271483] ehci-pci 0000:00:08.7: new USB bus registered, assigned bus number 1
Nov 16 16:51:42 cmp002 kernel: [    1.273505] ehci-pci 0000:00:08.7: irq 11, io mem 0xfebd6000
Nov 16 16:51:42 cmp002 systemd[1]: Started ACPI Events Check.
Nov 16 16:51:42 cmp002 kernel: [    1.288059] ehci-pci 0000:00:08.7: USB 2.0 started, EHCI 1.00
Nov 16 16:51:42 cmp002 kernel: [    1.301480] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Nov 16 16:51:42 cmp002 systemd[1]: Listening on UUID daemon activation socket.
Nov 16 16:51:42 cmp002 kernel: [    1.303067] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:42 cmp002 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 16 16:51:42 cmp002 kernel: [    1.304783] usb usb1: Product: EHCI Host Controller
Nov 16 16:51:42 cmp002 kernel: [    1.305983] usb usb1: Manufacturer: Linux 4.15.0-70-generic ehci_hcd
Nov 16 16:51:42 cmp002 kernel: [    1.307462] usb usb1: SerialNumber: 0000:00:08.7
Nov 16 16:51:42 cmp002 kernel: [    1.308750] hub 1-0:1.0: USB hub found
Nov 16 16:51:42 cmp002 systemd[1]: Listening on ACPID Listen Socket.
Nov 16 16:51:42 cmp002 kernel: [    1.309733] hub 1-0:1.0: 6 ports detected
Nov 16 16:51:42 cmp002 kernel: [    1.310938] ehci-platform: EHCI generic platform driver
Nov 16 16:51:42 cmp002 systemd[1]: Started Message of the Day.
Nov 16 16:51:42 cmp002 kernel: [    1.312219] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 16 16:51:42 cmp002 kernel: [    1.313690] ohci-pci: OHCI PCI platform driver
Nov 16 16:51:42 cmp002 kernel: [    1.314801] ohci-platform: OHCI generic platform driver
Nov 16 16:51:42 cmp002 kernel: [    1.316067] uhci_hcd: USB Universal Host Controller Interface driver
Nov 16 16:51:42 cmp002 systemd[1]: Started Daily apt download activities.
Nov 16 16:51:42 cmp002 kernel: [    1.350255] uhci_hcd 0000:00:08.0: UHCI Host Controller
Nov 16 16:51:42 cmp002 systemd[1]: Started Daily apt upgrade and clean activities.
Nov 16 16:51:42 cmp002 kernel: [    1.351679] uhci_hcd 0000:00:08.0: new USB bus registered, assigned bus number 2
Nov 16 16:51:42 cmp002 kernel: [    1.353699] uhci_hcd 0000:00:08.0: detected 2 ports
Nov 16 16:51:42 cmp002 kernel: [    1.355115] uhci_hcd 0000:00:08.0: irq 11, io base 0x0000c0c0
Nov 16 16:51:42 cmp002 kernel: [    1.356752] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:42 cmp002 kernel: [    1.358574] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:42 cmp002 kernel: [    1.360791] usb usb2: Product: UHCI Host Controller
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Paths.
Nov 16 16:51:42 cmp002 kernel: [    1.362183] usb usb2: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:42 cmp002 kernel: [    1.363885] usb usb2: SerialNumber: 0000:00:08.0
Nov 16 16:51:42 cmp002 kernel: [    1.365383] hub 2-0:1.0: USB hub found
Nov 16 16:51:42 cmp002 kernel: [    1.366480] hub 2-0:1.0: 2 ports detected
Nov 16 16:51:42 cmp002 kernel: [    1.392880] ata1.01: NODEV after polling detection
Nov 16 16:51:42 cmp002 kernel: [    1.393235] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 16 16:51:42 cmp002 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 16 16:51:42 cmp002 kernel: [    1.395423] ata1.00: configured for MWDMA2
Nov 16 16:51:42 cmp002 systemd[1]: Started Discard unused blocks once a week.
Nov 16 16:51:42 cmp002 kernel: [    1.397459] scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 16 16:51:42 cmp002 kernel: [    1.401008] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 16 16:51:42 cmp002 kernel: [    1.402838] cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 16 16:51:42 cmp002 kernel: [    1.404385] sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Timers.
Nov 16 16:51:42 cmp002 kernel: [    1.404433] sr 0:0:0:0: Attached scsi generic sg0 type 5
Nov 16 16:51:42 cmp002 kernel: [    1.408356] uhci_hcd 0000:00:08.1: UHCI Host Controller
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Cloud-config availability.
Nov 16 16:51:42 cmp002 kernel: [    1.409842] uhci_hcd 0000:00:08.1: new USB bus registered, assigned bus number 3
Nov 16 16:51:42 cmp002 kernel: [    1.411767] uhci_hcd 0000:00:08.1: detected 2 ports
Nov 16 16:51:42 cmp002 kernel: [    1.413119] uhci_hcd 0000:00:08.1: irq 10, io base 0x0000c0e0
Nov 16 16:51:42 cmp002 kernel: [    1.414679] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:42 cmp002 systemd[1]: Started Availability of block devices.
Nov 16 16:51:42 cmp002 kernel: [    1.416361] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:42 cmp002 kernel: [    1.418628] usb usb3: Product: UHCI Host Controller
Nov 16 16:51:42 cmp002 systemd[1]: Listening on LXD - unix socket.
Nov 16 16:51:42 cmp002 kernel: [    1.419876] usb usb3: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:42 cmp002 kernel: [    1.421445] usb usb3: SerialNumber: 0000:00:08.1
Nov 16 16:51:42 cmp002 kernel: [    1.422776] hub 3-0:1.0: USB hub found
Nov 16 16:51:42 cmp002 systemd[1]: Listening on Socket activation for snappy daemon.
Nov 16 16:51:42 cmp002 kernel: [    1.423783] hub 3-0:1.0: 2 ports detected
Nov 16 16:51:42 cmp002 kernel: [    1.462079] uhci_hcd 0000:00:08.2: UHCI Host Controller
Nov 16 16:51:42 cmp002 kernel: [    1.465093] uhci_hcd 0000:00:08.2: new USB bus registered, assigned bus number 4
Nov 16 16:51:42 cmp002 kernel: [    1.469632] uhci_hcd 0000:00:08.2: detected 2 ports
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Sockets.
Nov 16 16:51:42 cmp002 kernel: [    1.472746] uhci_hcd 0000:00:08.2: irq 10, io base 0x0000c100
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Basic System.
Nov 16 16:51:42 cmp002 kernel: [    1.476273] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:51:42 cmp002 kernel: [    1.480292] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:51:42 cmp002 kernel: [    1.484575] usb usb4: Product: UHCI Host Controller
Nov 16 16:51:42 cmp002 kernel: [    1.487526] usb usb4: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:51:42 cmp002 systemd[1]: Started Deferred execution scheduler.
Nov 16 16:51:42 cmp002 systemd[1]: Started irqbalance daemon.
Nov 16 16:51:42 cmp002 kernel: [    1.491182] usb usb4: SerialNumber: 0000:00:08.2
Nov 16 16:51:42 cmp002 kernel: [    1.494315] hub 4-0:1.0: USB hub found
Nov 16 16:51:42 cmp002 systemd[1]: Starting Permit User Sessions...
Nov 16 16:51:42 cmp002 kernel: [    1.496656] hub 4-0:1.0: 2 ports detected
Nov 16 16:51:42 cmp002 systemd[1]: Starting Login Service...
Nov 16 16:51:42 cmp002 kernel: [    1.499555] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 16 16:51:42 cmp002 kernel: [    1.506241] serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 16 16:51:42 cmp002 kernel: [    1.509229] serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 16 16:51:42 cmp002 kernel: [    1.512537] mousedev: PS/2 mouse device common for all mice
Nov 16 16:51:42 cmp002 systemd[1]: Starting LSB: Record successful boot for GRUB...
Nov 16 16:51:42 cmp002 kernel: [    1.516705] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 16 16:51:42 cmp002 systemd[1]: Starting The Salt Minion...
Nov 16 16:51:42 cmp002 kernel: [    1.522058] rtc_cmos 00:00: RTC can wake from S4
Nov 16 16:51:42 cmp002 kernel: [    1.526005] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
Nov 16 16:51:42 cmp002 systemd[1]: Starting Snappy daemon...
Nov 16 16:51:42 cmp002 kernel: [    1.529816] rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 16 16:51:42 cmp002 kernel: [    1.533362] i2c /dev entries driver
Nov 16 16:51:42 cmp002 kernel: [    1.535568] device-mapper: uevent: version 1.0.3
Nov 16 16:51:42 cmp002 kernel: [    1.538656] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: dm-devel@redhat.com
Nov 16 16:51:42 cmp002 kernel: [    1.544108] ledtrig-cpu: registered to indicate activity on CPUs
Nov 16 16:51:42 cmp002 systemd[1]: Starting LSB: automatic crash report generation...
Nov 16 16:51:42 cmp002 systemd[1]: Started FUSE filesystem for LXC.
Nov 16 16:51:42 cmp002 kernel: [    1.548658] NET: Registered protocol family 10
Nov 16 16:51:42 cmp002 kernel: [    1.561418] Segment Routing with IPv6
Nov 16 16:51:42 cmp002 kernel: [    1.563211] NET: Registered protocol family 17
Nov 16 16:51:42 cmp002 systemd[1]: Starting System Logging Service...
Nov 16 16:51:42 cmp002 kernel: [    1.566053] Key type dns_resolver registered
Nov 16 16:51:42 cmp002 kernel: [    1.569674] mce: Using 10 MCE banks
Nov 16 16:51:42 cmp002 kernel: [    1.571706] RAS: Correctable Errors collector initialized.
Nov 16 16:51:42 cmp002 kernel: [    1.575120] sched_clock: Marking stable (1575102272, 0)->(2083918750, -508816478)
Nov 16 16:51:42 cmp002 kernel: [    1.577351] registered taskstats version 1
Nov 16 16:51:42 cmp002 kernel: [    1.578421] Loading compiled-in X.509 certificates
Nov 16 16:51:42 cmp002 systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
Nov 16 16:51:42 cmp002 systemd[1]: Started D-Bus System Message Bus.
Nov 16 16:51:42 cmp002 lxcfs[1020]: mount namespace: 5
Nov 16 16:51:42 cmp002 lxcfs[1020]: hierarchies:
Nov 16 16:51:42 cmp002 lxcfs[1020]:   0: fd:   6: perf_event
Nov 16 16:51:42 cmp002 lxcfs[1020]:   1: fd:   7: cpuset
Nov 16 16:51:42 cmp002 kernel: [    1.582145] Loaded X.509 cert 'Build time autogenerated kernel key: 1859b0531897959199376c446a0bd70df75fd1fc'
Nov 16 16:51:42 cmp002 kernel: [    1.584536] zswap: loaded using pool lzo/zbud
Nov 16 16:51:42 cmp002 kernel: [    1.590677] Key type big_key registered
Nov 16 16:51:42 cmp002 kernel: [    1.591677] Key type trusted registered
Nov 16 16:51:42 cmp002 lxcfs[1020]:   2: fd:   8: memory
Nov 16 16:51:42 cmp002 kernel: [    1.595222] Key type encrypted registered
Nov 16 16:51:42 cmp002 lxcfs[1020]:   3: fd:   9: devices
Nov 16 16:51:42 cmp002 kernel: [    1.596286] AppArmor: AppArmor sha1 policy hashing enabled
Nov 16 16:51:42 cmp002 kernel: [    1.597951] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
Nov 16 16:51:42 cmp002 kernel: [    1.599866] ima: Allocated hash algorithm: sha1
Nov 16 16:51:42 cmp002 kernel: [    1.601364] evm: HMAC attrs: 0x1
Nov 16 16:51:42 cmp002 lxcfs[1020]:   4: fd:  10: pids
Nov 16 16:51:42 cmp002 lxcfs[1020]:   5: fd:  11: freezer
Nov 16 16:51:42 cmp002 kernel: [    1.603013]   Magic number: 11:270:894
Nov 16 16:51:42 cmp002 lxcfs[1020]:   6: fd:  12: rdma
Nov 16 16:51:42 cmp002 kernel: [    1.604525] rtc_cmos 00:00: setting system clock to 2019-11-16 16:51:28 UTC (1573923088)
Nov 16 16:51:42 cmp002 kernel: [    1.607203] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
Nov 16 16:51:42 cmp002 kernel: [    1.608636] EDD information not available.
Nov 16 16:51:42 cmp002 lxcfs[1020]:   7: fd:  13: blkio
Nov 16 16:51:42 cmp002 lxcfs[1020]:   8: fd:  14: hugetlb
Nov 16 16:51:42 cmp002 kernel: [    1.612368] Freeing unused kernel image memory: 2432K
Nov 16 16:51:42 cmp002 kernel: [    1.640189] Write protecting the kernel read-only data: 20480k
Nov 16 16:51:42 cmp002 kernel: [    1.643997] Freeing unused kernel image memory: 2008K
Nov 16 16:51:42 cmp002 lxcfs[1020]:   9: fd:  15: cpu,cpuacct
Nov 16 16:51:42 cmp002 kernel: [    1.647094] Freeing unused kernel image memory: 1880K
Nov 16 16:51:42 cmp002 lxcfs[1020]:  10: fd:  16: net_cls,net_prio
Nov 16 16:51:42 cmp002 kernel: [    1.659797] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:51:42 cmp002 kernel: [    1.661340] x86/mm: Checking user space page tables
Nov 16 16:51:42 cmp002 kernel: [    1.669980] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:51:42 cmp002 lxcfs[1020]:  11: fd:  17: name=systemd
Nov 16 16:51:42 cmp002 lxcfs[1020]:  12: fd:  18: unified
Nov 16 16:51:42 cmp002 kernel: [    1.771791] AVX version of gcm_enc/dec engaged.
Nov 16 16:51:42 cmp002 kernel: [    1.772348] GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 16 16:51:42 cmp002 kernel: [    1.773314] AES CTR mode by8 optimization enabled
Nov 16 16:51:42 cmp002 dnsmasq[1024]: dnsmasq: syntax check OK.
Nov 16 16:51:42 cmp002 kernel: [    1.775109] GPT:4612095 != 209715199
Nov 16 16:51:42 cmp002 kernel: [    1.777231] GPT:Alternate GPT header not at the end of the disk.
Nov 16 16:51:42 cmp002 kernel: [    1.778671] GPT:4612095 != 209715199
Nov 16 16:51:42 cmp002 kernel: [    1.779592] GPT: Use GNU Parted to correct GPT errors.
Nov 16 16:51:42 cmp002 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:51:42 cmp002 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:51:42 cmp002 kernel: [    1.780850]  vda: vda1 vda14 vda15
Nov 16 16:51:42 cmp002 kernel: [    1.781857] FDC 0 is a S82078B
Nov 16 16:51:42 cmp002 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:51:42 cmp002 kernel: [    1.783338] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 16 16:51:42 cmp002 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="1022" x-info="http://www.rsyslog.com"] start
Nov 16 16:51:42 cmp002 kernel: [    1.785816] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 16 16:51:42 cmp002 kernel: [    1.793546] virtio_net virtio2 ens5: renamed from eth2
Nov 16 16:51:42 cmp002 kernel: [    1.812299] virtio_net virtio1 ens4: renamed from eth1
Nov 16 16:51:42 cmp002 kernel: [    1.848317] virtio_net virtio3 ens6: renamed from eth3
Nov 16 16:51:42 cmp002 kernel: [    1.868115] virtio_net virtio0 ens3: renamed from eth0
Nov 16 16:51:42 cmp002 kernel: [    3.480031] raid6: sse2x1   gen()  5297 MB/s
Nov 16 16:51:42 cmp002 kernel: [    3.528041] raid6: sse2x1   xor()  4533 MB/s
Nov 16 16:51:42 cmp002 kernel: [    3.576039] raid6: sse2x2   gen()  5963 MB/s
Nov 16 16:51:42 cmp002 kernel: [    3.624037] raid6: sse2x2   xor()  5226 MB/s
Nov 16 16:51:42 cmp002 kernel: [    3.672031] raid6: sse2x4   gen()  8901 MB/s
Nov 16 16:51:42 cmp002 kernel: [    3.720032] raid6: sse2x4   xor()  6400 MB/s
Nov 16 16:51:42 cmp002 kernel: [    3.721318] raid6: using algorithm sse2x4 gen() 8901 MB/s
Nov 16 16:51:42 cmp002 kernel: [    3.722844] raid6: .... xor() 6400 MB/s, rmw enabled
Nov 16 16:51:42 cmp002 kernel: [    3.724270] raid6: using ssse3x2 recovery algorithm
Nov 16 16:51:42 cmp002 kernel: [    3.727049] xor: automatically using best checksumming function   avx       
Nov 16 16:51:42 cmp002 kernel: [    3.730305] async_tx: api initialized (async)
Nov 16 16:51:42 cmp002 kernel: [    3.779771] Btrfs loaded, crc32c=crc32c-intel
Nov 16 16:51:42 cmp002 kernel: [    3.841455] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
Nov 16 16:51:42 cmp002 kernel: [    3.921855] random: fast init done
Nov 16 16:51:42 cmp002 kernel: [    4.179410] ip_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:51:42 cmp002 kernel: [    4.212534] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:42 cmp002 kernel: [    4.219719] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Nov 16 16:51:42 cmp002 kernel: [    4.226561] systemd[1]: Detected virtualization kvm.
Nov 16 16:51:42 cmp002 kernel: [    4.228292] systemd[1]: Detected architecture x86-64.
Nov 16 16:51:42 cmp002 kernel: [    4.229902] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:42 cmp002 kernel: [    4.231469] random: systemd: uninitialized urandom read (16 bytes read)
Nov 16 16:51:42 cmp002 kernel: [    4.239295] systemd[1]: Set hostname to <ubuntu>.
Nov 16 16:51:42 cmp002 kernel: [    4.241669] systemd[1]: Initializing machine ID from KVM UUID.
Nov 16 16:51:42 cmp002 kernel: [    4.243128] systemd[1]: Installed transient /etc/machine-id file.
Nov 16 16:51:42 cmp002 kernel: [    4.563303] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 16 16:51:42 cmp002 kernel: [    4.571059] systemd[1]: Created slice User and Session Slice.
Nov 16 16:51:42 cmp002 kernel: [    4.575995] systemd[1]: Reached target User and Group Name Lookups.
Nov 16 16:51:42 cmp002 kernel: [    4.579545] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 16 16:51:42 cmp002 kernel: [    4.634718] Loading iSCSI transport class v2.0-870.
Nov 16 16:51:42 cmp002 kernel: [    4.648753] iscsi: registered transport (tcp)
Nov 16 16:51:42 cmp002 kernel: [    4.648774] EXT4-fs (vda1): re-mounted. Opts: (null)
Nov 16 16:51:42 cmp002 kernel: [    4.693800] iscsi: registered transport (iser)
Nov 16 16:51:42 cmp002 kernel: [    4.736534] systemd-journald[439]: Received request to flush runtime journal from PID 1
Nov 16 16:51:42 cmp002 kernel: [    5.598418] audit: type=1400 audit(1573923092.488:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=688 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.666092] audit: type=1400 audit(1573923092.556:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=689 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.666483] audit: type=1400 audit(1573923092.556:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=689 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.666903] audit: type=1400 audit(1573923092.556:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=689 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.713323] audit: type=1400 audit(1573923092.604:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/tcpdump" pid=697 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.775737] audit: type=1400 audit(1573923092.664:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/sbin/dhclient" pid=709 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.776294] audit: type=1400 audit(1573923092.668:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=709 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.776788] audit: type=1400 audit(1573923092.668:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=709 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.777264] audit: type=1400 audit(1573923092.668:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=709 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    5.844587] audit: type=1400 audit(1573923092.736:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=690 comm="apparmor_parser"
Nov 16 16:51:42 cmp002 kernel: [    8.711856] ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 16 16:51:42 cmp002 kernel: [    8.716703] ISO 9660 Extensions: RRIP_1991A
Nov 16 16:51:42 cmp002 kernel: [   13.101317] EXT4-fs (vda1): resizing filesystem from 548091 to 26185979 blocks
Nov 16 16:51:42 cmp002 kernel: [   13.379674] EXT4-fs (vda1): resized filesystem to 26185979
Nov 16 16:51:42 cmp002 kernel: [   15.057486] new mount options do not match the existing superblock, will be ignored
Nov 16 16:51:42 cmp002 dbus-daemon[1032]: [system] AppArmor D-Bus mediation is enabled
Nov 16 16:51:42 cmp002 systemd[1]: Starting Accounts Service...
Nov 16 16:51:42 cmp002 systemd[1]: Started Regular background program processing daemon.
Nov 16 16:51:42 cmp002 cron[1087]: (CRON) INFO (pidfile fd = 3)
Nov 16 16:51:42 cmp002 systemd[1]: Starting Pollinate to seed the pseudo random number generator...
Nov 16 16:51:42 cmp002 systemd[1]: Starting LXD - container startup/shutdown...
Nov 16 16:51:42 cmp002 cron[1087]: (CRON) INFO (Running @reboot jobs)
Nov 16 16:51:42 cmp002 systemd[1]: Started System Logging Service.
Nov 16 16:51:42 cmp002 systemd[1]: Started Permit User Sessions.
Nov 16 16:51:42 cmp002 systemd[1]: Started Login Service.
Nov 16 16:51:42 cmp002 grub-common[1001]:  * Recording successful boot for GRUB
Nov 16 16:51:42 cmp002 systemd[1]: Started Unattended Upgrades Shutdown.
Nov 16 16:51:42 cmp002 apport[1018]:  * Starting automatic crash report generation: apport
Nov 16 16:51:42 cmp002 systemd[1]: Starting Hold until boot process finishes up...
Nov 16 16:51:42 cmp002 systemd[1]: Starting Terminate Plymouth Boot Screen...
Nov 16 16:51:42 cmp002 systemd[1]: Started Hold until boot process finishes up.
Nov 16 16:51:42 cmp002 systemd[1]: Starting Set console scheme...
Nov 16 16:51:42 cmp002 systemd[1]: Started Serial Getty on ttyS0.
Nov 16 16:51:42 cmp002 systemd[1]: Started Terminate Plymouth Boot Screen.
Nov 16 16:51:42 cmp002 apport[1018]:    ...done.
Nov 16 16:51:42 cmp002 systemd[1]: Started LSB: automatic crash report generation.
Nov 16 16:51:42 cmp002 systemd[1]: Started Set console scheme.
Nov 16 16:51:42 cmp002 systemd[1]: Created slice system-getty.slice.
Nov 16 16:51:42 cmp002 systemd[1]: Started Getty on tty1.
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Login Prompts.
Nov 16 16:51:42 cmp002 pollinate[1096]: client sent challenge to [https://entropy.ubuntu.com/]
Nov 16 16:51:42 cmp002 grub-common[1001]:    ...done.
Nov 16 16:51:42 cmp002 systemd[1]: Started LSB: Record successful boot for GRUB.
Nov 16 16:51:42 cmp002 snapd[1017]: AppArmor status: apparmor is enabled and all features are available
Nov 16 16:51:42 cmp002 dbus-daemon[1032]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.2' (uid=0 pid=1081 comm="/usr/lib/accountsservice/accounts-daemon " label="unconfined")
Nov 16 16:51:42 cmp002 systemd[1]: Starting Authorization Manager...
Nov 16 16:51:42 cmp002 dnsmasq[1236]: started, version 2.79 cachesize 150
Nov 16 16:51:42 cmp002 dnsmasq[1236]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
Nov 16 16:51:42 cmp002 dnsmasq[1236]: reading /etc/resolv.conf
Nov 16 16:51:42 cmp002 dnsmasq[1236]: using nameserver 8.8.8.8#53
Nov 16 16:51:42 cmp002 dnsmasq[1236]: read /etc/hosts - 7 addresses
Nov 16 16:51:42 cmp002 systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.
Nov 16 16:51:42 cmp002 systemd[1]: Reached target Host and Network Name Lookups.
Nov 16 16:51:42 cmp002 polkitd[1222]: started daemon version 0.105 using authority implementation `local' version `0.105'
Nov 16 16:51:42 cmp002 dbus-daemon[1032]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 16 16:51:42 cmp002 systemd[1]: Started Authorization Manager.
Nov 16 16:51:42 cmp002 accounts-daemon[1081]: started daemon version 0.6.45
Nov 16 16:51:42 cmp002 systemd[1]: Started Accounts Service.
Nov 16 16:51:42 cmp002 snapd[1017]: helpers.go:145: error trying to compare the snap system key: system-key missing on disk
Nov 16 16:51:42 cmp002 snapd[1017]: daemon.go:338: started snapd/2.40+18.04 (series 16; classic) ubuntu/18.04 (amd64) linux/4.15.0-70-generic.
Nov 16 16:51:42 cmp002 systemd[1]: Started Snappy daemon.
Nov 16 16:51:42 cmp002 systemd[1]: Starting Wait until snapd is fully seeded...
Nov 16 16:51:42 cmp002 systemd[1]: Started LXD - container startup/shutdown.
Nov 16 16:51:42 cmp002 kernel: [   15.924477] random: crng init done
Nov 16 16:51:42 cmp002 kernel: [   15.924480] random: 7 urandom warning(s) missed due to ratelimiting
Nov 16 16:51:42 cmp002 pollinate[1096]: client verified challenge/response with [https://entropy.ubuntu.com/]
Nov 16 16:51:42 cmp002 pollinate[1096]: client hashed response from [https://entropy.ubuntu.com/]
Nov 16 16:51:42 cmp002 pollinate[1096]: client successfully seeded [/dev/urandom]
Nov 16 16:51:42 cmp002 systemd[1]: Started Pollinate to seed the pseudo random number generator.
Nov 16 16:51:42 cmp002 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 16 16:51:42 cmp002 systemd[1]: Started OpenBSD Secure Shell server.
Nov 16 16:51:43 cmp002 systemd[1]: Started The Salt Minion.
Nov 16 16:51:43 cmp002 systemd[1]: Started Wait until snapd is fully seeded.
Nov 16 16:51:43 cmp002 systemd[1]: Starting Apply the settings specified in cloud-config...
Nov 16 16:51:43 cmp002 systemd[1]: Reached target Multi-User System.
Nov 16 16:51:43 cmp002 systemd[1]: Reached target Graphical Interface.
Nov 16 16:51:43 cmp002 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Nov 16 16:51:43 cmp002 systemd[1]: Started Update UTMP about System Runlevel Changes.
Nov 16 16:51:44 cmp002 salt-minion[1009]: [ERROR   ] DNS lookup or connection check of 'salt' failed.
Nov 16 16:51:44 cmp002 salt-minion[1009]: [ERROR   ] Master hostname: 'salt' not found or not responsive. Retrying in 30 seconds
Nov 16 16:51:44 cmp002 cloud-init[1394]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:config' at Sat, 16 Nov 2019 16:51:44 +0000. Up 17.32 seconds.
Nov 16 16:51:44 cmp002 systemd[1]: Started Apply the settings specified in cloud-config.
Nov 16 16:51:44 cmp002 systemd[1]: Starting Execute cloud user/final scripts...
Nov 16 16:51:45 cmp002 systemd[1]: Reloading.
Nov 16 16:51:45 cmp002 cloud-init[1485]: Synchronizing state of networking.service with SysV service script with /lib/systemd/systemd-sysv-install.
Nov 16 16:51:45 cmp002 cloud-init[1485]: Executing: /lib/systemd/systemd-sysv-install enable networking
Nov 16 16:51:45 cmp002 systemd[1]: Reloading.
Nov 16 16:51:46 cmp002 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:51:46 cmp002 cloud-init[1485]: Synchronizing state of salt-minion.service with SysV service script with /lib/systemd/systemd-sysv-install.
Nov 16 16:51:46 cmp002 cloud-init[1485]: Executing: /lib/systemd/systemd-sysv-install enable salt-minion
Nov 16 16:51:46 cmp002 systemd[1]: Reloading.
Nov 16 16:51:46 cmp002 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:51:46 cmp002 systemd[1]: Stopping The Salt Minion...
Nov 16 16:51:46 cmp002 salt-minion[1009]: [WARNING ] Minion received a SIGTERM. Exiting.
Nov 16 16:51:47 cmp002 salt-minion[1009]: The Salt Minion is shutdown. Minion received a SIGTERM. Exited.
Nov 16 16:51:47 cmp002 systemd[1]: Stopped The Salt Minion.
Nov 16 16:51:47 cmp002 systemd[1]: Starting The Salt Minion...
Nov 16 16:51:47 cmp002 systemd[1]: Started The Salt Minion.
Nov 16 16:51:47 cmp002 ec2: 
Nov 16 16:51:47 cmp002 ec2: #############################################################
Nov 16 16:51:47 cmp002 ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 16 16:51:47 cmp002 ec2: 1024 SHA256:AGm9VEUVwkHljAFiA3m6I/kYs/4g4MmEdPQFluYCdj8 root@cmp002 (DSA)
Nov 16 16:51:47 cmp002 ec2: 256 SHA256:kZbqlpJeCr2phfByllUg1MXx9dADuUN9vMHqz/p143Y root@cmp002 (ECDSA)
Nov 16 16:51:47 cmp002 ec2: 256 SHA256:r1B5RfawA3+Mke6lLTV3jAFzIxIQos63OuMSR8hGHL4 root@cmp002 (ED25519)
Nov 16 16:51:47 cmp002 ec2: 2048 SHA256:Wv7FhxqtjDyLFsbXBUVuv6iT9nvxvd7tZ3ExAvfIupg root@cmp002 (RSA)
Nov 16 16:51:47 cmp002 ec2: -----END SSH HOST KEY FINGERPRINTS-----
Nov 16 16:51:47 cmp002 ec2: #############################################################
Nov 16 16:51:47 cmp002 cloud-init[1485]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:final' at Sat, 16 Nov 2019 16:51:45 +0000. Up 18.39 seconds.
Nov 16 16:51:47 cmp002 cloud-init[1485]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 finished at Sat, 16 Nov 2019 16:51:47 +0000. Datasource DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 20.63 seconds
Nov 16 16:51:47 cmp002 systemd[1]: Started Execute cloud user/final scripts.
Nov 16 16:51:47 cmp002 systemd[1]: Reached target Cloud-init target.
Nov 16 16:51:47 cmp002 systemd[1]: Startup finished in 4.127s (kernel) + 16.568s (userspace) = 20.695s.
Nov 16 16:51:47 cmp002 snapd[1017]: daemon.go:576: gracefully waiting for running hooks
Nov 16 16:51:47 cmp002 snapd[1017]: daemon.go:578: done waiting for running hooks
Nov 16 16:51:47 cmp002 snapd[1017]: daemon stop requested to wait for socket activation
Nov 16 16:52:02 cmp002 systemd-timesyncd[610]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Nov 16 16:53:57 cmp002 salt-minion[1719]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:54:03 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y remove telnet.
Nov 16 16:54:08 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install smartmontools.
Nov 16 16:54:19 cmp002 systemd[1]: Reloading.
Nov 16 16:54:21 cmp002 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:54:22 cmp002 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:54:22 cmp002 systemd[1]: Reloading.
Nov 16 16:54:22 cmp002 smartd[4137]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:54:22 cmp002 smartd[4137]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:54:22 cmp002 smartd[4137]: Opened configuration file /etc/smartd.conf
Nov 16 16:54:22 cmp002 smartd[4137]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:54:22 cmp002 smartd[4137]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:54:22 cmp002 smartd[4137]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:54:22 cmp002 smartd[4137]: In the system's table of devices NO devices found to scan
Nov 16 16:54:22 cmp002 smartd[4137]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:54:23 cmp002 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:54:23 cmp002 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:54:23 cmp002 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:54:23 cmp002 smartd[4178]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:54:23 cmp002 smartd[4178]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:54:23 cmp002 smartd[4178]: Opened configuration file /etc/smartd.conf
Nov 16 16:54:23 cmp002 smartd[4178]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:54:23 cmp002 smartd[4178]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:54:23 cmp002 smartd[4178]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:54:23 cmp002 smartd[4178]: In the system's table of devices NO devices found to scan
Nov 16 16:54:23 cmp002 smartd[4178]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:54:23 cmp002 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:54:23 cmp002 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:54:24 cmp002 systemd[1]: Reloading.
Nov 16 16:54:26 cmp002 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:54:28 cmp002 systemd[1]: Created slice system-postfix.slice.
Nov 16 16:54:28 cmp002 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:54:28 cmp002 configure-instance.sh[4311]: postconf: fatal: open /etc/postfix/main.cf: No such file or directory
Nov 16 16:54:29 cmp002 configure-instance.sh[4311]: postconf: fatal: open /etc/postfix/main.cf: No such file or directory
Nov 16 16:54:30 cmp002 systemd[1]: postfix@-.service: Control process exited, code=exited status=1
Nov 16 16:54:30 cmp002 systemd[1]: postfix@-.service: Failed with result 'exit-code'.
Nov 16 16:54:30 cmp002 systemd[1]: Failed to start Postfix Mail Transport Agent (instance -).
Nov 16 16:54:36 cmp002 systemd[1]: Reloading.
Nov 16 16:54:36 cmp002 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:54:37 cmp002 postfix/postfix-script[4673]: starting the Postfix mail system
Nov 16 16:54:37 cmp002 postfix/master[4675]: daemon started -- version 3.3.0, configuration /etc/postfix
Nov 16 16:54:37 cmp002 systemd[1]: Started Postfix Mail Transport Agent (instance -).
Nov 16 16:54:37 cmp002 systemd[1]: Starting Postfix Mail Transport Agent...
Nov 16 16:54:37 cmp002 systemd[1]: Started Postfix Mail Transport Agent.
Nov 16 16:54:37 cmp002 systemd[1]: Reloading.
Nov 16 16:54:39 cmp002 systemd[1]: Reloading.
Nov 16 16:54:39 cmp002 systemd[1]: Stopping System Logging Service...
Nov 16 16:54:39 cmp002 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="1022" x-info="http://www.rsyslog.com"] exiting on signal 15.
Nov 16 16:54:39 cmp002 systemd[1]: Stopped System Logging Service.
Nov 16 16:54:39 cmp002 systemd[1]: Starting System Logging Service...
Nov 16 16:54:39 cmp002 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:54:39 cmp002 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:54:39 cmp002 systemd[1]: Started System Logging Service.
Nov 16 16:54:39 cmp002 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:54:39 cmp002 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="5163" x-info="http://www.rsyslog.com"] start
Nov 16 16:54:41 cmp002 dbus-daemon[1032]: [system] Activating via systemd: service name='org.freedesktop.timedate1' unit='dbus-org.freedesktop.timedate1.service' requested by ':1.22' (uid=0 pid=5243 comm="timedatectl " label="unconfined")
Nov 16 16:54:41 cmp002 systemd[1]: Starting Time & Date Service...
Nov 16 16:54:41 cmp002 dbus-daemon[1032]: [system] Successfully activated service 'org.freedesktop.timedate1'
Nov 16 16:54:41 cmp002 systemd[1]: Started Time & Date Service.
Nov 16 16:54:42 cmp002 salt-minion[1719]: [WARNING ] State for file: /boot/grub/grub.cfg - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:42 cmp002 kernel: [  195.538326] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Nov 16 16:54:47 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install sysfsutils.
Nov 16 16:54:49 cmp002 systemd[1]: Reloading.
Nov 16 16:54:49 cmp002 systemd[1]: Reloading.
Nov 16 16:54:49 cmp002 systemd[1]: Starting LSB: Set sysfs variables from /etc/sysfs.conf...
Nov 16 16:54:49 cmp002 sysfsutils[6063]:  * Setting sysfs variables...
Nov 16 16:54:49 cmp002 sysfsutils[6063]:    ...done.
Nov 16 16:54:49 cmp002 systemd[1]: Started LSB: Set sysfs variables from /etc/sysfs.conf.
Nov 16 16:54:49 cmp002 systemd[1]: Reloading.
Nov 16 16:54:53 cmp002 systemd[1]: Started /bin/systemctl disable ondemand.service.
Nov 16 16:54:53 cmp002 systemd[1]: Reloading.
Nov 16 16:54:53 cmp002 dbus-daemon[1032]: [system] Activating via systemd: service name='org.freedesktop.locale1' unit='dbus-org.freedesktop.locale1.service' requested by ':1.25' (uid=0 pid=6338 comm="localectl " label="unconfined")
Nov 16 16:54:53 cmp002 systemd[1]: Starting Locale Service...
Nov 16 16:54:53 cmp002 dbus-daemon[1032]: [system] Successfully activated service 'org.freedesktop.locale1'
Nov 16 16:54:53 cmp002 systemd[1]: Started Locale Service.
Nov 16 16:54:53 cmp002 systemd-localed[6346]: Changed locale to LANG=en_US.UTF-8.
Nov 16 16:54:53 cmp002 salt-minion[1719]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:54:54 cmp002 systemd[1]: Reloading.
Nov 16 16:54:54 cmp002 salt-minion[1719]: [WARNING ] State for file: /etc/shadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp002 salt-minion[1719]: [WARNING ] State for file: /etc/gshadow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp002 salt-minion[1719]: [WARNING ] State for file: /etc/group- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp002 salt-minion[1719]: [WARNING ] State for file: /etc/group - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp002 salt-minion[1719]: [WARNING ] State for file: /etc/passwd- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp002 salt-minion[1719]: [WARNING ] State for file: /etc/passwd - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp002 salt-minion[1719]: [WARNING ] State for file: /etc/gshadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp002 salt-minion[1719]: [WARNING ] State for file: /etc/shadow- - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:54:54 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install openvswitch-switch bridge-utils vlan.
Nov 16 16:54:59 cmp002 systemd[1]: Reloading.
Nov 16 16:54:59 cmp002 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:54:59 cmp002 systemd[1]: Starting Open vSwitch Database Unit...
Nov 16 16:54:59 cmp002 ovs-ctl[6647]:  * /etc/openvswitch/conf.db does not exist
Nov 16 16:54:59 cmp002 ovs-ctl[6647]:  * Creating empty database /etc/openvswitch/conf.db
Nov 16 16:54:59 cmp002 ovs-ctl[6647]:  * Starting ovsdb-server
Nov 16 16:54:59 cmp002 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.16.1
Nov 16 16:54:59 cmp002 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.11.1 "external-ids:system-id=\"ab8bb0a4-0e51-4298-ab40-d5c8f57d9d06\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"ubuntu\"" "system-version=\"18.04\""
Nov 16 16:54:59 cmp002 ovs-ctl[6647]:  * Configuring Open vSwitch system IDs
Nov 16 16:54:59 cmp002 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=cmp002
Nov 16 16:54:59 cmp002 ovs-ctl[6647]:  * Enabling remote OVSDB managers
Nov 16 16:54:59 cmp002 systemd[1]: Started Open vSwitch Database Unit.
Nov 16 16:54:59 cmp002 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 16 16:55:00 cmp002 ovs-ctl[6708]:  * Inserting openvswitch module
Nov 16 16:55:00 cmp002 kernel: [  212.901203] openvswitch: Open vSwitch switching datapath
Nov 16 16:55:00 cmp002 ovs-ctl[6708]:  * Starting ovs-vswitchd
Nov 16 16:55:00 cmp002 ovs-ctl[6708]:  * Enabling remote OVSDB managers
Nov 16 16:55:00 cmp002 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 16 16:55:00 cmp002 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=cmp002
Nov 16 16:55:00 cmp002 systemd[1]: Starting Open vSwitch...
Nov 16 16:55:00 cmp002 systemd[1]: Started Open vSwitch.
Nov 16 16:55:00 cmp002 systemd[1]: Reloading.
Nov 16 16:55:05 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install bridge-utils.
Nov 16 16:55:06 cmp002 systemd[1]: Reloading.
Nov 16 16:55:06 cmp002 systemd[1]: Started /bin/systemctl enable networking.service.
Nov 16 16:55:06 cmp002 systemd[1]: Reloading.
Nov 16 16:55:07 cmp002 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:55:07 cmp002 dnsmasq[1236]: reading /etc/resolv.conf
Nov 16 16:55:07 cmp002 dnsmasq[1236]: using nameserver 8.8.8.8#53
Nov 16 16:55:07 cmp002 salt-minion[1719]: [WARNING ] The network state sls is requiring a reboot of the system to properly apply network configuration.
Nov 16 16:55:07 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install vlan.
Nov 16 16:55:08 cmp002 systemd-udevd[7525]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:08 cmp002 kernel: [  220.861683] 8021q: 802.1Q VLAN Support v1.8
Nov 16 16:55:08 cmp002 kernel: [  220.861694] 8021q: adding VLAN 0 to HW filter on device ens3
Nov 16 16:55:08 cmp002 kernel: [  220.861749] 8021q: adding VLAN 0 to HW filter on device ens5
Nov 16 16:55:08 cmp002 systemd[1]: Reloading OpenBSD Secure Shell server.
Nov 16 16:55:08 cmp002 systemd[1]: Reloaded OpenBSD Secure Shell server.
Nov 16 16:55:08 cmp002 systemd[1]: Found device /sys/subsystem/net/devices/ens5.1000.
Nov 16 16:55:08 cmp002 systemd[1]: Started ifup for ens5.1000.
Nov 16 16:55:08 cmp002 sh[7629]: ifup: waiting for lock on /run/network/ifstate.ens5
Nov 16 16:55:08 cmp002 systemd[1]: Reloading Postfix Mail Transport Agent (instance -).
Nov 16 16:55:08 cmp002 postfix/postfix-script[7650]: refreshing the Postfix mail system
Nov 16 16:55:08 cmp002 postfix/master[4675]: reload -- version 3.3.0, configuration /etc/postfix
Nov 16 16:55:08 cmp002 systemd[1]: Reloaded Postfix Mail Transport Agent (instance -).
Nov 16 16:55:08 cmp002 systemd[1]: Reloading Postfix Mail Transport Agent.
Nov 16 16:55:08 cmp002 systemd[1]: Reloaded Postfix Mail Transport Agent.
Nov 16 16:55:08 cmp002 sh[7629]: ifup: interface ens5.1000 already configured
Nov 16 16:55:09 cmp002 salt-minion[1719]: [ERROR   ] Command '['umount', '/dev/shm']' failed with return code: 32
Nov 16 16:55:09 cmp002 salt-minion[1719]: [ERROR   ] stderr: umount: /dev/shm: target is busy.
Nov 16 16:55:09 cmp002 salt-minion[1719]: [ERROR   ] retcode: 32
Nov 16 16:55:13 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef dist-upgrade.
Nov 16 16:55:26 cmp002 systemd[1]: Stopped target Graphical Interface.
Nov 16 16:55:26 cmp002 systemd[1]: Stopping Accounts Service...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping Availability of block devices...
Nov 16 16:55:26 cmp002 systemd[1]: Stopped target Cloud-init target.
Nov 16 16:55:26 cmp002 systemd[1]: Stopped Execute cloud user/final scripts.
Nov 16 16:55:26 cmp002 systemd[1]: Stopped target Multi-User System.
Nov 16 16:55:26 cmp002 systemd[1]: Stopping LSB: automatic crash report generation...
Nov 16 16:55:26 cmp002 systemd[1]: Stopped Postfix Mail Transport Agent.
Nov 16 16:55:26 cmp002 systemd[1]: Stopping Postfix Mail Transport Agent (instance -)...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping LSB: Record successful boot for GRUB...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping LXD - container startup/shutdown...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping Deferred execution scheduler...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping OpenBSD Secure Shell server...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping irqbalance daemon...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping D-Bus System Message Bus...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping Unattended Upgrades Shutdown...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping FUSE filesystem for LXC...
Nov 16 16:55:26 cmp002 systemd[1]: Stopping LSB: Set sysfs variables from /etc/sysfs.conf...
Nov 16 16:55:56 cmp002 systemd-modules-load[430]: Inserted module 'iscsi_tcp'
Nov 16 16:55:56 cmp002 systemd-modules-load[430]: Inserted module 'ib_iser'
Nov 16 16:55:56 cmp002 systemd-modules-load[430]: Inserted module 'nf_conntrack'
Nov 16 16:55:56 cmp002 systemd[1]: Started Set the console keyboard layout.
Nov 16 16:55:56 cmp002 systemd[1]: Started udev Coldplug all Devices.
Nov 16 16:55:56 cmp002 systemd[1]: Mounted FUSE Control File System.
Nov 16 16:55:56 cmp002 systemd[1]: Mounted Kernel Configuration File System.
Nov 16 16:55:56 cmp002 systemd[1]: Started Load/Save Random Seed.
Nov 16 16:55:56 cmp002 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 16 16:55:56 cmp002 systemd[1]: Started Apply Kernel Variables.
Nov 16 16:55:56 cmp002 systemd[1]: Started Create Static Device Nodes in /dev.
Nov 16 16:55:56 cmp002 systemd[1]: Starting udev Kernel Device Manager...
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Local File Systems (Pre).
Nov 16 16:55:56 cmp002 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 16 16:55:56 cmp002 systemd[1]: Started udev Kernel Device Manager.
Nov 16 16:55:56 cmp002 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Local Encrypted Volumes.
Nov 16 16:55:56 cmp002 systemd[1]: Started Flush Journal to Persistent Storage.
Nov 16 16:55:56 cmp002 systemd-udevd[471]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:56 cmp002 systemd-udevd[472]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:56 cmp002 systemd-udevd[478]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:56 cmp002 systemd-udevd[473]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:56 cmp002 systemd[1]: Found device /dev/ttyS0.
Nov 16 16:55:56 cmp002 systemd-udevd[474]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:56 cmp002 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 16 16:55:56 cmp002 systemd-udevd[484]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:56 cmp002 systemd-udevd[484]: Could not generate persistent MAC address for br-mgmt: No such file or directory
Nov 16 16:55:56 cmp002 systemd[1]: Found device /sys/subsystem/net/devices/br-mgmt.
Nov 16 16:55:56 cmp002 systemd[1]: Found device Virtio network device.
Nov 16 16:55:56 cmp002 systemd[1]: message repeated 2 times: [ Found device Virtio network device.]
Nov 16 16:55:56 cmp002 systemd[1]: Found device /dev/disk/by-label/UEFI.
Nov 16 16:55:56 cmp002 systemd[1]: Mounting /boot/efi...
Nov 16 16:55:56 cmp002 systemd[1]: Mounted /boot/efi.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Local File Systems.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Create Volatile Files and Directories...
Nov 16 16:55:56 cmp002 systemd[1]: Starting Set console font and keymap...
Nov 16 16:55:56 cmp002 systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
Nov 16 16:55:56 cmp002 systemd[1]: Starting ebtables ruleset management...
Nov 16 16:55:56 cmp002 systemd[1]: Starting AppArmor initialization...
Nov 16 16:55:56 cmp002 systemd[1]: Started Set console font and keymap.
Nov 16 16:55:56 cmp002 systemd[1]: Started Tell Plymouth To Write Out Runtime Data.
Nov 16 16:55:56 cmp002 systemd[1]: Started Create Volatile Files and Directories.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Linux version 4.15.0-70-generic (buildd@lgw01-amd64-055) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 (Ubuntu 4.15.0-70.79-generic 4.15.18)
Nov 16 16:55:56 cmp002 apparmor[908]:  * Starting AppArmor profiles
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:55:56 cmp002 kernel: [    0.000000] KERNEL supported cpus:
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   Intel GenuineIntel
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   AMD AuthenticAMD
Nov 16 16:55:56 cmp002 systemd[1]: Starting Network Time Synchronization...
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   Centaur CentaurHauls
Nov 16 16:55:56 cmp002 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 16 16:55:56 cmp002 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 16 16:55:56 cmp002 kernel: [    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 16 16:55:56 cmp002 systemd[1]: Starting Update UTMP about System Boot/Shutdown...
Nov 16 16:55:56 cmp002 kernel: [    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 16 16:55:56 cmp002 kernel: [    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 16 16:55:56 cmp002 systemd[1]: Started ebtables ruleset management.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] e820: BIOS-provided physical RAM map:
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdefff] usable
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x00000000bffdf000-0x00000000bfffffff] reserved
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 16 16:55:56 cmp002 systemd[1]: Started Update UTMP about System Boot/Shutdown.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000033fffffff] usable
Nov 16 16:55:56 cmp002 kernel: [    0.000000] NX (Execute Disable) protection: active
Nov 16 16:55:56 cmp002 kernel: [    0.000000] SMBIOS 2.8 present.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Hypervisor detected: KVM
Nov 16 16:55:56 cmp002 apparmor[908]: Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
Nov 16 16:55:56 cmp002 kernel: [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 16 16:55:56 cmp002 kernel: [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 16 16:55:56 cmp002 kernel: [    0.000000] e820: last_pfn = 0x340000 max_arch_pfn = 0x400000000
Nov 16 16:55:56 cmp002 kernel: [    0.000000] MTRR default type: write-back
Nov 16 16:55:56 cmp002 kernel: [    0.000000] MTRR fixed ranges enabled:
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   00000-9FFFF write-back
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   A0000-BFFFF uncachable
Nov 16 16:55:56 cmp002 apparmor[908]:    ...done.
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   C0000-FFFFF write-protect
Nov 16 16:55:56 cmp002 kernel: [    0.000000] MTRR variable ranges enabled:
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   0 base 00C0000000 mask FFC0000000 uncachable
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   1 disabled
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   2 disabled
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   3 disabled
Nov 16 16:55:56 cmp002 systemd[1]: Started AppArmor initialization.
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   4 disabled
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   5 disabled
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   6 disabled
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   7 disabled
Nov 16 16:55:56 cmp002 kernel: [    0.000000] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 16 16:55:56 cmp002 kernel: [    0.000000] e820: last_pfn = 0xbffdf max_arch_pfn = 0x400000000
Nov 16 16:55:56 cmp002 systemd[1]: Starting Initial cloud-init job (pre-networking)...
Nov 16 16:55:56 cmp002 kernel: [    0.000000] found SMP MP-table at [mem 0x000f6590-0x000f659f]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Scanning 1 areas for low memory corruption
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Using GB pages for direct mapping
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BRK [0x25b941000, 0x25b941fff] PGTABLE
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BRK [0x25b942000, 0x25b942fff] PGTABLE
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BRK [0x25b943000, 0x25b943fff] PGTABLE
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BRK [0x25b944000, 0x25b944fff] PGTABLE
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BRK [0x25b945000, 0x25b945fff] PGTABLE
Nov 16 16:55:56 cmp002 systemd[1]: Started Network Time Synchronization.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] BRK [0x25b946000, 0x25b946fff] PGTABLE
Nov 16 16:55:56 cmp002 kernel: [    0.000000] RAMDISK: [mem 0x35a87000-0x36d3afff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: Early table checksum verification disabled
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: RSDP 0x00000000000F6540 000014 (v00 BOCHS )
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: RSDT 0x00000000BFFE14B2 000030 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: FACP 0x00000000BFFE08D4 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
Nov 16 16:55:56 cmp002 systemd[1]: Reached target System Time Synchronized.
Nov 16 16:55:56 cmp002 cloud-init[1043]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init-local' at Sat, 16 Nov 2019 16:55:53 +0000. Up 10.94 seconds.
Nov 16 16:55:56 cmp002 systemd[1]: Started Initial cloud-init job (pre-networking).
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Network (Pre).
Nov 16 16:55:56 cmp002 systemd[1]: Started ifup for ens5.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Open vSwitch Database Unit...
Nov 16 16:55:56 cmp002 systemd[1]: Started ifup for br-mgmt.
Nov 16 16:55:56 cmp002 systemd[1]: Started ifup for ens3.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: DSDT 0x00000000BFFDFD00 000BD4 (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: FACS 0x00000000BFFDFCC0 000040
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: SSDT 0x00000000BFFE0948 000ACA (v01 BOCHS  BXPCSSDT 00000001 BXPC 00000001)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: APIC 0x00000000BFFE1412 0000A0 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:55:56 cmp002 kernel: [    0.000000] No NUMA configuration found
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Faking a node at [mem 0x0000000000000000-0x000000033fffffff]
Nov 16 16:55:56 cmp002 systemd[1]: Started ifup for ens4.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] NODE_DATA(0) allocated [mem 0x33ffd5000-0x33fffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] kvm-clock: cpu 0, msr 3:3ff54001, primary cpu clock
Nov 16 16:55:56 cmp002 kernel: [    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 16 16:55:56 cmp002 kernel: [    0.000000] kvm-clock: using sched offset of 265038526291 cycles
Nov 16 16:55:56 cmp002 kernel: [    0.000000] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Zone ranges:
Nov 16 16:55:56 cmp002 sh[1117]: Waiting for br-mgmt to get ready (MAXWAIT is 32 seconds).
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   Normal   [mem 0x0000000100000000-0x000000033fffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   Device   empty
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Movable zone start for each node
Nov 16 16:55:56 cmp002 sh[1106]: WARNING:  Could not open /proc/net/vlan/config.  Maybe you need to load the 8021q module, or maybe you are not using PROCFS??
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Early memory node ranges
Nov 16 16:55:56 cmp002 sh[1106]: Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Nov 16 16:55:56 cmp002 sh[1106]: Added VLAN with VID == 1000 to IF -:ens5:-
Nov 16 16:55:56 cmp002 systemd-udevd[1398]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 16:55:56 cmp002 ovs-ctl[1109]:  * Starting ovsdb-server
Nov 16 16:55:56 cmp002 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.16.1
Nov 16 16:55:56 cmp002 systemd[1]: Found device /sys/subsystem/net/devices/ens5.1000.
Nov 16 16:55:56 cmp002 systemd[1]: Started ifup for ens5.1000.
Nov 16 16:55:56 cmp002 sh[1500]: Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Nov 16 16:55:56 cmp002 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.11.1 "external-ids:system-id=\"ab8bb0a4-0e51-4298-ab40-d5c8f57d9d06\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"ubuntu\"" "system-version=\"18.04\""
Nov 16 16:55:56 cmp002 ovs-ctl[1109]:  * Configuring Open vSwitch system IDs
Nov 16 16:55:56 cmp002 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=cmp002
Nov 16 16:55:56 cmp002 ovs-ctl[1109]:  * Enabling remote OVSDB managers
Nov 16 16:55:56 cmp002 systemd[1]: Started Open vSwitch Database Unit.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 16 16:55:56 cmp002 ovs-ctl[1600]:  * Inserting openvswitch module
Nov 16 16:55:56 cmp002 ovs-ctl[1600]:  * Starting ovs-vswitchd
Nov 16 16:55:56 cmp002 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . external-ids:hostname=cmp002
Nov 16 16:55:56 cmp002 ovs-ctl[1600]:  * Enabling remote OVSDB managers
Nov 16 16:55:56 cmp002 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Raise network interfaces...
Nov 16 16:55:56 cmp002 systemd[1]: Started Raise network interfaces.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 16 16:55:56 cmp002 cloud-init[1866]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'init' at Sat, 16 Nov 2019 16:55:54 +0000. Up 12.72 seconds.
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: ++++++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++++++
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   Device  |   Up  |          Address           |      Mask     | Scope  |     Hw-Address    |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |  br-mgmt  |  True |        172.16.10.56        | 255.255.255.0 | global | 52:54:00:3d:c4:b4 |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |  br-mgmt  |  True | fe80::5054:ff:fe3d:c4b4/64 |       .       |  link  | 52:54:00:3d:c4:b4 |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |    ens3   |  True |       192.168.11.37        | 255.255.255.0 | global | 52:54:00:c7:26:24 |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |    ens3   |  True | fe80::5054:ff:fec7:2624/64 |       .       |  link  | 52:54:00:c7:26:24 |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |    ens4   |  True | fe80::5054:ff:fe3d:c4b4/64 |       .       |  link  | 52:54:00:3d:c4:b4 |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |    ens5   |  True | fe80::5054:ff:feb4:dadf/64 |       .       |  link  | 52:54:00:b4:da:df |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: | ens5.1000 |  True | fe80::5054:ff:feb4:dadf/64 |       .       |  link  | 52:54:00:b4:da:df |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |    ens6   | False |             .              |       .       |   .    | 52:54:00:13:34:fc |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |     lo    |  True |         127.0.0.1          |   255.0.0.0   |  host  |         .         |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |     lo    |  True |          ::1/128           |       .       |  host  |         .         |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-----------+-------+----------------------------+---------------+--------+-------------------+
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: | Route | Destination  |   Gateway    |    Genmask    | Interface | Flags |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   0   |   0.0.0.0    | 192.168.11.3 |    0.0.0.0    |    ens3   |   UG  |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   1   |  10.254.0.0  | 172.16.10.56 |  255.255.0.0  |  br-mgmt  |   UG  |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   2   | 172.16.10.0  |   0.0.0.0    | 255.255.255.0 |  br-mgmt  |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   3   | 192.168.11.0 |   0.0.0.0    | 255.255.255.0 |    ens3   |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-------+--------------+--------------+---------------+-----------+-------+
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   1   |  fe80::/64  |    ::   |    ens4   |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   2   |  fe80::/64  |    ::   |  br-mgmt  |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   3   |  fe80::/64  |    ::   |    ens5   |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   4   |  fe80::/64  |    ::   |    ens3   |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   5   |  fe80::/64  |    ::   | ens5.1000 |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   7   |    local    |    ::   |    ens4   |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   8   |   ff00::/8  |    ::   |    ens4   |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   9   |   ff00::/8  |    ::   |  br-mgmt  |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   10  |   ff00::/8  |    ::   |    ens5   |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   11  |   ff00::/8  |    ::   |    ens3   |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: |   12  |   ff00::/8  |    ::   | ens5.1000 |   U   |
Nov 16 16:55:56 cmp002 cloud-init[1866]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 16 16:55:56 cmp002 systemd[1]: Started Initial cloud-init job (metadata service crawler).
Nov 16 16:55:56 cmp002 systemd[1]: Reached target System Initialization.
Nov 16 16:55:56 cmp002 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 16 16:55:56 cmp002 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 16 16:55:56 cmp002 systemd[1]: Starting LXD - unix socket.
Nov 16 16:55:56 cmp002 systemd[1]: Listening on UUID daemon activation socket.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Socket activation for snappy daemon.
Nov 16 16:55:56 cmp002 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 16 16:55:56 cmp002 systemd[1]: Started Discard unused blocks once a week.
Nov 16 16:55:56 cmp002 systemd[1]: Started ACPI Events Check.
Nov 16 16:55:56 cmp002 systemd[1]: Started Message of the Day.
Nov 16 16:55:56 cmp002 systemd[1]: Listening on ACPID Listen Socket.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Paths.
Nov 16 16:55:56 cmp002 systemd[1]: Started Daily apt download activities.
Nov 16 16:55:56 cmp002 systemd[1]: Started Daily apt upgrade and clean activities.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Timers.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Cloud-config availability.
Nov 16 16:55:56 cmp002 systemd[1]: Listening on LXD - unix socket.
Nov 16 16:55:56 cmp002 systemd[1]: Listening on Socket activation for snappy daemon.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Sockets.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Basic System.
Nov 16 16:55:56 cmp002 systemd[1]: Started irqbalance daemon.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Login Service...
Nov 16 16:55:56 cmp002 systemd[1]: Started Regular background program processing daemon.
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   node   0: [mem 0x0000000000100000-0x00000000bffdefff]
Nov 16 16:55:56 cmp002 systemd[1]: Started D-Bus System Message Bus.
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   node   0: [mem 0x0000000100000000-0x000000033fffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Reserved but unavailable: 98 pages
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000033fffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] On node 0 totalpages: 3145597
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   DMA zone: 64 pages used for memmap
Nov 16 16:55:56 cmp002 cron[1958]: (CRON) INFO (pidfile fd = 3)
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   DMA zone: 21 pages reserved
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   DMA zone: 3998 pages, LIFO batch:0
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   DMA32 zone: 12224 pages used for memmap
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   DMA32 zone: 782303 pages, LIFO batch:31
Nov 16 16:55:56 cmp002 cron[1958]: (CRON) INFO (Running @reboot jobs)
Nov 16 16:55:56 cmp002 dbus-daemon[1960]: [system] AppArmor D-Bus mediation is enabled
Nov 16 16:55:56 cmp002 systemd[1]: Starting LSB: Set sysfs variables from /etc/sysfs.conf...
Nov 16 16:55:56 cmp002 systemd[1]: Starting LSB: Record successful boot for GRUB...
Nov 16 16:55:56 cmp002 systemd[1]: Starting System Logging Service...
Nov 16 16:55:56 cmp002 systemd[1]: Starting Snappy daemon...
Nov 16 16:55:56 cmp002 systemd[1]: Starting Accounts Service...
Nov 16 16:55:56 cmp002 systemd[1]: Starting Open vSwitch...
Nov 16 16:55:56 cmp002 systemd[1]: Started Deferred execution scheduler.
Nov 16 16:55:56 cmp002 systemd[1]: Started FUSE filesystem for LXC.
Nov 16 16:55:56 cmp002 systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Nov 16 16:55:56 cmp002 systemd[1]: Starting LXD - container startup/shutdown...
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   Normal zone: 36864 pages used for memmap
Nov 16 16:55:56 cmp002 kernel: [    0.000000]   Normal zone: 2359296 pages, LIFO batch:31
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: PM-Timer IO Port: 0x608
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: Local APIC address 0xfee00000
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 16 16:55:56 cmp002 kernel: [    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: IRQ0 used by override.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: IRQ5 used by override.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: IRQ9 used by override.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: IRQ10 used by override.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ACPI: IRQ11 used by override.
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Using ACPI (MADT) for SMP configuration information
Nov 16 16:55:56 cmp002 kernel: [    0.000000] smpboot: Allowing 6 CPUs, 0 hotplug CPUs
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xbffdf000-0xbfffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Booting paravirtualized kernel on KVM
Nov 16 16:55:56 cmp002 kernel: [    0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
Nov 16 16:55:56 cmp002 kernel: [    0.000000] random: get_random_bytes called from start_kernel+0x99/0x4fd with crng_init=0
Nov 16 16:55:56 cmp002 kernel: [    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:6 nr_cpu_ids:6 nr_node_ids:1
Nov 16 16:55:56 cmp002 kernel: [    0.000000] percpu: Embedded 45 pages/cpu s147456 r8192 d28672 u262144
Nov 16 16:55:56 cmp002 kernel: [    0.000000] pcpu-alloc: s147456 r8192 d28672 u262144 alloc=1*2097152
Nov 16 16:55:56 cmp002 kernel: [    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 - - 
Nov 16 16:55:56 cmp002 kernel: [    0.000000] KVM setup async PF for cpu 0
Nov 16 16:55:56 cmp002 kernel: [    0.000000] kvm-stealtime: cpu 0, msr 33fc23040
Nov 16 16:55:56 cmp002 kernel: [    0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 3096424
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Policy zone: Normal
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-70-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Calgary: detecting Calgary via BIOS EBDA area
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Memory: 12270808K/12582388K available (12300K kernel code, 2481K rwdata, 4264K rodata, 2432K init, 2388K bss, 311580K reserved, 0K cma-reserved)
Nov 16 16:55:56 cmp002 kernel: [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
Nov 16 16:55:56 cmp002 kernel: [    0.000000] Kernel/User page tables isolation: enabled
Nov 16 16:55:56 cmp002 kernel: [    0.000000] ftrace: allocating 39315 entries in 154 pages
Nov 16 16:55:56 cmp002 kernel: [    0.004000] Hierarchical RCU implementation.
Nov 16 16:55:56 cmp002 kernel: [    0.004000] 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=6.
Nov 16 16:55:56 cmp002 kernel: [    0.004000] 	Tasks RCU enabled.
Nov 16 16:55:56 cmp002 kernel: [    0.004000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
Nov 16 16:55:56 cmp002 kernel: [    0.004000] NR_IRQS: 524544, nr_irqs: 472, preallocated irqs: 16
Nov 16 16:55:56 cmp002 kernel: [    0.004000] Console: colour VGA+ 80x25
Nov 16 16:55:56 cmp002 kernel: [    0.004000] console [tty1] enabled
Nov 16 16:55:56 cmp002 kernel: [    0.004000] console [ttyS0] enabled
Nov 16 16:55:56 cmp002 kernel: [    0.004000] ACPI: Core revision 20170831
Nov 16 16:55:56 cmp002 kernel: [    0.004000] ACPI: 2 ACPI AML tables successfully acquired and loaded
Nov 16 16:55:56 cmp002 kernel: [    0.004007] APIC: Switch to symmetric I/O mode setup
Nov 16 16:55:56 cmp002 kernel: [    0.005239] x2apic enabled
Nov 16 16:55:56 cmp002 kernel: [    0.006108] Switched APIC routing to physical x2apic.
Nov 16 16:55:56 cmp002 kernel: [    0.008000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 16 16:55:56 cmp002 kernel: [    0.008000] tsc: Detected 2799.994 MHz processor
Nov 16 16:55:56 cmp002 kernel: [    0.008000] Calibrating delay loop (skipped) preset value.. 5599.98 BogoMIPS (lpj=11199976)
Nov 16 16:55:56 cmp002 kernel: [    0.008002] pid_max: default: 32768 minimum: 301
Nov 16 16:55:56 cmp002 kernel: [    0.008932] Security Framework initialized
Nov 16 16:55:56 cmp002 kernel: [    0.009749] Yama: becoming mindful.
Nov 16 16:55:56 cmp002 kernel: [    0.010508] AppArmor: AppArmor initialized
Nov 16 16:55:56 cmp002 kernel: [    0.013954] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.017022] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.018479] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.019829] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.020280] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 16 16:55:56 cmp002 kernel: [    0.021336] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 16 16:55:56 cmp002 kernel: [    0.022467] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 16 16:55:56 cmp002 kernel: [    0.024002] Spectre V2 : Mitigation: Full generic retpoline
Nov 16 16:55:56 cmp002 kernel: [    0.025052] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 16 16:55:56 cmp002 kernel: [    0.026638] Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 16 16:55:56 cmp002 kernel: [    0.028008] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 16 16:55:56 cmp002 kernel: [    0.029605] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 16 16:55:56 cmp002 kernel: [    0.032030] MDS: Mitigation: Clear CPU buffers
Nov 16 16:55:56 cmp002 kernel: [    0.038415] Freeing SMP alternatives memory: 36K
Nov 16 16:55:56 cmp002 kernel: [    0.041058] TSC deadline timer enabled
Nov 16 16:55:56 cmp002 kernel: [    0.041061] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (family: 0x6, model: 0x3e, stepping: 0x4)
Nov 16 16:55:56 cmp002 kernel: [    0.043040] Performance Events: IvyBridge events, Intel PMU driver.
Nov 16 16:55:56 cmp002 kernel: [    0.044000] ... version:                2
Nov 16 16:55:56 cmp002 kernel: [    0.044000] ... bit width:              48
Nov 16 16:55:56 cmp002 kernel: [    0.044004] ... generic registers:      4
Nov 16 16:55:56 cmp002 kernel: [    0.044868] ... value mask:             0000ffffffffffff
Nov 16 16:55:56 cmp002 kernel: [    0.045897] ... max period:             000000007fffffff
Nov 16 16:55:56 cmp002 kernel: [    0.046898] ... fixed-purpose events:   3
Nov 16 16:55:56 cmp002 kernel: [    0.047686] ... event mask:             000000070000000f
Nov 16 16:55:56 cmp002 kernel: [    0.048044] Hierarchical SRCU implementation.
Nov 16 16:55:56 cmp002 kernel: [    0.049591] smp: Bringing up secondary CPUs ...
Nov 16 16:55:56 cmp002 kernel: [    0.050589] x86: Booting SMP configuration:
Nov 16 16:55:56 cmp002 kernel: [    0.051421] .... node  #0, CPUs:      #1
Nov 16 16:55:56 cmp002 kernel: [    0.004000] kvm-clock: cpu 1, msr 3:3ff54041, secondary cpu clock
Nov 16 16:55:56 cmp002 kernel: [    0.056021] KVM setup async PF for cpu 1
Nov 16 16:55:56 cmp002 kernel: [    0.056825] kvm-stealtime: cpu 1, msr 33fc63040
Nov 16 16:55:56 cmp002 kernel: [    0.057747]  #2
Nov 16 16:55:56 cmp002 kernel: [    0.004000] kvm-clock: cpu 2, msr 3:3ff54081, secondary cpu clock
Nov 16 16:55:56 cmp002 kernel: [    0.060034] KVM setup async PF for cpu 2
Nov 16 16:55:56 cmp002 kernel: [    0.060838] kvm-stealtime: cpu 2, msr 33fca3040
Nov 16 16:55:56 cmp002 kernel: [    0.062131]  #3
Nov 16 16:55:56 cmp002 kernel: [    0.004000] kvm-clock: cpu 3, msr 3:3ff540c1, secondary cpu clock
Nov 16 16:55:56 cmp002 kernel: [    0.064021] KVM setup async PF for cpu 3
Nov 16 16:55:56 cmp002 kernel: [    0.064848] kvm-stealtime: cpu 3, msr 33fce3040
Nov 16 16:55:56 cmp002 kernel: [    0.065748]  #4
Nov 16 16:55:56 cmp002 kernel: [    0.004000] kvm-clock: cpu 4, msr 3:3ff54101, secondary cpu clock
Nov 16 16:55:56 cmp002 kernel: [    0.068020] KVM setup async PF for cpu 4
Nov 16 16:55:56 cmp002 kernel: [    0.068835] kvm-stealtime: cpu 4, msr 33fd23040
Nov 16 16:55:56 cmp002 kernel: [    0.069760]  #5
Nov 16 16:55:56 cmp002 kernel: [    0.004000] kvm-clock: cpu 5, msr 3:3ff54141, secondary cpu clock
Nov 16 16:55:56 cmp002 kernel: [    0.072021] KVM setup async PF for cpu 5
Nov 16 16:55:56 cmp002 kernel: [    0.072855] kvm-stealtime: cpu 5, msr 33fd63040
Nov 16 16:55:56 cmp002 kernel: [    0.073824] smp: Brought up 1 node, 6 CPUs
Nov 16 16:55:56 cmp002 kernel: [    0.073824] smpboot: Max logical packages: 6
Nov 16 16:55:56 cmp002 kernel: [    0.073824] smpboot: Total of 6 processors activated (33599.92 BogoMIPS)
Nov 16 16:55:56 cmp002 kernel: [    0.076893] devtmpfs: initialized
Nov 16 16:55:56 cmp002 kernel: [    0.076893] x86/mm: Memory block size: 128MB
Nov 16 16:55:56 cmp002 kernel: [    0.081231] evm: security.selinux
Nov 16 16:55:56 cmp002 kernel: [    0.081931] evm: security.SMACK64
Nov 16 16:55:56 cmp002 kernel: [    0.082652] evm: security.SMACK64EXEC
Nov 16 16:55:56 cmp002 kernel: [    0.083436] evm: security.SMACK64TRANSMUTE
Nov 16 16:55:56 cmp002 kernel: [    0.084004] evm: security.SMACK64MMAP
Nov 16 16:55:56 cmp002 kernel: [    0.084750] evm: security.apparmor
Nov 16 16:55:56 cmp002 kernel: [    0.085444] evm: security.ima
Nov 16 16:55:56 cmp002 kernel: [    0.086065] evm: security.capability
Nov 16 16:55:56 cmp002 kernel: [    0.086851] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
Nov 16 16:55:56 cmp002 kernel: [    0.088017] futex hash table entries: 2048 (order: 5, 131072 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.089282] pinctrl core: initialized pinctrl subsystem
Nov 16 16:55:56 cmp002 kernel: [    0.090514] RTC time: 16:55:41, date: 11/16/19
Nov 16 16:55:56 cmp002 kernel: [    0.092151] NET: Registered protocol family 16
Nov 16 16:55:56 cmp002 kernel: [    0.093110] audit: initializing netlink subsys (disabled)
Nov 16 16:55:56 cmp002 kernel: [    0.094200] audit: type=2000 audit(1573923340.841:1): state=initialized audit_enabled=0 res=1
Nov 16 16:55:56 cmp002 kernel: [    0.096009] cpuidle: using governor ladder
Nov 16 16:55:56 cmp002 kernel: [    0.096846] cpuidle: using governor menu
Nov 16 16:55:56 cmp002 kernel: [    0.097865] ACPI: bus type PCI registered
Nov 16 16:55:56 cmp002 kernel: [    0.098709] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 16 16:55:56 cmp002 kernel: [    0.100118] PCI: Using configuration type 1 for base access
Nov 16 16:55:56 cmp002 kernel: [    0.101223] core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off
Nov 16 16:55:56 cmp002 kernel: [    0.104041] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 16 16:55:56 cmp002 kernel: [    0.105306] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 16 16:55:56 cmp002 kernel: [    0.106661] ACPI: Added _OSI(Module Device)
Nov 16 16:55:56 cmp002 kernel: [    0.106661] ACPI: Added _OSI(Processor Device)
Nov 16 16:55:56 cmp002 kernel: [    0.108009] ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 16 16:55:56 cmp002 kernel: [    0.108972] ACPI: Added _OSI(Processor Aggregator Device)
Nov 16 16:55:56 cmp002 kernel: [    0.110023] ACPI: Added _OSI(Linux-Dell-Video)
Nov 16 16:55:56 cmp002 kernel: [    0.110923] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 16 16:55:56 cmp002 kernel: [    0.111934] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 16 16:55:56 cmp002 kernel: [    0.113269] ACPI: Interpreter enabled
Nov 16 16:55:56 cmp002 kernel: [    0.114073] ACPI: (supports S0 S5)
Nov 16 16:55:56 cmp002 kernel: [    0.114772] ACPI: Using IOAPIC for interrupt routing
Nov 16 16:55:56 cmp002 kernel: [    0.115799] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 16 16:55:56 cmp002 kernel: [    0.116493] ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 16 16:55:56 cmp002 kernel: [    0.120800] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 16 16:55:56 cmp002 kernel: [    0.122004] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
Nov 16 16:55:56 cmp002 kernel: [    0.123362] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Nov 16 16:55:56 cmp002 kernel: [    0.124012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 16 16:55:56 cmp002 kernel: [    0.126495] acpiphp: Slot [3] registered
Nov 16 16:55:56 cmp002 kernel: [    0.127329] acpiphp: Slot [4] registered
Nov 16 16:55:56 cmp002 kernel: [    0.128042] acpiphp: Slot [5] registered
Nov 16 16:55:56 cmp002 kernel: [    0.128877] acpiphp: Slot [6] registered
Nov 16 16:55:56 cmp002 kernel: [    0.129775] acpiphp: Slot [7] registered
Nov 16 16:55:56 cmp002 kernel: [    0.130644] acpiphp: Slot [9] registered
Nov 16 16:55:56 cmp002 kernel: [    0.131483] acpiphp: Slot [10] registered
Nov 16 16:55:56 cmp002 kernel: [    0.132041] acpiphp: Slot [11] registered
Nov 16 16:55:56 cmp002 kernel: [    0.132916] acpiphp: Slot [12] registered
Nov 16 16:55:56 cmp002 kernel: [    0.133837] acpiphp: Slot [13] registered
Nov 16 16:55:56 cmp002 kernel: [    0.134676] acpiphp: Slot [14] registered
Nov 16 16:55:56 cmp002 kernel: [    0.135553] acpiphp: Slot [15] registered
Nov 16 16:55:56 cmp002 kernel: [    0.136041] acpiphp: Slot [16] registered
Nov 16 16:55:56 cmp002 kernel: [    0.136919] acpiphp: Slot [17] registered
Nov 16 16:55:56 cmp002 kernel: [    0.137807] acpiphp: Slot [18] registered
Nov 16 16:55:56 cmp002 kernel: [    0.138661] acpiphp: Slot [19] registered
Nov 16 16:55:56 cmp002 kernel: [    0.139514] acpiphp: Slot [20] registered
Nov 16 16:55:56 cmp002 kernel: [    0.140042] acpiphp: Slot [21] registered
Nov 16 16:55:56 cmp002 kernel: [    0.140930] acpiphp: Slot [22] registered
Nov 16 16:55:56 cmp002 kernel: [    0.141782] acpiphp: Slot [23] registered
Nov 16 16:55:56 cmp002 kernel: [    0.142651] acpiphp: Slot [24] registered
Nov 16 16:55:56 cmp002 kernel: [    0.143493] acpiphp: Slot [25] registered
Nov 16 16:55:56 cmp002 kernel: [    0.144041] acpiphp: Slot [26] registered
Nov 16 16:55:56 cmp002 kernel: [    0.144956] acpiphp: Slot [27] registered
Nov 16 16:55:56 cmp002 kernel: [    0.145790] acpiphp: Slot [28] registered
Nov 16 16:55:56 cmp002 kernel: [    0.146663] acpiphp: Slot [29] registered
Nov 16 16:55:56 cmp002 kernel: [    0.147518] acpiphp: Slot [30] registered
Nov 16 16:55:56 cmp002 kernel: [    0.148048] acpiphp: Slot [31] registered
Nov 16 16:55:56 cmp002 kernel: [    0.148899] PCI host bridge to bus 0000:00
Nov 16 16:55:56 cmp002 kernel: [    0.149732] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 16 16:55:56 cmp002 kernel: [    0.151009] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 16 16:55:56 cmp002 kernel: [    0.152005] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 16 16:55:56 cmp002 kernel: [    0.153435] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 16 16:55:56 cmp002 kernel: [    0.154925] pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 16 16:55:56 cmp002 kernel: [    0.156044] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 16 16:55:56 cmp002 kernel: [    0.156528] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 16 16:55:56 cmp002 kernel: [    0.157168] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 16 16:55:56 cmp002 kernel: [    0.164006] pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Nov 16 16:55:56 cmp002 kernel: [    0.167749] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Nov 16 16:55:56 cmp002 kernel: [    0.168005] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Nov 16 16:55:56 cmp002 kernel: [    0.169261] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Nov 16 16:55:56 cmp002 kernel: [    0.170676] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Nov 16 16:55:56 cmp002 kernel: [    0.172080] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 16 16:55:56 cmp002 kernel: [    0.172561] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 16 16:55:56 cmp002 kernel: [    0.174021] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 16 16:55:56 cmp002 kernel: [    0.175607] pci 0000:00:02.0: [1013:00b8] type 00 class 0x030000
Nov 16 16:55:56 cmp002 kernel: [    0.177415] pci 0000:00:02.0: reg 0x10: [mem 0xfc000000-0xfdffffff pref]
Nov 16 16:55:56 cmp002 kernel: [    0.180009] pci 0000:00:02.0: reg 0x14: [mem 0xfebd0000-0xfebd0fff]
Nov 16 16:55:56 cmp002 kernel: [    0.192009] pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 16 16:55:56 cmp002 kernel: [    0.192259] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:56 cmp002 kernel: [    0.194369] pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc05f]
Nov 16 16:55:56 cmp002 kernel: [    0.196934] pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 16 16:55:56 cmp002 kernel: [    0.207757] pci 0000:00:03.0: reg 0x30: [mem 0xfeac0000-0xfeafffff pref]
Nov 16 16:55:56 cmp002 kernel: [    0.208143] pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:56 cmp002 kernel: [    0.210097] pci 0000:00:04.0: reg 0x10: [io  0xc060-0xc07f]
Nov 16 16:55:56 cmp002 kernel: [    0.211959] pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 16 16:55:56 cmp002 kernel: [    0.221984] pci 0000:00:04.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
Nov 16 16:55:56 cmp002 kernel: [    0.222365] pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:56 cmp002 kernel: [    0.224014] pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Nov 16 16:55:56 cmp002 kernel: [    0.226209] pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 16 16:55:56 cmp002 kernel: [    0.236006] pci 0000:00:05.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
Nov 16 16:55:56 cmp002 kernel: [    0.236383] pci 0000:00:06.0: [1af4:1000] type 00 class 0x020000
Nov 16 16:55:56 cmp002 kernel: [    0.239690] pci 0000:00:06.0: reg 0x10: [io  0xc0a0-0xc0bf]
Nov 16 16:55:56 cmp002 kernel: [    0.240954] pci 0000:00:06.0: reg 0x14: [mem 0xfebd4000-0xfebd4fff]
Nov 16 16:55:56 cmp002 kernel: [    0.251081] pci 0000:00:06.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 16 16:55:56 cmp002 kernel: [    0.251485] pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 16 16:55:56 cmp002 kernel: [    0.253115] pci 0000:00:07.0: reg 0x10: [io  0xc000-0xc03f]
Nov 16 16:55:56 cmp002 kernel: [    0.255171] pci 0000:00:07.0: reg 0x14: [mem 0xfebd5000-0xfebd5fff]
Nov 16 16:55:56 cmp002 kernel: [    0.264375] pci 0000:00:08.0: [8086:2934] type 00 class 0x0c0300
Nov 16 16:55:56 cmp002 kernel: [    0.270955] pci 0000:00:08.0: reg 0x20: [io  0xc0c0-0xc0df]
Nov 16 16:55:56 cmp002 kernel: [    0.272754] pci 0000:00:08.1: [8086:2935] type 00 class 0x0c0300
Nov 16 16:55:56 cmp002 kernel: [    0.277523] pci 0000:00:08.1: reg 0x20: [io  0xc0e0-0xc0ff]
Nov 16 16:55:56 cmp002 kernel: [    0.280045] pci 0000:00:08.2: [8086:2936] type 00 class 0x0c0300
Nov 16 16:55:56 cmp002 kernel: [    0.285010] pci 0000:00:08.2: reg 0x20: [io  0xc100-0xc11f]
Nov 16 16:55:56 cmp002 kernel: [    0.287225] pci 0000:00:08.7: [8086:293a] type 00 class 0x0c0320
Nov 16 16:55:56 cmp002 kernel: [    0.288005] pci 0000:00:08.7: reg 0x10: [mem 0xfebd6000-0xfebd6fff]
Nov 16 16:55:56 cmp002 kernel: [    0.295642] pci 0000:00:09.0: [1af4:1002] type 00 class 0x00ff00
Nov 16 16:55:56 cmp002 kernel: [    0.296481] pci 0000:00:09.0: reg 0x10: [io  0xc120-0xc13f]
Nov 16 16:55:56 cmp002 kernel: [    0.304228] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
Nov 16 16:55:56 cmp002 kernel: [    0.305493] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
Nov 16 16:55:56 cmp002 kernel: [    0.306736] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
Nov 16 16:55:56 cmp002 kernel: [    0.308008] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
Nov 16 16:55:56 cmp002 kernel: [    0.309178] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
Nov 16 16:55:56 cmp002 kernel: [    0.311077] SCSI subsystem initialized
Nov 16 16:55:56 cmp002 kernel: [    0.311950] libata version 3.00 loaded.
Nov 16 16:55:56 cmp002 kernel: [    0.312524] pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 16 16:55:56 cmp002 kernel: [    0.313743] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 16 16:55:56 cmp002 kernel: [    0.315447] pci 0000:00:02.0: vgaarb: bridge control possible
Nov 16 16:55:56 cmp002 kernel: [    0.316005] vgaarb: loaded
Nov 16 16:55:56 cmp002 kernel: [    0.316628] ACPI: bus type USB registered
Nov 16 16:55:56 cmp002 kernel: [    0.317502] usbcore: registered new interface driver usbfs
Nov 16 16:55:56 cmp002 kernel: [    0.318722] usbcore: registered new interface driver hub
Nov 16 16:55:56 cmp002 kernel: [    0.319799] usbcore: registered new device driver usb
Nov 16 16:55:56 cmp002 kernel: [    0.320074] EDAC MC: Ver: 3.0.0
Nov 16 16:55:56 cmp002 kernel: [    0.321179] PCI: Using ACPI for IRQ routing
Nov 16 16:55:56 cmp002 kernel: [    0.321179] PCI: pci_cache_line_size set to 64 bytes
Nov 16 16:55:56 cmp002 kernel: [    0.321179] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 16 16:55:56 cmp002 kernel: [    0.321179] e820: reserve RAM buffer [mem 0xbffdf000-0xbfffffff]
Nov 16 16:55:56 cmp002 kernel: [    0.321286] NetLabel: Initializing
Nov 16 16:55:56 cmp002 kernel: [    0.322017] NetLabel:  domain hash size = 128
Nov 16 16:55:56 cmp002 kernel: [    0.324004] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 16 16:55:56 cmp002 kernel: [    0.325244] NetLabel:  unlabeled traffic allowed by default
Nov 16 16:55:56 cmp002 kernel: [    0.326364] clocksource: Switched to clocksource kvm-clock
Nov 16 16:55:56 cmp002 kernel: [    0.335967] VFS: Disk quotas dquot_6.6.0
Nov 16 16:55:56 cmp002 kernel: [    0.336800] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.338271] AppArmor: AppArmor Filesystem Enabled
Nov 16 16:55:56 cmp002 kernel: [    0.339213] pnp: PnP ACPI init
Nov 16 16:55:56 cmp002 kernel: [    0.339910] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
Nov 16 16:55:56 cmp002 kernel: [    0.339943] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
Nov 16 16:55:56 cmp002 kernel: [    0.339965] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active)
Nov 16 16:55:56 cmp002 kernel: [    0.339993] pnp 00:03: [dma 2]
Nov 16 16:55:56 cmp002 kernel: [    0.340017] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active)
Nov 16 16:55:56 cmp002 kernel: [    0.340097] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
Nov 16 16:55:56 cmp002 kernel: [    0.340329] pnp: PnP ACPI: found 5 devices
Nov 16 16:55:56 cmp002 kernel: [    0.348065] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 16 16:55:56 cmp002 kernel: [    0.349941] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 16 16:55:56 cmp002 kernel: [    0.349942] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 16 16:55:56 cmp002 kernel: [    0.349943] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 16 16:55:56 cmp002 kernel: [    0.349943] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 16 16:55:56 cmp002 kernel: [    0.350003] NET: Registered protocol family 2
Nov 16 16:55:56 cmp002 kernel: [    0.351099] TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.352730] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.354251] TCP: Hash tables configured (established 131072 bind 65536)
Nov 16 16:55:56 cmp002 kernel: [    0.355541] UDP hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.356739] UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes)
Nov 16 16:55:56 cmp002 kernel: [    0.358155] NET: Registered protocol family 1
Nov 16 16:55:56 cmp002 kernel: [    0.359035] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 16 16:55:56 cmp002 kernel: [    0.360220] pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 16 16:55:56 cmp002 kernel: [    0.361374] pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 16 16:55:56 cmp002 kernel: [    0.362629] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 16 16:55:56 cmp002 kernel: [    0.387768] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
Nov 16 16:55:56 cmp002 kernel: [    0.436358] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
Nov 16 16:55:56 cmp002 kernel: [    0.485291] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
Nov 16 16:55:56 cmp002 kernel: [    0.535913] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
Nov 16 16:55:56 cmp002 kernel: [    0.562214] PCI: CLS 0 bytes, default 64
Nov 16 16:55:56 cmp002 kernel: [    0.562251] Unpacking initramfs...
Nov 16 16:55:56 cmp002 kernel: [    0.797905] Freeing initrd memory: 19152K
Nov 16 16:55:56 cmp002 kernel: [    0.798906] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 16 16:55:56 cmp002 kernel: [    0.800294] software IO TLB: mapped [mem 0xbbfdf000-0xbffdf000] (64MB)
Nov 16 16:55:56 cmp002 kernel: [    0.801717] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3aeaff3, max_idle_ns: 440795255742 ns
Nov 16 16:55:56 cmp002 kernel: [    0.803887] Scanning for low memory corruption every 60 seconds
Nov 16 16:55:56 cmp002 kernel: [    0.805869] Initialise system trusted keyrings
Nov 16 16:55:56 cmp002 kernel: [    0.806882] Key type blacklist registered
Nov 16 16:55:56 cmp002 kernel: [    0.807857] workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 16 16:55:56 cmp002 kernel: [    0.810306] zbud: loaded
Nov 16 16:55:56 cmp002 kernel: [    0.811456] squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 16 16:55:56 cmp002 kernel: [    0.812934] fuse init (API version 7.26)
Nov 16 16:55:56 cmp002 kernel: [    0.815725] Key type asymmetric registered
Nov 16 16:55:56 cmp002 kernel: [    0.816659] Asymmetric key parser 'x509' registered
Nov 16 16:55:56 cmp002 kernel: [    0.817775] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 16 16:55:56 cmp002 kernel: [    0.819518] io scheduler noop registered
Nov 16 16:55:56 cmp002 kernel: [    0.820447] io scheduler deadline registered
Nov 16 16:55:56 cmp002 kernel: [    0.821426] io scheduler cfq registered (default)
Nov 16 16:55:56 cmp002 kernel: [    0.822930] intel_idle: Please enable MWAIT in BIOS SETUP
Nov 16 16:55:56 cmp002 kernel: [    0.823000] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 16 16:55:56 cmp002 kernel: [    0.824749] ACPI: Power Button [PWRF]
Nov 16 16:55:56 cmp002 kernel: [    0.852072] virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:56 cmp002 kernel: [    0.878029] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:56 cmp002 kernel: [    0.905840] virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:56 cmp002 kernel: [    0.933681] virtio-pci 0000:00:06.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:56 cmp002 kernel: [    0.960087] virtio-pci 0000:00:07.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:56 cmp002 kernel: [    0.986320] virtio-pci 0000:00:09.0: virtio_pci: leaving for legacy driver
Nov 16 16:55:56 cmp002 kernel: [    0.988823] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Nov 16 16:55:56 cmp002 kernel: [    1.013608] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 16 16:55:56 cmp002 kernel: [    1.016895] Linux agpgart interface v0.103
Nov 16 16:55:56 cmp002 kernel: [    1.020250] loop: module loaded
Nov 16 16:55:56 cmp002 kernel: [    1.021042] ata_piix 0000:00:01.1: version 2.13
Nov 16 16:55:56 cmp002 kernel: [    1.022334] scsi host0: ata_piix
Nov 16 16:55:56 cmp002 kernel: [    1.023213] scsi host1: ata_piix
Nov 16 16:55:56 cmp002 kernel: [    1.023918] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 16 16:55:56 cmp002 kernel: [    1.025269] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 16 16:55:56 cmp002 kernel: [    1.026621] libphy: Fixed MDIO Bus: probed
Nov 16 16:55:56 cmp002 kernel: [    1.027530] tun: Universal TUN/TAP device driver, 1.6
Nov 16 16:55:56 cmp002 kernel: [    1.028603] PPP generic driver version 2.4.2
Nov 16 16:55:56 cmp002 kernel: [    1.029660] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 16 16:55:56 cmp002 kernel: [    1.031076] ehci-pci: EHCI PCI platform driver
Nov 16 16:55:56 cmp002 kernel: [    1.056216] ehci-pci 0000:00:08.7: EHCI Host Controller
Nov 16 16:55:56 cmp002 kernel: [    1.057297] ehci-pci 0000:00:08.7: new USB bus registered, assigned bus number 1
Nov 16 16:55:56 cmp002 kernel: [    1.058900] ehci-pci 0000:00:08.7: irq 11, io mem 0xfebd6000
Nov 16 16:55:56 cmp002 kernel: [    1.076063] ehci-pci 0000:00:08.7: USB 2.0 started, EHCI 1.00
Nov 16 16:55:56 cmp002 kernel: [    1.087783] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Nov 16 16:55:56 cmp002 kernel: [    1.089718] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:56 cmp002 kernel: [    1.091140] usb usb1: Product: EHCI Host Controller
Nov 16 16:55:56 cmp002 kernel: [    1.092115] usb usb1: Manufacturer: Linux 4.15.0-70-generic ehci_hcd
Nov 16 16:55:56 cmp002 kernel: [    1.093687] usb usb1: SerialNumber: 0000:00:08.7
Nov 16 16:55:56 cmp002 kernel: [    1.094714] hub 1-0:1.0: USB hub found
Nov 16 16:55:56 cmp002 kernel: [    1.095469] hub 1-0:1.0: 6 ports detected
Nov 16 16:55:56 cmp002 kernel: [    1.096423] ehci-platform: EHCI generic platform driver
Nov 16 16:55:56 cmp002 kernel: [    1.097405] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 16 16:55:56 cmp002 kernel: [    1.098548] ohci-pci: OHCI PCI platform driver
Nov 16 16:55:56 cmp002 kernel: [    1.099408] ohci-platform: OHCI generic platform driver
Nov 16 16:55:56 cmp002 kernel: [    1.100391] uhci_hcd: USB Universal Host Controller Interface driver
Nov 16 16:55:56 cmp002 kernel: [    1.124393] uhci_hcd 0000:00:08.0: UHCI Host Controller
Nov 16 16:55:56 cmp002 kernel: [    1.125373] uhci_hcd 0000:00:08.0: new USB bus registered, assigned bus number 2
Nov 16 16:55:56 cmp002 kernel: [    1.126761] uhci_hcd 0000:00:08.0: detected 2 ports
Nov 16 16:55:56 cmp002 kernel: [    1.127734] uhci_hcd 0000:00:08.0: irq 11, io base 0x0000c0c0
Nov 16 16:55:56 cmp002 kernel: [    1.128870] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:56 cmp002 kernel: [    1.130169] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:56 cmp002 kernel: [    1.131560] usb usb2: Product: UHCI Host Controller
Nov 16 16:55:56 cmp002 kernel: [    1.132567] usb usb2: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:56 cmp002 kernel: [    1.133780] usb usb2: SerialNumber: 0000:00:08.0
Nov 16 16:55:56 cmp002 kernel: [    1.134989] hub 2-0:1.0: USB hub found
Nov 16 16:55:56 cmp002 kernel: [    1.135784] hub 2-0:1.0: 2 ports detected
Nov 16 16:55:56 cmp002 kernel: [    1.160549] uhci_hcd 0000:00:08.1: UHCI Host Controller
Nov 16 16:55:56 cmp002 kernel: [    1.161570] uhci_hcd 0000:00:08.1: new USB bus registered, assigned bus number 3
Nov 16 16:55:56 cmp002 kernel: [    1.163067] uhci_hcd 0000:00:08.1: detected 2 ports
Nov 16 16:55:56 cmp002 kernel: [    1.164103] uhci_hcd 0000:00:08.1: irq 10, io base 0x0000c0e0
Nov 16 16:55:56 cmp002 kernel: [    1.165271] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:56 cmp002 kernel: [    1.166658] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:56 cmp002 kernel: [    1.168122] usb usb3: Product: UHCI Host Controller
Nov 16 16:55:56 cmp002 kernel: [    1.169133] usb usb3: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:56 cmp002 kernel: [    1.170539] usb usb3: SerialNumber: 0000:00:08.1
Nov 16 16:55:56 cmp002 kernel: [    1.171567] hub 3-0:1.0: USB hub found
Nov 16 16:55:56 cmp002 kernel: [    1.172360] hub 3-0:1.0: 2 ports detected
Nov 16 16:55:56 cmp002 kernel: [    1.184687] ata1.01: NODEV after polling detection
Nov 16 16:55:56 cmp002 kernel: [    1.185105] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 16 16:55:56 cmp002 kernel: [    1.186807] ata1.00: configured for MWDMA2
Nov 16 16:55:56 cmp002 kernel: [    1.188423] scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 16 16:55:56 cmp002 kernel: [    1.191317] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 16 16:55:56 cmp002 kernel: [    1.192678] cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 16 16:55:56 cmp002 kernel: [    1.193891] sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 16 16:55:56 cmp002 kernel: [    1.193928] sr 0:0:0:0: Attached scsi generic sg0 type 5
Nov 16 16:55:56 cmp002 kernel: [    1.197861] uhci_hcd 0000:00:08.2: UHCI Host Controller
Nov 16 16:55:56 cmp002 kernel: [    1.198954] uhci_hcd 0000:00:08.2: new USB bus registered, assigned bus number 4
Nov 16 16:55:56 cmp002 kernel: [    1.200518] uhci_hcd 0000:00:08.2: detected 2 ports
Nov 16 16:55:56 cmp002 kernel: [    1.201685] uhci_hcd 0000:00:08.2: irq 10, io base 0x0000c100
Nov 16 16:55:56 cmp002 kernel: [    1.203088] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Nov 16 16:55:56 cmp002 kernel: [    1.204546] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 16 16:55:56 cmp002 kernel: [    1.206129] usb usb4: Product: UHCI Host Controller
Nov 16 16:55:56 cmp002 kernel: [    1.207298] usb usb4: Manufacturer: Linux 4.15.0-70-generic uhci_hcd
Nov 16 16:55:56 cmp002 kernel: [    1.208655] usb usb4: SerialNumber: 0000:00:08.2
Nov 16 16:55:56 cmp002 kernel: [    1.209793] hub 4-0:1.0: USB hub found
Nov 16 16:55:56 cmp002 kernel: [    1.210680] hub 4-0:1.0: 2 ports detected
Nov 16 16:55:56 cmp002 kernel: [    1.211746] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 16 16:55:56 cmp002 kernel: [    1.214297] serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 16 16:55:56 cmp002 kernel: [    1.215408] serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 16 16:55:56 cmp002 kernel: [    1.216630] mousedev: PS/2 mouse device common for all mice
Nov 16 16:55:56 cmp002 kernel: [    1.218381] rtc_cmos 00:00: RTC can wake from S4
Nov 16 16:55:56 cmp002 kernel: [    1.219851] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 16 16:55:56 cmp002 kernel: [    1.221775] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
Nov 16 16:55:56 cmp002 kernel: [    1.223280] rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 16 16:55:56 cmp002 kernel: [    1.224623] i2c /dev entries driver
Nov 16 16:55:56 cmp002 kernel: [    1.225519] device-mapper: uevent: version 1.0.3
Nov 16 16:55:56 cmp002 kernel: [    1.227086] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: dm-devel@redhat.com
Nov 16 16:55:56 cmp002 kernel: [    1.230070] ledtrig-cpu: registered to indicate activity on CPUs
Nov 16 16:55:56 cmp002 kernel: [    1.232655] NET: Registered protocol family 10
Nov 16 16:55:56 cmp002 kernel: [    1.238867] Segment Routing with IPv6
Nov 16 16:55:56 cmp002 kernel: [    1.239748] NET: Registered protocol family 17
Nov 16 16:55:56 cmp002 kernel: [    1.240838] Key type dns_resolver registered
Nov 16 16:55:56 cmp002 kernel: [    1.242329] mce: Using 10 MCE banks
Nov 16 16:55:56 cmp002 kernel: [    1.243198] RAS: Correctable Errors collector initialized.
Nov 16 16:55:56 cmp002 kernel: [    1.244415] sched_clock: Marking stable (1244398370, 0)->(1678172461, -433774091)
Nov 16 16:55:56 cmp002 kernel: [    1.246417] registered taskstats version 1
Nov 16 16:55:56 cmp002 systemd[1]: Started Open vSwitch.
Nov 16 16:55:56 cmp002 systemd[1]: Started Login Service.
Nov 16 16:55:56 cmp002 sysfsutils[1990]:  * Setting sysfs variables...
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Network.
Nov 16 16:55:56 cmp002 grub-common[1994]:  * Recording successful boot for GRUB
Nov 16 16:55:56 cmp002 systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
Nov 16 16:55:56 cmp002 systemd[1]: Starting The Salt Minion...
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Network is Online.
Nov 16 16:55:56 cmp002 sysfsutils[1990]:    ...done.
Nov 16 16:55:56 cmp002 lxcfs[2031]: mount namespace: 5
Nov 16 16:55:56 cmp002 lxcfs[2031]: hierarchies:
Nov 16 16:55:56 cmp002 lxcfs[2031]:   0: fd:   6: blkio
Nov 16 16:55:56 cmp002 lxcfs[2031]:   1: fd:   7: pids
Nov 16 16:55:56 cmp002 lxcfs[2031]:   2: fd:   8: devices
Nov 16 16:55:56 cmp002 lxcfs[2031]:   3: fd:   9: hugetlb
Nov 16 16:55:56 cmp002 lxcfs[2031]:   4: fd:  10: net_cls,net_prio
Nov 16 16:55:56 cmp002 lxcfs[2031]:   5: fd:  11: freezer
Nov 16 16:55:56 cmp002 lxcfs[2031]:   6: fd:  12: cpu,cpuacct
Nov 16 16:55:56 cmp002 lxcfs[2031]:   7: fd:  13: cpuset
Nov 16 16:55:56 cmp002 lxcfs[2031]:   8: fd:  14: perf_event
Nov 16 16:55:56 cmp002 lxcfs[2031]:   9: fd:  15: rdma
Nov 16 16:55:56 cmp002 lxcfs[2031]:  10: fd:  16: memory
Nov 16 16:55:56 cmp002 lxcfs[2031]:  11: fd:  17: name=systemd
Nov 16 16:55:56 cmp002 lxcfs[2031]:  12: fd:  18: unified
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Remote File Systems (Pre).
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Remote File Systems.
Nov 16 16:55:56 cmp002 systemd[1]: Starting LSB: automatic crash report generation...
Nov 16 16:55:56 cmp002 systemd[1]: Starting Availability of block devices...
Nov 16 16:55:56 cmp002 systemd[1]: Starting OpenBSD Secure Shell server...
Nov 16 16:55:56 cmp002 rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.32.0]
Nov 16 16:55:56 cmp002 kernel: [    1.247392] Loading compiled-in X.509 certificates
Nov 16 16:55:56 cmp002 kernel: [    1.250827] Loaded X.509 cert 'Build time autogenerated kernel key: 1859b0531897959199376c446a0bd70df75fd1fc'
Nov 16 16:55:56 cmp002 kernel: [    1.252938] zswap: loaded using pool lzo/zbud
Nov 16 16:55:56 cmp002 kernel: [    1.257178] Key type big_key registered
Nov 16 16:55:56 cmp002 kernel: [    1.258094] Key type trusted registered
Nov 16 16:55:56 cmp002 kernel: [    1.260553] Key type encrypted registered
Nov 16 16:55:56 cmp002 kernel: [    1.261505] AppArmor: AppArmor sha1 policy hashing enabled
Nov 16 16:55:56 cmp002 kernel: [    1.262781] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
Nov 16 16:55:56 cmp002 kernel: [    1.264144] ima: Allocated hash algorithm: sha1
Nov 16 16:55:56 cmp002 kernel: [    1.265193] evm: HMAC attrs: 0x1
Nov 16 16:55:56 cmp002 kernel: [    1.266386]   Magic number: 11:820:944
Nov 16 16:55:56 cmp002 kernel: [    1.267314] tty tty1: hash matches
Nov 16 16:55:56 cmp002 kernel: [    1.268314] rtc_cmos 00:00: setting system clock to 2019-11-16 16:55:43 UTC (1573923343)
Nov 16 16:55:56 cmp002 kernel: [    1.270214] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
Nov 16 16:55:56 cmp002 kernel: [    1.271526] EDD information not available.
Nov 16 16:55:56 cmp002 kernel: [    1.631962] Freeing unused kernel image memory: 2432K
Nov 16 16:55:56 cmp002 kernel: [    1.656116] Write protecting the kernel read-only data: 20480k
Nov 16 16:55:56 cmp002 kernel: [    1.661991] Freeing unused kernel image memory: 2008K
Nov 16 16:55:56 cmp002 kernel: [    1.664642] Freeing unused kernel image memory: 1880K
Nov 16 16:55:56 cmp002 kernel: [    1.676548] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:55:56 cmp002 kernel: [    1.678090] x86/mm: Checking user space page tables
Nov 16 16:55:56 cmp002 kernel: [    1.686735] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 16 16:55:56 cmp002 kernel: [    1.773705] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 16 16:55:56 cmp002 kernel: [    1.776482] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 16 16:55:56 cmp002 kernel: [    1.780832]  vda: vda1 vda14 vda15
Nov 16 16:55:56 cmp002 kernel: [    1.784853] AVX version of gcm_enc/dec engaged.
Nov 16 16:55:56 cmp002 kernel: [    1.786039] AES CTR mode by8 optimization enabled
Nov 16 16:55:56 cmp002 kernel: [    1.787481] FDC 0 is a S82078B
Nov 16 16:55:56 cmp002 kernel: [    1.792114] virtio_net virtio0 ens3: renamed from eth0
Nov 16 16:55:56 cmp002 kernel: [    1.812318] virtio_net virtio2 ens5: renamed from eth2
Nov 16 16:55:56 cmp002 kernel: [    1.832127] virtio_net virtio1 ens4: renamed from eth1
Nov 16 16:55:56 cmp002 kernel: [    1.852240] virtio_net virtio3 ens6: renamed from eth3
Nov 16 16:55:56 cmp002 kernel: [    3.488131] raid6: sse2x1   gen()  6978 MB/s
Nov 16 16:55:56 cmp002 kernel: [    3.536112] raid6: sse2x1   xor()  4509 MB/s
Nov 16 16:55:56 cmp002 kernel: [    3.584012] raid6: sse2x2   gen()  7051 MB/s
Nov 16 16:55:56 cmp002 kernel: [    3.632009] raid6: sse2x2   xor()  6303 MB/s
Nov 16 16:55:56 cmp002 kernel: [    3.680009] raid6: sse2x4   gen() 11581 MB/s
Nov 16 16:55:56 cmp002 kernel: [    3.728011] raid6: sse2x4   xor()  7708 MB/s
Nov 16 16:55:56 cmp002 kernel: [    3.729124] raid6: using algorithm sse2x4 gen() 11581 MB/s
Nov 16 16:55:56 cmp002 kernel: [    3.730466] raid6: .... xor() 7708 MB/s, rmw enabled
Nov 16 16:55:56 cmp002 kernel: [    3.731691] raid6: using ssse3x2 recovery algorithm
Nov 16 16:55:56 cmp002 kernel: [    3.734193] xor: automatically using best checksumming function   avx       
Nov 16 16:55:56 cmp002 kernel: [    3.737007] async_tx: api initialized (async)
Nov 16 16:55:56 cmp002 kernel: [    3.787652] Btrfs loaded, crc32c=crc32c-intel
Nov 16 16:55:56 cmp002 kernel: [    3.817704] random: fast init done
Nov 16 16:55:56 cmp002 kernel: [    3.832984] random: wait-for-root: uninitialized urandom read (16 bytes read)
Nov 16 16:55:56 cmp002 kernel: [    3.835795] random: wait-for-root: uninitialized urandom read (16 bytes read)
Nov 16 16:55:56 cmp002 kernel: [    3.838526] random: wait-for-root: uninitialized urandom read (16 bytes read)
Nov 16 16:55:56 cmp002 kernel: [    3.867480] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
Nov 16 16:55:56 cmp002 kernel: [    4.554404] ip_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:55:56 cmp002 kernel: [    4.612995] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Nov 16 16:55:56 cmp002 kernel: [    4.620389] systemd[1]: Detected virtualization kvm.
Nov 16 16:55:56 cmp002 kernel: [    4.621768] systemd[1]: Detected architecture x86-64.
Nov 16 16:55:56 cmp002 kernel: [    4.800821] systemd[1]: Set hostname to <cmp002>.
Nov 16 16:55:56 cmp002 kernel: [    5.525424] systemd[1]: Reached target Swap.
Nov 16 16:55:56 cmp002 kernel: [    5.528409] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 16 16:55:56 cmp002 kernel: [    5.532425] systemd[1]: Reached target User and Group Name Lookups.
Nov 16 16:55:56 cmp002 kernel: [    5.536516] systemd[1]: Created slice User and Session Slice.
Nov 16 16:55:56 cmp002 kernel: [    5.539153] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 16 16:55:56 cmp002 kernel: [    5.542688] systemd[1]: Created slice System Slice.
Nov 16 16:55:56 cmp002 kernel: [    5.734683] Loading iSCSI transport class v2.0-870.
Nov 16 16:55:56 cmp002 kernel: [    5.748763] EXT4-fs (vda1): re-mounted. Opts: (null)
Nov 16 16:55:56 cmp002 kernel: [    5.755368] iscsi: registered transport (tcp)
Nov 16 16:55:56 cmp002 kernel: [    5.795315] iscsi: registered transport (iser)
Nov 16 16:55:56 cmp002 kernel: [    5.805366] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Nov 16 16:55:56 cmp002 kernel: [    6.157007] systemd-journald[440]: Received request to flush runtime journal from PID 1
Nov 16 16:55:56 cmp002 kernel: [    6.735522] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 16 16:55:56 cmp002 kernel: [    6.737193] br-mgmt: port 1(ens4) entered blocking state
Nov 16 16:55:56 cmp002 kernel: [    6.737195] br-mgmt: port 1(ens4) entered disabled state
Nov 16 16:55:56 cmp002 kernel: [    6.737247] device ens4 entered promiscuous mode
Nov 16 16:55:56 cmp002 kernel: [    7.971946] audit: type=1400 audit(1573923350.196:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=1037 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.974790] audit: type=1400 audit(1573923350.200:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=1038 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.974796] audit: type=1400 audit(1573923350.200:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=1038 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.974799] audit: type=1400 audit(1573923350.200:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=1038 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.982050] audit: type=1400 audit(1573923350.208:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=1039 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.982054] audit: type=1400 audit(1573923350.208:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=1039 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.986120] audit: type=1400 audit(1573923350.212:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=1035 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.986125] audit: type=1400 audit(1573923350.212:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=1035 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.986127] audit: type=1400 audit(1573923350.212:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=1035 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [    7.986130] audit: type=1400 audit(1573923350.212:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=1035 comm="apparmor_parser"
Nov 16 16:55:56 cmp002 kernel: [   11.334463] ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 16 16:55:56 cmp002 kernel: [   11.337939] ISO 9660 Extensions: RRIP_1991A
Nov 16 16:55:56 cmp002 kernel: [   11.570611] br-mgmt: port 1(ens4) entered blocking state
Nov 16 16:55:56 cmp002 kernel: [   11.570614] br-mgmt: port 1(ens4) entered forwarding state
Nov 16 16:55:56 cmp002 kernel: [   11.570678] IPv6: ADDRCONF(NETDEV_UP): br-mgmt: link is not ready
Nov 16 16:55:56 cmp002 kernel: [   11.570696] IPv6: ADDRCONF(NETDEV_CHANGE): br-mgmt: link becomes ready
Nov 16 16:55:56 cmp002 kernel: [   11.654615] 8021q: 802.1Q VLAN Support v1.8
Nov 16 16:55:56 cmp002 kernel: [   11.654623] 8021q: adding VLAN 0 to HW filter on device ens3
Nov 16 16:55:56 cmp002 kernel: [   11.654680] 8021q: adding VLAN 0 to HW filter on device ens4
Nov 16 16:55:56 cmp002 kernel: [   11.654703] 8021q: adding VLAN 0 to HW filter on device ens5
Nov 16 16:55:56 cmp002 kernel: [   11.998545] openvswitch: Open vSwitch switching datapath
Nov 16 16:55:56 cmp002 kernel: [   13.744994] new mount options do not match the existing superblock, will be ignored
Nov 16 16:55:56 cmp002 systemd[1]: Started Unattended Upgrades Shutdown.
Nov 16 16:55:56 cmp002 dnsmasq[2058]: dnsmasq: syntax check OK.
Nov 16 16:55:56 cmp002 rsyslogd: rsyslogd's groupid changed to 106
Nov 16 16:55:56 cmp002 systemd[1]: Starting Permit User Sessions...
Nov 16 16:55:56 cmp002 rsyslogd: rsyslogd's userid changed to 102
Nov 16 16:55:56 cmp002 systemd[1]: Started System Logging Service.
Nov 16 16:55:56 cmp002 rsyslogd:  [origin software="rsyslogd" swVersion="8.32.0" x-pid="2003" x-info="http://www.rsyslog.com"] start
Nov 16 16:55:56 cmp002 systemd[1]: Started LSB: Set sysfs variables from /etc/sysfs.conf.
Nov 16 16:55:56 cmp002 systemd[1]: Started Availability of block devices.
Nov 16 16:55:56 cmp002 systemd[1]: Started Permit User Sessions.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Terminate Plymouth Boot Screen...
Nov 16 16:55:56 cmp002 systemd[1]: Starting Hold until boot process finishes up...
Nov 16 16:55:56 cmp002 systemd[1]: Started Hold until boot process finishes up.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Set console scheme...
Nov 16 16:55:56 cmp002 apport[2097]:  * Starting automatic crash report generation: apport
Nov 16 16:55:56 cmp002 systemd[1]: Started Serial Getty on ttyS0.
Nov 16 16:55:56 cmp002 systemd[1]: Started Terminate Plymouth Boot Screen.
Nov 16 16:55:56 cmp002 systemd[1]: Started Set console scheme.
Nov 16 16:55:56 cmp002 systemd[1]: Created slice system-getty.slice.
Nov 16 16:55:56 cmp002 grub-common[1994]:    ...done.
Nov 16 16:55:56 cmp002 smartd[2045]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-70-generic] (local build)
Nov 16 16:55:56 cmp002 smartd[2045]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Nov 16 16:55:56 cmp002 smartd[2045]: Opened configuration file /etc/smartd.conf
Nov 16 16:55:56 cmp002 systemd[1]: Started Getty on tty1.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Login Prompts.
Nov 16 16:55:56 cmp002 systemd[1]: Started LSB: Record successful boot for GRUB.
Nov 16 16:55:56 cmp002 smartd[2045]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Nov 16 16:55:56 cmp002 smartd[2045]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Nov 16 16:55:56 cmp002 smartd[2045]: DEVICESCAN failed: glob(3) aborted matching pattern /dev/discs/disc*
Nov 16 16:55:56 cmp002 smartd[2045]: In the system's table of devices NO devices found to scan
Nov 16 16:55:56 cmp002 smartd[2045]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
Nov 16 16:55:56 cmp002 systemd[1]: smartd.service: Main process exited, code=exited, status=17/n/a
Nov 16 16:55:56 cmp002 systemd[1]: smartd.service: Failed with result 'exit-code'.
Nov 16 16:55:56 cmp002 apport[2097]:    ...done.
Nov 16 16:55:56 cmp002 systemd[1]: Started LSB: automatic crash report generation.
Nov 16 16:55:56 cmp002 dbus-daemon[1960]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.2' (uid=0 pid=2022 comm="/usr/lib/accountsservice/accounts-daemon " label="unconfined")
Nov 16 16:55:56 cmp002 systemd[1]: Starting Authorization Manager...
Nov 16 16:55:56 cmp002 polkitd[2213]: started daemon version 0.105 using authority implementation `local' version `0.105'
Nov 16 16:55:56 cmp002 dbus-daemon[1960]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 16 16:55:56 cmp002 systemd[1]: Started Authorization Manager.
Nov 16 16:55:56 cmp002 accounts-daemon[2022]: started daemon version 0.6.45
Nov 16 16:55:56 cmp002 systemd[1]: Started Accounts Service.
Nov 16 16:55:56 cmp002 dnsmasq[2262]: started, version 2.79 cachesize 150
Nov 16 16:55:56 cmp002 dnsmasq[2262]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
Nov 16 16:55:56 cmp002 dnsmasq[2262]: reading /etc/resolv.conf
Nov 16 16:55:56 cmp002 dnsmasq[2262]: using nameserver 8.8.8.8#53
Nov 16 16:55:56 cmp002 dnsmasq[2262]: read /etc/hosts - 11 addresses
Nov 16 16:55:56 cmp002 systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.
Nov 16 16:55:56 cmp002 systemd[1]: Reached target Host and Network Name Lookups.
Nov 16 16:55:56 cmp002 systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Nov 16 16:55:56 cmp002 systemd[1]: Started OpenBSD Secure Shell server.
Nov 16 16:55:57 cmp002 snapd[2016]: AppArmor status: apparmor is enabled and all features are available
Nov 16 16:55:57 cmp002 snapd[2016]: patch.go:64: Patching system state level 6 to sublevel 1...
Nov 16 16:55:57 cmp002 snapd[2016]: daemon.go:338: started snapd/2.40+18.04 (series 16; classic) ubuntu/18.04 (amd64) linux/4.15.0-70-generic.
Nov 16 16:55:57 cmp002 systemd[1]: Started LXD - container startup/shutdown.
Nov 16 16:55:57 cmp002 systemd[1]: Started Snappy daemon.
Nov 16 16:55:57 cmp002 systemd[1]: Starting Wait until snapd is fully seeded...
Nov 16 16:55:58 cmp002 systemd[1]: Started Wait until snapd is fully seeded.
Nov 16 16:55:58 cmp002 systemd[1]: Starting Apply the settings specified in cloud-config...
Nov 16 16:55:58 cmp002 postfix/postfix-script[2541]: starting the Postfix mail system
Nov 16 16:55:58 cmp002 postfix/master[2545]: daemon started -- version 3.3.0, configuration /etc/postfix
Nov 16 16:55:58 cmp002 systemd[1]: Started Postfix Mail Transport Agent (instance -).
Nov 16 16:55:58 cmp002 systemd[1]: Starting Postfix Mail Transport Agent...
Nov 16 16:55:58 cmp002 systemd[1]: Started Postfix Mail Transport Agent.
Nov 16 16:55:58 cmp002 systemd[1]: Started The Salt Minion.
Nov 16 16:55:58 cmp002 systemd[1]: Reached target Multi-User System.
Nov 16 16:55:58 cmp002 systemd[1]: Reached target Graphical Interface.
Nov 16 16:55:58 cmp002 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Nov 16 16:55:58 cmp002 systemd[1]: Started Update UTMP about System Runlevel Changes.
Nov 16 16:55:58 cmp002 cloud-init[2467]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:config' at Sat, 16 Nov 2019 16:55:58 +0000. Up 16.26 seconds.
Nov 16 16:55:58 cmp002 systemd[1]: Started Apply the settings specified in cloud-config.
Nov 16 16:55:58 cmp002 systemd[1]: Starting Execute cloud user/final scripts...
Nov 16 16:55:59 cmp002 cloud-init[2600]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 running 'modules:final' at Sat, 16 Nov 2019 16:55:59 +0000. Up 16.80 seconds.
Nov 16 16:55:59 cmp002 cloud-init[2600]: Cloud-init v. 19.2-36-g059d049c-0ubuntu2~18.04.1 finished at Sat, 16 Nov 2019 16:55:59 +0000. Datasource DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 16.92 seconds
Nov 16 16:55:59 cmp002 systemd[1]: Started Execute cloud user/final scripts.
Nov 16 16:55:59 cmp002 systemd[1]: Reached target Cloud-init target.
Nov 16 16:55:59 cmp002 systemd[1]: Startup finished in 4.475s (kernel) + 12.492s (userspace) = 16.967s.
Nov 16 16:56:02 cmp002 snapd[2016]: daemon.go:576: gracefully waiting for running hooks
Nov 16 16:56:02 cmp002 snapd[2016]: daemon.go:578: done waiting for running hooks
Nov 16 16:56:02 cmp002 snapd[2016]: daemon stop requested to wait for socket activation
Nov 16 16:56:17 cmp002 kernel: [   35.580518] random: crng init done
Nov 16 16:56:17 cmp002 kernel: [   35.580521] random: 7 urandom warning(s) missed due to ratelimiting
Nov 16 16:56:19 cmp002 systemd-timesyncd[956]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Nov 16 16:56:31 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install python-oauth python-m2crypto.
Nov 16 16:56:38 cmp002 systemd[1]: Reloading.
Nov 16 16:56:39 cmp002 salt-minion[2064]: [WARNING ] The function "module.run" is using its deprecated version and will expire in version "Sodium".
Nov 16 16:56:40 cmp002 salt-minion[2064]: ...........................................................................................................................................................................................................................++++
Nov 16 16:56:41 cmp002 salt-minion[2064]: ................................................++++
Nov 16 16:56:42 cmp002 salt-minion[2064]: [WARNING ] State for file: /etc/kubernetes/ssl/ca-kubernetes.crt - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:56:42 cmp002 salt-minion[2064]: ..................................................++++
Nov 16 16:56:43 cmp002 salt-minion[2064]: ...................................++++
Nov 16 16:56:44 cmp002 salt-minion[2064]: ......................++++
Nov 16 16:56:44 cmp002 salt-minion[2064]: ..............................................................................++++
Nov 16 16:56:45 cmp002 salt-minion[2064]: [WARNING ] State for file: /var/lib/etcd/ca.pem - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Nov 16 16:56:45 cmp002 salt-minion[2064]: ....................................................++++
Nov 16 16:56:45 cmp002 salt-minion[2064]: ............................................++++
Nov 16 16:56:46 cmp002 salt-minion[2064]: ......................................................++++
Nov 16 16:56:47 cmp002 salt-minion[2064]: ...++++
Nov 16 16:56:47 cmp002 salt-minion[2064]: .................++++
Nov 16 16:56:48 cmp002 salt-minion[2064]: .............................................++++
Nov 16 16:56:49 cmp002 salt-minion[2064]: ..............................++++
Nov 16 16:56:49 cmp002 salt-minion[2064]: .....................................................................................................++++
Nov 16 16:56:50 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install ntp.
Nov 16 16:56:52 cmp002 systemd[1]: Reloading.
Nov 16 16:56:53 cmp002 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:56:53 cmp002 systemd[1]: Started ntp-systemd-netif.path.
Nov 16 16:56:53 cmp002 kernel: [   72.588036] kauditd_printk_skb: 5 callbacks suppressed
Nov 16 16:56:53 cmp002 kernel: [   72.588063] audit: type=1400 audit(1573923413.603:17): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/ntpd" pid=3927 comm="apparmor_parser"
Nov 16 16:56:53 cmp002 systemd[1]: Reloading.
Nov 16 16:56:54 cmp002 systemd[1]: message repeated 2 times: [ Reloading.]
Nov 16 16:56:54 cmp002 systemd[1]: Stopping Network Time Synchronization...
Nov 16 16:56:54 cmp002 systemd[1]: Starting Network Time Service...
Nov 16 16:56:54 cmp002 ntpd[4035]: ntpd 4.2.8p10@1.3728-o (1): Starting
Nov 16 16:56:54 cmp002 ntpd[4035]: Command line: /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 112:118
Nov 16 16:56:54 cmp002 systemd[1]: Started Network Time Service.
Nov 16 16:56:54 cmp002 systemd[1]: Stopped Network Time Synchronization.
Nov 16 16:56:54 cmp002 systemd[1]: Reloading.
Nov 16 16:56:54 cmp002 ntpd[4039]: proto: precision = 0.061 usec (-24)
Nov 16 16:56:54 cmp002 ntpd[4039]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
Nov 16 16:56:54 cmp002 ntpd[4039]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2020-06-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen and drop on 0 v6wildcard [::]:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 2 lo 127.0.0.1:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 3 ens3 192.168.11.37:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 4 br-mgmt 172.16.10.56:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 5 lo [::1]:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 6 ens3 [fe80::5054:ff:fec7:2624%2]:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 7 ens4 [fe80::5054:ff:fe3d:c4b4%3]:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 8 ens5 [fe80::5054:ff:feb4:dadf%4]:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 9 br-mgmt [fe80::5054:ff:fe3d:c4b4%6]:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listen normally on 10 ens5.1000 [fe80::5054:ff:feb4:dadf%7]:123
Nov 16 16:56:54 cmp002 ntpd[4039]: Listening on routing socket on fd #27 for interface updates
Nov 16 16:56:55 cmp002 ntpd[4039]: Soliciting pool server 162.159.200.123
Nov 16 16:56:56 cmp002 ntpd[4039]: Soliciting pool server 192.36.143.130
Nov 16 16:56:56 cmp002 ntpd[4039]: Soliciting pool server 193.182.111.143
Nov 16 16:56:57 cmp002 ntpd[4039]: Soliciting pool server 83.168.200.198
Nov 16 16:56:57 cmp002 ntpd[4039]: Soliciting pool server 5.186.65.2
Nov 16 16:56:57 cmp002 ntpd[4039]: Soliciting pool server 92.246.24.228
Nov 16 16:56:57 cmp002 systemd[1]: Started /bin/systemctl restart ntp.service.
Nov 16 16:56:57 cmp002 ntpd[4039]: ntpd exiting on signal 15 (Terminated)
Nov 16 16:56:57 cmp002 systemd[1]: Stopping Network Time Service...
Nov 16 16:56:57 cmp002 ntpd[4039]: 162.159.200.123 local addr 192.168.11.37 -> <null>
Nov 16 16:56:57 cmp002 ntpd[4039]: 192.36.143.130 local addr 192.168.11.37 -> <null>
Nov 16 16:56:57 cmp002 ntpd[4039]: 193.182.111.143 local addr 192.168.11.37 -> <null>
Nov 16 16:56:57 cmp002 ntpd[4039]: 83.168.200.198 local addr 192.168.11.37 -> <null>
Nov 16 16:56:57 cmp002 ntpd[4039]: 5.186.65.2 local addr 192.168.11.37 -> <null>
Nov 16 16:56:57 cmp002 ntpd[4039]: 92.246.24.228 local addr 192.168.11.37 -> <null>
Nov 16 16:56:57 cmp002 systemd[1]: Stopped Network Time Service.
Nov 16 16:56:57 cmp002 systemd[1]: Starting Network Time Service...
Nov 16 16:56:57 cmp002 ntpd[4364]: ntpd 4.2.8p10@1.3728-o (1): Starting
Nov 16 16:56:57 cmp002 ntpd[4364]: Command line: /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 112:118
Nov 16 16:56:57 cmp002 systemd[1]: Started Network Time Service.
Nov 16 16:56:58 cmp002 ntpd[4367]: proto: precision = 0.061 usec (-24)
Nov 16 16:56:58 cmp002 ntpd[4367]: restrict 0.0.0.0: KOD does nothing without LIMITED.
Nov 16 16:56:58 cmp002 ntpd[4367]: restrict ::: KOD does nothing without LIMITED.
Nov 16 16:56:58 cmp002 ntpd[4367]: switching logging to file /var/log/ntp.log
Nov 16 16:57:01 cmp002 salt-minion[2064]: [INFO    ] Executing command ['systemctl', 'status', 'salt-minion.service', '-n', '0'] in directory '/root'
Nov 16 16:57:01 cmp002 salt-minion[2064]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'salt-minion.service'] in directory '/root'
Nov 16 16:57:01 cmp002 systemd[1]: Started /bin/systemctl restart salt-minion.service.
Nov 16 16:57:01 cmp002 systemd[1]: Stopping The Salt Minion...
Nov 16 16:57:01 cmp002 salt-minion[2064]: [WARNING ] Minion received a SIGTERM. Exiting.
Nov 16 16:57:02 cmp002 salt-minion[2064]: The Salt Minion is shutdown. Minion received a SIGTERM. Exited.
Nov 16 16:57:02 cmp002 systemd[1]: Stopped The Salt Minion.
Nov 16 16:57:02 cmp002 systemd[1]: salt-minion.service: Found left-over process 4370 (bash) in control group while starting unit. Ignoring.
Nov 16 16:57:02 cmp002 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Nov 16 16:57:02 cmp002 systemd[1]: salt-minion.service: Found left-over process 4436 (salt-call) in control group while starting unit. Ignoring.
Nov 16 16:57:02 cmp002 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Nov 16 16:57:02 cmp002 systemd[1]: Starting The Salt Minion...
Nov 16 16:57:02 cmp002 systemd[1]: Started The Salt Minion.
Nov 16 16:57:02 cmp002 salt-minion[2064]: local:
Nov 16 16:57:02 cmp002 salt-minion[2064]:     True
Nov 16 16:57:02 cmp002 salt-minion[4490]: [INFO    ] Setting up the Salt Minion "cmp002.mcp-k8s-calico-noha.local"
Nov 16 16:57:02 cmp002 salt-minion[4490]: [INFO    ] Starting up the Salt Minion
Nov 16 16:57:02 cmp002 salt-minion[4490]: [INFO    ] Starting pull socket on /var/run/salt/minion/minion_event_d677558cdd_pull.ipc
Nov 16 16:57:03 cmp002 salt-minion[4490]: [INFO    ] Creating minion process manager
Nov 16 16:57:04 cmp002 salt-minion[4490]: [INFO    ] Executing command ['date', '+%z'] in directory '/root'
Nov 16 16:57:04 cmp002 salt-minion[4490]: [INFO    ] Updating job settings for scheduled job: __mine_interval
Nov 16 16:57:04 cmp002 salt-minion[4490]: [INFO    ] Added mine.update to scheduler
Nov 16 16:57:04 cmp002 salt-minion[4490]: [INFO    ] Minion is starting as user 'root'
Nov 16 16:57:04 cmp002 salt-minion[4490]: [INFO    ] Minion is ready to receive requests!
Nov 16 16:57:53 cmp002 salt-minion[4490]: [INFO    ] User sudo_ubuntu Executing command state.sls with jid 20191116165753766584
Nov 16 16:57:53 cmp002 salt-minion[4490]: [INFO    ] Starting a new job with PID 4581
Nov 16 16:57:54 cmp002 salt-minion[4490]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:57:54 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/init.sls'
Nov 16 16:57:54 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:54 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:54 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/calico.sls'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/service.sls'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/_common.sls'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/pool/kube-proxy.sls'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Running state [/usr/bin/calicoctl] at time 16:57:55.621377
Nov 16 16:57:55 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/usr/bin/calicoctl]
Nov 16 16:57:57 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:57:57 cmp002 salt-minion[4490]: New file
Nov 16 16:57:57 cmp002 salt-minion[4490]: [INFO    ] Completed state [/usr/bin/calicoctl] at time 16:57:57.278498 duration_in_ms=1657.119
Nov 16 16:57:57 cmp002 salt-minion[4490]: [INFO    ] Running state [/usr/bin/birdcl] at time 16:57:57.279237
Nov 16 16:57:57 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/usr/bin/birdcl]
Nov 16 16:57:57 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:57:57 cmp002 salt-minion[4490]: New file
Nov 16 16:57:57 cmp002 salt-minion[4490]: [INFO    ] Completed state [/usr/bin/birdcl] at time 16:57:57.634178 duration_in_ms=354.941
Nov 16 16:57:57 cmp002 salt-minion[4490]: [INFO    ] Running state [/opt/cni/bin/calico] at time 16:57:57.634549
Nov 16 16:57:57 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/opt/cni/bin/calico]
Nov 16 16:57:58 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:57:58 cmp002 salt-minion[4490]: New file
Nov 16 16:57:58 cmp002 salt-minion[4490]: [INFO    ] Completed state [/opt/cni/bin/calico] at time 16:57:58.805892 duration_in_ms=1171.34
Nov 16 16:57:58 cmp002 salt-minion[4490]: [INFO    ] Running state [/opt/cni/bin/calico-ipam] at time 16:57:58.806282
Nov 16 16:57:58 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/opt/cni/bin/calico-ipam]
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:00 cmp002 salt-minion[4490]: New file
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Completed state [/opt/cni/bin/calico-ipam] at time 16:58:00.043636 duration_in_ms=1237.354
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/cni/net.d/10-calico.conf] at time 16:58:00.043896
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/cni/net.d/10-calico.conf]
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calico.conf'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:00 cmp002 salt-minion[4490]: New file
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/cni/net.d/10-calico.conf] at time 16:58:00.170841 duration_in_ms=126.943
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/calico/network-environment] at time 16:58:00.171150
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/calico/network-environment]
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/network-environment.pool'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:00 cmp002 salt-minion[4490]: New file
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/calico/network-environment] at time 16:58:00.261248 duration_in_ms=90.097
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/calico/calicoctl.cfg] at time 16:58:00.261548
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/calico/calicoctl.cfg]
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calicoctl.cfg.pool'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:00 cmp002 salt-minion[4490]: New file
Nov 16 16:58:00 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/calico/calicoctl.cfg] at time 16:58:00.353771 duration_in_ms=92.222
Nov 16 16:58:01 cmp002 salt-minion[4490]: [INFO    ] Running state [containerd] at time 16:58:01.007633
Nov 16 16:58:01 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [containerd]
Nov 16 16:58:01 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:01 cmp002 salt-minion[4490]: [INFO    ] Executing command ['apt-cache', '-q', 'policy', 'containerd'] in directory '/root'
Nov 16 16:58:02 cmp002 salt-minion[4490]: [INFO    ] Executing command ['apt-get', '-q', 'update'] in directory '/root'
Nov 16 16:58:04 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:04 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'containerd'] in directory '/root'
Nov 16 16:58:04 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install containerd.
Nov 16 16:58:08 cmp002 salt-minion[4490]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165808820551
Nov 16 16:58:08 cmp002 salt-minion[4490]: [INFO    ] Starting a new job with PID 5332
Nov 16 16:58:08 cmp002 salt-minion[4490]: [INFO    ] Returning information for job: 20191116165808820551
Nov 16 16:58:09 cmp002 systemd[1]: Reloading.
Nov 16 16:58:09 cmp002 systemd[1]: Reloading.
Nov 16 16:58:09 cmp002 systemd[1]: Starting containerd container runtime...
Nov 16 16:58:09 cmp002 systemd[1]: Started containerd container runtime.
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.779942356Z" level=info msg="starting containerd" revision= version="1.2.6-0ubuntu1~18.04.2"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.780512177Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.780685741Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.780886869Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.780953160Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.795357401Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.795558059Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.795880808Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.796155904Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.796265150Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Nov 16 16:58:09 cmp002 kernel: [  148.775033] aufs 4.15-20180219
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.796426050Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.796623342Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.801842845Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.802012813Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.802180945Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.802323129Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.802452509Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.802578985Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.802707728Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.802833866Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.802955689Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.803078372Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.803302791Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.803504694Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804035722Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804084948Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804145694Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804163522Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804176685Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804189838Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804202281Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804215278Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804227804Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804240095Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804252554Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804414083Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804434879Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804448970Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804461116Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804557479Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804633366Z" level=info msg="Connect containerd service"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.804775638Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.805563932Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.805784824Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.805816547Z" level=info msg="containerd successfully booted in 0.026290s"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.806341634Z" level=info msg="Start subscribing containerd event"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.806420275Z" level=info msg="Start recovering state"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.806553684Z" level=info msg="Start event monitor"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.806582530Z" level=info msg="Start snapshots syncer"
Nov 16 16:58:09 cmp002 containerd[5409]: time="2019-11-16T16:58:09.806597907Z" level=info msg="Start streaming server"
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Made the following changes:
Nov 16 16:58:12 cmp002 salt-minion[4490]: 'containerd' changed from 'absent' to '1.2.6-0ubuntu1~18.04.2'
Nov 16 16:58:12 cmp002 salt-minion[4490]: 'runc' changed from 'absent' to '1.0.0~rc7+git20190403.029124da-0ubuntu1~18.04.2'
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Completed state [containerd] at time 16:58:12.436131 duration_in_ms=11428.498
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/containerd/config.toml] at time 16:58:12.439920
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/containerd/config.toml]
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/containerd/config.toml'
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:12 cmp002 salt-minion[4490]: New file
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/containerd/config.toml] at time 16:58:12.539398 duration_in_ms=99.477
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Running state [containerd] at time 16:58:12.988930
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Executing state service.running for [containerd]
Nov 16 16:58:12 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'status', 'containerd.service', '-n', '0'] in directory '/root'
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-active', 'containerd.service'] in directory '/root'
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'containerd.service'] in directory '/root'
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] The service containerd is already running
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Completed state [containerd] at time 16:58:13.043460 duration_in_ms=54.53
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Running state [containerd] at time 16:58:13.043817
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing state service.mod_watch for [containerd]
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-active', 'containerd.service'] in directory '/root'
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'restart', 'containerd.service'] in directory '/root'
Nov 16 16:58:13 cmp002 systemd[1]: Started /bin/systemctl restart containerd.service.
Nov 16 16:58:13 cmp002 systemd[1]: Stopping containerd container runtime...
Nov 16 16:58:13 cmp002 containerd[5409]: time="2019-11-16T16:58:13.080951182Z" level=info msg="Stop CRI service"
Nov 16 16:58:13 cmp002 systemd[1]: Stopped containerd container runtime.
Nov 16 16:58:13 cmp002 systemd[1]: Starting containerd container runtime...
Nov 16 16:58:13 cmp002 systemd[1]: Started containerd container runtime.
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] {'containerd': True}
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Completed state [containerd] at time 16:58:13.092143 duration_in_ms=48.326
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/systemd/system/calico-node.service] at time 16:58:13.092996
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/systemd/system/calico-node.service]
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/calico/calico-node.service.ctr'
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:13 cmp002 salt-minion[4490]: New file
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/systemd/system/calico-node.service] at time 16:58:13.132817 duration_in_ms=39.821
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Running state [/var/lib/calico] at time 16:58:13.133087
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing state file.directory for [/var/lib/calico]
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] {'/var/lib/calico': 'New Dir'}
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Completed state [/var/lib/calico] at time 16:58:13.134658 duration_in_ms=1.571
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Running state [/var/log/calico] at time 16:58:13.134887
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing state file.directory for [/var/log/calico]
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] {'/var/log/calico': 'New Dir'}
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Completed state [/var/log/calico] at time 16:58:13.136394 duration_in_ms=1.507
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Running state [calico-node] at time 16:58:13.138557
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing state service.running for [calico-node]
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'status', 'calico-node.service', '-n', '0'] in directory '/root'
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.145418055Z" level=info msg="starting containerd" revision= version="1.2.6-0ubuntu1~18.04.2"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.146388851Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.146425674Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.146596602Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.148080841Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.150803191Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.150857696Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.150938893Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151127646Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151155303Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151174979Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151183558Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151278509Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151299092Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151334579Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151349957Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151363601Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151377887Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151391637Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151408575Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151421492Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151435365Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151476607Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151522690Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.151990719Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152057686Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152145764Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152188031Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152216564Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152246168Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152274117Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152300484Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152325788Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152388731Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152416594Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152532562Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152568874Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152596917Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152626229Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152764949Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.152872714Z" level=info msg="Connect containerd service"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.153115571Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.153916023Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.154184383Z" level=info msg="Start subscribing containerd event"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.154228625Z" level=info msg="Start recovering state"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.154621128Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.154660645Z" level=info msg="containerd successfully booted in 0.009681s"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.155297029Z" level=info msg="Start event monitor"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.155567824Z" level=info msg="Start snapshots syncer"
Nov 16 16:58:13 cmp002 containerd[5713]: time="2019-11-16T16:58:13.155596214Z" level=info msg="Start streaming server"
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-active', 'calico-node.service'] in directory '/root'
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:13 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'calico-node.service'] in directory '/root'
Nov 16 16:58:13 cmp002 systemd[1]: Started /bin/systemctl start calico-node.service.
Nov 16 16:58:13 cmp002 systemd[1]: Starting calico-node...
Nov 16 16:58:13 cmp002 ctr[5764]: ctr: container "calico-node" in namespace "default": not found
Nov 16 16:58:13 cmp002 ctr[5785]: time="2019-11-16T16:58:13Z" level=error msg="failed to delete container "calico-node"" error="container "calico-node" in namespace "default": not found"
Nov 16 16:58:13 cmp002 ctr[5785]: ctr: container "calico-node" in namespace "default": not found
Nov 16 16:58:13 cmp002 ctr[5793]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolving      |--------------------------------------|
Nov 16 16:58:13 cmp002 ctr[5793]: elapsed: 0.1 s                                                                        total:   0.0 B (0.0 B/s)
[... repeated ctr progress-bar redraws (elapsed 0.2 s through 2.4 s) omitted; manifest, config, and five layers resolved and downloaded ...]
Nov 16 16:58:15 cmp002 ctr[5793]: docker-prod-local.artifactory.mirantis.com/mirantis/projectcalico/calico/node:v3.3.2: resolved       |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 cmp002 ctr[5793]: manifest-sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18:     done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 cmp002 ctr[5793]: layer-sha256:4fe2ade4980c2dda4fc95858ebb981489baec8c1e4bd282ab1c3560be8ff9bde:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 cmp002 ctr[5793]: config-sha256:4e9be81e3a5948d40df6358fdae2cc0dde85a0085723666c50ee7d15427a9b48:       done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 cmp002 ctr[5793]: layer-sha256:798a8ef97f6bac88a42e34bf15d8e412242025372c4f567590579ba37d381b2c:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 cmp002 ctr[5793]: layer-sha256:b788fa0813576d69f1efd12e893b97db7d007b5b93ca1cce6663f96ba1ab6488:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 cmp002 ctr[5793]: layer-sha256:3f90cdf570685ae358ac0456a9d07fffc96858427c71d754f82e222d96f1c683:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 cmp002 ctr[5793]: layer-sha256:37159c5154b88277f12fe9aa20d728ca5c92fd38e6e707660ee27eef281de923:        done           |++++++++++++++++++++++++++++++++++++++|
Nov 16 16:58:15 cmp002 ctr[5793]: elapsed: 2.5 s                                                                        total:  19.6 M (7.8 MiB/s)
Nov 16 16:58:15 cmp002 ctr[5793]: unpacking linux/amd64 sha256:4b3e3750deeb97cf6f68e5d021f60891a0562f7412efdb545599e6ea505eaf18...
Nov 16 16:58:18 cmp002 ctr[5793]: done
Nov 16 16:58:18 cmp002 systemd[1]: Started calico-node.
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-active', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp002 containerd[5713]: time="2019-11-16T16:58:18.246535351Z" level=info msg="shim containerd-shim started" address="/containerd-shim/default/calico-node/shim.sock" debug=false pid=5853
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp002 systemd[1]: Started /bin/systemctl enable calico-node.service.
Nov 16 16:58:18 cmp002 systemd[1]: Reloading.
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.362 [INFO][8] startup.go 264: Early log level set to info
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.362 [INFO][8] startup.go 280: Using NODENAME environment for node name
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.362 [INFO][8] startup.go 292: Determined node name: cmp002
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'calico-node.service'] in directory '/root'
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] {'calico-node': True}
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Completed state [calico-node] at time 16:58:18.422911 duration_in_ms=5284.353
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Running state [curl] at time 16:58:18.424752
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [curl]
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.428 [INFO][8] startup.go 105: Skipping datastore connection test
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.429 [INFO][8] startup.go 365: Building new node resource Name="cmp002"
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.429 [INFO][8] startup.go 380: Initialize BGP data
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.430 [INFO][8] startup.go 474: Using IPv4 address from environment: IP=172.16.10.56
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.432 [INFO][8] startup.go 507: IPv4 address 172.16.10.56 discovered on interface br-mgmt
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.432 [INFO][8] startup.go 450: Node IPv4 changed, will check for conflicts
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.433 [INFO][8] startup.go 640: Using AS number specified in environment (AS=64512)
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.445 [INFO][8] startup.go 534: CALICO_IPV4POOL_NAT_OUTGOING is true (defaulted) through environment variable
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.445 [INFO][8] startup.go 797: Ensure default IPv4 pool is created. IPIP mode:
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.448 [INFO][8] startup.go 807: Created default IPv4 pool (192.168.0.0/16) with NAT outgoing true. IPIP mode:
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.448 [INFO][8] startup.go 534: FELIX_IPV6SUPPORT is true (defaulted) through environment variable
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.448 [INFO][8] startup.go 764: IPv6 supported on this platform: true
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.448 [INFO][8] startup.go 534: CALICO_IPV6POOL_NAT_OUTGOING is false (defaulted) through environment variable
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.448 [INFO][8] startup.go 797: Ensure default IPv6 pool is created. IPIP mode: Never
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.451 [INFO][8] startup.go 807: Created default IPv6 pool (fd0b:52a8:5995::/48) with NAT outgoing false. IPIP mode: Never
Nov 16 16:58:18 cmp002 ctr[5828]: 2019-11-16 16:58:18.458 [INFO][8] startup.go 189: Using node name: cmp002
Nov 16 16:58:18 cmp002 ctr[5828]: Calico node started successfully
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] All specified packages are already installed
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Completed state [curl] at time 16:58:18.627330 duration_in_ms=202.578
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Running state [git] at time 16:58:18.627681
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [git]
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] All specified packages are already installed
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Completed state [git] at time 16:58:18.637123 duration_in_ms=9.442
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Running state [apt-transport-https] at time 16:58:18.637476
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [apt-transport-https]
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] All specified packages are already installed
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Completed state [apt-transport-https] at time 16:58:18.646569 duration_in_ms=9.093
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Running state [python-apt] at time 16:58:18.646868
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [python-apt]
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] All specified packages are already installed
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Completed state [python-apt] at time 16:58:18.656116 duration_in_ms=9.247
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Running state [socat] at time 16:58:18.656409
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [socat]
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:18 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'socat'] in directory '/root'
Nov 16 16:58:18 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install socat.
Nov 16 16:58:19 cmp002 kernel: [  158.790882] Netfilter messages via NETLINK v0.30.
Nov 16 16:58:19 cmp002 kernel: [  158.795529] ip_set: protocol 6
Nov 16 16:58:19 cmp002 kernel: [  158.936267] ip6_tables: (C) 2000-2006 Netfilter Core Team
Nov 16 16:58:22 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:22 cmp002 salt-minion[4490]: [INFO    ] Made the following changes:
Nov 16 16:58:22 cmp002 salt-minion[4490]: 'socat' changed from 'absent' to '1.7.3.2-2ubuntu2'
Nov 16 16:58:22 cmp002 salt-minion[4490]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:22 cmp002 salt-minion[4490]: [INFO    ] Completed state [socat] at time 16:58:22.825668 duration_in_ms=4169.258
Nov 16 16:58:22 cmp002 salt-minion[4490]: [INFO    ] Running state [openssl] at time 16:58:22.831482
Nov 16 16:58:22 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [openssl]
Nov 16 16:58:23 cmp002 salt-minion[4490]: [INFO    ] All specified packages are already installed
Nov 16 16:58:23 cmp002 salt-minion[4490]: [INFO    ] Completed state [openssl] at time 16:58:23.531712 duration_in_ms=700.229
Nov 16 16:58:23 cmp002 salt-minion[4490]: [INFO    ] Running state [conntrack] at time 16:58:23.532007
Nov 16 16:58:23 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [conntrack]
Nov 16 16:58:23 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:23 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'conntrack'] in directory '/root'
Nov 16 16:58:23 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install conntrack.
Nov 16 16:58:27 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:27 cmp002 salt-minion[4490]: [INFO    ] Made the following changes:
Nov 16 16:58:27 cmp002 salt-minion[4490]: 'conntrack' changed from 'absent' to '1:1.4.4+snapshot20161117-6ubuntu2'
Nov 16 16:58:27 cmp002 salt-minion[4490]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:27 cmp002 salt-minion[4490]: [INFO    ] Completed state [conntrack] at time 16:58:27.613096 duration_in_ms=4081.089
Nov 16 16:58:27 cmp002 salt-minion[4490]: [INFO    ] Running state [nfs-common] at time 16:58:27.619957
Nov 16 16:58:27 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [nfs-common]
Nov 16 16:58:28 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:28 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'nfs-common'] in directory '/root'
Nov 16 16:58:28 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install nfs-common.
Nov 16 16:58:31 cmp002 systemd[1]: Reloading.
Nov 16 16:58:32 cmp002 systemd[1]: message repeated 4 times: [ Reloading.]
Nov 16 16:58:32 cmp002 systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 16 16:58:32 cmp002 systemd[1]: Starting RPC bind portmap service...
Nov 16 16:58:32 cmp002 systemd[1]: Started RPC bind portmap service.
Nov 16 16:58:32 cmp002 systemd[1]: Reached target RPC Port Mapper.
Nov 16 16:58:33 cmp002 systemd[1]: Reloading.
Nov 16 16:58:34 cmp002 systemd[1]: message repeated 4 times: [ Reloading.]
Nov 16 16:58:37 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:37 cmp002 salt-minion[4490]: [INFO    ] Made the following changes:
Nov 16 16:58:37 cmp002 salt-minion[4490]: 'keyutils' changed from 'absent' to '1.5.9-9.2ubuntu2'
Nov 16 16:58:37 cmp002 salt-minion[4490]: 'nfs-common' changed from 'absent' to '1:1.3.4-2.1ubuntu5.2'
Nov 16 16:58:37 cmp002 salt-minion[4490]: 'rpcbind' changed from 'absent' to '0.2.3-0.6'
Nov 16 16:58:37 cmp002 salt-minion[4490]: 'libtirpc1' changed from 'absent' to '0.2.5-1.2ubuntu0.1'
Nov 16 16:58:37 cmp002 salt-minion[4490]: 'nfs-client' changed from 'absent' to '1'
Nov 16 16:58:37 cmp002 salt-minion[4490]: 'libnfsidmap2' changed from 'absent' to '0.25-5.1'
Nov 16 16:58:37 cmp002 salt-minion[4490]: 'portmap' changed from 'absent' to '1'
Nov 16 16:58:37 cmp002 salt-minion[4490]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:37 cmp002 salt-minion[4490]: [INFO    ] Completed state [nfs-common] at time 16:58:37.692119 duration_in_ms=10072.162
Nov 16 16:58:37 cmp002 salt-minion[4490]: [INFO    ] Running state [cifs-utils] at time 16:58:37.697873
Nov 16 16:58:37 cmp002 salt-minion[4490]: [INFO    ] Executing state pkg.installed for [cifs-utils]
Nov 16 16:58:38 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg', '--get-selections', '*'] in directory '/root'
Nov 16 16:58:38 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'apt-get', '-q', '-y', '-o', 'DPkg::Options::=--force-confold', '-o', 'DPkg::Options::=--force-confdef', 'install', 'cifs-utils'] in directory '/root'
Nov 16 16:58:38 cmp002 systemd[1]: Started /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install cifs-utils.
Nov 16 16:58:38 cmp002 salt-minion[4490]: [INFO    ] User sudo_ubuntu Executing command saltutil.find_job with jid 20191116165838849070
Nov 16 16:58:38 cmp002 salt-minion[4490]: [INFO    ] Starting a new job with PID 7726
Nov 16 16:58:38 cmp002 salt-minion[4490]: [INFO    ] Returning information for job: 20191116165838849070
Nov 16 16:58:50 cmp002 salt-minion[4490]: [INFO    ] Executing command ['dpkg-query', '--showformat', '${Status} ${Package} ${Version} ${Architecture}', '-W'] in directory '/root'
Nov 16 16:58:50 cmp002 salt-minion[4490]: [INFO    ] Made the following changes:
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'python2.7-ldb' changed from 'absent' to '1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'python-ldb' changed from 'absent' to '2:1.2.3-1ubuntu0.1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libtdb1' changed from 'absent' to '1.3.15-2'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libavahi-common3' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'python2.7-talloc' changed from 'absent' to '1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libavahi-client3' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libwbclient0' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libavahi-common-data' changed from 'absent' to '0.7-3.1ubuntu1.2'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libcups2' changed from 'absent' to '2.2.7-1ubuntu2.7'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'cifs-utils' changed from 'absent' to '2:6.8-1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'samba-common' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'python2.7-tdb' changed from 'absent' to '1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'samba-libs' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libldb1' changed from 'absent' to '2:1.2.3-1ubuntu0.1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libtevent0' changed from 'absent' to '0.9.34-1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'python-talloc' changed from 'absent' to '2.1.10-2ubuntu1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'samba-common-bin' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'python-samba' changed from 'absent' to '2:4.7.6+dfsg~ubuntu-0ubuntu2.13'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libtalloc2' changed from 'absent' to '2.1.10-2ubuntu1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'python2.7-samba' changed from 'absent' to '1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'libjansson4' changed from 'absent' to '2.11-1'
Nov 16 16:58:50 cmp002 salt-minion[4490]: 'python-tdb' changed from 'absent' to '1.3.15-2'
Nov 16 16:58:50 cmp002 salt-minion[4490]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:50 cmp002 salt-minion[4490]: [INFO    ] Completed state [cifs-utils] at time 16:58:50.406850 duration_in_ms=12708.976
Nov 16 16:58:50 cmp002 salt-minion[4490]: [INFO    ] Running state [/usr/bin/hyperkube] at time 16:58:50.412492
Nov 16 16:58:50 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/usr/bin/hyperkube]
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:54 cmp002 salt-minion[4490]: New file
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] Completed state [/usr/bin/hyperkube] at time 16:58:54.661029 duration_in_ms=4248.536
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] Running state [/usr/bin/kubectl] at time 16:58:54.661964
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] Executing state file.symlink for [/usr/bin/kubectl]
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] {'new': '/usr/bin/kubectl'}
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] Loading fresh modules for state activity
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] Completed state [/usr/bin/kubectl] at time 16:58:54.719463 duration_in_ms=57.498
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] Running state [/tmp/crictl] at time 16:58:54.724227
Nov 16 16:58:54 cmp002 salt-minion[4490]: [INFO    ] Executing state archive.extracted for [/tmp/crictl]
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Executing command ['tar', 'xz', '-f', '/var/cache/salt/minion/extrn_files/base/github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz'] in directory '/tmp/crictl/'
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Executing command ['tar', '--version'] in directory '/root'
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] {'extracted_files': 'no tar output so far', 'directories_created': ['/tmp/crictl/']}
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Completed state [/tmp/crictl] at time 16:58:57.489309 duration_in_ms=2765.078
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Running state [/usr/local/bin/crictl] at time 16:58:57.490260
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/usr/local/bin/crictl]
Nov 16 16:58:57 cmp002 salt-minion[4490]: [WARNING ] Use of argument owner found, "owner" is invalid, please use "user"
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:57 cmp002 salt-minion[4490]: New file
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Completed state [/usr/local/bin/crictl] at time 16:58:57.803523 duration_in_ms=313.263
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/crictl.yaml] at time 16:58:57.803966
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/crictl.yaml]
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:57 cmp002 salt-minion[4490]: New file
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/crictl.yaml] at time 16:58:57.807643 duration_in_ms=3.677
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/criproxy] at time 16:58:57.808029
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Executing state file.absent for [/etc/criproxy]
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] File /etc/criproxy is not present
Nov 16 16:58:57 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/criproxy] at time 16:58:57.809350 duration_in_ms=1.322
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Running state [criproxy] at time 16:58:58.301836
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing state service.dead for [criproxy]
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'status', 'criproxy.service', '-n', '0'] in directory '/root'
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] The named service criproxy is not available
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Completed state [criproxy] at time 16:58:58.325033 duration_in_ms=23.197
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/systemd/system/kubelet.service] at time 16:58:58.325644
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kubelet.service]
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kubelet.service'
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:58 cmp002 salt-minion[4490]: New file
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/systemd/system/kubelet.service] at time 16:58:58.708286 duration_in_ms=382.641
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/kubernetes/config] at time 16:58:58.708933
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing state file.absent for [/etc/kubernetes/config]
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] File /etc/kubernetes/config is not present
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/kubernetes/config] at time 16:58:58.710016 duration_in_ms=1.083
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/default/kubelet] at time 16:58:58.710266
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/default/kubelet]
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kubelet/default.pool'
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:58 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:58:59 cmp002 salt-minion[4490]: New file
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/default/kubelet] at time 16:58:59.650478 duration_in_ms=940.211
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/kubernetes/kubelet.kubeconfig] at time 16:58:59.650969
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/kubernetes/kubelet.kubeconfig]
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kubelet/kubelet.kubeconfig.pool'
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:58:59 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:59:00 cmp002 salt-minion[4490]: New file
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/kubernetes/kubelet.kubeconfig] at time 16:59:00.241991 duration_in_ms=591.021
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/kubernetes/manifests] at time 16:59:00.242510
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing state file.directory for [/etc/kubernetes/manifests]
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] {'/etc/kubernetes/manifests': 'New Dir'}
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/kubernetes/manifests] at time 16:59:00.245288 duration_in_ms=2.777
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Running state [kubelet] at time 16:59:00.247685
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing state service.running for [kubelet]
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'status', 'kubelet.service', '-n', '0'] in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-active', 'kubelet.service'] in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kubelet.service'] in directory '/root'
Nov 16 16:59:00 cmp002 systemd[1]: Started /bin/systemctl start kubelet.service.
Nov 16 16:59:00 cmp002 systemd[1]: Started Kubernetes Kubelet Server.
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-active', 'kubelet.service'] in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kubelet.service'] in directory '/root'
Nov 16 16:59:00 cmp002 systemd[1]: Started /bin/systemctl enable kubelet.service.
Nov 16 16:59:00 cmp002 systemd[1]: Reloading.
Nov 16 16:59:00 cmp002 kubelet[8640]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:00 cmp002 kubelet[8640]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:00 cmp002 kubelet[8640]: Flag --allow-privileged has been deprecated, will be removed in a future version
Nov 16 16:59:00 cmp002 kubelet[8640]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:00 cmp002 kubelet[8640]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:00 cmp002 kubelet[8640]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:00 cmp002 kubelet[8640]: Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513795    8640 flags.go:33] FLAG: --address="172.16.10.56"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513840    8640 flags.go:33] FLAG: --allow-privileged="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513849    8640 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513866    8640 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513874    8640 flags.go:33] FLAG: --anonymous-auth="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513880    8640 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513886    8640 flags.go:33] FLAG: --authentication-token-webhook="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513892    8640 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513899    8640 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513906    8640 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513913    8640 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513919    8640 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513925    8640 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513931    8640 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513937    8640 flags.go:33] FLAG: --bootstrap-kubeconfig=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513942    8640 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513948    8640 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513954    8640 flags.go:33] FLAG: --cgroup-root=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513959    8640 flags.go:33] FLAG: --cgroups-per-qos="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513965    8640 flags.go:33] FLAG: --chaos-chance="0"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513973    8640 flags.go:33] FLAG: --client-ca-file=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513982    8640 flags.go:33] FLAG: --cloud-config=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513988    8640 flags.go:33] FLAG: --cloud-provider=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.513993    8640 flags.go:33] FLAG: --cluster-dns="[10.254.0.10]"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514004    8640 flags.go:33] FLAG: --cluster-domain="cluster.local"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514010    8640 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514016    8640 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514021    8640 flags.go:33] FLAG: --config=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514027    8640 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514033    8640 flags.go:33] FLAG: --container-log-max-files="5"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514041    8640 flags.go:33] FLAG: --container-log-max-size="10Mi"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514050    8640 flags.go:33] FLAG: --container-runtime="remote"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514056    8640 flags.go:33] FLAG: --container-runtime-endpoint="unix:///run/containerd/containerd.sock"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514062    8640 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514068    8640 flags.go:33] FLAG: --containerized="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514074    8640 flags.go:33] FLAG: --contention-profiling="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514079    8640 flags.go:33] FLAG: --cpu-cfs-quota="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514088    8640 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514094    8640 flags.go:33] FLAG: --cpu-manager-policy="none"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514099    8640 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514105    8640 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514111    8640 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514117    8640 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514122    8640 flags.go:33] FLAG: --docker-only="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514128    8640 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514133    8640 flags.go:33] FLAG: --docker-tls="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514138    8640 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514144    8640 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514149    8640 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514155    8640 flags.go:33] FLAG: --dynamic-config-dir=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514162    8640 flags.go:33] FLAG: --enable-controller-attach-detach="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514167    8640 flags.go:33] FLAG: --enable-debugging-handlers="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514173    8640 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514182    8640 flags.go:33] FLAG: --enable-server="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514187    8640 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514200    8640 flags.go:33] FLAG: --event-burst="10"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514206    8640 flags.go:33] FLAG: --event-qps="5"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514212    8640 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514218    8640 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514223    8640 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514240    8640 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514246    8640 flags.go:33] FLAG: --eviction-minimum-reclaim=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514254    8640 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514260    8640 flags.go:33] FLAG: --eviction-soft=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514266    8640 flags.go:33] FLAG: --eviction-soft-grace-period=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514272    8640 flags.go:33] FLAG: --exit-on-lock-contention="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514277    8640 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514282    8640 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514288    8640 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514297    8640 flags.go:33] FLAG: --experimental-dockershim="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514302    8640 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514308    8640 flags.go:33] FLAG: --experimental-fail-swap-on="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514314    8640 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514319    8640 flags.go:33] FLAG: --experimental-mounter-path=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514324    8640 flags.go:33] FLAG: --fail-swap-on="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514330    8640 flags.go:33] FLAG: --feature-gates=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514338    8640 flags.go:33] FLAG: --file-check-frequency="5s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514343    8640 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514349    8640 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514356    8640 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514361    8640 flags.go:33] FLAG: --healthz-port="10248"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514367    8640 flags.go:33] FLAG: --help="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514372    8640 flags.go:33] FLAG: --host-ipc-sources="[*]"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514382    8640 flags.go:33] FLAG: --host-network-sources="[*]"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514394    8640 flags.go:33] FLAG: --host-pid-sources="[*]"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514407    8640 flags.go:33] FLAG: --hostname-override="cmp002"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514413    8640 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514419    8640 flags.go:33] FLAG: --http-check-frequency="20s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514425    8640 flags.go:33] FLAG: --image-gc-high-threshold="85"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514431    8640 flags.go:33] FLAG: --image-gc-low-threshold="80"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514436    8640 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514442    8640 flags.go:33] FLAG: --image-service-endpoint=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514447    8640 flags.go:33] FLAG: --iptables-drop-bit="15"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514453    8640 flags.go:33] FLAG: --iptables-masquerade-bit="14"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514458    8640 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514464    8640 flags.go:33] FLAG: --kube-api-burst="10"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514469    8640 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514475    8640 flags.go:33] FLAG: --kube-api-qps="5"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514481    8640 flags.go:33] FLAG: --kube-reserved=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514487    8640 flags.go:33] FLAG: --kube-reserved-cgroup=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514492    8640 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.kubeconfig"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514502    8640 flags.go:33] FLAG: --kubelet-cgroups=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514508    8640 flags.go:33] FLAG: --lock-file=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514513    8640 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514520    8640 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514526    8640 flags.go:33] FLAG: --log-dir=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514531    8640 flags.go:33] FLAG: --log-file=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514536    8640 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514542    8640 flags.go:33] FLAG: --logtostderr="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514547    8640 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514554    8640 flags.go:33] FLAG: --make-iptables-util-chains="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514559    8640 flags.go:33] FLAG: --manifest-url=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514564    8640 flags.go:33] FLAG: --manifest-url-header=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514599    8640 flags.go:33] FLAG: --master-service-namespace="default"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514607    8640 flags.go:33] FLAG: --max-open-files="1000000"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514618    8640 flags.go:33] FLAG: --max-pods="110"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514627    8640 flags.go:33] FLAG: --maximum-dead-containers="-1"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514639    8640 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514647    8640 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514655    8640 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514664    8640 flags.go:33] FLAG: --network-plugin="cni"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514671    8640 flags.go:33] FLAG: --network-plugin-mtu="0"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514679    8640 flags.go:33] FLAG: --node-ip="172.16.10.56"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514686    8640 flags.go:33] FLAG: --node-labels="extraRuntime=virtlet,node-role.kubernetes.io/node=true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514703    8640 flags.go:33] FLAG: --node-status-max-images="50"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514711    8640 flags.go:33] FLAG: --node-status-update-frequency="10s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514719    8640 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514727    8640 flags.go:33] FLAG: --oom-score-adj="-999"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514735    8640 flags.go:33] FLAG: --pod-cidr=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514742    8640 flags.go:33] FLAG: --pod-infra-container-image="docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514761    8640 flags.go:33] FLAG: --pod-manifest-path="/etc/kubernetes/manifests"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514770    8640 flags.go:33] FLAG: --pod-max-pids="-1"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514778    8640 flags.go:33] FLAG: --pods-per-core="0"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514790    8640 flags.go:33] FLAG: --port="10250"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514799    8640 flags.go:33] FLAG: --protect-kernel-defaults="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514807    8640 flags.go:33] FLAG: --provider-id=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514814    8640 flags.go:33] FLAG: --qos-reserved=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514823    8640 flags.go:33] FLAG: --read-only-port="10255"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514830    8640 flags.go:33] FLAG: --really-crash-for-testing="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514838    8640 flags.go:33] FLAG: --redirect-container-streaming="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514845    8640 flags.go:33] FLAG: --register-node="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514853    8640 flags.go:33] FLAG: --register-schedulable="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514860    8640 flags.go:33] FLAG: --register-with-taints=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514870    8640 flags.go:33] FLAG: --registry-burst="10"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514878    8640 flags.go:33] FLAG: --registry-qps="5"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514885    8640 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514894    8640 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514902    8640 flags.go:33] FLAG: --rotate-certificates="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514909    8640 flags.go:33] FLAG: --rotate-server-certificates="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514921    8640 flags.go:33] FLAG: --runonce="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514929    8640 flags.go:33] FLAG: --runtime-cgroups=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514937    8640 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514945    8640 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514953    8640 flags.go:33] FLAG: --serialize-image-pulls="true"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514961    8640 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514969    8640 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514976    8640 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514984    8640 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514992    8640 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.514999    8640 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515007    8640 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515014    8640 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515022    8640 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515030    8640 flags.go:33] FLAG: --sync-frequency="1m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515038    8640 flags.go:33] FLAG: --system-cgroups=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515052    8640 flags.go:33] FLAG: --system-reserved=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515060    8640 flags.go:33] FLAG: --system-reserved-cgroup=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515068    8640 flags.go:33] FLAG: --tls-cert-file=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515076    8640 flags.go:33] FLAG: --tls-cipher-suites="[]"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515092    8640 flags.go:33] FLAG: --tls-min-version=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515100    8640 flags.go:33] FLAG: --tls-private-key-file=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515107    8640 flags.go:33] FLAG: --v="2"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515115    8640 flags.go:33] FLAG: --version="false"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515126    8640 flags.go:33] FLAG: --vmodule=""
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515134    8640 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515143    8640 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515187    8640 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:00 cmp002 kubelet[8640]: W1116 16:59:00.515210    8640 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/node]
Nov 16 16:59:00 cmp002 kubelet[8640]: W1116 16:59:00.515226    8640 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Nov 16 16:59:00 cmp002 kubelet[8640]: W1116 16:59:00.515239    8640 server.go:182] Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead
Nov 16 16:59:00 cmp002 kubelet[8640]: I1116 16:59:00.515325    8640 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:00 cmp002 systemd[1]: kubelet.service: Dependency Conflicts=kubelet.service dropped
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kubelet.service'] in directory '/root'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] {'kubelet': True}
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Completed state [kubelet] at time 16:59:00.609601 duration_in_ms=361.915
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/logrotate.d/kubernetes] at time 16:59:00.610175
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/logrotate.d/kubernetes]
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/logrotate'
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:59:00 cmp002 salt-minion[4490]: New file
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/logrotate.d/kubernetes] at time 16:59:00.643598 duration_in_ms=33.424
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Running state [/opt/cni/bin] at time 16:59:00.643846
Nov 16 16:59:00 cmp002 salt-minion[4490]: [INFO    ] Executing state archive.extracted for [/opt/cni/bin]
Nov 16 16:59:01 cmp002 systemd[1]: Started Kubernetes systemd probe.
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.174003    8640 mount_linux.go:180] Detected OS with systemd
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.174190    8640 server.go:407] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.174361    8640 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.174971    8640 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:01 cmp002 kubelet[8640]: W1116 16:59:01.175013    8640 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/node]
Nov 16 16:59:01 cmp002 kubelet[8640]: W1116 16:59:01.175036    8640 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.175297    8640 plugins.go:103] No cloud provider specified.
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.175323    8640 server.go:523] No cloud provider specified: "" from the config file: ""
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.182007    8640 manager.go:155] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.183767    8640 fs.go:142] Filesystem UUIDs: map[2019-11-16-17-50-46-00:/dev/sr0 2ff13334-aecb-43c4-82f2-b8bb5fa56dda:/dev/vda1 9E29-9F5A:/dev/vda15]
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.183955    8640 fs.go:143] Filesystem partitions: map[tmpfs:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /dev/vda1:{mountpoint:/ major:252 minor:1 fsType:ext4 blockSize:0}]
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.197576    8640 manager.go:229] Machine: {NumCores:6 CpuFrequency:2799994 MemoryCapacity:12591427584 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:edff5f6ed84443f8b76ce2eb2b25b859 SystemUUID:EDFF5F6E-D844-43F8-B76C-E2EB2B25B859 BootID:067984a7-eee5-400e-b628-f26054f0c52e Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:24 Capacity:1259143168 Type:vfs Inodes:1537039 HasInodes:true} {Device:/dev/vda1 DeviceMajor:252 DeviceMinor:1 Capacity:103880232960 Type:vfs Inodes:12902400 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:107374182400 Scheduler:none}] NetworkDevices:[{Name:br-mgmt MacAddress:52:54:00:3d:c4:b4 Speed:0 Mtu:9000} {Name:ens3 MacAddress:52:54:00:c7:26:24 Speed:-1 Mtu:9000} {Name:ens4 MacAddress:52:54:00:3d:c4:b4 Speed:-1 Mtu:9000} {Name:ens5 MacAddress:52:54:00:b4:da:df Speed:-1 Mtu:9000} {Name:ens5.1000 MacAddress:52:54:00:b4:da:df Speed:-1 Mtu:9000} {Name:ens6 MacAddress:52:54:00:13:34:fc Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:12591427584 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:2 Memory:0 Cores:[{Id:0 Threads:[2] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:3 Memory:0 Cores:[{Id:0 Threads:[3] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:4 Memory:0 Cores:[{Id:0 Threads:[4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:5 Memory:0 Cores:[{Id:0 Threads:[5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.199265    8640 manager.go:235] Version: {KernelVersion:4.15.0-70-generic ContainerOsVersion:Ubuntu 18.04.3 LTS DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.199428    8640 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.200662    8640 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.200689    8640 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.200893    8640 container_manager_linux.go:272] Creating device plugin manager: true
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.200906    8640 manager.go:109] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.200988    8640 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.203927    8640 server.go:941] Using root directory: /var/lib/kubelet
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.203962    8640 kubelet.go:281] Adding pod path: /etc/kubernetes/manifests
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.203988    8640 file.go:68] Watching path "/etc/kubernetes/manifests"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.204013    8640 kubelet.go:306] Watching apiserver
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.205050    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.205065    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.205120    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.215763    8640 kuberuntime_manager.go:198] Container runtime containerd initialized, version: 1.2.6-0ubuntu1~18.04.2, apiVersion: v1alpha2
Nov 16 16:59:01 cmp002 kubelet[8640]: W1116 16:59:01.216144    8640 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216596    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/aws-ebs"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216630    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/empty-dir"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216646    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/gce-pd"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216659    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/git-repo"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216673    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/host-path"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216685    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/nfs"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216698    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/secret"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216712    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/iscsi"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216728    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/glusterfs"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216749    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/rbd"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216763    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/cinder"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216776    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/quobyte"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216789    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/cephfs"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216805    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/downward-api"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216819    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/fc"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216832    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/flocker"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216844    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-file"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216858    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/configmap"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216872    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/vsphere-volume"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216884    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-disk"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216899    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/photon-pd"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216912    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/projected"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216938    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/portworx-volume"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216954    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/scaleio"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216968    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/local-volume"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216981    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/storageos"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.216999    8640 plugins.go:547] Loaded volume plugin "kubernetes.io/csi"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.217349    8640 server.go:999] Started kubelet
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.217736    8640 server.go:157] Starting to listen read-only on 172.16.10.56:10255
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.218066    8640 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.218102    8640 server.go:137] Starting to listen on 172.16.10.56:10250
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.219043    8640 server.go:333] Adding debug handlers to kubelet server.
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.219560    8640 cri_stats_provider.go:320] Failed to get the info of the filesystem with mountpoint "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.219599    8640 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.219767    8640 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.219805    8640 status_manager.go:152] Starting to sync pod status with apiserver
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.219820    8640 kubelet.go:1829] Starting kubelet main sync loop.
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.219836    8640 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.220032    8640 volume_manager.go:246] The desired_state_of_world populator starts
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.220046    8640 volume_manager.go:248] Starting Kubelet Volume Manager
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.220300    8640 desired_state_of_world_populator.go:130] Desired state populator starts to run
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.223704    8640 factory.go:136] Registering containerd factory
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.224754    8640 factory.go:54] Registering systemd factory
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.224971    8640 factory.go:97] Registering Raw factory
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.225140    8640 manager.go:1222] Started watching for new ooms in manager
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.226270    8640 manager.go:365] Starting recovery of all containers
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.281152    8640 manager.go:370] Recovery completed
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.319978    8640 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.320385    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.320434    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.320968    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.322042    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.322077    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.322093    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.322112    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.322666    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.337069    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.337299    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.337929    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.338122    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.338283    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.338443    8640 cpu_manager.go:155] [cpumanager] starting with none policy
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.338578    8640 cpu_manager.go:156] [cpumanager] reconciling every 10s
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.338710    8640 policy_none.go:42] [cpumanager] none policy: Start
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.339188    8640 container_manager_linux.go:376] Updating kernel flag: kernel/panic, expected value: 10, actual value: 60
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.339390    8640 container_manager_linux.go:376] Updating kernel flag: kernel/panic_on_oops, expected value: 1, actual value: 0
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.339632    8640 container_manager_linux.go:376] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.355875    8640 manager.go:196] Starting Device Plugin manager
Nov 16 16:59:01 cmp002 kubelet[8640]: W1116 16:59:01.356485    8640 manager.go:537] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.356754    8640 manager.go:231] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.357108    8640 plugin_watcher.go:90] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.357499    8640 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp002" not found
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.420904    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.424708    8640 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.520645    8640 kubelet.go:1908] SyncLoop (ADD, "file"): ""
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.521734    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.522985    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.523340    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.524716    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.525217    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.525698    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.526043    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.527348    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.621974    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.722298    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.822888    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.923449    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.927554    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.927974    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.928947    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.929132    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.929343    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: I1116 16:59:01.929569    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:01 cmp002 kubelet[8640]: E1116 16:59:01.930257    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.024007    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.124373    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.206141    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.206795    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.208041    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.224717    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.325067    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.425528    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.526000    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 salt-minion[4490]: [INFO    ] Executing command ['tar', 'xz', '-f', '/var/cache/salt/minion/extrn_files/base/docker-prod-local.artifactory.mirantis.com/artifactory/binary-prod-local/mirantis/kubernetes/containernetworking-plugins/containernetworking-plugins_v0.7.2-173-g8db2808.tar.gz'] in directory '/opt/cni/bin/'
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.626773    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.727121    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 kubelet[8640]: I1116 16:59:02.730580    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:02 cmp002 kubelet[8640]: I1116 16:59:02.731024    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:02 cmp002 kubelet[8640]: I1116 16:59:02.731938    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:02 cmp002 kubelet[8640]: I1116 16:59:02.732100    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:02 cmp002 kubelet[8640]: I1116 16:59:02.732265    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:02 cmp002 kubelet[8640]: I1116 16:59:02.732415    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.733172    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.827589    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:02 cmp002 kubelet[8640]: E1116 16:59:02.928020    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.028500    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.128953    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.207382    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.208084    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.209199    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.229374    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command ['tar', '--version'] in directory '/root'
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] {'extracted_files': 'no tar output so far'}
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Completed state [/opt/cni/bin] at time 16:59:03.248289 duration_in_ms=2604.437
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/kubernetes/proxy.kubeconfig] at time 16:59:03.248719
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/kubernetes/proxy.kubeconfig]
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/kube-proxy/proxy.kubeconfig'
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.329694    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.429998    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.530132    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/-.*//g' -e 's/v//g' -e 's/Kubernetes //g' | awk -F'.' '{print $1 "." $2}'' in directory '/root'
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.630316    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.730480    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command '(hyperkube --version kubelet 2> /dev/null || echo '0.0') | sed -e 's/+.*//g' -e 's/v//g' -e 's/Kubernetes //g'' in directory '/root'
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.830679    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:59:03 cmp002 salt-minion[4490]: New file
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/kubernetes/proxy.kubeconfig] at time 16:59:03.862562 duration_in_ms=613.841
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/systemd/system/kube-proxy.service] at time 16:59:03.862946
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/systemd/system/kube-proxy.service]
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Fetching file from saltenv 'base', ** done ** 'kubernetes/files/systemd/kube-proxy.service'
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:59:03 cmp002 salt-minion[4490]: New file
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/systemd/system/kube-proxy.service] at time 16:59:03.893776 duration_in_ms=30.829
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Running state [/etc/default/kube-proxy] at time 16:59:03.894019
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing state file.managed for [/etc/default/kube-proxy]
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] File changed:
Nov 16 16:59:03 cmp002 salt-minion[4490]: New file
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Completed state [/etc/default/kube-proxy] at time 16:59:03.896177 duration_in_ms=2.158
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Running state [kube-proxy] at time 16:59:03.897864
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing state service.running for [kube-proxy]
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'status', 'kube-proxy.service', '-n', '0'] in directory '/root'
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:03 cmp002 kubelet[8640]: E1116 16:59:03.931377    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'start', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:03 cmp002 systemd[1]: Started /bin/systemctl start kube-proxy.service.
Nov 16 16:59:03 cmp002 systemd[1]: Started Kubernetes Kube-Proxy Server.
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-active', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:03 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:04 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.031647    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemd-run', '--scope', 'systemctl', 'enable', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:04 cmp002 systemd[1]: Started /bin/systemctl enable kube-proxy.service.
Nov 16 16:59:04 cmp002 systemd[1]: Reloading.
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.121459    8844 flags.go:33] FLAG: --alsologtostderr="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122564    8844 flags.go:33] FLAG: --application-metrics-count-limit="100"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122611    8844 flags.go:33] FLAG: --azure-container-registry-config=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122629    8844 flags.go:33] FLAG: --bind-address="0.0.0.0"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122643    8844 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122656    8844 flags.go:33] FLAG: --cleanup="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122670    8844 flags.go:33] FLAG: --cleanup-iptables="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122680    8844 flags.go:33] FLAG: --cleanup-ipvs="true"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122691    8844 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122709    8844 flags.go:33] FLAG: --cluster-cidr="192.168.0.0/16"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122720    8844 flags.go:33] FLAG: --config=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122730    8844 flags.go:33] FLAG: --config-sync-period="15m0s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122743    8844 flags.go:33] FLAG: --conntrack-max="0"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122755    8844 flags.go:33] FLAG: --conntrack-max-per-core="32768"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122765    8844 flags.go:33] FLAG: --conntrack-min="131072"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122775    8844 flags.go:33] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122786    8844 flags.go:33] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122797    8844 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122810    8844 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122821    8844 flags.go:33] FLAG: --default-not-ready-toleration-seconds="300"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122832    8844 flags.go:33] FLAG: --default-unreachable-toleration-seconds="300"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122843    8844 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122854    8844 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122863    8844 flags.go:33] FLAG: --docker-only="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122873    8844 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122883    8844 flags.go:33] FLAG: --docker-tls="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122893    8844 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122901    8844 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122919    8844 flags.go:33] FLAG: --docker-tls-key="key.pem"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122930    8844 flags.go:33] FLAG: --enable-load-reader="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122940    8844 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122949    8844 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122959    8844 flags.go:33] FLAG: --feature-gates=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122981    8844 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.122994    8844 flags.go:33] FLAG: --healthz-bind-address="0.0.0.0:10256"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123004    8844 flags.go:33] FLAG: --healthz-port="10256"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123013    8844 flags.go:33] FLAG: --help="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123018    8844 flags.go:33] FLAG: --hostname-override=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123024    8844 flags.go:33] FLAG: --housekeeping-interval="10s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123030    8844 flags.go:33] FLAG: --iptables-masquerade-bit="14"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123036    8844 flags.go:33] FLAG: --iptables-min-sync-period="0s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123041    8844 flags.go:33] FLAG: --iptables-sync-period="30s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123047    8844 flags.go:33] FLAG: --ipvs-exclude-cidrs="[]"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123065    8844 flags.go:33] FLAG: --ipvs-min-sync-period="0s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123072    8844 flags.go:33] FLAG: --ipvs-scheduler=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123077    8844 flags.go:33] FLAG: --ipvs-sync-period="30s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123083    8844 flags.go:33] FLAG: --kube-api-burst="10"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123089    8844 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123095    8844 flags.go:33] FLAG: --kube-api-qps="5"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123103    8844 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/proxy.kubeconfig"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123109    8844 flags.go:33] FLAG: --log-backtrace-at=":0"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123117    8844 flags.go:33] FLAG: --log-cadvisor-usage="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123130    8844 flags.go:33] FLAG: --log-dir=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123136    8844 flags.go:33] FLAG: --log-file=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123141    8844 flags.go:33] FLAG: --log-flush-frequency="5s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123147    8844 flags.go:33] FLAG: --logtostderr="true"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123153    8844 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123159    8844 flags.go:33] FLAG: --masquerade-all="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123165    8844 flags.go:33] FLAG: --master=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123170    8844 flags.go:33] FLAG: --mesos-agent="127.0.0.1:5051"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123176    8844 flags.go:33] FLAG: --mesos-agent-timeout="10s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123182    8844 flags.go:33] FLAG: --metrics-bind-address="127.0.0.1:10249"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123188    8844 flags.go:33] FLAG: --metrics-port="10249"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123194    8844 flags.go:33] FLAG: --nodeport-addresses="[]"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123208    8844 flags.go:33] FLAG: --oom-score-adj="-999"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123214    8844 flags.go:33] FLAG: --profiling="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123220    8844 flags.go:33] FLAG: --proxy-mode="iptables"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123228    8844 flags.go:33] FLAG: --proxy-port-range=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123240    8844 flags.go:33] FLAG: --resource-container="/kube-proxy"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123246    8844 flags.go:33] FLAG: --skip-headers="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123252    8844 flags.go:33] FLAG: --stderrthreshold="2"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123258    8844 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123264    8844 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123269    8844 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123275    8844 flags.go:33] FLAG: --storage-driver-password="root"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123281    8844 flags.go:33] FLAG: --storage-driver-secure="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123286    8844 flags.go:33] FLAG: --storage-driver-table="stats"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123292    8844 flags.go:33] FLAG: --storage-driver-user="root"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123297    8844 flags.go:33] FLAG: --udp-timeout="250ms"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123303    8844 flags.go:33] FLAG: --v="2"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123308    8844 flags.go:33] FLAG: --version="false"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123317    8844 flags.go:33] FLAG: --vmodule=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123323    8844 flags.go:33] FLAG: --write-config-to=""
Nov 16 16:59:04 cmp002 kube-proxy[8844]: W1116 16:59:04.123332    8844 server.go:198] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.123372    8844 feature_gate.go:206] feature gates: &{map[]}
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.131866    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 kernel: [  203.117515] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Nov 16 16:59:04 cmp002 kernel: [  203.117560] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
Nov 16 16:59:04 cmp002 systemd[1]: kubelet.service: Dependency Conflicts=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:04 cmp002 systemd[1]: kubelet.service: Dependency ConflictedBy=cadvisor.service dropped, merged into kubelet.service
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.208438    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.209362    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.211261    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.231987    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 salt-minion[4490]: [INFO    ] Executing command ['systemctl', 'is-enabled', 'kube-proxy.service'] in directory '/root'
Nov 16 16:59:04 cmp002 salt-minion[4490]: [INFO    ] {'kube-proxy': True}
Nov 16 16:59:04 cmp002 salt-minion[4490]: [INFO    ] Completed state [kube-proxy] at time 16:59:04.261262 duration_in_ms=363.398
Nov 16 16:59:04 cmp002 salt-minion[4490]: [INFO    ] Returning information for job: 20191116165753766584
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.332155    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 kubelet[8640]: I1116 16:59:04.333409    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:04 cmp002 kubelet[8640]: I1116 16:59:04.333791    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:04 cmp002 kubelet[8640]: I1116 16:59:04.334896    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:04 cmp002 kubelet[8640]: I1116 16:59:04.334933    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:04 cmp002 kubelet[8640]: I1116 16:59:04.334947    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:04 cmp002 kubelet[8640]: I1116 16:59:04.334970    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.335707    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 cmp002 kernel: [  203.368574] IPVS: ipvs loaded.
Nov 16 16:59:04 cmp002 kernel: [  203.378166] IPVS: [rr] scheduler registered.
Nov 16 16:59:04 cmp002 kernel: [  203.382770] IPVS: [wrr] scheduler registered.
Nov 16 16:59:04 cmp002 kernel: [  203.385206] IPVS: [sh] scheduler registered.
Nov 16 16:59:04 cmp002 kube-proxy[8844]: W1116 16:59:04.424613    8844 node.go:103] Failed to retrieve node info: Get https://172.16.10.36:443/api/v1/nodes/cmp002: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.424657    8844 server_others.go:148] Using iptables Proxier.
Nov 16 16:59:04 cmp002 kube-proxy[8844]: W1116 16:59:04.424751    8844 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.424807    8844 server_others.go:178] Tearing down inactive rules.
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.432551    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.442671    8844 server.go:483] Version: v1.13.5-3+98374c02d2d8c1
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.450381    8844 server.go:509] Running in resource-only container "/kube-proxy"
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.451531    8844 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 196608
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.451591    8844 conntrack.go:52] Setting nf_conntrack_max to 196608
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.455405    8844 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.455544    8844 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.455804    8844 config.go:102] Starting endpoints config controller
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.455833    8844 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.456315    8844 config.go:202] Starting service config controller
Nov 16 16:59:04 cmp002 kube-proxy[8844]: I1116 16:59:04.456327    8844 controller_utils.go:1027] Waiting for caches to sync for service config controller
Nov 16 16:59:04 cmp002 kube-proxy[8844]: E1116 16:59:04.456613    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 cmp002 kube-proxy[8844]: E1116 16:59:04.456727    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:04 cmp002 kube-proxy[8844]: E1116 16:59:04.457804    8844 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.532709    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.632964    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.733203    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.833424    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:04 cmp002 kubelet[8640]: E1116 16:59:04.933839    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.034280    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.134715    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.209948    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.211475    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.212711    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.235144    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.335365    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.435692    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kube-proxy[8844]: E1116 16:59:05.457881    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 cmp002 kube-proxy[8844]: E1116 16:59:05.458659    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.536004    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.636284    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.736595    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.836807    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:05 cmp002 kubelet[8640]: E1116 16:59:05.937049    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.037276    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.137494    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.210899    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.212240    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.213336    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.237726    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.338052    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.438294    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kube-proxy[8844]: E1116 16:59:06.459146    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 cmp002 kube-proxy[8844]: E1116 16:59:06.460040    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.538599    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.638784    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.739071    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.839433    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:06 cmp002 kubelet[8640]: E1116 16:59:06.939746    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.040097    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.140360    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.212204    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.213100    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.214307    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.240592    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.340924    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.441212    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kube-proxy[8844]: E1116 16:59:07.460721    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 cmp002 kube-proxy[8844]: E1116 16:59:07.461753    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 cmp002 kubelet[8640]: I1116 16:59:07.535994    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:07 cmp002 kubelet[8640]: I1116 16:59:07.536533    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:07 cmp002 kubelet[8640]: I1116 16:59:07.538087    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:07 cmp002 kubelet[8640]: I1116 16:59:07.538170    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:07 cmp002 kubelet[8640]: I1116 16:59:07.538202    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:07 cmp002 kubelet[8640]: I1116 16:59:07.538247    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.539288    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.541572    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.641880    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.742219    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.842497    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:07 cmp002 kubelet[8640]: E1116 16:59:07.942835    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.043173    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.143484    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.213666    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.214644    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.215660    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.243991    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.344380    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.444718    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kube-proxy[8844]: E1116 16:59:08.461864    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 cmp002 kube-proxy[8844]: E1116 16:59:08.463081    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.544989    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.645231    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.745591    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.845931    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:08 cmp002 kubelet[8640]: E1116 16:59:08.946152    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.046404    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.146691    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.215511    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.216258    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.217191    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.247069    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.347456    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.447649    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kube-proxy[8844]: E1116 16:59:09.462706    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 cmp002 kube-proxy[8844]: E1116 16:59:09.463906    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.547879    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.648092    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.748355    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.848628    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:09 cmp002 kubelet[8640]: E1116 16:59:09.948933    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.049294    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.149585    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.217051    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.217910    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.219284    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.249943    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kube-proxy[8844]: E1116 16:59:10.332671    8844 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.350207    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.450491    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kube-proxy[8844]: E1116 16:59:10.464160    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 cmp002 kube-proxy[8844]: E1116 16:59:10.465128    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.550741    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.650984    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.751311    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.851664    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:10 cmp002 kubelet[8640]: E1116 16:59:10.951952    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.052335    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.152604    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.218477    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.219253    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.220611    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.252912    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.353173    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.357843    8640 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.426333    8640 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.453507    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kube-proxy[8844]: E1116 16:59:11.465609    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 cmp002 kube-proxy[8844]: E1116 16:59:11.466428    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.553914    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.654298    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.754616    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.854920    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:11 cmp002 kubelet[8640]: E1116 16:59:11.955239    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.055531    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.155809    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.219840    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.220943    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.222029    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.256104    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.356344    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.457085    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kube-proxy[8844]: E1116 16:59:12.467050    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 cmp002 kube-proxy[8844]: E1116 16:59:12.467914    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.557496    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.657869    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.758261    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.858518    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:12 cmp002 kubelet[8640]: E1116 16:59:12.958828    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.059127    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.159467    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.221303    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.222207    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.223109    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.259840    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.360237    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.460574    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kube-proxy[8844]: E1116 16:59:13.468573    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 cmp002 kube-proxy[8844]: E1116 16:59:13.469392    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.561385    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.661629    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.761959    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.862291    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:13 cmp002 kubelet[8640]: I1116 16:59:13.939704    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:13 cmp002 kubelet[8640]: I1116 16:59:13.940206    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:13 cmp002 kubelet[8640]: I1116 16:59:13.941677    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:13 cmp002 kubelet[8640]: I1116 16:59:13.942150    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:13 cmp002 kubelet[8640]: I1116 16:59:13.942429    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:13 cmp002 kubelet[8640]: I1116 16:59:13.942707    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.944117    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:13 cmp002 kubelet[8640]: E1116 16:59:13.962659    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.063056    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.163436    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.223023    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.224213    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.225661    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.263788    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.364164    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.464528    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kube-proxy[8844]: E1116 16:59:14.470019    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 cmp002 kube-proxy[8844]: E1116 16:59:14.470998    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.564908    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.665301    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.765740    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.865915    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:14 cmp002 kubelet[8640]: E1116 16:59:14.966129    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.066456    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.166730    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.225263    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.225917    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.227199    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.267007    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.367373    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.467698    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kube-proxy[8844]: E1116 16:59:15.471513    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 cmp002 kube-proxy[8844]: E1116 16:59:15.472236    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.568300    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.668629    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.768991    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.869262    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:15 cmp002 kubelet[8640]: E1116 16:59:15.969629    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.069910    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.170252    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.226407    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.227394    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.228660    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.270582    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.370957    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.471301    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kube-proxy[8844]: E1116 16:59:16.472867    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 cmp002 kube-proxy[8844]: E1116 16:59:16.473928    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.571653    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.672033    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.772368    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.873265    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:16 cmp002 kubelet[8640]: E1116 16:59:16.973607    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.073910    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.174137    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.227455    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.228288    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.229604    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.274369    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.374737    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kube-proxy[8844]: E1116 16:59:17.474355    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 cmp002 kube-proxy[8844]: E1116 16:59:17.475189    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.475012    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.575286    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.675582    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.775967    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.876371    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:17 cmp002 kubelet[8640]: E1116 16:59:17.976716    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.077008    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.177319    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.228907    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.229850    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.230881    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.277717    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.378057    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kube-proxy[8844]: E1116 16:59:18.475649    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 cmp002 kube-proxy[8844]: E1116 16:59:18.476748    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.478344    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.578783    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.679080    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.779354    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.879667    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:18 cmp002 kubelet[8640]: E1116 16:59:18.979975    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.080255    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.180618    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.230624    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.231084    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.232445    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.280924    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.381283    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kube-proxy[8844]: E1116 16:59:19.476998    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 cmp002 kube-proxy[8844]: E1116 16:59:19.478097    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.481621    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.581952    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.682144    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.782480    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.882800    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:19 cmp002 kubelet[8640]: E1116 16:59:19.983043    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.083298    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.183564    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.232386    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.233637    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.234202    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.283879    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kube-proxy[8844]: E1116 16:59:20.333762    8844 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.384071    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kube-proxy[8844]: E1116 16:59:20.477760    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 cmp002 kube-proxy[8844]: E1116 16:59:20.478696    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.484229    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.584538    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.684698    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.784878    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.885062    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:20 cmp002 kubelet[8640]: I1116 16:59:20.944358    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:20 cmp002 kubelet[8640]: I1116 16:59:20.944803    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:20 cmp002 kubelet[8640]: I1116 16:59:20.945895    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:20 cmp002 kubelet[8640]: I1116 16:59:20.946095    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:20 cmp002 kubelet[8640]: I1116 16:59:20.946270    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:20 cmp002 kubelet[8640]: I1116 16:59:20.946449    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.947354    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:20 cmp002 kubelet[8640]: E1116 16:59:20.985432    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.085746    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.186022    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.243039    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.243062    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.243625    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.286231    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.358067    8640 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.386625    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.427302    8640 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:21 cmp002 kube-proxy[8844]: E1116 16:59:21.478670    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 cmp002 kube-proxy[8844]: E1116 16:59:21.479774    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.486801    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.587145    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.687397    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.787619    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.887973    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:21 cmp002 kubelet[8640]: E1116 16:59:21.988571    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.088834    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.189108    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.245030    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.245671    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.246326    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.289398    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.389572    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kube-proxy[8844]: E1116 16:59:22.479651    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 cmp002 kube-proxy[8844]: E1116 16:59:22.480571    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.489779    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.590149    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.690332    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.790566    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.890859    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:22 cmp002 kubelet[8640]: E1116 16:59:22.991031    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.091226    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.191412    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.246674    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.247528    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.248872    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.291782    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.392054    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kube-proxy[8844]: E1116 16:59:23.480840    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 cmp002 kube-proxy[8844]: E1116 16:59:23.481734    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.492270    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.592670    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.692851    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.793184    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.893427    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:23 cmp002 kubelet[8640]: E1116 16:59:23.993682    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.093818    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.194025    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.247953    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.248639    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.249770    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.294131    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.394292    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kube-proxy[8844]: E1116 16:59:24.481914    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 cmp002 kube-proxy[8844]: E1116 16:59:24.482552    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.494413    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.594605    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.694737    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.794915    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.895058    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:24 cmp002 kubelet[8640]: E1116 16:59:24.995325    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.095451    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.195669    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.248997    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.249760    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.250779    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.295788    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.396020    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kube-proxy[8844]: E1116 16:59:25.483182    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 cmp002 kube-proxy[8844]: E1116 16:59:25.483713    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.496128    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.596303    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.696530    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.796719    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.896963    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:25 cmp002 kubelet[8640]: E1116 16:59:25.997120    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.097241    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.197385    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.250297    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.250976    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.251946    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.306065    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.406243    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kube-proxy[8844]: E1116 16:59:26.484173    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 cmp002 kube-proxy[8844]: E1116 16:59:26.484884    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.506462    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.606719    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.706807    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.807011    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:26 cmp002 kubelet[8640]: E1116 16:59:26.907192    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.007501    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.107626    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.207907    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.252068    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.252670    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.253572    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.308057    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.408224    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kube-proxy[8844]: E1116 16:59:27.485544    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 cmp002 kube-proxy[8844]: E1116 16:59:27.486196    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.508332    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.608596    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.708774    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.809028    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.909302    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:27 cmp002 kubelet[8640]: I1116 16:59:27.947618    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:27 cmp002 kubelet[8640]: I1116 16:59:27.948087    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:27 cmp002 kubelet[8640]: I1116 16:59:27.949752    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:27 cmp002 kubelet[8640]: I1116 16:59:27.949807    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:27 cmp002 kubelet[8640]: I1116 16:59:27.949925    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:27 cmp002 kubelet[8640]: I1116 16:59:27.949988    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:27 cmp002 kubelet[8640]: E1116 16:59:27.951038    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.009511    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.109650    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.209833    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.254068    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.254485    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.255287    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.310064    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.410418    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kube-proxy[8844]: E1116 16:59:28.486782    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp002 kube-proxy[8844]: E1116 16:59:28.488064    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.510614    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.610774    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.711037    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.822166    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:28 cmp002 kubelet[8640]: E1116 16:59:28.922412    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.022767    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.123031    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.223252    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.254918    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.256252    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.257850    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.323513    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.423852    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kube-proxy[8844]: E1116 16:59:29.488238    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp002 kube-proxy[8844]: E1116 16:59:29.489272    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.524240    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.624599    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.724784    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.824972    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:29 cmp002 kubelet[8640]: E1116 16:59:29.925131    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.025340    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.125624    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.225872    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.256113    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.257246    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.258814    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.326047    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kube-proxy[8844]: E1116 16:59:30.334427    8844 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.426227    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kube-proxy[8844]: E1116 16:59:30.489185    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 cmp002 kube-proxy[8844]: E1116 16:59:30.490226    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.526470    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.626691    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.726890    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.827081    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:30 cmp002 kubelet[8640]: E1116 16:59:30.927371    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.027517    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.127667    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.227789    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.256910    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.257831    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.259410    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.327922    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.358228    8640 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.428045    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.428331    8640 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:31 cmp002 kube-proxy[8844]: E1116 16:59:31.489963    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp002 kube-proxy[8844]: E1116 16:59:31.490856    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.528199    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.628364    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.728540    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.828709    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:31 cmp002 kubelet[8640]: E1116 16:59:31.928861    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.029092    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.129333    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.229557    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.257833    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.258509    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.259964    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.329731    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.429947    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kube-proxy[8844]: E1116 16:59:32.490849    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 cmp002 kube-proxy[8844]: E1116 16:59:32.491597    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.530181    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.630428    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.730701    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.830945    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:32 cmp002 kubelet[8640]: E1116 16:59:32.931151    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.031449    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.131744    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.232007    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.259107    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.259775    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.260901    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.332190    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.432523    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 kube-proxy[8844]: E1116 16:59:33.492200    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp002 kube-proxy[8844]: E1116 16:59:33.492867    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.532808    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.633108    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 salt-minion[4490]: [INFO    ] User sudo_ubuntu Executing command cmd.run with jid 20191116165933687300
Nov 16 16:59:33 cmp002 salt-minion[4490]: [INFO    ] Starting a new job with PID 8995
Nov 16 16:59:33 cmp002 salt-minion[4490]: [INFO    ] Executing command 'calicoctl node status' in directory '/root'
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.733432    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 salt-minion[4490]: [INFO    ] Returning information for job: 20191116165933687300
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.833731    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:33 cmp002 kubelet[8640]: E1116 16:59:33.933968    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.034362    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.134611    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.234923    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.260738    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.261270    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.262295    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.335187    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 salt-minion[4490]: [INFO    ] User sudo_ubuntu Executing command cmd.run with jid 20191116165934415277
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.435429    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 salt-minion[4490]: [INFO    ] Starting a new job with PID 9015
Nov 16 16:59:34 cmp002 salt-minion[4490]: [INFO    ] Executing command 'calicoctl get ippool' in directory '/root'
Nov 16 16:59:34 cmp002 kube-proxy[8844]: I1116 16:59:34.456432    8844 proxier.go:645] Not syncing iptables until Services and Endpoints have been received from master
Nov 16 16:59:34 cmp002 kube-proxy[8844]: E1116 16:59:34.492997    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp002 kube-proxy[8844]: E1116 16:59:34.493944    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.535631    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 salt-minion[4490]: [INFO    ] Returning information for job: 20191116165934415277
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.636032    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.736411    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.836634    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.936863    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:34 cmp002 kubelet[8640]: I1116 16:59:34.951241    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:34 cmp002 kubelet[8640]: I1116 16:59:34.951657    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:34 cmp002 kubelet[8640]: I1116 16:59:34.952876    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:34 cmp002 kubelet[8640]: I1116 16:59:34.952938    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:34 cmp002 kubelet[8640]: I1116 16:59:34.952960    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:34 cmp002 kubelet[8640]: I1116 16:59:34.952994    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:34 cmp002 kubelet[8640]: E1116 16:59:34.953917    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.037168    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.137472    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.237734    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.262170    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.262898    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.264032    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.338001    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.438382    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kube-proxy[8844]: E1116 16:59:35.494895    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp002 kube-proxy[8844]: E1116 16:59:35.495551    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.538670    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.638944    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.739848    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.840251    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:35 cmp002 kubelet[8640]: E1116 16:59:35.941041    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.041325    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.141595    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.241918    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.263263    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.264348    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.265326    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.342211    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.442495    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kube-proxy[8844]: E1116 16:59:36.496318    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp002 kube-proxy[8844]: E1116 16:59:36.497355    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.542793    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.643123    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.743388    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.843647    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:36 cmp002 kubelet[8640]: E1116 16:59:36.943999    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.044198    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.144528    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.244869    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.265009    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.266089    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.267562    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.345536    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.445974    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kube-proxy[8844]: E1116 16:59:37.497787    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp002 kube-proxy[8844]: E1116 16:59:37.498768    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.546488    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.646881    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.747915    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.848190    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:37 cmp002 kubelet[8640]: E1116 16:59:37.948510    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.048877    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.149235    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.249591    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.266706    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.267530    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.268853    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.350036    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.450399    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kube-proxy[8844]: E1116 16:59:38.499110    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp002 kube-proxy[8844]: E1116 16:59:38.499879    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.550769    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.651068    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.751375    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.852142    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:38 cmp002 kubelet[8640]: E1116 16:59:38.952369    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.052663    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.153117    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.253443    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.267742    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.268703    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.269898    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.353737    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.453945    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kube-proxy[8844]: E1116 16:59:39.499785    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp002 kube-proxy[8844]: E1116 16:59:39.500889    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.554145    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.654347    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.754541    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.854739    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:39 cmp002 kubelet[8640]: E1116 16:59:39.954937    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.055109    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.155271    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.255473    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.268574    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.269488    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.270477    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp002 kube-proxy[8844]: E1116 16:59:40.335096    8844 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.355670    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.455865    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kube-proxy[8844]: E1116 16:59:40.500401    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp002 kube-proxy[8844]: E1116 16:59:40.501497    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.556021    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.656353    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.757029    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.857749    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:40 cmp002 kubelet[8640]: E1116 16:59:40.958168    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.058804    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.159471    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.259818    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.270333    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.271507    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.271717    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.358583    8640 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.360345    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.429549    8640 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.460567    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kube-proxy[8844]: E1116 16:59:41.501700    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp002 kube-proxy[8844]: E1116 16:59:41.502809    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.560973    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.661159    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.761418    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.861865    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:41 cmp002 kubelet[8640]: I1116 16:59:41.954298    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:41 cmp002 kubelet[8640]: I1116 16:59:41.954969    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:41 cmp002 kubelet[8640]: I1116 16:59:41.956810    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:41 cmp002 kubelet[8640]: I1116 16:59:41.956873    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:41 cmp002 kubelet[8640]: I1116 16:59:41.956897    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:41 cmp002 kubelet[8640]: I1116 16:59:41.956933    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.958071    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:41 cmp002 kubelet[8640]: E1116 16:59:41.962396    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.062639    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.162944    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.263429    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.271487    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.272541    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.273535    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.364027    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.464343    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kube-proxy[8844]: E1116 16:59:42.502556    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp002 kube-proxy[8844]: E1116 16:59:42.503348    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.564560    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.664898    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.765153    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.865347    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:42 cmp002 kubelet[8640]: E1116 16:59:42.965512    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.065661    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.165856    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.266081    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.272456    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.273388    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.274561    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.366285    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.466587    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kube-proxy[8844]: E1116 16:59:43.503943    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp002 kube-proxy[8844]: E1116 16:59:43.504836    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.566923    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.667149    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.767371    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.867572    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:43 cmp002 kubelet[8640]: E1116 16:59:43.967801    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.068238    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.168506    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.268867    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.273971    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.274539    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.275676    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.369205    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.469586    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kube-proxy[8844]: E1116 16:59:44.505395    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp002 kube-proxy[8844]: E1116 16:59:44.506297    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.569813    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.670155    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.770549    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.870791    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:44 cmp002 kubelet[8640]: E1116 16:59:44.971028    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.071262    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.171460    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.271690    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.274857    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.275763    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.276818    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.371979    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.472163    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kube-proxy[8844]: E1116 16:59:45.506288    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp002 kube-proxy[8844]: E1116 16:59:45.507134    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.572365    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.672546    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.772906    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.873257    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:45 cmp002 kubelet[8640]: E1116 16:59:45.973694    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.073901    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.174099    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.274284    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.276019    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.276783    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.278219    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.374570    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.474830    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kube-proxy[8844]: E1116 16:59:46.507350    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp002 kube-proxy[8844]: E1116 16:59:46.508135    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.575096    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.675441    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.775673    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.875874    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:46 cmp002 kubelet[8640]: E1116 16:59:46.976055    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.076256    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.176500    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.276702    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.276878    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.277768    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.278890    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.376904    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.477142    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kube-proxy[8844]: E1116 16:59:47.508685    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp002 kube-proxy[8844]: E1116 16:59:47.509451    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.577319    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.677550    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.777797    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.878010    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:47 cmp002 kubelet[8640]: E1116 16:59:47.978304    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.078556    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.178859    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.278335    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.278640    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.279050    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.279671    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.379369    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.479592    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kube-proxy[8844]: E1116 16:59:48.510006    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp002 kube-proxy[8844]: E1116 16:59:48.510532    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.579984    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.680357    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.780587    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.880907    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:48 cmp002 kubelet[8640]: I1116 16:59:48.958408    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:48 cmp002 kubelet[8640]: I1116 16:59:48.958771    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:48 cmp002 kubelet[8640]: I1116 16:59:48.959977    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:48 cmp002 kubelet[8640]: I1116 16:59:48.960050    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:48 cmp002 kubelet[8640]: I1116 16:59:48.960072    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:48 cmp002 kubelet[8640]: I1116 16:59:48.960123    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.961178    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:48 cmp002 kubelet[8640]: E1116 16:59:48.981099    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.081357    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.181619    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.279637    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.280694    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.281283    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.281928    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.382215    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.482550    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kube-proxy[8844]: E1116 16:59:49.510931    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp002 kube-proxy[8844]: E1116 16:59:49.511725    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.582784    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.683133    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.783410    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.883708    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:49 cmp002 kubelet[8640]: E1116 16:59:49.983964    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.084180    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.184409    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.280829    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.282544    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.282715    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.284549    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kube-proxy[8844]: E1116 16:59:50.336104    8844 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.384834    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.485270    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kube-proxy[8844]: E1116 16:59:50.512822    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp002 kube-proxy[8844]: E1116 16:59:50.513231    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.585560    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.685743    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.786062    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.886271    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:50 cmp002 kubelet[8640]: E1116 16:59:50.986740    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.087030    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.187464    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.281978    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.283615    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.285207    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.287684    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.358932    8640 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.388017    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.431467    8640 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.488332    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kube-proxy[8844]: E1116 16:59:51.514182    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp002 kube-proxy[8844]: E1116 16:59:51.515260    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.588611    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.688919    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.789199    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.889497    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:51 cmp002 kubelet[8640]: E1116 16:59:51.989869    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.090192    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.190517    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.282785    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.284215    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.285975    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.290676    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.390907    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.491235    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kube-proxy[8844]: E1116 16:59:52.515567    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp002 kube-proxy[8844]: E1116 16:59:52.516416    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.591483    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.691684    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.791923    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.892215    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:52 cmp002 kubelet[8640]: E1116 16:59:52.992423    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.092606    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.192935    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.283554    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.284830    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.286626    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.293075    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.393382    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.493568    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kube-proxy[8844]: E1116 16:59:53.516460    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp002 kube-proxy[8844]: E1116 16:59:53.517280    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.593787    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.694128    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.794330    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.894710    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:53 cmp002 kubelet[8640]: E1116 16:59:53.995271    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.095468    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.195765    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.284519    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.285492    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.287275    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.295963    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.396281    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.496648    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kube-proxy[8844]: E1116 16:59:54.518062    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp002 kube-proxy[8844]: E1116 16:59:54.518789    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.597013    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.697451    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.797811    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.898133    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:54 cmp002 kubelet[8640]: E1116 16:59:54.998339    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.098507    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.198730    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.285688    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.286376    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.288037    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.298989    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.399230    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.499617    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kube-proxy[8844]: E1116 16:59:55.519092    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp002 kube-proxy[8844]: E1116 16:59:55.519774    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.600012    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.700261    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.800585    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.900887    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:55 cmp002 kubelet[8640]: I1116 16:59:55.961421    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 16:59:55 cmp002 kubelet[8640]: I1116 16:59:55.961727    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 16:59:55 cmp002 kubelet[8640]: I1116 16:59:55.962717    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 16:59:55 cmp002 kubelet[8640]: I1116 16:59:55.962784    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 16:59:55 cmp002 kubelet[8640]: I1116 16:59:55.962803    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 16:59:55 cmp002 kubelet[8640]: I1116 16:59:55.962826    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 16:59:55 cmp002 kubelet[8640]: E1116 16:59:55.963568    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.001133    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.101351    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.201703    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.287057    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.287701    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.288888    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.301944    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.402224    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.502593    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kube-proxy[8844]: E1116 16:59:56.520432    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp002 kube-proxy[8844]: E1116 16:59:56.520971    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.602909    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.703177    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.803442    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:56 cmp002 kubelet[8640]: E1116 16:59:56.903704    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.003978    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.104264    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.204503    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.288537    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.289560    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.290399    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.304736    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.404921    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.505168    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kube-proxy[8844]: E1116 16:59:57.521369    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp002 kube-proxy[8844]: E1116 16:59:57.522159    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.605418    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.705615    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.805810    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:57 cmp002 kubelet[8640]: E1116 16:59:57.905954    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.006101    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.106298    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.206656    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.289974    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.290504    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.291626    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.306905    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.407058    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.507286    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kube-proxy[8844]: E1116 16:59:58.522373    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp002 kube-proxy[8844]: E1116 16:59:58.523264    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.607588    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.707871    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.808090    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:58 cmp002 kubelet[8640]: E1116 16:59:58.908391    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.008722    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.108951    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.209252    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.291597    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.292665    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.293693    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.309650    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.409946    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.510257    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kube-proxy[8844]: E1116 16:59:59.523453    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp002 kube-proxy[8844]: E1116 16:59:59.524537    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.610658    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.710937    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.811178    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 16:59:59 cmp002 kubelet[8640]: E1116 16:59:59.911473    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.011737    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.112123    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.212418    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.292564    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.293160    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.294231    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.312705    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kube-proxy[8844]: E1116 17:00:00.337349    8844 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.412911    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.513284    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kube-proxy[8844]: E1116 17:00:00.524946    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp002 kube-proxy[8844]: E1116 17:00:00.525597    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.613542    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.713727    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.813947    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:00 cmp002 kubelet[8640]: E1116 17:00:00.914131    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.014416    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.114653    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.214974    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.293522    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.294421    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.295364    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.315277    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.359190    8640 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.415632    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.432463    8640 event.go:212] Unable to write event: 'Post https://172.16.10.36:443/api/v1/namespaces/default/events: dial tcp 172.16.10.36:443: connect: connection refused' (may retry after sleeping)
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.516016    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kube-proxy[8844]: E1116 17:00:01.526133    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp002 kube-proxy[8844]: E1116 17:00:01.526898    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.616508    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.716972    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.817161    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:01 cmp002 kubelet[8640]: E1116 17:00:01.917424    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.017655    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.117837    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.218226    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.294683    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.295828    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.296733    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.318552    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.418736    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.519134    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kube-proxy[8844]: E1116 17:00:02.527304    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp002 kube-proxy[8844]: E1116 17:00:02.530386    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.619345    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.719562    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.819795    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.920016    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:02 cmp002 kubelet[8640]: I1116 17:00:02.963803    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 17:00:02 cmp002 kubelet[8640]: I1116 17:00:02.964166    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:00:02 cmp002 kubelet[8640]: I1116 17:00:02.965781    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 17:00:02 cmp002 kubelet[8640]: I1116 17:00:02.966156    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 17:00:02 cmp002 kubelet[8640]: I1116 17:00:02.966537    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 17:00:02 cmp002 kubelet[8640]: I1116 17:00:02.966956    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 17:00:02 cmp002 kubelet[8640]: E1116 17:00:02.968612    8640 kubelet_node_status.go:94] Unable to register node "cmp002" with API server: Post https://172.16.10.36:443/api/v1/nodes: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.020232    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.120666    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.220901    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.296254    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.297121    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.297913    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.321122    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.421459    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.521697    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kube-proxy[8844]: E1116 17:00:03.528186    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp002 kube-proxy[8844]: E1116 17:00:03.531042    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.622028    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.722275    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.822522    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:03 cmp002 kubelet[8640]: E1116 17:00:03.922737    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.022941    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.123170    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.223395    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.297159    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.298547    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.299753    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.323627    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.423952    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kube-proxy[8844]: I1116 17:00:04.456679    8844 proxier.go:645] Not syncing iptables until Services and Endpoints have been received from master
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.524143    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kube-proxy[8844]: E1116 17:00:04.529025    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp002 kube-proxy[8844]: E1116 17:00:04.531826    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.624453    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.724817    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.825132    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:04 cmp002 kubelet[8640]: E1116 17:00:04.925340    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.025686    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.125994    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.226196    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.298471    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.299656    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.300794    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.326414    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.426624    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.526816    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kube-proxy[8844]: E1116 17:00:05.530071    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp002 kube-proxy[8844]: E1116 17:00:05.532534    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.16.10.36:443: connect: connection refused
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.627116    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.727393    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.827652    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:05 cmp002 kubelet[8640]: E1116 17:00:05.927919    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.028161    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.128377    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.228593    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.328775    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.428918    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.529084    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.629427    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.729627    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.829845    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:06 cmp002 kubelet[8640]: E1116 17:00:06.930107    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.030513    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.130724    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.230919    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.331165    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.431383    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.531700    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.632051    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.732248    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.832420    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:07 cmp002 kubelet[8640]: E1116 17:00:07.932654    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.032917    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.133135    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.233332    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.333553    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.433831    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.534193    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.634567    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.734795    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.835048    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:08 cmp002 kubelet[8640]: E1116 17:00:08.935268    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.035492    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.135759    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.235971    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.336190    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.436464    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.536709    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.636874    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.737114    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.837444    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: E1116 17:00:09.937651    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:09 cmp002 kubelet[8640]: I1116 17:00:09.968869    8640 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
Nov 16 17:00:09 cmp002 kubelet[8640]: I1116 17:00:09.969174    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:00:09 cmp002 kubelet[8640]: I1116 17:00:09.970211    8640 kubelet_node_status.go:447] Recording NodeHasSufficientMemory event message for node cmp002
Nov 16 17:00:09 cmp002 kubelet[8640]: I1116 17:00:09.970253    8640 kubelet_node_status.go:447] Recording NodeHasNoDiskPressure event message for node cmp002
Nov 16 17:00:09 cmp002 kubelet[8640]: I1116 17:00:09.970269    8640 kubelet_node_status.go:447] Recording NodeHasSufficientPID event message for node cmp002
Nov 16 17:00:09 cmp002 kubelet[8640]: I1116 17:00:09.970296    8640 kubelet_node_status.go:72] Attempting to register node cmp002
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.037959    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.138402    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.238734    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.339129    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.439419    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.539816    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.640587    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.741273    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.842065    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:10 cmp002 kubelet[8640]: E1116 17:00:10.942324    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.042956    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.143313    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.243574    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.343765    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.359591    8640 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.444020    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.544350    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.644729    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.745033    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.845352    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:11 cmp002 kubelet[8640]: E1116 17:00:11.945669    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.045900    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.146129    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.246614    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.346956    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.447284    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.547598    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.647946    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.748288    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.848626    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:12 cmp002 kubelet[8640]: E1116 17:00:12.948883    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.049185    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.149420    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.249766    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.350093    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.450426    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.550631    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.650914    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.751166    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.851459    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:13 cmp002 kubelet[8640]: E1116 17:00:13.951758    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.052018    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.152234    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.262763    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.363064    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.463265    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.563570    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.663843    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.764199    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.864437    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:14 cmp002 kubelet[8640]: E1116 17:00:14.964771    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.065039    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.165282    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.265593    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.365840    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.466044    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.566330    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.666673    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.766997    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.867387    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:15 cmp002 kubelet[8640]: E1116 17:00:15.967619    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.067981    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.168312    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.268699    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.299909    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.16.10.36:443/api/v1/pods?fieldSelector=spec.nodeName%3Dcmp002&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.300804    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://172.16.10.36:443/api/v1/nodes?fieldSelector=metadata.name%3Dcmp002&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.301723    8640 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.368942    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.469194    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kube-proxy[8844]: E1116 17:00:16.531308    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://172.16.10.36:443/api/v1/endpoints?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 cmp002 kube-proxy[8844]: E1116 17:00:16.533683    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://172.16.10.36:443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.569492    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.669858    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.770175    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.870933    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:16 cmp002 kubelet[8640]: E1116 17:00:16.971292    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.071672    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.172063    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.272255    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.372618    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.473319    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.573967    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.674262    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.774720    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.875045    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:17 cmp002 kubelet[8640]: E1116 17:00:17.976149    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:18 cmp002 kubelet[8640]: E1116 17:00:18.076461    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:18 cmp002 kube-proxy[8844]: E1116 17:00:18.091088    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
Nov 16 17:00:18 cmp002 kube-proxy[8844]: E1116 17:00:18.167221    8844 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cmp002.15d7b31feae83420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"cmp002", UID:"cmp002", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kube-proxy.", Source:v1.EventSource{Component:"kube-proxy", Host:"cmp002"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6c28961b29c420, ext:478366584, loc:(*time.Location)(0xaf74780)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6c28961b29c420, ext:478366584, loc:(*time.Location)(0xaf74780)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:kube-proxy" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Nov 16 17:00:18 cmp002 kubelet[8640]: I1116 17:00:18.173568    8640 kubelet.go:1908] SyncLoop (ADD, "api"): ""
Nov 16 17:00:18 cmp002 kubelet[8640]: E1116 17:00:18.176900    8640 kubelet.go:2266] node "cmp002" not found
Nov 16 17:00:18 cmp002 kubelet[8640]: I1116 17:00:18.209811    8640 kubelet_node_status.go:75] Successfully registered node cmp002
Nov 16 17:00:18 cmp002 kubelet[8640]: I1116 17:00:18.215689    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:00:18 cmp002 kubelet[8640]: I1116 17:00:18.222268    8640 reconciler.go:154] Reconciler: start to sync state
Nov 16 17:00:18 cmp002 kube-proxy[8844]: E1116 17:00:18.253310    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
Nov 16 17:00:19 cmp002 kube-proxy[8844]: E1116 17:00:19.093530    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-proxy" cannot list resource "services" in API group "" at the cluster scope
Nov 16 17:00:19 cmp002 kube-proxy[8844]: E1116 17:00:19.257264    8844 reflector.go:125] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-proxy" cannot list resource "endpoints" in API group "" at the cluster scope
Nov 16 17:00:20 cmp002 kube-proxy[8844]: I1116 17:00:20.156489    8844 controller_utils.go:1034] Caches are synced for service config controller
Nov 16 17:00:20 cmp002 kube-proxy[8844]: I1116 17:00:20.156619    8844 proxier.go:645] Not syncing iptables until Services and Endpoints have been received from master
Nov 16 17:00:20 cmp002 kube-proxy[8844]: I1116 17:00:20.356034    8844 controller_utils.go:1034] Caches are synced for endpoints config controller
Nov 16 17:00:20 cmp002 kube-proxy[8844]: I1116 17:00:20.356165    8844 service.go:309] Adding new service port "default/kubernetes:https" at 10.254.0.1:443/TCP
Nov 16 17:00:28 cmp002 kubelet[8640]: I1116 17:00:28.232643    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:00:33 cmp002 salt-minion[4490]: [INFO    ] User sudo_ubuntu Executing command cp.push_dir with jid 20191116170033971440
Nov 16 17:00:34 cmp002 salt-minion[4490]: [INFO    ] Starting a new job with PID 9136
Nov 16 17:00:34 cmp002 salt-minion[4490]: [INFO    ] Returning information for job: 20191116170033971440
Nov 16 17:00:38 cmp002 kubelet[8640]: I1116 17:00:38.242743    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:00:40 cmp002 kube-proxy[8844]: I1116 17:00:40.289888    8844 service.go:309] Adding new service port "kube-system/coredns:dns" at 10.254.0.10:53/UDP
Nov 16 17:00:40 cmp002 kube-proxy[8844]: I1116 17:00:40.289942    8844 service.go:309] Adding new service port "kube-system/coredns:dns-tcp" at 10.254.0.10:53/TCP
Nov 16 17:00:40 cmp002 kubelet[8640]: I1116 17:00:40.333767    8640 kubelet.go:1908] SyncLoop (ADD, "api"): "netchecker-agent-w98nt_netchecker(9e5aeb31-0892-11ea-a35a-5254009caaa4)"
Nov 16 17:00:40 cmp002 kubelet[8640]: I1116 17:00:40.372824    8640 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5aeb31-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-w98nt" (UID: "9e5aeb31-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 cmp002 kube-proxy[8844]: I1116 17:00:40.412926    8844 service.go:309] Adding new service port "netchecker/netchecker:" at 10.254.54.193:80/TCP
Nov 16 17:00:40 cmp002 kube-proxy[8844]: I1116 17:00:40.431592    8844 proxier.go:1427] Opened local port "nodePort for netchecker/netchecker:" (:30276/tcp)
Nov 16 17:00:40 cmp002 kubelet[8640]: I1116 17:00:40.473365    8640 reconciler.go:252] operationExecutor.MountVolume started for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5aeb31-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-w98nt" (UID: "9e5aeb31-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 cmp002 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/9e5aeb31-0892-11ea-a35a-5254009caaa4/volumes/kubernetes.io~secret/default-token-czmvh.
Nov 16 17:00:40 cmp002 kubelet[8640]: I1116 17:00:40.494231    8640 operation_generator.go:571] MountVolume.SetUp succeeded for volume "default-token-czmvh" (UniqueName: "kubernetes.io/secret/9e5aeb31-0892-11ea-a35a-5254009caaa4-default-token-czmvh") pod "netchecker-agent-w98nt" (UID: "9e5aeb31-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:40 cmp002 kubelet[8640]: I1116 17:00:40.717302    8640 kuberuntime_manager.go:397] No sandbox for pod "netchecker-agent-w98nt_netchecker(9e5aeb31-0892-11ea-a35a-5254009caaa4)" can be found. Need to start a new one
Nov 16 17:00:40 cmp002 containerd[5713]: time="2019-11-16T17:00:40.718380848Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:netchecker-agent-w98nt,Uid:9e5aeb31-0892-11ea-a35a-5254009caaa4,Namespace:netchecker,Attempt:0,}"
Nov 16 17:00:41 cmp002 kubelet[8640]: I1116 17:00:41.287166    8640 kubelet.go:1908] SyncLoop (ADD, "api"): "coredns-7f8f94c97b-v2ccc_kube-system(9eecd5ba-0892-11ea-a35a-5254009caaa4)"
Nov 16 17:00:41 cmp002 kubelet[8640]: I1116 17:00:41.375449    8640 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9eecd5ba-0892-11ea-a35a-5254009caaa4-config-volume") pod "coredns-7f8f94c97b-v2ccc" (UID: "9eecd5ba-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp002 kubelet[8640]: I1116 17:00:41.375502    8640 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-lkbkg" (UniqueName: "kubernetes.io/secret/9eecd5ba-0892-11ea-a35a-5254009caaa4-coredns-token-lkbkg") pod "coredns-7f8f94c97b-v2ccc" (UID: "9eecd5ba-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp002 kubelet[8640]: I1116 17:00:41.475867    8640 reconciler.go:252] operationExecutor.MountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9eecd5ba-0892-11ea-a35a-5254009caaa4-config-volume") pod "coredns-7f8f94c97b-v2ccc" (UID: "9eecd5ba-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp002 kubelet[8640]: I1116 17:00:41.476254    8640 reconciler.go:252] operationExecutor.MountVolume started for volume "coredns-token-lkbkg" (UniqueName: "kubernetes.io/secret/9eecd5ba-0892-11ea-a35a-5254009caaa4-coredns-token-lkbkg") pod "coredns-7f8f94c97b-v2ccc" (UID: "9eecd5ba-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp002 kubelet[8640]: I1116 17:00:41.480657    8640 operation_generator.go:571] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9eecd5ba-0892-11ea-a35a-5254009caaa4-config-volume") pod "coredns-7f8f94c97b-v2ccc" (UID: "9eecd5ba-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp002 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/9eecd5ba-0892-11ea-a35a-5254009caaa4/volumes/kubernetes.io~secret/coredns-token-lkbkg.
Nov 16 17:00:41 cmp002 kubelet[8640]: I1116 17:00:41.495457    8640 operation_generator.go:571] MountVolume.SetUp succeeded for volume "coredns-token-lkbkg" (UniqueName: "kubernetes.io/secret/9eecd5ba-0892-11ea-a35a-5254009caaa4-coredns-token-lkbkg") pod "coredns-7f8f94c97b-v2ccc" (UID: "9eecd5ba-0892-11ea-a35a-5254009caaa4")
Nov 16 17:00:41 cmp002 kubelet[8640]: I1116 17:00:41.616350    8640 kuberuntime_manager.go:397] No sandbox for pod "coredns-7f8f94c97b-v2ccc_kube-system(9eecd5ba-0892-11ea-a35a-5254009caaa4)" can be found. Need to start a new one
Nov 16 17:00:41 cmp002 containerd[5713]: time="2019-11-16T17:00:41.617580155Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-7f8f94c97b-v2ccc,Uid:9eecd5ba-0892-11ea-a35a-5254009caaa4,Namespace:kube-system,Attempt:0,}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.283947552Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.294316168Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.294868923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.397832598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.400117438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.401432242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.406637708Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64@sha256:9e2b9d9d64bab9f4d80790ec6c6fe09cdb5714d43bf23357e0ed0d0ab512fffd,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.407168574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.407653330Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.409262529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:478b5c5586708a45bfb7aab46b7a298c4eb65c3442311e7868f49ba269f358a4,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.410773532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64:v1.13.5-3,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.411635385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/kubernetes/pause-amd64@sha256:9e2b9d9d64bab9f4d80790ec6c6fe09cdb5714d43bf23357e0ed0d0ab512fffd,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.657 [INFO][9263] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"netchecker", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"cmp002", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"netchecker-agent-w98nt", ContainerID:"ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957"}}
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.720 [INFO][9263] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" Namespace="netchecker" Pod="netchecker-agent-w98nt" WorkloadEndpoint="cmp002-k8s-netchecker--agent--w98nt-eth0"
Nov 16 17:00:42 cmp002 containerd[5713]: Calico CNI IPAM request count IPv4=1 IPv6=0
Nov 16 17:00:42 cmp002 containerd[5713]: Calico CNI IPAM handle=calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.807 [INFO][9276] calico-ipam.go 186: Auto assigning IP ContainerID="ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" HandleID="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" Workload="cmp002-k8s-netchecker--agent--w98nt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc42027ce00), Attrs:map[string]string(nil), Hostname:"cmp002", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}}
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.807 [INFO][9276] ipam.go 70: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'cmp002'
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.808 [INFO][9276] ipam.go 254: Looking up existing affinities for host handle="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.809 [INFO][9276] ipam.go 265: Ran out of existing affine blocks for host handle="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.809 [INFO][9276] ipam.go 324: No more affine blocks, but need to allocate 1 more addresses - allocate another block handle="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.810 [INFO][9276] ipam.go 328: Looking for an unclaimed block handle="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.811 [INFO][9276] ipam_block_reader_writer.go 106: Found free block: 192.168.112.64/26
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.811 [INFO][9276] ipam.go 340: Found unclaimed block host="cmp002" subnet=192.168.112.64/26
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.811 [INFO][9276] ipam_block_reader_writer.go 122: Trying to create affinity in pending state host="cmp002" subnet=192.168.112.64/26
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.812 [INFO][9276] ipam_block_reader_writer.go 152: Successfully created pending affinity for block host="cmp002" subnet=192.168.112.64/26
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.812 [INFO][9276] ipam.go 118: Attempting to load block cidr=192.168.112.64/26 host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.813 [INFO][9276] ipam.go 123: The referenced block doesn't exist, trying to create it cidr=192.168.112.64/26 host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.814 [INFO][9276] ipam.go 130: Wrote affinity as pending cidr=192.168.112.64/26 host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.815 [INFO][9276] ipam.go 139: Attempting to claim the block cidr=192.168.112.64/26 host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.816 [INFO][9276] ipam_block_reader_writer.go 175: Attempting to create a new block host="cmp002" subnet=192.168.112.64/26
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.818 [INFO][9276] ipam_block_reader_writer.go 217: Successfully created block
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.818 [INFO][9276] ipam_block_reader_writer.go 228: Confirming affinity host="cmp002" subnet=192.168.112.64/26
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.819 [INFO][9276] ipam_block_reader_writer.go 243: Successfully confirmed affinity host="cmp002" subnet=192.168.112.64/26
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.819 [INFO][9276] ipam.go 372: Claimed new block &{BlockKey(cidr=192.168.112.64/26) 0xc420694990 490 0s} - assigning 1 addresses host="cmp002" subnet=192.168.112.64/26
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.819 [INFO][9276] ipam.go 677: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.820 [INFO][9276] ipam.go 1110: Creating new handle: calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.821 [INFO][9276] ipam.go 700: Writing block in order to claim IPs block=192.168.112.64/26 handle="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.823 [INFO][9276] ipam.go 710: Successfully claimed IPs: [192.168.112.64] block=192.168.112.64/26 handle="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.823 [INFO][9276] ipam.go 456: Auto-assigned 1 out of 1 IPv4s: [192.168.112.64] handle="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" host="cmp002"
Nov 16 17:00:42 cmp002 containerd[5713]: Calico CNI IPAM assigned addresses IPv4=[192.168.112.64] IPv6=[]
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.823 [INFO][9276] calico-ipam.go 214: IPAM Result ContainerID="ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" HandleID="calico-k8s-network.ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" Workload="cmp002-k8s-netchecker--agent--w98nt-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc420340300)}
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.825 [INFO][9263] k8s.go 365: Populated endpoint ContainerID="ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" Namespace="netchecker" Pod="netchecker-agent-w98nt" WorkloadEndpoint="cmp002-k8s-netchecker--agent--w98nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp002-k8s-netchecker--agent--w98nt-eth0", GenerateName:"", Namespace:"netchecker", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp002", ContainerID:"", Pod:"netchecker-agent-w98nt", Endpoint:"eth0", IPNetworks:[]string{"192.168.112.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"", MAC:"", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:42 cmp002 containerd[5713]: Calico CNI using IPs: [192.168.112.64/32]
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.825 [INFO][9263] network.go 75: Setting the host side veth name to cali2d04ad34e65 ContainerID="ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" Namespace="netchecker" Pod="netchecker-agent-w98nt" WorkloadEndpoint="cmp002-k8s-netchecker--agent--w98nt-eth0"
Nov 16 17:00:42 cmp002 kernel: [  301.827888] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Nov 16 17:00:42 cmp002 kernel: [  301.828261] IPv6: ADDRCONF(NETDEV_UP): cali2d04ad34e65: link is not ready
Nov 16 17:00:42 cmp002 kernel: [  301.828280] IPv6: ADDRCONF(NETDEV_CHANGE): cali2d04ad34e65: link becomes ready
Nov 16 17:00:42 cmp002 kernel: [  301.828326] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.848 [INFO][9263] network.go 380: Disabling IPv4 forwarding ContainerID="ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" Namespace="netchecker" Pod="netchecker-agent-w98nt" WorkloadEndpoint="cmp002-k8s-netchecker--agent--w98nt-eth0"
Nov 16 17:00:42 cmp002 systemd-udevd[9293]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.866 [INFO][9263] k8s.go 392: Added Mac, interface name, and active container ID to endpoint ContainerID="ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" Namespace="netchecker" Pod="netchecker-agent-w98nt" WorkloadEndpoint="cmp002-k8s-netchecker--agent--w98nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp002-k8s-netchecker--agent--w98nt-eth0", GenerateName:"", Namespace:"netchecker", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp002", ContainerID:"ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957", Pod:"netchecker-agent-w98nt", Endpoint:"eth0", IPNetworks:[]string{"192.168.112.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"cali2d04ad34e65", MAC:"36:40:8e:0f:66:f9", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.877 [INFO][9263] k8s.go 424: Wrote updated endpoint to datastore ContainerID="ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" Namespace="netchecker" Pod="netchecker-agent-w98nt" WorkloadEndpoint="cmp002-k8s-netchecker--agent--w98nt-eth0"
Nov 16 17:00:42 cmp002 containerd[5713]: time="2019-11-16T17:00:42.912107117Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957/shim.sock" debug=false pid=9370
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.917 [INFO][9307] calico.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"cmp002", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"coredns-7f8f94c97b-v2ccc", ContainerID:"7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef"}}
Nov 16 17:00:42 cmp002 containerd[5713]: 2019-11-16 17:00:42.986 [INFO][9307] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" Namespace="kube-system" Pod="coredns-7f8f94c97b-v2ccc" WorkloadEndpoint="cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0"
Nov 16 17:00:43 cmp002 containerd[5713]: Calico CNI IPAM request count IPv4=1 IPv6=0
Nov 16 17:00:43 cmp002 containerd[5713]: Calico CNI IPAM handle=calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.076 [INFO][9412] calico-ipam.go 186: Auto assigning IP ContainerID="7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" HandleID="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" Workload="cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc4204c8df0), Attrs:map[string]string(nil), Hostname:"cmp002", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}}
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.076 [INFO][9412] ipam.go 70: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'cmp002'
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.078 [INFO][9412] ipam.go 254: Looking up existing affinities for host handle="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.078 [INFO][9412] ipam.go 275: Trying affinity for 192.168.112.64/26 handle="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.079 [INFO][9412] ipam.go 118: Attempting to load block cidr=192.168.112.64/26 host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.080 [INFO][9412] ipam.go 195: Affinity is confirmed and block has been loaded cidr=192.168.112.64/26 host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.080 [INFO][9412] ipam.go 677: Attempting to assign 1 addresses from block block=192.168.112.64/26 handle="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.081 [INFO][9412] ipam.go 1110: Creating new handle: calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.082 [INFO][9412] ipam.go 700: Writing block in order to claim IPs block=192.168.112.64/26 handle="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.084 [INFO][9412] ipam.go 710: Successfully claimed IPs: [192.168.112.65] block=192.168.112.64/26 handle="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.084 [INFO][9412] ipam.go 307: Block '192.168.112.64/26' provided addresses: [192.168.112.65] handle="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.084 [INFO][9412] ipam.go 456: Auto-assigned 1 out of 1 IPv4s: [192.168.112.65] handle="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" host="cmp002"
Nov 16 17:00:43 cmp002 containerd[5713]: Calico CNI IPAM assigned addresses IPv4=[192.168.112.65] IPv6=[]
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.085 [INFO][9412] calico-ipam.go 214: IPAM Result ContainerID="7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" HandleID="calico-k8s-network.7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" Workload="cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc420202720)}
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.086 [INFO][9307] k8s.go 365: Populated endpoint ContainerID="7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" Namespace="kube-system" Pod="coredns-7f8f94c97b-v2ccc" WorkloadEndpoint="cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp002", ContainerID:"", Pod:"coredns-7f8f94c97b-v2ccc", Endpoint:"eth0", IPNetworks:[]string{"192.168.112.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"", MAC:"", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:43 cmp002 containerd[5713]: Calico CNI using IPs: [192.168.112.65/32]
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.087 [INFO][9307] network.go 75: Setting the host side veth name to cali11c065c238a ContainerID="7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" Namespace="kube-system" Pod="coredns-7f8f94c97b-v2ccc" WorkloadEndpoint="cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0"
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.088 [INFO][9307] network.go 380: Disabling IPv4 forwarding ContainerID="7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" Namespace="kube-system" Pod="coredns-7f8f94c97b-v2ccc" WorkloadEndpoint="cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0"
Nov 16 17:00:43 cmp002 kernel: [  302.067846] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Nov 16 17:00:43 cmp002 containerd[5713]: time="2019-11-16T17:00:43.093644729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:netchecker-agent-w98nt,Uid:9e5aeb31-0892-11ea-a35a-5254009caaa4,Namespace:netchecker,Attempt:0,} returns sandbox id "ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957""
Nov 16 17:00:43 cmp002 kubelet[8640]: I1116 17:00:43.095045    8640 provider.go:116] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 16 17:00:43 cmp002 containerd[5713]: time="2019-11-16T17:00:43.095328597Z" level=info msg="PullImage "mirantis/k8s-netchecker-agent:stable""
Nov 16 17:00:43 cmp002 systemd-udevd[9442]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 17:00:43 cmp002 kernel: [  302.095789] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.115 [INFO][9307] k8s.go 392: Added Mac, interface name, and active container ID to endpoint ContainerID="7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" Namespace="kube-system" Pod="coredns-7f8f94c97b-v2ccc" WorkloadEndpoint="cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"cmp002", ContainerID:"7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef", Pod:"coredns-7f8f94c97b-v2ccc", Endpoint:"eth0", IPNetworks:[]string{"192.168.112.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"calico-k8s-network"}, InterfaceName:"cali11c065c238a", MAC:"4a:37:ea:4e:d5:b3", Ports:[]v3.EndpointPort(nil)}}
Nov 16 17:00:43 cmp002 containerd[5713]: 2019-11-16 17:00:43.127 [INFO][9307] k8s.go 424: Wrote updated endpoint to datastore ContainerID="7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" Namespace="kube-system" Pod="coredns-7f8f94c97b-v2ccc" WorkloadEndpoint="cmp002-k8s-coredns--7f8f94c97b--v2ccc-eth0"
Nov 16 17:00:43 cmp002 containerd[5713]: time="2019-11-16T17:00:43.145497521Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef/shim.sock" debug=false pid=9482
Nov 16 17:00:43 cmp002 containerd[5713]: time="2019-11-16T17:00:43.292174413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7f8f94c97b-v2ccc,Uid:9eecd5ba-0892-11ea-a35a-5254009caaa4,Namespace:kube-system,Attempt:0,} returns sandbox id "7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef""
Nov 16 17:00:43 cmp002 kubelet[8640]: I1116 17:00:43.389870    8640 kubelet.go:1953] SyncLoop (PLEG): "netchecker-agent-w98nt_netchecker(9e5aeb31-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9e5aeb31-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957"}
Nov 16 17:00:43 cmp002 kubelet[8640]: I1116 17:00:43.390814    8640 kubelet.go:1953] SyncLoop (PLEG): "coredns-7f8f94c97b-v2ccc_kube-system(9eecd5ba-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9eecd5ba-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef"}
Nov 16 17:00:44 cmp002 containerd[5713]: time="2019-11-16T17:00:44.859573770Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{},}"
Nov 16 17:00:44 cmp002 containerd[5713]: time="2019-11-16T17:00:44.864828167Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:44 cmp002 containerd[5713]: time="2019-11-16T17:00:44.865460915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.134917507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.136779116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/mirantis/k8s-netchecker-agent:stable,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.139192798Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/mirantis/k8s-netchecker-agent@sha256:4ac49ebef7eaeaa5a8a19f56faa73740bf4861979aa067d0125867a72846720a,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.139528387Z" level=info msg="PullImage "mirantis/k8s-netchecker-agent:stable" returns image reference "sha256:d16c8d7f4d5bdaecda098c54c655027871da52f292683759f5a32c8ecd43b3f0""
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.142433267Z" level=info msg="PullImage "docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96""
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.144012934Z" level=info msg="CreateContainer within sandbox "ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" for container &ContainerMetadata{Name:netchecker-agent,Attempt:0,}"
Nov 16 17:00:45 cmp002 kernel: [  304.256117] audit: type=1400 audit(1573923645.270:18): apparmor="STATUS" operation="profile_load" profile="unconfined" name="cri-containerd.apparmor.d" pid=9587 comm="apparmor_parser"
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.282330033Z" level=info msg="CreateContainer within sandbox "ae372c4a1b8512bf6c2c04bfd7e13fd954762ee5f3dfb903848cb4c9ff546957" for &ContainerMetadata{Name:netchecker-agent,Attempt:0,} returns container id "12393936d97a6689ce22f47c38b888c0dc436c5a995c56631ed619fbbadd4a61""
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.282921152Z" level=info msg="StartContainer for "12393936d97a6689ce22f47c38b888c0dc436c5a995c56631ed619fbbadd4a61""
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.284554282Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/12393936d97a6689ce22f47c38b888c0dc436c5a995c56631ed619fbbadd4a61/shim.sock" debug=false pid=9592
Nov 16 17:00:45 cmp002 containerd[5713]: time="2019-11-16T17:00:45.459655033Z" level=info msg="StartContainer for "12393936d97a6689ce22f47c38b888c0dc436c5a995c56631ed619fbbadd4a61" returns successfully"
Nov 16 17:00:45 cmp002 kubelet[8640]: I1116 17:00:45.462193    8640 kubelet.go:1953] SyncLoop (PLEG): "netchecker-agent-w98nt_netchecker(9e5aeb31-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9e5aeb31-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"12393936d97a6689ce22f47c38b888c0dc436c5a995c56631ed619fbbadd4a61"}
Nov 16 17:00:47 cmp002 containerd[5713]: time="2019-11-16T17:00:47.113006074Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96,Labels:map[string]string{},}"
Nov 16 17:00:47 cmp002 containerd[5713]: time="2019-11-16T17:00:47.119970125Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31b943183ba2eec561c44c1fdda2a39ab03c0cb72f90a24c1bea4adf0a44bfbc,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:47 cmp002 containerd[5713]: time="2019-11-16T17:00:47.120595587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.134616276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:31b943183ba2eec561c44c1fdda2a39ab03c0cb72f90a24c1bea4adf0a44bfbc,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.136600455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.138296364Z" level=info msg="ImageCreate event &ImageCreate{Name:docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns@sha256:30f28dcd8a8c9a97c206bba187bd9b1e8fbc1cf52ce38f7e937c85a191709376,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.138809202Z" level=info msg="PullImage "docker-prod-local.artifactory.mirantis.com/mirantis/coredns/coredns:v1.4.0-96" returns image reference "sha256:31b943183ba2eec561c44c1fdda2a39ab03c0cb72f90a24c1bea4adf0a44bfbc""
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.141410427Z" level=info msg="CreateContainer within sandbox "7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.214750643Z" level=info msg="CreateContainer within sandbox "7df13774fc76a2475b62369ad261a081e82f6ed3cd4e182004292a2f86aa8aef" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id "38040ab905e86377dcf1dc2cf2fcf51ad43d3bb29b679b05b8c545f7ecccb34f""
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.215709664Z" level=info msg="StartContainer for "38040ab905e86377dcf1dc2cf2fcf51ad43d3bb29b679b05b8c545f7ecccb34f""
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.216638051Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/38040ab905e86377dcf1dc2cf2fcf51ad43d3bb29b679b05b8c545f7ecccb34f/shim.sock" debug=false pid=9668
Nov 16 17:00:48 cmp002 kubelet[8640]: I1116 17:00:48.252642    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:00:48 cmp002 containerd[5713]: time="2019-11-16T17:00:48.397024194Z" level=info msg="StartContainer for "38040ab905e86377dcf1dc2cf2fcf51ad43d3bb29b679b05b8c545f7ecccb34f" returns successfully"
Nov 16 17:00:48 cmp002 kubelet[8640]: I1116 17:00:48.470518    8640 kubelet.go:1953] SyncLoop (PLEG): "coredns-7f8f94c97b-v2ccc_kube-system(9eecd5ba-0892-11ea-a35a-5254009caaa4)", event: &pleg.PodLifecycleEvent{ID:"9eecd5ba-0892-11ea-a35a-5254009caaa4", Type:"ContainerStarted", Data:"38040ab905e86377dcf1dc2cf2fcf51ad43d3bb29b679b05b8c545f7ecccb34f"}
Nov 16 17:00:48 cmp002 kube-proxy[8844]: I1116 17:00:48.483105    8844 proxier.go:659] Stale udp service kube-system/coredns:dns -> 10.254.0.10
Nov 16 17:00:48 cmp002 kernel: [  307.516784] ctnetlink v0.93: registering with nfnetlink.
Nov 16 17:00:58 cmp002 kubelet[8640]: I1116 17:00:58.264620    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:01:08 cmp002 kubelet[8640]: I1116 17:01:08.278988    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:01:18 cmp002 kubelet[8640]: I1116 17:01:18.292627    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:01:28 cmp002 kubelet[8640]: I1116 17:01:28.305054    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:01:38 cmp002 kubelet[8640]: I1116 17:01:38.319013    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:01:48 cmp002 kubelet[8640]: I1116 17:01:48.333984    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:01:58 cmp002 kubelet[8640]: I1116 17:01:58.345068    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:02:08 cmp002 kubelet[8640]: I1116 17:02:08.359825    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:02:18 cmp002 kubelet[8640]: I1116 17:02:18.376935    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:02:28 cmp002 kubelet[8640]: I1116 17:02:28.388818    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:02:38 cmp002 kubelet[8640]: I1116 17:02:38.404323    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:02:48 cmp002 kubelet[8640]: I1116 17:02:48.416570    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:02:58 cmp002 kubelet[8640]: I1116 17:02:58.429008    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:03:08 cmp002 kubelet[8640]: I1116 17:03:08.437161    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:03:18 cmp002 kubelet[8640]: I1116 17:03:18.445776    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:03:28 cmp002 kubelet[8640]: I1116 17:03:28.462039    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:03:38 cmp002 kubelet[8640]: I1116 17:03:38.475108    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:03:48 cmp002 kubelet[8640]: I1116 17:03:48.485662    8640 setters.go:72] Using node IP: "172.16.10.56"
Nov 16 17:03:48 cmp002 salt-minion[4490]: [INFO    ] User sudo_ubuntu Executing command cp.push_dir with jid 20191116170348759627
Nov 16 17:03:48 cmp002 salt-minion[4490]: [INFO    ] Starting a new job with PID 10081
